Unsupervised Learning [Paper Collection]

https://handong1587.github.io/deep_learning/2015/10/09/unsupervised-learning.html


Jump to...

     1. Sparse Coding
     2. Papers
     3. Clustering
     4. Auto-encoder
     5. RBM (Restricted Boltzmann Machine)
          1. Papers
          2. Blogs
          3. Projects
          4. Videos



Sparse Coding

Fast Convolutional Sparse Coding in the Dual Domain

https://arxiv.org/abs/1709.09479
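
Sparse coding represents a signal x as a sparse combination of dictionary atoms by minimizing ||x - Dz||^2 + lambda * ||z||_1 over the code z. Below is a minimal numpy sketch of the coding step using ISTA (iterative shrinkage-thresholding); the random dictionary, the value of `lam`, and the iteration count are illustrative placeholders rather than settings from the paper above.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Solve min_z 0.5 * ||x - D z||^2 + lam * ||z||_1 with ISTA.

    x: (m,) signal; D: (m, k) dictionary, assumed to have unit-norm columns.
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # (the largest eigenvalue of D^T D).
    L = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)              # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
    z_true = np.zeros(256)
    z_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
    x = D @ z_true + 0.01 * rng.standard_normal(64)
    z = ista_sparse_code(x, D, lam=0.05)
    print("non-zero coefficients recovered:", np.count_nonzero(np.abs(z) > 1e-3))
```

Dictionary learning alternates this coding step with an update of D; only the coding half is shown here.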

Papers

On Random Weights and Unsupervised Feature Learning

Unsupervised Learning of Spatiotemporally Coherent Metrics

Unsupervised Learning of Visual Representations using Videos

Unsupervised Visual Representation Learning by Context Prediction

Unsupervised Learning on Neural Network Outputs

Unsupervised Domain Adaptation by Backpropagation

Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles

Tagger: Deep Unsupervised Perceptual Grouping

Regularization for Unsupervised Deep Neural Nets

Sparse coding: A simple exploration

Navigating the unsupervised learning landscape

Unsupervised Learning using Adversarial Networks

Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction

Learning Features by Watching Objects Move

CNN features are also great at unsupervised classification

Supervised Convolutional Sparse Coding

https://arxiv.org/abs/1804.02678
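
A minimal sketch of a self-supervised pretext task in the spirit of "Unsupervised Visual Representation Learning by Context Prediction" from the list above: crop a patch and one of its eight neighbours, then train a shared encoder to classify their relative position. The patch sampling, the tiny network, and the random stand-in images are simplifications for illustration; the paper's architecture and its tricks against trivial shortcuts (e.g. chromatic aberration) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Offsets of the eight neighbouring patches; the index is the prediction target.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_patch_pair(img, patch=32, gap=8):
    """Crop a centre patch and a randomly chosen neighbour from a CHW image tensor."""
    _, h, w = img.shape
    step = patch + gap
    # Top-left corner of the centre patch, leaving room for any neighbour.
    y = torch.randint(step, h - 2 * step, (1,)).item()
    x = torch.randint(step, w - 2 * step, (1,)).item()
    label = torch.randint(len(OFFSETS), (1,)).item()
    dy, dx = OFFSETS[label]
    centre = img[:, y:y + patch, x:x + patch]
    neighbour = img[:, y + dy * step:y + dy * step + patch,
                       x + dx * step:x + dx * step + patch]
    return centre, neighbour, label

class PatchNet(nn.Module):
    """Shared encoder for both patches plus an 8-way relative-position classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(2 * 64, len(OFFSETS))

    def forward(self, a, b):
        return self.classifier(torch.cat([self.encoder(a), self.encoder(b)], dim=1))

if __name__ == "__main__":
    model = PatchNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(16, 3, 256, 256)      # stand-in for real unlabeled images
    for img in images:
        a, b, y = sample_patch_pair(img)
        logits = model(a.unsqueeze(0), b.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([y]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final pretext loss:", loss.item())
```

After pretext training it is the encoder, not the position classifier, that gets reused as a feature extractor for downstream tasks.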

Clustering

Deep clustering: Discriminative embeddings for segmentation and separation

Neural network-based clustering using pairwise constraints

Unsupervised Deep Embedding for Clustering Analysis

Joint Unsupervised Learning of Deep Representations and Image Clusters

Single-Channel Multi-Speaker Separation using Deep Clustering

Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering

Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders

Variational Deep Embedding: A Generative Approach to Clustering

A new look at clustering through the lens of deep convolutional neural networks

Deep Subspace Clustering Networks

SpectralNet: Spectral Clustering using Deep Neural Networks

Clustering with Deep Learning: Taxonomy and New Methods

Deep Continuous Clustering

Learning to Cluster

Learning Neural Models for End-to-End Clustering

Deep Clustering for Unsupervised Learning of Visual Features

Improving Image Clustering With Multiple Pretrained CNN Feature Extractors

Deep clustering: On the link between discriminative models and K-means

https://arxiv.org/abs/1810.04246

Deep Density-based Image Clustering

https://arxiv.org/abs/1812.04287

Deep Representation Learning Characterized by Inter-class Separation for Image Clustering
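
A recurring recipe in the clustering papers above (e.g. "Unsupervised Deep Embedding for Clustering Analysis", "Towards K-means-friendly Spaces") is to cluster in a learned embedding rather than in pixel space. The sketch below shows the simplest two-stage version of that idea, assuming a small fully connected autoencoder and scikit-learn's KMeans run on the codes; the joint reconstruction-plus-clustering objectives and self-training refinements of the actual methods are left out.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class AE(nn.Module):
    """Small fully connected autoencoder; the bottleneck is the clustering space."""
    def __init__(self, in_dim=784, code_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

if __name__ == "__main__":
    x = torch.rand(1024, 784)                 # stand-in for flattened images
    model = AE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stage 1: train the autoencoder on reconstruction only.
    for epoch in range(200):
        recon, _ = model(x)
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: run k-means on the learned embeddings.
    with torch.no_grad():
        _, z = model(x)
    labels = KMeans(n_clusters=10, n_init=10).fit_predict(z.numpy())
    print("cluster sizes:", np.bincount(labels))
```

Methods such as DEC go a step further and fine-tune the encoder with a clustering loss so that the embedding and the cluster assignments improve each other.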

Auto-encoder

Auto-Encoding Variational Bayes

The Potential Energy of an Autoencoder

Importance Weighted Autoencoders

Review of Auto-Encoders

Stacked What-Where Auto-encoders

Ladder Variational Autoencoders

How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks

Rank Ordered Autoencoders

Decoding Stacked Denoising Autoencoders

Keras autoencoders (convolutional/fcc)

Building Autoencoders in Keras

Review of auto-encoders

Autoencoders: Torch implementations of various types of autoencoders

Tutorial on Variational Autoencoders

Variational Autoencoders Explained

Introducing Variational Autoencoders (in Prose and Code)

Under the Hood of the Variational Autoencoder (in Prose and Code)

The Unreasonable Confusion of Variational Autoencoders

Variational Autoencoder for Deep Learning of Images, Labels and Captions

Convolutional variational autoencoder with PyMC3 and Keras

http://nbviewer.jupyter.org/github/taku-y/pymc3/blob/89b8634a2fd30ef96429953558bf360132b6153f/docs/source/notebooks/convolutional_vae_keras_advi.ipynb

PixelVAE: A Latent Variable Model for Natural Images

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Variational Lossy Autoencoder

Convolutional Autoencoders

Convolutional Autoencoders in Tensorflow

A Deep Convolutional Auto-Encoder with Pooling - Unpooling Layers in Caffe

Deep Matching Autoencoders

Understanding Autoencoders with Information Theoretic Concepts

Hyperspherical Variational Auto-Encoders

Spatial Frequency Loss for Learning Convolutional Autoencoders

https://arxiv.org/abs/1806.02336

DAQN: Deep Auto-encoder and Q-Network

https://arxiv.org/abs/1806.00630

Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
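
Many of the auto-encoder entries above build on "Auto-Encoding Variational Bayes". Below is a minimal PyTorch VAE sketch, assuming flattened inputs scaled to [0, 1]: the encoder predicts a mean and log-variance, the reparameterization trick gives a differentiable sample, and the loss is a Bernoulli reconstruction term plus the analytic KL divergence to a unit Gaussian. Layer sizes and hyper-parameters are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)        # posterior mean
        self.logvar = nn.Linear(hidden, latent)    # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon_logits, x, mu, logvar):
    # Bernoulli reconstruction term plus analytic KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

if __name__ == "__main__":
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(128, 784)                       # stand-in for a batch of images
    for step in range(100):
        recon_logits, mu, logvar = model(x)
        loss = vae_loss(recon_logits, x, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("negative ELBO per example:", loss.item())
```

Variants such as beta-VAE simply reweight the KL term, while ladder and importance-weighted versions change how the posterior is built or how the bound is estimated.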

RBM (Restricted Boltzmann Machine)

Papers

Deep Boltzmann Machines

On the Equivalence of Restricted Boltzmann Machines and Tensor Network States

Matrix Product Operator Restricted Boltzmann Machines

https://arxiv.org/abs/1811.04608
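
As a companion to the RBM papers above (and the tutorials that follow), here is a minimal Bernoulli-Bernoulli RBM trained with one step of contrastive divergence (CD-1) in numpy; the weight initialisation, learning rate, and binary stand-in data are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BernoulliRBM:
    """Binary-visible, binary-hidden RBM trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)     # visible bias
        self.b_h = np.zeros(n_hidden)      # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0 = self.hidden_probs(v0)
        h0 = (self.rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step down to the visibles and back up.
        p_v1 = self.visible_probs(h0)
        p_h1 = self.hidden_probs(p_v1)
        # CD-1 gradient approximation: <v h>_data - <v h>_model.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return np.mean((v0 - p_v1) ** 2)   # reconstruction error as a rough monitor

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = (rng.random((512, 64)) < 0.2).astype(float)   # stand-in binary data
    rbm = BernoulliRBM(n_visible=64, n_hidden=32)
    for epoch in range(50):
        err = rbm.cd1_step(data)
    print("final reconstruction error:", err)
```

Stacking RBMs trained this way is the classic pre-training recipe behind deep belief nets and, with modifications, deep Boltzmann machines.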

Blogs

A Tutorial on Restricted Boltzmann Machines

http://xiangjiang.live/2016/02/12/a-tutorial-on-restricted-boltzmann-machines/

Dreaming of names with RBMs

On Cheap Learning: Partition Functions and RBMs

Improving RBMs with physical chemistry

Projects

Restricted Boltzmann Machine (Haskell)

tensorflow-rbm: Tensorflow implementation of Restricted Boltzmann Machine

Videos

Modelling a text corpus using Deep Boltzmann Machines

Foundations of Unsupervised Deep Learning

 
