These are reading notes on the Letters to Nature paper "Learning the parts of objects by non-negative matrix factorization". The notes give a theoretical analysis of how NMF lets a neural network learn the parts of an object, together with experiments on learning face parts and semantic features of text. They do not cover how to solve the NMF problem; they only present the factorization results obtained under the non-negativity constraint.
Learning the parts of objects by NMF
Rachel Zhang
1. Theoretical basis and Motivation
1. Part-based representation
There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations.
2. What is NMF? Why use NMF in a Neural Network?
NMF: non-negative matrix factorization
Difference: PCA and VQ learn holistic, not parts-based, representations; NMF differs from them via its non-negativity constraints.
Virtue in Neural Network:
1. Firing rates (number of spikes in a time window) of neurons are never negative.
2. Synaptic strengths do not change sign.
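As a concrete illustration of the constraint (a minimal sketch assuming scikit-learn, which the paper itself does not use): the `NMF` estimator factorizes a non-negative matrix into two factors that are themselves non-negative, mirroring the two virtues above.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((30, 20))                  # a non-negative "data" matrix

model = NMF(n_components=5, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)                # activations, shape (30, 5)
H = model.components_                     # basis, shape (5, 20)

# Both factors are non-negative, like firing rates and
# non-sign-changing synaptic strengths.
print((W >= 0).all() and (H >= 0).all())  # True
```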
2. Applied Experiments
3. Applied result of PCA, VQ and NMF
Figure 1 shows that all three methods learn to represent a face as a linear combination of basis images.
VQ: discovers a basis consisting of whole-face prototypes.
PCA: discovers a basis of 'eigenfaces', some of which resemble distorted versions of whole faces.
NMF: discovers a basis consisting of localized features that correspond better with intuitive notions of the parts of faces.

Figure 1. Bases of NMF, VQ and PCA
In the encodings shown in the figure, red indicates negative values, grey/black indicates positive values, and the colour intensity indicates magnitude.
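The sign pattern in Figure 1 can be reproduced qualitatively (a sketch assuming scikit-learn, with random non-negative data standing in for the face database): PCA bases typically mix positive and negative entries, while NMF bases never contain negative entries.

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
V = rng.random((50, 30))            # non-negative stand-in for face images

pca = PCA(n_components=5).fit(V)
nmf = NMF(n_components=5, init="random", random_state=0, max_iter=500).fit(V)

print((pca.components_ < 0).any())  # True: PCA bases mix signs (red + grey/black)
print((nmf.components_ < 0).any())  # False: NMF bases are purely non-negative
```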
4. Matrix Factorization framework
Why does NMF yield bases so different from those of PCA and VQ? Here we place the three methods in a common matrix factorization framework. The image database is represented by an n×m matrix V, each column of which contains the n non-negative pixel values of one of the m face images. All three methods construct an approximate factorization V ≈ WH, or
V_iμ ≈ (WH)_iμ = Σ_a W_ia H_aμ
The r columns of W ∈ R^{n×r} are the basis images. Each column of H ∈ R^{r×m} is an encoding: the coefficients with which the corresponding face is approximated as a linear combination of the basis images.
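The elementwise formula above is just matrix multiplication, which a quick numpy check confirms (shapes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 6, 4, 3
W = rng.random((n, r))   # r basis images as columns
H = rng.random((r, m))   # one encoding per face as columns
V_approx = W @ H         # (WH)_iμ for all i, μ at once

# (WH)_iμ = Σ_a W_ia H_aμ, checked at a single entry:
i, u = 2, 1
print(np.isclose(V_approx[i, u], sum(W[i, a] * H[a, u] for a in range(r))))  # True
```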
