Amateur hobbyist notes; rough content, read at your own risk.
Neural Networks and Topology
Before the CNN reading notes, I want to first record the relationship between neural networks and topology, inspired by the eye-opening article 《Neural Networks, Manifolds, and Topology》. If you stand at a high enough vantage point, you can see straight to the essence of the problem.
With each layer, the network transforms the data, creating a new representation. We can look at the data in each of these representations and how the network classifies them. - 《Neural Networks, Manifolds, and Topology》
A tanh layer tanh(Wx + b) consists of:
- A linear transformation by the "weight" matrix W
- A translation by the vector b
- Point-wise application of tanh
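These three steps can be sketched directly in NumPy (a minimal illustration; the values of W, b, and x are made up):

```python
import numpy as np

# Hypothetical 2x2 weight matrix, bias, and input for a 2-D example
W = np.array([[1.0, 0.5],
              [-0.5, 1.0]])
b = np.array([0.1, -0.2])
x = np.array([0.3, 0.7])

linear = W @ x          # 1) linear transformation by the "weight" matrix W
shifted = linear + b    # 2) translation by the vector b
out = np.tanh(shifted)  # 3) point-wise application of tanh

print(out.shape)  # (2,)
```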
Each layer stretches and squishes space, but it never cuts, breaks, or folds it. Intuitively, we can see that it preserves topological properties. - 《Neural Networks, Manifolds, and Topology》
The blog author proves that a layer with a tanh activation (likewise sigmoid and softplus, but not ReLU) is a homeomorphism, provided the weight matrix W is invertible.
But I have not yet fully understood the two concentric-circles example later in the blog.
toy 2d classification with 2-layer neural network
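For the concentric-circles setup, here is my own toy reconstruction (not the blog's code): two classes that no straight line can separate in 2-D become linearly separable once a representation captures the radius, which is the kind of untangling a hidden layer has to learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric circles: class 0 inside (radius 1), class 1 outside (radius 3).
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
r = np.where(np.arange(n) < n // 2, 1.0, 3.0)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = (r > 2.0).astype(int)

# Lifting to the feature x1^2 + x2^2 (the squared radius) makes the two
# classes separable by a simple threshold.
lifted = X[:, 0] ** 2 + X[:, 1] ** 2
pred = (lifted > 4.0).astype(int)  # threshold between 1^2 = 1 and 3^2 = 9
print((pred == y).all())  # True
```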
Why deep learning has to be "Deep"
http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
Neural Networks, Manifolds, and Topology
http://playground.tensorflow.org/
CNN
Convolution
Convolution acts as a feature extractor.
A convolutional layer usually has multiple kernels, each producing its own feature map.
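As a concrete sketch (my own toy example), a hand-made 3x3 kernel shows how one kernel responds strongly to a vertical edge while a second kernel, tuned to horizontal edges, stays silent on the same image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (the 'convolution' used in CNNs)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 toy image containing a vertical edge.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# Two hand-made kernels: vertical-edge and horizontal-edge detectors.
k_vert = np.array([[-1.0, 0.0, 1.0]] * 3)
k_horz = k_vert.T

print(conv2d(image, k_vert))  # strong response along the vertical edge
print(conv2d(image, k_horz))  # all zeros: no horizontal edge present
```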
Pooling
- Reduces the amount of data?
- Prevents overfitting?
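A minimal sketch of 2x2 max pooling (assumed values) shows the data reduction directly: each 2x2 patch collapses to its strongest activation. Discarding exact positions this way is one common argument for why pooling may also curb overfitting.

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling with stride 2: keep the strongest activation per patch."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feat = np.array([[1.0, 3.0, 0.0, 2.0],
                 [4.0, 2.0, 1.0, 0.0],
                 [0.0, 1.0, 5.0, 1.0],
                 [2.0, 0.0, 1.0, 6.0]])

pooled = max_pool2(feat)
print(pooled)                   # [[4. 2.] [2. 6.]]
print(feat.size, pooled.size)   # 16 4: a 4x reduction in data volume
```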
Fully Connected Layer
A traditional NN is in fact made up entirely of fully connected layers.
What is the essence of a CNN?
By exploiting the structure of local regions, a CNN reduces the number of trainable parameters and improves training results (I am not fully sure about this). And nothing has more tightly correlated local information than images: an image derives its meaning from the relationships between neighboring pixels.
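The parameter saving can be made concrete with a back-of-the-envelope comparison (hypothetical layer sizes chosen only for illustration):

```python
# Parameter counts for a single layer on a 32x32 grayscale image.

# Fully connected: every one of the 100 output units sees every pixel.
fc_params = 32 * 32 * 100 + 100   # 1024 inputs -> 100 units, plus 100 biases
# Convolutional: 100 feature maps, each produced by one shared 3x3 kernel.
conv_params = 3 * 3 * 100 + 100   # 100 kernels, plus 100 biases

print(fc_params)    # 102500
print(conv_params)  # 1000
```

The convolutional layer uses two orders of magnitude fewer parameters, and the gap only widens with larger images.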
Advantages of CNN
- Parameter sharing
- Less training data needed
- Less overfitting
- Translation invariance
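Parameter sharing and translation invariance can be seen together in a tiny 1-D sketch (my own illustration): sliding one shared kernel over a shifted input shifts the response by the same amount, and a global max pool on top removes the dependence on position entirely.

```python
import numpy as np

def conv_valid(x, k):
    """Minimal valid cross-correlation for a 1-D signal (illustration only)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

kernel = np.array([1.0, -1.0])  # one shared edge-detecting kernel (made-up)
signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0])
shifted = np.roll(signal, 2)    # the same pattern, translated by 2

r1 = conv_valid(signal, kernel)
r2 = conv_valid(shifted, kernel)

# Equivariance: the response moves with the input pattern...
print(r1)
print(r2)
# ...and a global max pool makes the detection translation-invariant.
print(r1.max() == r2.max())  # True
```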