Week 9: Deep Learning Supplement: Autoencoders and Generative Models
Abstract
This week, following Professor Hung-yi Lee's course, I studied autoencoders and generative models, two closely related topics. The material covered autoencoders from both a conceptual and a mathematical perspective. Since the encoder-decoder architecture is one of the dominant structures in modern models, it is well worth studying, and working through autoencoders and generative models gave me a solid understanding of it.
1. t-Distributed Stochastic Neighbor Embedding (t-SNE)
LLE and Laplacian eigenmaps only enforce that neighboring points must stay close; they say nothing about non-neighboring points staying apart. In practice, while LLE gathers points of the same class together, it cannot prevent points from different classes from overlapping, which makes the classes hard to tell apart.
The key idea of t-SNE is to compute, in the original space, a similarity $S(x^i, x^j)$ between every pair of points $x^i$ and $x^j$, and normalize it into a conditional probability $P(x^j \mid x^i) = \frac{S(x^i, x^j)}{\sum_{k \neq i} S(x^i, x^k)}$. For the points $z^i$ and $z^j$ obtained after dimensionality reduction, a similarity $S'(z^i, z^j)$ is computed in the same way and normalized into $Q(z^j \mid z^i) = \frac{S'(z^i, z^j)}{\sum_{k \neq i} S'(z^i, z^k)}$; the embedding is then found by making the distribution $Q$ match $P$ as closely as possible.
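The two normalized similarity distributions above can be sketched in NumPy. This is a minimal illustration, not the course's code: it uses a fixed Gaussian bandwidth `sigma` in the original space (real t-SNE tunes a per-point bandwidth via perplexity) and the standard Student-t similarity $S'(z^i, z^j) = \frac{1}{1 + \lVert z^i - z^j \rVert^2}$ in the embedded space.

```python
import numpy as np

def conditional_p(X, sigma=1.0):
    # Pairwise squared Euclidean distances in the original space.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    S = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian similarity S(x^i, x^j)
    np.fill_diagonal(S, 0.0)                   # exclude k = i from the sum
    return S / S.sum(axis=1, keepdims=True)    # row i is P(. | x^i)

def conditional_q(Z):
    # Student-t similarity S'(z^i, z^j) = 1 / (1 + ||z^i - z^j||^2)
    # in the low-dimensional space, normalized the same way.
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    S = 1.0 / (1.0 + d2)
    np.fill_diagonal(S, 0.0)
    return S / S.sum(axis=1, keepdims=True)    # row i is Q(. | z^i)

# Two nearby points and one distant outlier: the nearby pair should
# assign most of its conditional probability to each other.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
Z = np.array([[0.0], [0.05], [3.0]])
P = conditional_p(X)
Q = conditional_q(Z)
```

Each row of `P` and `Q` sums to 1, so they are valid conditional distributions; t-SNE's heavy-tailed choice of $S'$ is what pushes non-neighboring points apart in the embedding.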
