Deep Learning
Cries the moment a bug shows up
Articles in this column
MLE & MAP
Contents: Intro · The debate between the two schools · Prior, posterior, and likelihood. Intro: while studying MLE I came across this sentence: "A nice feature of this view is that we can now also interpret the regularization term R(W) in the full loss function as coming from a Gaussian prior over t..." Original · 2020-03-05 18:19:54 · 294 views · 0 comments
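The quoted sentence is the MAP-vs-MLE connection the post discusses: a Gaussian prior over the weights turns into an L2 regularization term in the loss. A minimal sketch (not from the article; `lam` is an assumed hyperparameter value) for linear regression, where the MAP estimate has a closed form:

```python
import numpy as np

# For linear regression with Gaussian noise and a Gaussian prior w ~ N(0, 1/lam),
# the MAP estimate equals the ridge (L2-regularized) solution:
#     w_map = (X^T X + lam * I)^{-1} X^T y
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

lam = 0.5  # prior precision = regularization strength (assumed value)

# Closed-form MAP / ridge estimate
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

def neg_log_posterior(w):
    # Negative log posterior = data loss + L2 penalty coming from the prior
    return 0.5 * np.sum((X @ w - y) ** 2) + 0.5 * lam * np.sum(w ** 2)
```

The L2 penalty is exactly the `R(W)` term in the quote: it is not an ad-hoc addition but the negative log of the Gaussian prior.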
Softmax classifier & cross-entropy loss
Contents: Intro · Formulation · Probabilistic interpretation · MLE · Information theory view · Kullback-Leibler divergence · Practical issues: numeric stability · Possibly confusing naming conventions. Intro: the Softmax classifier is... Original · 2020-03-04 18:15:27 · 309 views · 0 comments
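The "numeric stability" item in the outline refers to a standard trick: subtract the max logit before exponentiating, since softmax is invariant to shifting all logits by a constant. A short sketch (not the post's code):

```python
import numpy as np

def softmax(logits):
    # softmax(x) == softmax(x - c) for any constant c; subtracting the max
    # keeps every exponent <= 0, so np.exp never overflows.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

def cross_entropy(logits, label):
    # -log p(label), with log-sum-exp computed on the shifted logits
    shifted = logits - np.max(logits)
    return np.log(np.sum(np.exp(shifted))) - shifted[label]

logits = np.array([1000.0, 1001.0, 1002.0])  # naive exp(1000.0) would overflow
p = softmax(logits)
```

Without the shift, `np.exp(1000.0)` returns `inf` and the probabilities become NaN; with it, the result is finite and sums to 1.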
Regularization
Intro: Regularization is an additional term in the loss function (or optimization objective) that prevents the model from fitting the training data too well. Formulation: indeed, regularization prefers a... Original · 2020-03-03 19:55:00 · 249 views · 0 comments
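The "additional term" the preview describes can be made concrete with the common L2 penalty; a minimal sketch, assuming a hyperparameter named `reg` for the regularization strength (not from the article):

```python
import numpy as np

def l2_regularized_loss(data_loss, W, reg=1e-3):
    # Full loss = data loss + reg * R(W), with R(W) = sum of squared weights.
    # The penalty grows with the weights, so the optimizer trades training
    # fit against keeping W small.
    return data_loss + reg * np.sum(W * W)

W = np.array([[0.5, -1.0],
              [2.0,  0.0]])
loss = l2_regularized_loss(1.25, W, reg=0.1)  # 1.25 + 0.1 * 5.25 = 1.775
```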