<div id="article_content" class="article_content clearfix csdn-tracking-statistics" data-pid="blog" data-mod="popu_307" data-dsm="post" style="height: 1820px; overflow: hidden;">
<link rel="stylesheet" href="https://csdnimg.cn/release/phoenix/template/css/htmledit_views-0a60691e80.css">
<div class="htmledit_views">
<p><span style="font-size:14px;">我学习使用的是带中文翻译字幕的网易课程,公开课地址:<a href="http://study.163.com/course/courseLearn.htm?courseId=1003223001#/learn/video?lessonId=1003734105&courseId=1003223001" target="_blank">http://study.163.com/course/courseLearn.htm?courseId=1003223001#/learn/video?lessonId=1003734105&courseId=1003223001</a></span></p>
<p><span style="font-size:14px;">该节课中提到了一种叫作softmax的函数,因为之前对这个概念不了解,所以本篇就这个函数进行整理,如下:</span></p>
<p><span style="font-size:14px;">维基给出的解释:softmax函数,也称指数归一化函数,它是一种<span style="color:#cc0000;">logistic函数</span>的归一化形式,可以将K维实数向量压缩成范围[0-1]的新的K维实数向量。函数形式为:</span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127213109950?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""> (1)<br></span></p>
<p><span style="font-size:14px;">其中,分母部分起到归一化的作用。至于取指数的原因,第一是要模拟max的行为,即使得大的数值更大;第二是方便求导运算。</span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127214016170?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;"><img src="https://pic4.zhimg.com/50/v2-11758fbc2fc5bbbc60106926625b3a4f_hd.jpg" alt=""><br></span></p>
<p><span style="font-size:14px;">在概率论中,softmax函数输出可以代表一个类别分布--有k个可能结果的概率分布。<br></span></p>
<p><span style="font-size:14px;">从定义中也可以看出,softmax函数与logistic函数有着紧密的的联系,对于logistic函数,定义如下:</span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127214513836?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127214523348?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;">最显著的区别:<span style="color:#ff0000;">logistic 回归是针对二分类问题,softmax则是针对多分类问题,logistic可看成softmax的特例。</span><br></span></p>
<p><span style="font-size:14px;">二分类器(two-class classifier)要最大化数据集的似然值等价于将每个数据点的线性回归输出推向正无穷(类1)和负无穷(类2)。逻辑回归的损失方程(Loss Function):<br></span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127215037856?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;"><span style="font-family:sans-serif;">对于给定的测试输入 </span><img class="tex" alt="\textstyle x" src="http://ufldl.stanford.edu/wiki/images/math/f/6/c/f6c0f8758a1eb9c99c0bbe309ff2c5a5.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;">,假如想用假设函数针对每一个类别j估算出概率值 </span><img class="tex" alt="\textstyle p(y=j | x)" src="http://ufldl.stanford.edu/wiki/images/math/c/1/d/c1d5aaee0724f2183116cb8860f1b9e4.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;">。即估计 </span><img class="tex" alt="\textstyle x" src="http://ufldl.stanford.edu/wiki/images/math/f/6/c/f6c0f8758a1eb9c99c0bbe309ff2c5a5.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;"> 的每一种分类结果出现的概率。因此,假设函数将要输出一个 </span><img class="tex" alt="\textstyle k" src="http://ufldl.stanford.edu/wiki/images/math/b/0/0/b0066e761791cae480158b649e5f5a69.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;"> 维的向量(向量元素的和为1)来表示这 </span><img class="tex" alt="\textstyle k" src="http://ufldl.stanford.edu/wiki/images/math/b/0/0/b0066e761791cae480158b649e5f5a69.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;"> 个估计的概率值。
假设函数 </span><img class="tex" alt="\textstyle h_{\theta}(x)" src="http://ufldl.stanford.edu/wiki/images/math/8/8/7/887e72d0a7b7eb5083120e23a909a554.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;"> 形式如下:</span><br></span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127215854632?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;"><span style="font-family:sans-serif;">其中 </span><img class="tex" alt="\theta_1, \theta_2, \ldots, \theta_k \in \Re^{n+1}" src="http://ufldl.stanford.edu/wiki/images/math/f/d/9/fd93be6ab8e2b869691579202d7b4417.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;"> 是模型的参数。请注意 </span><img class="tex" alt="\frac{1}{ \sum_{j=1}^{k}{e^{ \theta_j^T x^{(i)} }} }" src="http://ufldl.stanford.edu/wiki/images/math/a/a/b/aab84964dbe1a2f77c9c91327ea0d6d6.png" style="border:none;vertical-align:middle;margin:0px;font-family:sans-serif;"><span style="font-family:sans-serif;">这一项对概率分布进行归一化,使得所有概率之和为
1 。</span><br></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;">其<strong>代价函数</strong>可以写为:</span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171127220138270?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;">其中,1{真}=1,1{假}=0.</span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;color:#ff0000;"><em><strong>12.23补充:</strong></em></span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;">关于代价函数,softmax用的是cross-entropy loss,</span></span><span style="font-size:16px;font-family:'microsoft yahei';">信息论中有个重要的概念叫做交叉熵cross-entropy,
</span><span style="font-size:16px;font-family:'microsoft yahei';">公式是: </span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171223112802040?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""></span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;"><span style="font-family:'microsoft yahei';font-size:16px;">香农熵的公式:</span></span></span></p>
<p><span style="font-family:sans-serif;"><span style="font-size:14px;"><img src="http://images.cnitblog.com/blog/571227/201412/112112589313898.png" alt="这里写图片描述" title="" style="border:0px;vertical-align:middle;margin-top:15px;margin-bottom:15px;font-family:'microsoft yahei';font-size:16px;"><br style="font-family:'microsoft yahei';font-size:16px;"><span style="font-family:'microsoft yahei';font-size:16px;">交叉熵与 loss的联系,</span><span style="font-family:'microsoft yahei';font-size:16px;">设p(x)代表的是真实的概率分布</span><span class="MathJax_Preview" style="margin:0px;padding:0px;font-family:'microsoft yahei';font-size:16px;"></span><span style="font-family:'microsoft yahei';font-size:16px;">,那么可以看出上式是概率分布为<img src="https://img-blog.youkuaiyun.com/20171223113004779?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""></span><span class="MathJax_Preview" style="margin:0px;padding:0px;font-family:'microsoft yahei';font-size:16px;"></span><span style="font-family:'microsoft yahei';font-size:16px;">的相对熵公式,<img src="https://img-blog.youkuaiyun.com/20171223113004779?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="" style="font-family:'microsoft yahei';font-size:16px;"></span><span style="font-family:'microsoft yahei';font-size:16px;">是对第i个类别概率的估计。使用损失函数可以描述真实分布于估计分布的交叉熵。交叉熵可以看做熵与相对熵之和:<img src="https://img-blog.youkuaiyun.com/20171223113113934?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="">,</span><span style="font-family:'microsoft yahei';font-size:16px;">这里的相对熵也叫作kl距离,在信息论中,D(P||Q)表示当用概率分布Q来拟合真实分布P时,产生的信息损耗,其中P表示真实分布,Q表示P的拟合分布。又因为真实值的熵是不变的,交叉熵也描述预测结果与真实结果的相似性,用来做损失函数可保证预测值符合真实值。 </span><br></span></span></p>
<h1><a name="t0"></a><span style="font-family:sans-serif;"><span style="font-size:14px;">softmax的应用:</span></span></h1>
<p><span><span><span style="font-family:sans-serif;"><span style="font-size:14px;">在人工神经网络(ANN)中,Softmax常被用作输出层的激活函数。<span style="line-height:33px;">其</span><span style="margin:0px;padding:0px;font-family:'Times New Roman';line-height:33px;"><span style="margin:0px;padding:0px;font-family:Arial, 'Microsoft YaHei';">中,<span><img src="https://img-blog.youkuaiyun.com/20160402203524206?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQv/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="" style="border:none;vertical-align:middle;font-family:'Times New Roman';line-height:33px;"></span>表示第L层(通常是最后一层)第j个神经元的输入,<span><img src="https://img-blog.youkuaiyun.com/20160402203430831?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQv/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="" style="border:none;vertical-align:middle;"></span>表示第L层第j个神经元的输出,<span><img src="https://img-blog.youkuaiyun.com/20160402203643644?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQv/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="" style="border:none;vertical-align:middle;"></span>表示自然常数。<span style="line-height:33px;"><span style="font-family:Arial;">注意</span></span><span style="line-height:33px;"><span style="font-family:Arial;">看,<span><img src="https://img-blog.youkuaiyun.com/20160402203914176?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQv/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt="" style="border:none;vertical-align:middle;"></span>表示了第L层所有神经元的输入之和。</span></span></span></span><br></span></span></span></span></p>
<p><span style="font-size:14px;"><span style="font-family:sans-serif;">不仅是因为它的效果好,而且它使得ANN的输出值更易于理解,即</span><span style="font-family:sans-serif;">神经元的输出值越大,则该神经元对应的类别是真实类别的可能性更高。</span><br></span></p>
<h1><a name="t1"></a><span style="font-family:sans-serif;"><span style="font-size:14px;">12.17补充:softmax求导</span></span></h1>
<p><span style="font-size:14px;">由公式(1)可知,softmax函数仅与分类有关:</span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171217113132547?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;">其负对数似然函数为:</span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171217113325834?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""></span></p>
<p><span style="font-size:14px;">对该似然函数求导,得:<br></span></p>
<p><span style="font-size:14px;"><img src="https://img-blog.youkuaiyun.com/20171217132601530?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""></span></p>
<p><span style="font-size:14px;"><span style="color:#CC0000;"><em>注:参考博客里上面求导公式有误,已更正。</em></span><br></span></p>
<p><span style="font-size:14px;">对于①条件:先Copy一下Softmax的结果(即prob_data)到bottom_diff,再对k位置的unit减去1<br>
对于②条件:直接Copy一下Softmax的结果(即prob_data)到bottom_diff<br>
对于③条件:找到ignore位置的unit,强行置为0。<br><img src="https://img-blog.youkuaiyun.com/20171217113544387?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvcGlhb3h1ZXpob25n/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center" alt=""><br></span></p>
<p><span style="font-size:14px;">参考:</span></p>
<p><span style="font-size:14px;">https://en.wikipedia.org/wiki/Softmax_function<br></span></p>
<p><span style="font-size:14px;">https://zhuanlan.zhihu.com/p/25723112<br></span></p>
<p><span style="font-size:14px;">http://ufldl.stanford.edu/wiki/index.php/Softmax%E5%9B%9E%E5%BD%92<br></span></p>
<p><span style="font-size:14px;">https://www.cnblogs.com/maybe2030/p/5678387.html?utm_source=tuicool&utm_medium=referral<br></span></p>
<p><span style="font-size:14px;">http://blog.youkuaiyun.com/bea_tree/article/details/51489969#t10</span></p>
<p><span style="font-size:14px;">https://github.com/YuDamon/Softmax</span></p>
<p><span style="font-size:14px;">https://www.cnblogs.com/neopenx/p/5590756.html</span><br></p>
</div>
</div>
Stanford deep learning open course cs231n study notes (1): understanding and applying the softmax function