The Cross-Entropy Cost Function
Improving the way neural networks learn: the cross-entropy cost function
Notes on Neural Networks and Deep Learning (Michael Nielsen), Part 3
For the quadratic cost function:

C=\frac{(y-a)^{2}}{2} \qquad(1)
$a$ is the neuron's output. For the training input $x=1$ with desired output $y=0$, we have $a=\sigma(z)$, where $z=wx+b$. Using the chain rule, the partial derivatives of the cost with respect to the weight and the bias are:
\begin{aligned} \frac{\partial C}{\partial w} &=(a-y) \sigma^{\prime}(z) x=a \sigma^{\prime}(z)\qquad(2) \\ \frac{\partial C}{\partial b} &=(a-y) \sigma^{\prime}(z)=a \sigma^{\prime}(z) \qquad(3)\end{aligned}

From the graph of the sigmoid function, when the neuron's output is close to $1$ the curve becomes very flat, so $\sigma^{\prime}(z)$ is very small. Equations $(2)$ and $(3)$ then tell us that $\frac{\partial C}{\partial w}$ and $\frac{\partial C}{\partial b}$ are also very small. This is the real origin of the learning slowdown.
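To make the slowdown concrete, here is a small numerical sketch (my own illustration rather than the book's, assuming only numpy): for the single neuron with $x=1$, $y=0$, the gradients in $(2)$ and $(3)$ shrink toward zero as the output saturates, even though the output is badly wrong.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

x, y = 1.0, 0.0                              # training input and desired output from the text
for w, b in [(0.6, 0.9), (2.0, 2.0), (5.0, 5.0)]:
    z = w * x + b
    a = sigmoid(z)
    dC_dw = (a - y) * sigmoid_prime(z) * x   # equation (2)
    dC_db = (a - y) * sigmoid_prime(z)       # equation (3)
    print(f"a = {a:.4f}  dC/dw = {dC_dw:.6f}  dC/db = {dC_db:.6f}")
# The closer a gets to 1, the smaller sigma'(z) is, so both gradients shrink
# even though the output is completely wrong -- this is the learning slowdown.
```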
Introducing the cross-entropy cost function

The neuron's output is $a=\sigma(z)$, where $z=\sum_{j} w_{j} x_{j}+b$.
Define the cross-entropy cost function for this neuron:

C=-\frac{1}{n} \sum_{x}[y \ln a+(1-y) \ln (1-a)] \qquad(4)

where $n$ is the total number of training examples, the sum runs over all training inputs $x$, and $y$ is the corresponding desired output.
There are two reasons the cross-entropy can be regarded as a cost function: first, it is non-negative; second, if the neuron's actual output is close to the desired output for every training input $x$, the cross-entropy is close to $0$.
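As a quick numerical check of those two properties (a sketch assuming numpy; `cross_entropy` below is just the bracketed term of $(4)$ for a single example, with the sign flipped):

```python
import numpy as np

def cross_entropy(a, y):
    # per-example cross-entropy: -[y ln a + (1 - y) ln(1 - a)]
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

print(cross_entropy(0.999, 1.0))   # output close to target -> cost close to 0
print(cross_entropy(0.5,   1.0))   # ~0.69, always non-negative
print(cross_entropy(0.01,  1.0))   # badly wrong output -> large cost
```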
The cross-entropy cost also has a property that makes it better than the quadratic cost: it avoids the learning-slowdown problem.
To see why, substitute $a=\sigma(z)$ into $(4)$ and apply the chain rule twice, which gives

\begin{aligned} \frac{\partial C}{\partial w_{j}} &=-\frac{1}{n} \sum_{x}\left(\frac{y}{\sigma(z)}-\frac{(1-y)}{1-\sigma(z)}\right) \frac{\partial \sigma}{\partial w_{j}} \\ &=-\frac{1}{n} \sum_{x}\left(\frac{y}{\sigma(z)}-\frac{(1-y)}{1-\sigma(z)}\right) \sigma^{\prime}(z) x_{j} \end{aligned}

Note (the same computation written out step by step):
\begin{aligned} \frac{\partial C}{\partial w_{j}} &=-\frac{1}{n} \sum \frac{\partial}{\partial w_{j}}[y \ln a+(1-y) \ln (1-a)] \\ &=-\frac{1}{n} \sum \frac{\partial}{\partial a}[y \ln a+(1-y) \ln (1-a)] \cdot \frac{\partial a}{\partial w_{j}} \\ &=-\frac{1}{n} \sum\left(\frac{y}{a}-\frac{1-y}{1-a}\right)\cdot \frac{\partial a}{\partial w_{j}} \\ &=-\frac{1}{n} \sum\left(\frac{y}{\sigma(z)}-\frac{1-y}{1-\sigma(z)}\right) \frac{\partial \sigma(z)}{\partial w_{j}} \\ &=-\frac{1}{n} \sum\left(\frac{y}{\sigma(z)}-\frac{1-y}{1-\sigma(z)}\right) \sigma^{\prime}(z) x_{j} \end{aligned}

Putting everything over a common denominator and simplifying, this becomes:
\frac{\partial C}{\partial w_{j}}=\frac{1}{n} \sum_{x} \frac{\sigma^{\prime}(z) x_{j}}{\sigma(z)(1-\sigma(z))}(\sigma(z)-y)

Since $\sigma(z)=\frac{1}{1+e^{-z}}$ gives $\sigma^{\prime}(z)=\sigma(z)(1-\sigma(z))$, the final form is:
\frac{\partial C}{\partial w_{j}}=\frac{1}{n} \sum_{x} x_{j}(\sigma(z)-y) \qquad(5)

Similarly, for the bias we obtain:
\frac{\partial C}{\partial b}=\frac{1}{n} \sum_{x}(\sigma(z)-y) \qquad(6)

Equations $(5)$ and $(6)$ show that the rate at which the weights and bias learn is controlled by $\sigma(z)-y$, that is, by the error in the output: the larger the error, the faster the learning. The $\sigma^{\prime}(z)$ factor has cancelled out, so the cross-entropy avoids the slowdown that $\sigma^{\prime}(z)$ causes with the quadratic cost.
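The contrast between $(2)$–$(3)$ and $(5)$–$(6)$ is easy to see numerically. A short sketch (assumptions: numpy, one sigmoid neuron, a single training example with $x=1$, $y=0$, and a deliberately saturated starting point):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y = 1.0, 0.0
w, b = 5.0, 5.0                  # a badly initialised, saturated neuron
z = w * x + b
a = sigmoid(z)

# Quadratic cost gradient, equation (2): contains sigma'(z) = a(1-a)
grad_quadratic = (a - y) * a * (1.0 - a) * x
# Cross-entropy gradient, equation (5) for a single example: sigma'(z) cancels
grad_cross_entropy = (a - y) * x

print(f"a = {a:.4f}")
print(f"quadratic     dC/dw = {grad_quadratic:.6f}")     # tiny despite the large error
print(f"cross-entropy dC/dw = {grad_cross_entropy:.6f}") # stays proportional to the error
```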
Generalizing to multi-layer networks with many neurons: suppose $y=y_{1}, y_{2}, \ldots$ are the desired values at the output neurons and $a_{1}^{L}, a_{2}^{L}, \ldots$ are the actual output values. We then define the cross-entropy as

C=-\frac{1}{n} \sum_{x} \sum_{j}\left[y_{j} \ln a_{j}^{L}+\left(1-y_{j}\right) \ln \left(1-a_{j}^{L}\right)\right]

where the inner sum $\sum_{j}$ runs over all the output neurons.
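A minimal numpy sketch of this many-output cost over a small batch (the array shapes and variable names are illustrative assumptions, not from the text):

```python
import numpy as np

def cross_entropy_cost(A, Y):
    """C = -(1/n) * sum over examples and output neurons of
    [y_j ln a_j^L + (1 - y_j) ln(1 - a_j^L)].
    A, Y have shape (n_examples, n_output_neurons)."""
    n = A.shape[0]
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / n

A = np.array([[0.9, 0.1, 0.2],     # actual sigmoid outputs a^L
              [0.2, 0.8, 0.7]])
Y = np.array([[1.0, 0.0, 0.0],     # desired outputs y
              [0.0, 1.0, 1.0]])
print(cross_entropy_cost(A, Y))
```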
For a multi-layer network, the partial derivative of the quadratic cost with respect to an output-layer weight is

\frac{\partial C}{\partial w_{j k}^{L}}=\frac{1}{n} \sum_{x} a_{k}^{L-1}\left(a_{j}^{L}-y_{j}\right) \sigma^{\prime}\left(z_{j}^{L}\right)

The $\sigma^{\prime}\left(z_{j}^{L}\right)$ term causes learning to slow down whenever an output neuron is stuck near a wrong value.
The partial derivative of the cross-entropy cost with respect to an output-layer weight is

\frac{\partial C}{\partial w_{j k}^{L}}=\frac{1}{n} \sum_{x} a_{k}^{L-1}\left(a_{j}^{L}-y_{j}\right)

Here the $\sigma^{\prime}\left(z_{j}^{L}\right)$ term has vanished, so the cross-entropy avoids the learning-slowdown problem.
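In code, this output-layer gradient is an averaged outer product of the previous layer's activations with the output errors. A small sketch under assumed names and shapes:

```python
import numpy as np

def output_weight_grad(A_prev, A_out, Y):
    """dC/dW^L = (1/n) * sum_x of a^{L-1} combined with (a^L - y),
    arranged so that grad[j, k] corresponds to dC/dw_{jk}^L.
    A_prev: (n, k) activations a^{L-1}; A_out, Y: (n, j) outputs and targets."""
    n = A_prev.shape[0]
    return (A_out - Y).T @ A_prev / n     # shape (j, k)

A_prev = np.array([[0.3, 0.6], [0.1, 0.9]])   # a^{L-1}
A_out  = np.array([[0.8, 0.2], [0.4, 0.7]])   # a^L
Y      = np.array([[1.0, 0.0], [0.0, 1.0]])   # targets
print(output_weight_grad(A_prev, A_out, Y))
```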
Using the quadratic cost when the output layer consists of linear neurons
When the output-layer neurons are all linear, the output is no longer passed through the sigmoid; instead $a_{j}^{L}=z_{j}^{L}$, and the output error for a single sample is $\delta^{L}=a^{L}-y$.
The partial derivatives with respect to the output-layer weights and biases are then:

\frac{\partial C}{\partial w_{j k}^{L}}=\frac{1}{n} \sum_{x} a_{k}^{L-1}\left(a_{j}^{L}-y_{j}\right)

\frac{\partial C}{\partial b_{j}^{L}}=\frac{1}{n} \sum_{x}\left(a_{j}^{L}-y_{j}\right)

These expressions show that if the output neurons are linear, the quadratic cost does not cause a learning slowdown; in this setting the quadratic cost is an appropriate choice.
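Here is a small sketch of that setting (assumed names; a single linear output neuron doing regression with the quadratic cost), showing the update driven directly by the error $a^{L}-y$, with no slowdown:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # activations feeding the output layer (a^{L-1})
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
Y = X @ true_w + true_b                   # regression targets

w, b, eta = np.zeros(3), 0.0, 0.1
for _ in range(200):
    A = X @ w + b                         # linear output neuron: a^L = z^L
    delta = A - Y                         # output error delta^L = a^L - y
    w -= eta * X.T @ delta / len(X)       # dC/dw^L = (1/n) sum a^{L-1} (a^L - y)
    b -= eta * delta.mean()               # dC/db^L = (1/n) sum (a^L - y)
print(w, b)                               # ends up close to true_w, true_b
```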
Where does the cross-entropy come from?
We found that the main cause of the learning slowdown is the $\sigma^{\prime}(z)$ factor, so it is natural to look for a cost function whose derivatives do not contain $\sigma^{\prime}(z)$. For a single training example we would like the cost to satisfy:

\frac{\partial C}{\partial w_{j}}=x_{j}(a-y)

\frac{\partial C}{\partial b}=(a-y)

With such a cost, the larger the initial error, the faster the neuron learns.
By the chain rule we have:

\frac{\partial C}{\partial b}=\frac{\partial C}{\partial a} \sigma^{\prime}(z)
Using $\sigma^{\prime}(z)=\sigma(z)(1-\sigma(z))=a(1-a)$, this becomes:

\frac{\partial C}{\partial b}=\frac{\partial C}{\partial a} a(1-a)
Comparing this with $\frac{\partial C}{\partial b}=(a-y)$ gives

\frac{\partial C}{\partial a}=\frac{a-y}{a(1-a)}
Integrating this equation with respect to $a$ yields

C=-[y \ln a+(1-y) \ln (1-a)]+\text{constant}
where constant is the constant of integration. Averaging over all training examples gives:

C=-\frac{1}{n} \sum_{x}[y \ln a+(1-y) \ln (1-a)]+\text{constant}

From an information-theoretic viewpoint, the cross-entropy measures our average uncertainty about the true values, given what the network has learned.
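A minimal symbolic check of the integration step, assuming sympy is available: differentiating $-[y \ln a+(1-y) \ln (1-a)]$ with respect to $a$ should give back $\frac{a-y}{a(1-a)}$.

```python
import sympy as sp

a, y = sp.symbols('a y')
C = -(y * sp.log(a) + (1 - y) * sp.log(1 - a))   # candidate cost, constant dropped
dC_da = sp.diff(C, a)
target = (a - y) / (a * (1 - a))
print(sp.simplify(dC_da - target))                # expect 0
```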
The origin of the cross-entropy loss function and the derivation of its gradient with respect to the parameters
The origin of the cross-entropy loss function
Suppose there are $m$ samples, and let $\left(x^{(i)}, y^{(i)}\right)$ denote the $i$-th sample together with its class label, where $x^{(i)}=\left(1, x_{1}^{(i)}, x_{2}^{(i)}, \ldots, x_{p}^{(i)}\right)^{T}$ is a $(p+1)$-dimensional vector (the leading $1$ accounts for the bias term) and $y^{(i)}$ is a number encoding the class:
in logistic regression (a yes/no problem), $y^{(i)}$ is $0$ or $1$;
in softmax regression (a multi-class problem), $y^{(i)}$ is one of $1,2,\ldots,k$, the class label (assuming $k$ classes in total).
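In code this convention just means prepending a column of ones to the raw feature matrix; a small sketch with made-up array names:

```python
import numpy as np

X_raw = np.array([[0.5, 1.2],      # m = 3 samples, p = 2 raw features
                  [2.0, 0.3],
                  [1.1, 1.1]])
m = X_raw.shape[0]
X = np.hstack([np.ones((m, 1)), X_raw])   # each row is x^{(i)} = (1, x_1, ..., x_p)^T
y = np.array([0, 1, 1])                   # binary labels for logistic regression
print(X.shape)                            # (3, p + 1)
```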
In logistic regression, for an input sample $x^{(i)}=\left(1, x_{1}^{(i)}, x_{2}^{(i)}, \ldots, x_{p}^{(i)}\right)^{T}$ the model parameters are $\theta=\left(\theta_{0}, \theta_{1}, \theta_{2}, \ldots, \theta_{p}\right)^{T}$, so that

\theta^{T} x^{(i)} :=\theta_{0}+\theta_{1} x_{1}^{(i)}+\cdots+\theta_{p} x_{p}^{(i)}

The hypothesis function is defined as:
h_{\theta}\left(x^{(i)}\right)=\frac{1}{1+e^{-\theta^{T} x^{(i)}}}

Since logistic regression is a $0$-$1$ binary classification problem, we have
P\left(\hat{y}^{(i)}=1 | x^{(i)} ; \theta\right)=h_{\theta}\left(x^{(i)}\right)

P\left(\hat{y}^{(i)}=0 | x^{(i)} ; \theta\right)=1-h_{\theta}\left(x^{(i)}\right)

Taking the logarithm of these expressions:
\log P\left(\hat{y}^{(i)}=1 | x^{(i)} ; \theta\right)=\log h_{\theta}\left(x^{(i)}\right)=\log \frac{1}{1+e^{-\theta^{T} x^{(i)}}}

\log P\left(\hat{y}^{(i)}=0 | x^{(i)} ; \theta\right)=\log \left(1-h_{\theta}\left(x^{(i)}\right)\right)=\log \frac{e^{-\theta^{T} x^{(i)}}}{1+e^{-\theta^{T} x^{(i)}}}

For the $i$-th sample, the combined log-probability that the hypothesis classifies it correctly is:
\begin{aligned} & I\left\{y^{(i)}=1\right\} \log P\left(\hat{y}^{(i)}=1 | x^{(i)} ; \theta\right)+I\left\{y^{(i)}=0\right\} \log P\left(\hat{y}^{(i)}=0 | x^{(i)} ; \theta\right) \\ =\; & y^{(i)} \log P\left(\hat{y}^{(i)}=1 | x^{(i)} ; \theta\right)+\left(1-y^{(i)}\right) \log P\left(\hat{y}^{(i)}=0 | x^{(i)} ; \theta\right) \\ =\; & y^{(i)} \log \left(h_{\theta}\left(x^{(i)}\right)\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right) \end{aligned}

where $I\left\{y^{(i)}=1\right\}$ and $I\left\{y^{(i)}=0\right\}$ are indicator functions. Summing over all $m$ samples, the overall fit of the model to the training data is:
\sum_{i=1}^{m} y^{(i)} \log \left(h_{\theta}\left(x^{(i)}\right)\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right)

Since this expression is a (log-)probability of classifying the data correctly, the larger it is, the better the model expresses the data. But when updating parameters or comparing models we need a loss function (or cost function) that reflects the model's error, and we want that loss to be as small as possible. Reconciling these two requirements, we simply take the cost function to be the negative of the average of the combined log-probability above:
J(\theta)=-\frac{1}{m} \sum_{i=1}^{m} y^{(i)} \log \left(h_{\theta}\left(x^{(i)}\right)\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right)

This is the cross-entropy loss function.
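A minimal numpy sketch of the hypothesis $h_\theta$ and of this loss $J(\theta)$ (variable names are illustrative and follow the $(1, x_1, \ldots, x_p)$ row convention above):

```python
import numpy as np

def h_theta(theta, X):
    # hypothesis: 1 / (1 + exp(-theta^T x)) applied to every row of X
    return 1.0 / (1.0 + np.exp(-X @ theta))

def cross_entropy_loss(theta, X, y):
    # J(theta) = -(1/m) * sum_i [ y_i log h_i + (1 - y_i) log(1 - h_i) ]
    h = h_theta(theta, X)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

X = np.array([[1.0, 0.5, 1.2],
              [1.0, 2.0, 0.3],
              [1.0, 1.1, 1.1]])          # each row is (1, x_1, ..., x_p)
y = np.array([0.0, 1.0, 1.0])
theta = np.zeros(X.shape[1])
print(cross_entropy_loss(theta, X, y))   # log 2 ~= 0.693 when theta = 0
```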
Deriving the gradient of the cross-entropy loss function:
We know that

J(\theta)=-\frac{1}{m} \sum_{i=1}^{m} y^{(i)} \log \left(h_{\theta}\left(x^{(i)}\right)\right)+\left(1-y^{(i)}\right) \log \left(1-h_{\theta}\left(x^{(i)}\right)\right)

From the formulas above:
\log h_{\theta}\left(x^{(i)}\right)=\log \frac{1}{1+e^{-\theta^{T} x^{(i)}}}=-\log \left(1+e^{-\theta^{T} x^{(i)}}\right)

\begin{aligned} \log \left(1-h_{\theta}\left(x^{(i)}\right)\right) &=\log \left(1-\frac{1}{1+e^{-\theta^{T} x^{(i)}}}\right) =\log \left(\frac{e^{-\theta^{T} x^{(i)}}}{1+e^{-\theta^{T} x^{(i)}}}\right) \\ &=\log \left(e^{-\theta^{T} x^{(i)}}\right)-\log \left(1+e^{-\theta^{T} x^{(i)}}\right)=-\theta^{T} x^{(i)}-\log \left(1+e^{-\theta^{T} x^{(i)}}\right) \end{aligned}
Substituting these into $J(\theta)$:

J(\theta)=-\frac{1}{m} \sum_{i=1}^{m}\left[-y^{(i)} \log \left(1+e^{-\theta^{T} x^{(i)}}\right)+\left(1-y^{(i)}\right)\left(-\theta^{T} x^{(i)}-\log \left(1+e^{-\theta^{T} x^{(i)}}\right)\right)\right]

\begin{aligned} &=-\frac{1}{m} \sum_{i=1}^{m}\left[y^{(i)} \theta^{T} x^{(i)}-\theta^{T} x^{(i)}-\log \left(1+e^{-\theta^{T} x^{(i)}}\right)\right] \\ &=-\frac{1}{m} \sum_{i=1}^{m}\left[y^{(i)} \theta^{T} x^{(i)}-\log e^{\theta^{T} x^{(i)}}-\log \left(1+e^{-\theta^{T} x^{(i)}}\right)\right]\\ &=-\frac{1}{m} \sum_{i=1}^{m}\left[y^{(i)} \theta^{T} x^{(i)}-\left(\log e^{\theta^{T} x^{(i)}}+\log \left(1+e^{-\theta^{T} x^{(i)}}\right)\right)\right] \\ &=-\frac{1}{m} \sum_{i=1}^{m}\left[y^{(i)} \theta^{T} x^{(i)}-\log \left(1+e^{\theta^{T} x^{(i)}}\right)\right] \end{aligned}

Now compute the partial derivative of $J(\theta)$ with respect to the $j$-th parameter component $\theta_j$:
\frac{\partial}{\partial \theta_{j}} J(\theta)=\frac{\partial}{\partial \theta_{j}}\left(\frac{1}{m} \sum_{i=1}^{m}\left[\log \left(1+e^{\theta^{T} x^{(i)}}\right)-y^{(i)} \theta^{T} x^{(i)}\right]\right)

\begin{aligned} &=\frac{1}{m} \sum_{i=1}^{m}\left[\frac{\partial}{\partial \theta_{j}} \log \left(1+e^{\theta^{T} x^{(i)}}\right)-\frac{\partial}{\partial \theta_{j}}\left(y^{(i)} \theta^{T} x^{(i)}\right)\right] \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(\frac{x_{j}^{(i)} e^{\theta^{T} x^{(i)}}}{1+e^{\theta^{T} x^{(i)}}}-y^{(i)} x_{j}^{(i)}\right) \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right) x_{j}^{(i)} \end{aligned}

This is the derivative of the cross-entropy with respect to the parameters:
\frac{\partial}{\partial \theta_{j}} J(\theta)=\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right) x_{j}^{(i)}
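A short sketch of batch gradient descent using exactly this gradient (it reuses the `X`, `y`, and `h_theta` conventions from the earlier snippets; the data and learning rate are made up for illustration):

```python
import numpy as np

def h_theta(theta, X):
    return 1.0 / (1.0 + np.exp(-X @ theta))

def grad_J(theta, X, y):
    # dJ/dtheta_j = (1/m) * sum_i (h_theta(x^{(i)}) - y^{(i)}) x_j^{(i)}
    m = len(y)
    return X.T @ (h_theta(theta, X) - y) / m

X = np.array([[1.0, 0.5, 1.2],
              [1.0, 2.0, 0.3],
              [1.0, 1.1, 1.1]])
y = np.array([0.0, 1.0, 1.0])
theta, eta = np.zeros(X.shape[1]), 0.5

for _ in range(1000):
    theta -= eta * grad_J(theta, X, y)   # plain batch gradient descent
print(theta, h_theta(theta, X))          # predictions move toward y
```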
The origin of the cross-entropy loss function and the derivation of its gradient above are adapted from: http://blog.youkuaiyun.com/jasonzzj/article/details/52017438