1. What is KL divergence?
KL divergence (Kullback-Leibler divergence) is a distance-like metric that measures the difference between two probability distributions.
$y_{pred}$ is the vector of predicted probabilities output by the model, e.g. [0.35, 0.25, 0.4];
$y_{true}$ is the one-hot ground-truth distribution, e.g. [0, 1, 0].
The training objective of a neural network is to make the predicted probabilities as close as possible to the true distribution.
The KL divergence loss is computed as: $KL(y_{pred}, y_{true}) = \sum y_{true}\log\left(\frac{y_{true}}{y_{pred}}\right)$, where the sum runs element-wise over the classes.
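To make the formula concrete, here is a minimal NumPy sketch that evaluates the sum directly for the illustrative $y_{pred}$ and $y_{true}$ above (not outputs from any real model); terms where $y_{true}$ is 0 are skipped, since 0·log 0 is taken as 0:

import numpy as np

y_pred = np.array([0.35, 0.25, 0.4])  # predicted probabilities from the model
y_true = np.array([0.0, 1.0, 0.0])    # one-hot ground truth

mask = y_true > 0                     # drop the 0 * log(0) terms
kl = np.sum(y_true[mask] * np.log(y_true[mask] / y_pred[mask]))
print(kl)  # log(1 / 0.25) ≈ 1.3863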
2. What are logits?
Logits are the raw, unnormalized scores produced by the last layer of a neural network: if that output has not been passed through an activation function such as softmax, it is called the logits.
Passing logits through the softmax activation yields probabilities. For example, logits = [4, 3.9, 1] becomes probability = [0.5116072, 0.46292138, 0.02547142] after softmax:
$p_{i}=\frac{e^{z_{i}}}{\sum_{j} e^{z_{j}}}$, where $z$ denotes the logits.
For example, each element of the probability vector above is computed as:
$\frac{e^{4}}{e^{4}+e^{3.9}+e^{1}} = 0.5116072,\quad \frac{e^{3.9}}{e^{4}+e^{3.9}+e^{1}} = 0.46292138,\quad \frac{e^{1}}{e^{4}+e^{3.9}+e^{1}} = 0.0254714$
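This arithmetic can be checked with a few lines of NumPy; the sketch below simply reproduces the three fractions above:

import numpy as np

logits = np.array([4, 3.9, 1])
probability = np.exp(logits) / np.exp(logits).sum()
print(probability)  # [0.5116072  0.46292138 0.02547142]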
3. Computing KL divergence loss with PyTorch
The official PyTorch documentation for KL divergence loss is [here](https://pytorch.org/docs/master/generated/torch.nn.KLDivLoss.html#torch.nn.KLDivLoss).
import torch
import torch.nn as nn
from torch.nn import functional as F
import numpy as np

# Student logits and their log-probabilities (KLDivLoss expects the input in log space)
p_logits = torch.tensor([4, 3.9, 1], dtype=torch.float32)
p = F.log_softmax(p_logits, dim=-1)

# Teacher/target logits and their probabilities
q_logits = torch.tensor([5, 4, 0.1], dtype=torch.float32)
q = F.softmax(q_logits, dim=-1)
# Softened target probabilities with temperature T = 5
q_soft = F.softmax(q_logits / 5, dim=-1)

# KL(q || p): input is log-probabilities, target is probabilities
loss_1 = nn.KLDivLoss(reduction='sum')(F.log_softmax(p_logits, dim=0), F.softmax(q_logits, dim=0))
loss_2 = nn.KLDivLoss(reduction='sum')(p, q)
# Same loss, but against the temperature-softened target
soft_loss1 = nn.KLDivLoss(reduction='sum')(F.log_softmax(p_logits, dim=0), F.softmax(q_logits / 5, dim=0))
soft_loss2 = nn.KLDivLoss(reduction='sum')(p, q_soft)

print(f'student predict: {np.exp(p.numpy())}')
print(f'q target: {q.numpy()}')
print(f'q soft probability: {q_soft.numpy()}')
print(f'loss_1: {loss_1}')
print(f'loss_2: {loss_2}')
print(f'soft loss1: {soft_loss1}')
print(f'soft loss2: {soft_loss2}')
Printing the outputs gives:
student predict: [0.5116072 0.46292138 0.02547142]
q target: [0.7271004 0.2674853 0.00541441]
q soft probability: [0.4557798 0.37316093 0.1710592 ]
loss_1: 0.10048329085111618
loss_2: 0.10048329085111618
soft loss1: 0.1926761120557785
soft loss2: 0.1926761120557785
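As a quick sanity check (a small sketch using the probabilities printed above), applying the KL formula from section 1 directly to the student and target distributions reproduces loss_1 and loss_2:

import numpy as np

p = np.array([0.5116072, 0.46292138, 0.02547142])  # student predict
q = np.array([0.7271004, 0.2674853, 0.00541441])   # q target
print(np.sum(q * np.log(q / p)))  # ≈ 0.10048, matching loss_1 / loss_2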
4. Computing KL divergence loss with TensorFlow
import tensorflow as tf

# Same logits as in the PyTorch example above
p_logits = [4, 3.9, 1]
q_logits = [5, 4, 0.1]

# KLDivergence()(y_true, y_pred) computes sum(y_true * log(y_true / y_pred))
tf_loss1 = tf.keras.losses.KLDivergence()(tf.nn.softmax(q_logits), tf.nn.softmax(p_logits))
print(f'tensorflow loss: {tf_loss1.numpy()}')
tensorflow loss: 0.1004832461476326
As you can see, the results computed with PyTorch and TensorFlow are essentially identical.
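For completeness, the temperature-softened target from the PyTorch example can be reproduced in TensorFlow in the same way. This is only a sketch, assuming the same logits and temperature T = 5 as above:

import tensorflow as tf
import numpy as np

p_logits = np.array([4, 3.9, 1])
q_logits = np.array([5, 4, 0.1])

# Softened target: softmax of the teacher logits divided by the temperature
tf_soft_loss = tf.keras.losses.KLDivergence()(tf.nn.softmax(q_logits / 5), tf.nn.softmax(p_logits))
print(f'tensorflow soft loss: {tf_soft_loss.numpy()}')  # ≈ 0.1927, matching soft_loss1 / soft_loss2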