CodingTips - torch.nn.functional.cross_entropy(x, y): PyTorch's built-in cross-entropy loss function
References:
https://blog.youkuaiyun.com/m0_38133212/article/details/88087206
https://www.cnblogs.com/henry-zhao/p/13087275.html
The relationship between the cross-entropy loss, softmax, and nll_loss
[Key figure from the original post, image not reproduced here: it illustrated how cross_entropy decomposes into softmax, log, and nll_loss.]
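Written out as a formula (this restates standard PyTorch semantics rather than the missing image): for one sample with logit vector $x$ and ground-truth class index $y$,

$$\mathrm{cross\_entropy}(x, y) \;=\; \mathrm{nll\_loss}\bigl(\log \mathrm{softmax}(x),\, y\bigr) \;=\; -\log\frac{e^{x_y}}{\sum_{j} e^{x_j}}$$

nll_loss itself does no math beyond picking out the entry of its input at index $y$, negating it, and averaging over the batch, which is why the composition equals cross-entropy.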
Code to try it out:
#encoding:utf-8
import numpy as np
import torch
import torch.nn.functional as F

if __name__ == '__main__':
    x = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 5]]).astype(np.float32)  # simulated network outputs (logits)
    y = np.array([1, 1, 0])  # three samples: two belong to class 1, one to class 0
    x = torch.from_numpy(x)
    y = torch.from_numpy(y)

    soft_out = F.softmax(x, dim=1)  # softmax normalizes each row; the entries can be read as probabilities summing to 1
    print("softmax_out:", soft_out)

    log_soft_out = torch.log(soft_out)  # take the log of the probabilities
    print("log_softmax_out:", log_soft_out)

    loss = F.nll_loss(log_soft_out, y)  # nll_loss on the log-probabilities
    print("final loss", loss)

    print("cross_entropy used directly:", F.cross_entropy(x, y))  # same value as the three steps above
Supplementary material: improved variants of the cross-entropy loss
Complement Objective Training: Chen, H. Y., Wang, P. H., Liu, C. H., Chang, S. C., Pan, J. Y., Chen, Y. T., ... & Juan, D. C. (2019). Complement objective training. arXiv preprint arXiv:1903.01182.
Guided Complement Entropy: Chen, H. Y., Liang, J. H., Chang, S. C., Pan, J. Y., Chen, Y. T., Wei, W., & Juan, D. C. (2019). Improving adversarial robustness via guided complement entropy. In Proceedings of the IEEE International Conference on Computer Vision (pp. 4881-4889).
I once came across these two improvements to the cross-entropy loss. As the formula above makes clear, cross-entropy only uses the predicted probability of the ground-truth class and ignores the predictions on all the wrong classes; starting from this observation, these papers also bring the wrong-class predictions into the loss.
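A minimal sketch of the core idea, based on my reading of the Complement Objective Training paper: alongside the usual cross-entropy objective, COT maximizes the entropy of the predicted distribution restricted to the wrong classes, so that the probability mass not assigned to the true class stays as flat as possible. The function name and the eps constant below are my additions, not from the paper's code:

import torch
import torch.nn.functional as F

def complement_entropy(logits, target, eps=1e-7):
    """Entropy of the predicted distribution over the *wrong* classes only.

    COT maximizes this quantity (equivalently, minimizes its negative).
    `eps` guards against division by zero and log(0); it is my addition.
    """
    probs = F.softmax(logits, dim=1)               # (N, C) probabilities
    p_true = probs.gather(1, target.unsqueeze(1))  # p_g for each sample, (N, 1)
    comp = probs / (1.0 - p_true + eps)            # renormalize: p_j / (1 - p_g)
    # zero out the ground-truth entry so only wrong classes contribute
    mask = torch.ones_like(probs).scatter_(1, target.unsqueeze(1), 0.0)
    ent = -(comp * torch.log(comp + eps) * mask).sum(dim=1)  # per-sample entropy
    return ent.mean()

x = torch.tensor([[1., 2., 3., 4., 5.]] * 3)
y = torch.tensor([1, 1, 0])
print(complement_entropy(x, y))  # larger = wrong-class mass more evenly spread

As I understand the papers, COT alternates updates between the cross-entropy objective and (the negative of) this complement entropy, and Guided Complement Entropy additionally scales the term by the true-class probability raised to a power as a "guiding" factor.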
A blog post that explains these well:
https://blog.youkuaiyun.com/qq_36663791/article/details/103437368