Common Loss Function Implementations in PyTorch
1. The CrossEntropyLoss() Function
1.1. Basic Formula
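For a true distribution p (the one-hot label vector) and a predicted distribution q (the softmax output) over n classes, the cross-entropy is

$$H(p, q) = -\sum_{i=1}^{n} p(x_i)\log q(x_i)$$

Since p is one-hot, this reduces to the negative log of the probability the model assigns to the true class.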
1.2. PyTorch's Built-in Computation
The cross-entropy that PyTorch computes for a vector of raw class scores x and a target class index is:
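$$\text{loss}(x, \text{class}) = -\log\left(\frac{\exp(x[\text{class}])}{\sum_{j}\exp(x[j])}\right) = -x[\text{class}] + \log\left(\sum_{j}\exp(x[j])\right)$$

With reduction='mean' (the default), this per-sample loss is averaged over the batch.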
The built-in CrossEntropyLoss() function is essentially the result of merging softmax, log, and NLLLoss into a single step:
- The values after Softmax all lie between 0 and 1, so after taking the log the range is negative infinity to 0.
- Taking the log of the Softmax output turns multiplication into addition, which reduces computation while preserving monotonicity.
- NLLLoss then picks, for each sample, the value of that output at the index given by the label, removes the minus sign (i.e., negates it), and takes the mean, as the sketch after this list shows.
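A minimal sketch of this decomposition (not from the original post): applying log_softmax followed by nn.NLLLoss reproduces nn.CrossEntropyLoss on the same raw scores.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(3, 5)           # 3 samples, 5 classes: raw scores before Softmax
target = torch.tensor([1, 0, 4])     # class index for each sample

# one-step version: CrossEntropyLoss applied directly to the raw scores
ce = nn.CrossEntropyLoss()(logits, target)

# three-step version: softmax -> log (log_softmax) -> NLLLoss
log_probs = F.log_softmax(logits, dim=1)
nll = nn.NLLLoss()(log_probs, target)

print(ce.item(), nll.item())         # both print the same value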
nn.CrossEntropyLoss(weight: Optional[Tensor] = None, size_average=None,
                    ignore_index: int = -100, reduce=None, reduction: str = 'mean')
- weight (Tensor): a 1-D tensor with n elements, one weight per class; very useful when the training samples are imbalanced. Defaults to None.
- input: the score for each class; a 2-D tensor of shape batch * n.
- target: a 1-D tensor of size batch holding the class index (0 to n-1) of each sample, as in the example below.
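For example (illustrative class counts and weights, not from the original post), a rare class can be up-weighted and padding labels skipped via ignore_index:

import torch
import torch.nn as nn

# 3 classes; up-weight the rare class (index 2) -- illustrative values only
class_weights = torch.tensor([1.0, 1.0, 5.0])
criterion = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-100)

scores = torch.randn(4, 3)                # input: batch * n class scores
labels = torch.tensor([0, 2, 1, -100])    # entries equal to ignore_index are skipped
loss = criterion(scores, labels)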
2. Focal Loss
2.1. Basic Formula
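Following the focal loss paper (Lin et al., "Focal Loss for Dense Object Detection"), with p_t the probability the model assigns to the true class, the expression is

$$\mathrm{FL}(p_t) = -\alpha_t \,(1 - p_t)^{\gamma}\,\log(p_t)$$

where α_t is a per-class balancing factor and γ ≥ 0 down-weights well-classified examples; with γ = 0 it reduces to the (α-weighted) cross-entropy.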
2.2. PyTorch Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalLoss(nn.Module):
    r"""
    Args:
        class_num (int): number of classes.
        alpha (1D Tensor): the per-class scalar factor for this criterion.
        gamma (float): gamma > 0; reduces the relative loss for
            well-classified examples (p > .5), putting more focus on hard,
            misclassified examples.
        size_average (bool): By default, the losses are averaged over observations
            for each minibatch. However, if the field size_average is set to False,
            the losses are instead summed for each minibatch.
    """
    def __init__(self, class_num, alpha=None, gamma=2, size_average=True):
        super(FocalLoss, self).__init__()
        if alpha is None:
            # default: weight every class equally
            self.alpha = torch.ones(class_num, 1)
        else:
            # accept either a Tensor or a plain list of per-class weights
            self.alpha = torch.as_tensor(alpha, dtype=torch.float32).view(-1, 1)
        self.gamma = gamma
        self.class_num = class_num
        self.size_average = size_average
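    # NOTE: the original snippet stops inside __init__; the forward pass below is a
    # sketch of the standard focal-loss computation
    # FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), not the author's original code.
    def forward(self, inputs, targets):
        # inputs: raw logits of shape (N, class_num); targets: class indices of shape (N,)
        N, C = inputs.size(0), inputs.size(1)
        P = F.softmax(inputs, dim=1)

        # one-hot mask selecting the predicted probability of each sample's target class
        class_mask = torch.zeros(N, C, device=inputs.device)
        ids = targets.view(-1, 1)
        class_mask.scatter_(1, ids, 1.0)

        # per-sample alpha_t looked up from the per-class alpha table
        alpha = self.alpha.to(inputs.device)[ids.view(-1)]

        probs = (P * class_mask).sum(1).view(-1, 1)   # p_t for each sample
        log_p = probs.log()

        # down-weight easy examples by the modulating factor (1 - p_t)^gamma
        batch_loss = -alpha * torch.pow(1.0 - probs, self.gamma) * log_p

        if self.size_average:
            return batch_loss.mean()
        return batch_loss.sum()


# Quick usage check with hypothetical shapes: 5-way classification, batch of 4
if __name__ == "__main__":
    criterion = FocalLoss(class_num=5, gamma=2)
    logits = torch.randn(4, 5)
    labels = torch.randint(0, 5, (4,))
    print(criterion(logits, labels))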