Source: methods collected in https://github.com/HobbitLong/RepDistiller
NeurIPS2015: Distilling the Knowledge in a Neural Network

The seminal work on knowledge distillation: knowledge is distilled from the teacher's logits, using a KL-divergence loss between the temperature-softened output distributions of teacher and student.
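A minimal PyTorch-style sketch of this logit-based loss; the temperature value T=4 and the batchmean reduction are illustrative assumptions, not prescribed by the note above:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures,
    as suggested in the original paper.
    """
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```

In practice this term is added to the usual cross-entropy loss on the ground-truth labels with some weighting factor.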
ICLR2015: FitNets: Hints for Thin Deep Nets
A hint is defined as the output of a teacher's hidden layer that is responsible for guiding the student's learning process. Correspondingly, a hidden layer of the student, the guided layer, is chosen to learn from the teacher's hint layer; in the paper the hint and guided layers are taken to be the middle layers of the teacher and student networks, respectively.

Unlike distillation of the output logit distribution, the hint loss minimizes the distance between intermediate features of teacher and student, using an L2 distance; when the feature dimensions differ, a small regressor maps the student's guided features onto the teacher's hint features.
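A minimal sketch of such a hint loss, assuming a 1x1 convolutional regressor and matching spatial sizes (both are simplifying assumptions for illustration; the exact regressor used in FitNets may differ):

```python
import torch.nn as nn
import torch.nn.functional as F

class HintLoss(nn.Module):
    """L2 distance between the teacher's hint features and the student's
    guided features, with a learned projection for mismatched channel counts."""

    def __init__(self, s_channels, t_channels):
        super().__init__()
        # 1x1 conv regressor: student channels -> teacher channels (assumption)
        self.regressor = nn.Conv2d(s_channels, t_channels, kernel_size=1)

    def forward(self, f_s, f_t):
        # f_s: student guided-layer features, f_t: teacher hint-layer features
        # Assumes the two feature maps share the same spatial resolution.
        return F.mse_loss(self.regressor(f_s), f_t)
```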

ICLR2017: Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer

The paper's stated contributions:
propose attention as a mechanism of transferring knowledge from one network to another
propose the use of both activation-based and gradient-based spatial attention maps
show experimentally that the approach provides significant improvements across a variety of datasets and deep network architectures, including both residual and non-residual networks
show that activation-based attention transfer gives better improvements than full-activation transfer, and can be combined with knowledge distillation
Activation-based attention maps
The paper defines three ways of computing a spatial attention map from an activation tensor A with C channels (A_i denotes the i-th channel slice):

F_sum(A) = sum_{i=1..C} |A_i|
F_sum^p(A) = sum_{i=1..C} |A_i|^p,  p > 1
F_max^p(A) = max_{i=1..C} |A_i|^p,  p > 1
The attention distillation loss is then defined on L1- or L2-normalized attention maps of teacher and student, and the authors stress that "normalization of attention maps is important for the success of the student training".
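A minimal PyTorch-style sketch of this activation-based transfer loss, using the F_sum^p mapping with p=2 and L2 normalization; the choice of p, the mean reduction, and the assumption of matching spatial sizes are illustrative, not mandated by the note above:

```python
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Sum-of-powers spatial attention: sum_c |A_c|^p, flattened and
    L2-normalized per sample."""
    am = feat.abs().pow(p).sum(dim=1).flatten(1)   # (N, H*W)
    return F.normalize(am, p=2, dim=1)

def at_loss(f_s, f_t, p=2):
    """L2 distance between normalized student and teacher attention maps.
    Assumes matching spatial sizes; otherwise interpolate one map first."""
    return (attention_map(f_s, p) - attention_map(f_t, p)).pow(2).mean()
```

This term is typically summed over several teacher/student layer pairs and added to the standard classification loss with a weighting coefficient.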