[Distillation series, part 1] BMVC 2019: Learning Efficient Detector with Semi-supervised Adaptive Distillation

This post looks at how the Adaptive Distillation Loss (ADL) improves the training efficiency of a student detector. ADL reweights the distillation loss, enlarging it for samples that are hard to learn or hard to mimic and shrinking it for the abundant easy samples. The paper contrasts single-stage and two-stage detectors and notes that sample imbalance is especially challenging for single-stage detectors with dense anchors. ADL combines a focal-loss-style modulating factor with the KL divergence, with parameters that control how much weight samples of different difficulty receive.



motivation

More attention is paid to two types of hard samples:

  • hard-to-learn samples, which the teacher predicts with low certainty
  • hard-to-mimic samples, with a large gap between the teacher's and the student's predictions

ADL

  • enlarges the distillation loss for hard-to-learn and hard-to-mimic samples, and reduces it for the dominant easy samples
  • single-stage detector

However, when applying it to object detection, the "small" capacity of the student network makes it hard to mimic all feature maps or logits well.
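The adaptive weighting described above can be sketched as follows. The general shape (an adaptive weight that grows with both the teacher's uncertainty and the teacher–student KL gap, modulating the KL distillation term) follows the paper's idea; the specific uncertainty term and the `beta`/`gamma` values here are illustrative assumptions, not the paper's tuned formulation:

```python
import math

def kl_div(t, s, eps=1e-12):
    """KL divergence between two Bernoulli distributions with
    foreground probabilities t (teacher) and s (student)."""
    return (t * math.log((t + eps) / (s + eps))
            + (1 - t) * math.log((1 - t + eps) / (1 - s + eps)))

def adl_loss(t, s, beta=1.5, gamma=1.0):
    """Adaptive distillation loss for one anchor (sketch).

    The adaptive weight grows with (a) teacher uncertainty
    (hard-to-learn, here a simple entropy-like term) and
    (b) the teacher-student gap (hard-to-mimic, the KL term),
    so easy samples where the teacher is confident and the
    student already agrees are strongly down-weighted.
    beta and gamma are illustrative values.
    """
    kl = kl_div(t, s)
    hard_to_learn = -t * math.log(t + 1e-12)  # teacher uncertainty
    adw = (1.0 - math.exp(-(kl + gamma * hard_to_learn))) ** beta
    return adw * kl
```

With these defaults, an easy anchor such as `adl_loss(0.99, 0.98)` is suppressed by several orders of magnitude relative to a hard-to-mimic one such as `adl_loss(0.6, 0.2)`, which is exactly the rebalancing effect ADL is after.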

two-stage detector

  • Learning Efficient Object Detection Models with Knowledge Distillation, 2017
    uses a weighted cross-entropy loss to underweight matching errors in background regions
  • Mimicking Very Efficient Network for Object Detection, 2017
    mimics feature maps between the student and teacher networks
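The class-weighted cross-entropy idea from the first work can be sketched as below; the background weight of 0.2 is an illustrative assumption, not the paper's setting:

```python
import math

def weighted_ce(student_probs, teacher_probs, bg_index=0, bg_weight=0.2):
    """Class-weighted cross-entropy between the teacher's and the
    student's class distributions. Errors on the background class
    (bg_index) are down-weighted so that the abundant background
    regions do not dominate the distillation loss."""
    loss = 0.0
    for c, (t, s) in enumerate(zip(teacher_probs, student_probs)):
        w = bg_weight if c == bg_index else 1.0
        loss += -w * t * math.log(s + 1e-12)
    return loss
```

Setting `bg_weight=1.0` recovers the plain cross-entropy; any value below 1 shifts the loss budget toward foreground classes.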