@inproceedings{han2018coteaching,
  title     = {Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels},
  author    = {Bo Han and Quanming Yao and Xingrui Yu and Gang Niu and Miao Xu and Weihua Hu and Ivor Tsang and Masashi Sugiyama},
  booktitle = {NeurIPS},
  pages     = {8535--8545},
  year      = {2018}
}
1. Abstract
Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
This is the problem the paper sets out to address.
Nonetheless, recent studies on the memorization effects of deep neural networks show that they would first memorize training data of clean labels and then those of noisy labels.
Therefore in this paper, we propose a new deep learning paradigm called “Co-teaching” for combating with noisy labels.
Namely, we train two deep neural networks simultaneously, and let them teach each other given every mini-batch: firstly, each network feeds forward all data and selects some data of possibly clean labels; secondly, two networks communicate with each other what data in this mini-batch should be used for training; finally, each network back propagates the data selected by its peer network and updates itself.
This feels very similar in spirit to Co-training; a runnable sketch of this per-mini-batch procedure is given in Section 2 below.
Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is much superior to the state-of-the-art methods in the robustness of trained deep models.
Empirical results of the algorithm.
2. Algorithm Description

The algorithm proposed in this paper is straightforward to describe: once you understand the pseudocode (Algorithm 1 in the paper), you have essentially grasped all the details.
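The pseudocode figure itself did not survive here, so below is a minimal PyTorch sketch of one training step under my own naming (`co_teaching_step`, `net1`/`net2`, with `keep_rate` standing in for R(T) of Eq. (1) below); it illustrates the three steps quoted in the abstract, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, keep_rate):
    """One Co-teaching mini-batch: each net picks its small-loss samples,
    then each net is updated on the subset chosen by its *peer*."""
    n_keep = max(1, int(keep_rate * len(y)))

    # Step 1: both networks feed forward the whole mini-batch and
    # compute a per-sample loss (selection only, so no gradients needed).
    with torch.no_grad():
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")

    # Step 2: each network selects its smallest-loss samples --
    # the instances it currently believes are cleanly labelled.
    idx1 = torch.argsort(loss1)[:n_keep]  # net1's picks
    idx2 = torch.argsort(loss2)[:n_keep]  # net2's picks

    # Step 3: cross update -- net1 back-propagates on net2's picks and
    # vice versa, so each network's selection bias is not self-reinforced.
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()
```

The cross update is the point of the design: a single network that trains on its own small-loss picks keeps reinforcing its own selection mistakes, whereas two networks with different initializations filter out different errors for each other.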
$$\bar{\mathcal{D}}_f = \operatorname*{arg\,min}_{\mathcal{D}' : |\mathcal{D}'| \geq R(T)\,|\bar{\mathcal{D}}|} \ell(f, \mathcal{D}') \tag{1}$$

That is, from the mini-batch $\bar{\mathcal{D}}$, each network $f$ keeps the $R(T)$ fraction of instances with the smallest loss $\ell$ as its presumably clean subset $\bar{\mathcal{D}}_f$, which is then handed to the peer network for its update.
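The keep rate R(T) is scheduled rather than fixed: training starts by trusting every sample and gradually drops to keeping a 1 − τ fraction, where τ is the (assumed known or estimated) noise rate, because deep nets memorize clean data first and noisy data only later. A minimal sketch of the commonly used linear schedule (parameter names are mine, `num_gradual` is an illustrative default):

```python
def keep_rate(epoch, noise_rate, num_gradual=10):
    """R(T): fraction of small-loss samples kept at epoch T.
    Starts at 1.0 (trust every sample) and decays linearly to
    1 - noise_rate over the first `num_gradual` epochs."""
    return 1.0 - noise_rate * min(epoch / num_gradual, 1.0)

# With 20% label noise, the keep rate over the first epochs:
for t in range(0, 12, 2):
    print(t, round(keep_rate(t, noise_rate=0.2), 2))
# -> 0 1.0, 2 0.96, 4 0.92, 6 0.88, 8 0.84, 10 0.8 (stays at 0.8)
```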

The paper proposes a deep learning method called Co-teaching for combating noisy labels. By training two neural networks simultaneously and letting them teach each other, it filters out likely clean-labelled data for training, thereby improving model robustness. Unlike other work, Co-teaching exploits the memorization property of deep networks, making it particularly well suited to learning with noisy labels.