Deep neural networks can easily overfit to training set biases and label noise. Beyond standard regularizers, example reweighting algorithms are a popular remedy.
The authors propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. Their online reweighting method leverages an additional small validation set and adaptively assigns importance weights to examples at every iteration. In essence, the method optimizes a weighted training loss rather than an average loss, and is an instantiation of meta-learning, i.e., learning to learn better.
To determine the example weights, their method performs a meta gradient descent step on the current mini-batch example weights to minimize the loss on a clean unbiased validation set.
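A minimal sketch of that meta step, assuming a simple linear model with squared-error loss (the paper's method applies to arbitrary differentiable models; the analytic shortcut below uses the fact that, with initial weights at zero, the gradient of the validation loss with respect to each example weight reduces to the dot product between that example's training gradient and the mean validation gradient):

```python
import numpy as np

def per_example_grads(theta, X, y):
    # Per-example gradient of l_i = 0.5 * (x_i . theta - y_i)^2 w.r.t. theta.
    resid = X @ theta - y
    return resid[:, None] * X                 # shape (n, d)

def meta_reweight(theta, Xb, yb, Xv, yv):
    """One reweighting step: an example's weight is proportional to how
    well its training gradient aligns with the validation gradient,
    clipped at zero and normalized over the mini-batch."""
    g_train = per_example_grads(theta, Xb, yb)        # (n, d)
    g_val = per_example_grads(theta, Xv, yv).mean(0)  # (d,)
    w = np.maximum(g_train @ g_val, 0.0)              # drop conflicting examples
    s = w.sum()
    return w / s if s > 0 else w                      # normalize to sum to 1

# Toy setup (hypothetical data): one training label is corrupted,
# while the small validation set is clean and unbiased.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0])
theta = np.zeros(2)
Xb = rng.normal(size=(8, 2))
yb = Xb @ true_theta
yb[0] += 10.0                         # inject label noise
Xv = rng.normal(size=(32, 2))
yv = Xv @ true_theta                  # clean validation set
w = meta_reweight(theta, Xb, yb, Xv, yv)
```

The update to the model then uses the weighted loss sum_i w_i * l_i in place of the mini-batch average, so noisy examples whose gradients point away from the validation gradient receive weight zero.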
Training set bias includes class imbalance and label noise, among other forms.