Sample Reweighting Using Exponentiated Gradient Updates for Robust Training Under Label Noise and Beyond
- Majidi, Negin
- Advisor(s): Manduchi, Roberto
Abstract
Learning tasks in machine learning typically involve taking gradient steps to minimize an objective, most often the average loss over a training batch. When the dataset is noisy, weighting all examples equally during training can cause overfitting to the noise and poor generalization. Noisy examples generally incur larger losses than clean ones. Inspired by the expert setting in online learning, we propose an algorithm for learning from noisy examples: we treat each example as an expert and maintain a probability distribution over all examples as their weights, alternately updating the model parameters with gradient descent and the example weights with exponentiated gradient updates. Unlike other methods, our method handles a general class of loss functions and noise types. Our experiments show that our approach outperforms existing baseline methods in supervised settings such as classification and unsupervised settings such as principal component analysis.