- Title
Classification vs regression in overparameterized regimes: Does the loss function matter?
- Authors
Muthukumar, Vidya; Narang, Adhyyan; Subramanian, Vignesh; Belkin, Mikhail; Hsu, Daniel; Sahai, Anant
- Abstract
We compare classification and regression tasks in an overparameterized linear model with Gaussian features. On the one hand, we show that with sufficient overparameterization all training points are support vectors: solutions obtained by least-squares minimum-norm interpolation, typically used for regression, are identical to those produced by the hard-margin support vector machine (SVM) that minimizes the hinge loss, typically used for training classifiers. On the other hand, we show that there exist regimes where these interpolating solutions generalize well when evaluated by the 0-1 test loss function, but do not generalize if evaluated by the square loss function, i.e., they approach the null risk. Our results demonstrate the very different roles and properties of loss functions used in the training phase (optimization) and the testing phase (generalization).
- Subjects
Support vector machines; Classification
- Publication
Journal of Machine Learning Research, 2021, Vol. 22, p. 1
- ISSN
1532-4435
- Publication type
Article