
Loss Functions Modulate the Optimal Bias-Variance Trade-off

This work is licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

Prediction problems vary in the extent to which accuracy is rewarded and inaccuracy is penalized, i.e., in their loss functions. Here, we focus on a particular feature of loss functions that controls how much large errors are penalized relative to how much precise correctness is rewarded: convexity. We show that prediction problems with convex loss functions (i.e., those in which large errors are particularly harmful) favor simpler models that tend to be biased but exhibit low variability. Conversely, problems with concave loss functions (in which precise correctness is particularly rewarded) favor more complex models that are less biased but exhibit higher variability. We discuss how this relationship between the bias-variance trade-off and the shape of the loss function may help explain features of human psychology, such as dual-process psychology and fast versus slow learning strategies, and inform statistical inference.
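The abstract's central claim can be illustrated with a small Monte Carlo sketch. The code below is not from the paper; it assumes purely illustrative error distributions: a "simple" model whose prediction errors are biased but stable, and a "complex" model whose errors are unbiased but highly variable. Under a convex loss (squared error) the simple model achieves lower expected loss, while under a concave loss (square root of the absolute error) the ranking reverses, because near-zero errors are disproportionately rewarded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical error distributions (illustrative only, not from the paper):
#   "simple" model  -> biased but low-variance errors:   N(0.8, 0.1)
#   "complex" model -> unbiased but high-variance errors: N(0.0, 1.0)
err_simple = rng.normal(0.8, 0.1, n)
err_complex = rng.normal(0.0, 1.0, n)

losses = {
    "convex (squared error)": lambda e: e ** 2,
    "concave (sqrt abs error)": lambda e: np.sqrt(np.abs(e)),
}

for name, loss in losses.items():
    print(f"{name:26s} simple: {loss(err_simple).mean():.3f}  "
          f"complex: {loss(err_complex).mean():.3f}")

# Expected pattern:
#   convex loss  -> simple (biased, low-variance) model has lower mean loss
#   concave loss -> complex (unbiased, high-variance) model has lower mean loss
```

With these assumed parameters, the simple model's convex loss is roughly 0.65 versus about 1.0 for the complex model, whereas its concave loss is roughly 0.89 versus about 0.82, matching the qualitative claim that loss convexity shifts the optimal point on the bias-variance trade-off.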
