
Manipulation-Proof Machine Learning

Abstract

An increasing number of decisions are guided by machine learning algorithms. In many settings, from consumer credit to criminal justice, those decisions are made by applying an estimator to data on an individual's observed behavior. But when consequential decisions are encoded in rules, individuals may strategically alter their behavior to achieve desired outcomes. This paper develops a class of estimators that are stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. This approach also makes it possible to quantify the performance cost of making a decision algorithm transparent. Through a large field experiment in Kenya, we show that decision rules estimated with our strategy-robust method outperform those based on standard supervised learning approaches.
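To make the strategic feedback loop concrete, below is a minimal sketch in Python. It assumes a linear scoring rule and quadratic per-feature manipulation costs, and approximates a manipulation-stable rule by refitting on the features agents would present once they best-respond to the current rule; the cost values, the `best_response` closed form, and the fixed-point loop are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

# Illustrative setup (assumptions, not the paper's formulation):
#   - decision rule is a linear score s = beta . x
#   - agent can shift feature j at quadratic cost c_j * delta_j^2
#   - best response to a public rule beta maximizes
#     beta . (x + delta) - sum_j c_j * delta_j^2,
#     giving delta_j = beta_j / (2 * c_j)

rng = np.random.default_rng(0)

n, p = 1000, 3
X = rng.normal(size=(n, p))                        # true, unmanipulated behavior
beta_true = np.array([1.0, 0.5, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)  # outcome to predict
cost = np.array([0.5, 2.0, 10.0])                  # hypothetical manipulation costs

def best_response(beta, cost):
    """Feature shift each agent applies once the rule beta is public."""
    return beta / (2.0 * cost)

def fit_naive(X, y):
    """Standard least squares, ignoring strategic behavior."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fit_strategy_robust(X, y, cost, iters=100, damping=0.5):
    """Fixed-point iteration: refit beta on the manipulated features
    induced by the current rule, with damping to aid convergence."""
    beta = fit_naive(X, y)
    for _ in range(iters):
        X_manip = X + best_response(beta, cost)    # equilibrium features
        beta_new = np.linalg.lstsq(X_manip, y, rcond=None)[0]
        beta = damping * beta + (1.0 - damping) * beta_new
    return beta

beta_naive = fit_naive(X, y)
beta_robust = fit_strategy_robust(X, y, cost)

# Evaluate each rule on the features it would actually face once public.
for name, b in [("naive", beta_naive), ("robust", beta_robust)]:
    X_eq = X + best_response(b, cost)
    mse = np.mean((X_eq @ b - y) ** 2)
    print(f"{name}: equilibrium MSE = {mse:.4f}")
```

In this toy setting, the naive rule's scores are uniformly inflated once agents respond to it, since each agent gains the same shift beta . delta, while the refitted rule internalizes that shift; this is only meant to illustrate the equilibrium logic the abstract describes, not to reproduce the paper's results.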
