Machine learning is a promising tool for processing complex information, but it remains an unreliable tool for control and decision making. Applying techniques developed for static
datasets to real world problems requires grappling with the effects of feedback and systems
that change over time. In these settings, classic statistical and algorithmic guarantees do
not always hold. How do we anticipate the dynamical behavior of machine learning
systems before we deploy them? With the goal of ensuring reliable behavior, this
thesis takes steps toward understanding the trade-offs and limitations
that arise in feedback settings.
In Part I, we focus on the application of machine learning to automatic feedback control. Inspired by physical autonomous systems, we attempt to build a theoretical foundation
for the data-driven design of optimal controllers. We focus on systems governed by linear
dynamics with unknown components that must be characterized from data. We study
unknown dynamics in the setting of the Linear Quadratic Regulator (LQR), a classical
optimal control problem, and show that a procedure of least-squares estimation followed
by robust control design guarantees safety and bounded sub-optimality. Inspired by the
use of cameras in robotics, we also study a setting in which the controller must act on
the basis of complex observations, where a subset of the state is encoded by an unknown
nonlinear and potentially high-dimensional sensor. We propose using a perception map,
which acts as an approximate inverse, and show that the resulting perception-control loop
has favorable properties, so long as either a) the controller is robustly designed to account
for perception errors or b) the perception map is learned from sufficiently dense data.
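To make the estimation step described above concrete, the following is a minimal sketch of least-squares identification of unknown linear dynamics from trajectory data; all variable names, dimensions, and noise levels are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Sketch: identify unknown (A, B) in x_{t+1} = A x_t + B u_t + w_t
# from a single trajectory, via ordinary least squares.
# Dimensions and parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
n, m, T = 3, 2, 500                      # state dim, input dim, horizon
A_true = 0.9 * np.eye(n)                 # stable ground-truth dynamics
B_true = 0.5 * rng.standard_normal((n, m))

X = np.zeros((T + 1, n))
U = rng.standard_normal((T, m))          # exciting random inputs
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.standard_normal(n)

# Stack regressors z_t = [x_t; u_t] and solve min_Theta ||X_next - Z Theta||_F^2.
Z = np.hstack([X[:-1], U])
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
```

In the pipeline the thesis describes, such estimates (together with confidence bounds on the estimation error) would then feed into a robust control synthesis step.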
In Part II, we shift our attention to algorithmic decision making systems, where machine learning models are used in feedback with people. Due to the difficulties of measurement,
limited predictability, and the indeterminacy of translating human values into
mathematical objectives, we eschew the framework of optimal control. Instead, our goal
is to articulate the impacts of simple decision rules under one-step feedback models. We
first consider consequential decisions, inspired by the example of lending in the presence
of credit scores. Under a simple model of impact, we show that several group fairness
constraints, proposed to mitigate inequality, may harm the groups they aim to protect.
In fact, fairness criteria can be viewed as a special case of a broader framework for de-
signing decision policies that trade off between private and public objectives, in which
notions of impact and wellbeing can be encoded directly. Finally, we turn to the setting of
recommendation systems, which make selections from a wide array of choices based on
personalized relevance predictions. We develop a novel perspective based on reachability
that quantifies agency and access. While empirical audits show that models optimized
for accuracy may limit reachability, theoretical results show that this is not due to an in-
herent trade-off, suggesting a path forward. Broadly, this work attempts to re-imagine the
goals of predictive models ubiquitous in machine learning, moving towards new design
principles that prioritize human values.
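As a rough illustration of the reachability perspective above, the sketch below asks, for a score-based top-k recommender, whether each item can be surfaced for a user under some bounded change to the user's representation. The linear scoring model, the random-search test, and all parameters are illustrative assumptions, not the thesis's method.

```python
import numpy as np

# Hedged sketch: an item is "reachable" for a user if some allowed
# perturbation of the user's factor lifts it into the predicted top-k.
# Model and parameters are illustrative assumptions.
rng = np.random.default_rng(1)
n_items, d, k = 20, 5, 3
V = rng.standard_normal((n_items, d))    # item factors
u = rng.standard_normal(d)               # user factor

def reachable(item, budget=1.0, trials=2000):
    """Randomly search perturbations of norm `budget` for one that
    places `item` in the top-k predicted scores."""
    for _ in range(trials):
        delta = rng.standard_normal(d)
        delta *= budget / np.linalg.norm(delta)
        scores = V @ (u + delta)
        if item in np.argsort(scores)[-k:]:
            return True
    return False

# Fraction of the catalog this user can reach under the given budget.
frac = np.mean([reachable(i) for i in range(n_items)])
```

An audit in this spirit would compare such reachable fractions across users and models, quantifying the agency and access that an accuracy-optimized recommender affords.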