Learning Objective Functions from Many Diverse Signals
- Milli, Smitha
- Advisor(s): Dragan, Anca D.; Hardt, Moritz
Abstract
Specifying the correct objective function for a machine learning system is often difficult and error-prone. This thesis focuses on learning objective functions from many kinds of human input. It comprises three parts. First, we present our formalism, reward-rational choice (RRC), which unifies reward learning from many diverse signals. The key insight is that human behavior can often be modeled as a reward-rational implicit choice: a choice, from an implicit set of options, that is approximately rational for the intended reward. We show how much of the prior work, despite using diverse feedback modalities, can be seen as instantiations of RRC.
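To make the formalism concrete, the sketch below shows a Bayesian RRC update over a discrete set of reward hypotheses, assuming Boltzmann-rational choice from a grounded option set. All names and structure here (`rrc_posterior`, `beta`, the discrete hypothesis space) are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of a reward-rational choice (RRC) update, assuming a
# discrete hypothesis space of reward functions and Boltzmann rationality.
import numpy as np

def rrc_posterior(prior, choice_set_values, chosen_idx, beta=1.0):
    """Bayesian update over reward hypotheses given one observed choice.

    prior: (R,) array, prior probability of each reward hypothesis.
    choice_set_values: (R, C) array; entry [r, c] is the value that
        hypothesis r assigns to option c in the implicit choice set.
    chosen_idx: index of the option the human actually chose.
    beta: rationality coefficient (beta -> inf: perfectly rational).
    """
    # Boltzmann-rational likelihood:
    #   P(choice c | reward r) = exp(beta * v[r, c]) / sum_c' exp(beta * v[r, c'])
    logits = beta * choice_set_values
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    likelihoods = np.exp(logits)
    likelihoods /= likelihoods.sum(axis=1, keepdims=True)

    posterior = prior * likelihoods[:, chosen_idx]
    return posterior / posterior.sum()

# Example: 3 reward hypotheses, an implicit choice set of 2 options
# (e.g., "intervene" vs. "don't intervene", each grounded as a trajectory).
prior = np.full(3, 1 / 3)
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, 0.5]])
print(rrc_posterior(prior, values, chosen_idx=0, beta=5.0))
```

The design point is that each feedback modality differs only in how its options are grounded into `choice_set_values`; the update itself stays the same.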
In the second part, we discuss implications of the RRC formalism. In particular, it allows us to learn from multiple feedback types at once. Through case studies and experiments, we show how RRC can be used to combine, and actively select among, feedback types. Furthermore, once a person has access to multiple types of feedback, the choice of feedback type itself carries information about the reward function. We use RRC to formalize and learn from this meta-choice: the person's choice of which feedback type to use.
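As a rough illustration of how a single belief can absorb heterogeneous feedback, the sketch below (reusing `rrc_posterior` from above) chains RRC updates across feedback types and scores a candidate type by expected information gain, one standard criterion for active selection. The structure is an assumption for exposition, not the thesis's exact method.

```python
# Sketch: combining feedback types and scoring them for active selection.
# Assumes rrc_posterior from the previous sketch is in scope.
import numpy as np

def update_from_feedback(prior, observations):
    """Sequentially apply RRC updates from heterogeneous feedback.

    observations: list of (choice_set_values, chosen_idx, beta) tuples,
        one per piece of feedback (e.g., a demonstration, a comparison);
        each type supplies its own grounding and rationality coefficient.
    """
    belief = prior
    for values, chosen_idx, beta in observations:
        belief = rrc_posterior(belief, values, chosen_idx, beta)
    return belief

def expected_info_gain(belief, choice_set_values, beta):
    """Expected entropy reduction from requesting this feedback type:
    H(belief) - E_c[H(posterior | choice c)]."""
    logits = beta * choice_set_values
    logits -= logits.max(axis=1, keepdims=True)
    lik = np.exp(logits)
    lik /= lik.sum(axis=1, keepdims=True)   # P(c | r)
    p_choice = belief @ lik                 # marginal P(c)
    H = lambda p: -np.sum(p * np.log(p + 1e-12))
    posterior_entropy = sum(
        p_choice[c] * H(rrc_posterior(belief, choice_set_values, c, beta))
        for c in range(choice_set_values.shape[1])
    )
    return H(belief) - posterior_entropy
```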
Finally, in the third part, we study settings in which the human may violate the reward-rational assumption. First, we consider the case where the human is pedagogic, i.e., optimizing to teach the reward function. We show that the reward-rational assumption yields robust reward inference even when the human is pedagogic. Second, we consider the case where the human faces temptation and acts in ways that systematically deviate from their target preferences. We theoretically analyze this setting and show that, with the right feedback type, one can still efficiently recover the individual's preferences. Lastly, we consider the recommender system setting. There, it is difficult to model all user behaviors as rational, but by leveraging one strong, explicit signal (e.g., "don't show me this"), we can still operationalize and optimize for a notion of "value" on these systems.