eScholarship
Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Towards Trustworthy Machine Learning

Abstract

Real-world applications of machine learning often have complex objectives and safety-critical constraints. Contemporary machine learning systems excel at achieving high average-case performance on tasks with simple, procedurally specified objectives, but they struggle with many more demanding real-world tasks. In this thesis, we work towards developing trustworthy machine learning systems that understand human values and reliably optimize them.

Machine learning’s key insight was that it is often easier to learn an algorithm than to write it down directly—yet many machine learning systems still have a hard-coded, procedurally specified objective. The field of reward learning applies this insight to instead learn the objective itself. As there is a many-to-one mapping between reward functions and objectives, we start by introducing the notion of equivalence classes consisting of reward functions that specify the same objective.
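
To make the notion of an equivalence class concrete, the reward-learning literature typically treats two reward functions as equivalent when they differ only by potential shaping and positive rescaling, since such transformations leave the induced optimal policies unchanged. The sketch below states this standard transformation; the thesis's precise definition may differ in detail.

```latex
% Illustrative only: a standard equivalence on reward functions
% (potential shaping, Ng et al. 1999, plus positive rescaling).
% R and R' specify the same objective in the sense that they induce
% the same optimal policies.
R'(s, a, s') = \lambda \, R(s, a, s') + \gamma \, \Phi(s') - \Phi(s),
\qquad \lambda > 0, \quad \Phi : \mathcal{S} \to \mathbb{R}.
```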

In the first part of the dissertation, we apply this notion of equivalence classes to three distinct settings. First, we study reward function identifiability: what set of reward functions is compatible with the data? We start by categorizing the equivalence classes of reward functions that induce the same data. By comparing these to the aforementioned optimal-policy equivalence class, we can determine whether a given data source provides sufficient information to recover the optimal policy.

Second, we address the fundamental question of how similar or dissimilar two reward function equivalence classes are. We introduce a distance metric over these equivalence classes, the Equivalent-Policy Invariant Comparison (EPIC), and show that rewards with low EPIC distance induce policies with similar returns even under different transition dynamics. Finally, we introduce an interpretability method for reward function equivalence classes. The method selects the easiest-to-understand representative from the equivalence class, and then visualizes that representative function.
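
The abstract does not give the computation itself, but following the published EPIC definition it can be sketched in a small tabular setting: each reward table is first canonicalized by subtracting its expected potential-shaping component under coverage distributions over states and actions, and the EPIC distance is then the Pearson distance between the two canonicalized rewards. The code below is a minimal illustrative sketch under that assumption (rewards indexed by state, action, next state); all names are placeholders, not an API from the thesis.

```python
import numpy as np

def canonicalize(R, gamma, ds, da):
    """Canonically shape a tabular reward R[s, a, s'] so that potential-shaping
    terms cancel out. ds and da are coverage distributions over states/actions."""
    # mean_next[x] = E_{A ~ da, S' ~ ds}[ R(x, A, S') ]
    mean_next = np.einsum('a,t,xat->x', da, ds, R)
    # mean_all = E_{S ~ ds}[ mean_next(S) ]
    mean_all = np.einsum('x,x->', ds, mean_next)
    return (R
            + gamma * mean_next[None, None, :]   # + gamma * E[R(s', A, S')]
            - mean_next[:, None, None]           # -         E[R(s,  A, S')]
            - gamma * mean_all)                  # - gamma * E[R(S,  A, S')]

def pearson_distance(x, y, w):
    """Weighted Pearson distance sqrt((1 - rho) / 2); assumes w sums to 1."""
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return np.sqrt((1.0 - cov / (sx * sy)) / 2.0)

def epic_distance(RA, RB, gamma, ds, da):
    CA = canonicalize(RA, gamma, ds, da)
    CB = canonicalize(RB, gamma, ds, da)
    # Joint weight over (s, a, s') triples under the coverage distributions.
    w = ds[:, None, None] * da[None, :, None] * ds[None, None, :]
    return pearson_distance(CA.ravel(), CB.ravel(), w.ravel())
```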

In the second part of the dissertation, we study the adversarial robustness of models. We start by introducing a physically realistic threat model consisting of an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial to the defender. We train the adversary using deep RL against a frozen state-of-the-art defender that was trained via self-play to be robust to opponents. We find that this attack reliably wins against state-of-the-art simulated-robotics RL agents and superhuman Go programs.
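
Once the defender is frozen, the attack reduces to standard single-agent RL: the victim's fixed policy becomes part of the environment dynamics, and the adversary is trained with an ordinary deep RL algorithm. The loop below is a schematic sketch of that setup; the environment and policy interfaces are hypothetical illustrations, not code from the thesis.

```python
def train_adversarial_policy(env, victim, attacker, optimizer, num_iters=1000):
    """Schematic attack loop: the victim policy is frozen and folded into the
    environment, so only the attacker is updated. `env`, `victim`, `attacker`,
    and `optimizer` are hypothetical objects; any standard policy-gradient
    method (e.g. PPO) can play the role of `optimizer.update`."""
    for _ in range(num_iters):
        trajectory = []
        obs_att, obs_vic = env.reset()
        done = False
        while not done:
            a_att = attacker.act(obs_att)   # attacker is being trained
            a_vic = victim.act(obs_vic)     # victim stays frozen (no gradient updates)
            (obs_att, obs_vic), (r_att, _), done = env.step(a_att, a_vic)
            trajectory.append((obs_att, a_att, r_att))
        optimizer.update(attacker, trajectory)  # update the attacker only
    return attacker
```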

Finally, we investigate ways to improve agent robustness. We find that adversarial training is ineffective. However, population-based training offers hope as a partial defense: it does not prevent the attack, but it does increase the computational burden on the attacker. Using explicit planning also helps, as we find that defenders with large amounts of search are harder to exploit.
