
Concept Alignment as a Prerequisite for Value Alignment

Abstract

Value alignment is essential for building AI systems that can safely and reliably interact with people. However, what a person values---and is even capable of valuing---depends on the concepts that they are currently using to understand and evaluate what happens in the world. The dependence of values on concepts means that concept alignment is a prerequisite for value alignment---agents need to align their representation of a situation with that of humans in order to successfully align their values. Here, we formally analyze the concept alignment problem in the inverse reinforcement learning setting, show how neglecting concept alignment can lead to systematic value mis-alignment, and describe an approach that helps minimize such failure modes by jointly reasoning about a person's concepts and values. Additionally, we report experimental results with human participants showing that humans reason about the concepts used by an agent when acting intentionally, in line with our joint reasoning model.
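
The joint-reasoning idea summarized above can be illustrated with a small sketch: rather than fixing the feature representation and inferring only a reward (as in standard inverse reinforcement learning), the learner maintains a joint posterior over (concept, reward) pairs given Boltzmann-rational choices. The toy options, candidate concepts, weight grids, and all names below are hypothetical illustrations under those assumptions, not the paper's model or code.

```python
# Illustrative sketch (not the paper's implementation): jointly inferring a
# demonstrator's concept (feature representation) and reward weights from
# observed choices, assuming Boltzmann-rational behavior. All options,
# concepts, and weights are hypothetical.

import numpy as np

OPTIONS = ["apple", "cookie", "carrot"]

# Candidate "concepts": alternative featurizations of the same options.
# One concept represents options by (tasty, healthy); the other only by (tasty,).
CONCEPTS = {
    "tasty_and_healthy": np.array([[0.6, 0.9],    # apple
                                   [0.9, 0.1],    # cookie
                                   [0.2, 0.8]]),  # carrot
    "tasty_only":        np.array([[0.6],
                                   [0.9],
                                   [0.2]]),
}

# Small discrete grids of candidate reward weights per concept.
WEIGHTS = {
    "tasty_and_healthy": [np.array(w) for w in
                          [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]],
    "tasty_only":        [np.array(w) for w in [(1.0,)]],
}

BETA = 5.0  # Boltzmann rationality (inverse temperature)


def choice_loglik(choices, features, w):
    """Log-likelihood of observed choices under a softmax over option rewards."""
    rewards = features @ w                                   # reward of each option
    logp = BETA * rewards - np.log(np.sum(np.exp(BETA * rewards)))
    return sum(logp[c] for c in choices)


def joint_posterior(choices):
    """Posterior over (concept, reward weights), with uniform priors on both."""
    scores = {}
    for concept, features in CONCEPTS.items():
        for w in WEIGHTS[concept]:
            key = (concept, tuple(float(x) for x in w))
            scores[key] = np.exp(choice_loglik(choices, features, w))
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}


if __name__ == "__main__":
    # The demonstrator repeatedly picks the apple (index 0). A joint model can
    # explain this via a health-valuing concept, whereas a learner that fixes
    # the "tasty_only" concept would misread the same behavior as noise.
    posterior = joint_posterior(choices=[0, 0, 0])
    for (concept, w), p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P(concept={concept}, w={w}) = {p:.3f}")
```

In this toy setup, the posterior concentrates on the concept that includes a health feature, illustrating how conditioning on the wrong concept would produce systematic value misalignment from the same demonstrations.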
