
Do Neural Language Representations Learn Physical Commonsense?

Abstract

Humans understand language based on rich background knowledge about how the physical world works, which, in turn, allows us to reason about the physical world through language. In addition to the properties of objects (e.g., boats require fuel) and their affordances, i.e., the actions that are applicable to them (e.g., boats can be driven), we can also reason about if–then inferences between them: which properties of objects imply which kinds of actions are applicable to them (e.g., if we can drive something, then it likely requires fuel).

In this paper, we investigate the extent to which state-of-the-art neural language representations, trained on a vast amount of natural language text, demonstrate physical commonsense reasoning. While recent advances in neural language models have yielded strong performance on various types of natural language inference tasks, our study, based on a dataset of over 200k newly collected annotations, suggests that neural language representations still only learn associations that are explicitly written down.
