Open Access Publications from the University of California

UC Berkeley Electronic Theses and Dissertations

Leveraging deep neural networks to study human cognition

  • Author(s): Peterson, Joshua Caleb
  • Advisor(s): Griffiths, Thomas L

The majority of computational theories of inductive processes in psychology derive from small-scale experiments with simple stimuli that are easy to represent. However, real-world stimuli are complex, hard to represent efficiently, and likely require very different cognitive strategies to process. Indeed, the difficulty of such tasks is part of what makes humans so impressive, yet methodological resources for modeling their solutions are limited. This presents a fundamental challenge to the precision of psychology as a science, especially if traditional laboratory methods fail to generalize. Recently, a number of computationally tractable, data-driven methods such as deep neural networks have emerged in machine learning for deriving useful representations of complex perceptual stimuli, but they are explicitly optimized in service of engineering objectives rather than modeling human cognition. It has remained unclear to what extent engineering models, while often state-of-the-art in terms of human-level task performance, can be leveraged to model, predict, and understand humans.

In the following, I outline a methodology by which psychological research can confidently leverage representations learned by deep neural networks to model and predict complex human behavior, potentially extending the scope of the field. In Chapter 1, I discuss the challenges to ecological validity in the laboratory that may be partially circumvented by technological advances and trends in machine learning, and weigh the advantages and disadvantages of bootstrapping from largely uninterpretable models. In Chapter 2, I contrast methods from psychology and machine learning for representing complex stimuli like images. Chapter 3 provides a first case study of applying deep neural networks to predict whether objects in a large database of images will be remembered by humans. Chapter 4 provides the central argument for using representations from deep neural networks as proxies for human psychological representations in general. To do this, I establish and demonstrate methods for quantifying their correspondence, improving their correspondence with minimal cost, and applying the result to the modeling of downstream cognitive processes. Building on this, Chapter 5 develops a method for modeling human subjective probability over deep representations in order to capture multimodal mental visual concepts such as "landscape". Finally, in Chapter 6, I discuss the implications of the overall paradigm espoused in the current work, along with the most crucial challenges ahead and potential ways forward. The overall endeavor is almost certainly a stepping stone to methods that may look very different in the near future, as the gains in leveraging machine learning methods are consolidated and made more interpretable and useful. The hope is that a synergy can be formed between the two fields, each bootstrapping and learning from the other.
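The idea in Chapter 4 of quantifying and improving the correspondence between network representations and human judgments can be illustrated with a toy sketch. One common approach in this line of work is to model pairwise human similarity as a weighted inner product of deep features and fit the weights by ridge regression. The sketch below is illustrative only, not code from the dissertation: the feature matrix, "human" similarity judgments, and the penalty `lam` are all synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features = 20, 10

# Hypothetical stand-ins: deep-network features for each stimulus, and
# human similarity judgments generated from a hidden feature weighting.
F = rng.normal(size=(n_stimuli, n_features))
w_true = rng.uniform(0.5, 2.0, size=n_features)
S = (F * w_true) @ F.T  # synthetic "human" similarity matrix

# Each pair's similarity is a weighted sum of elementwise feature
# products, so recovering the weights is a linear regression problem.
iu = np.triu_indices(n_stimuli, k=1)
X = F[iu[0]] * F[iu[1]]  # (n_pairs, n_features) design matrix
y = S[iu]                # judgment for each stimulus pair

# Ridge solution: w = (X^T X + lam * I)^{-1} X^T y
lam = 1e-3
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Correspondence is then measured by how well the reweighted features
# predict the held judgments (here, correlation on the same pairs).
S_hat = (F * w_hat) @ F.T
r = np.corrcoef(S_hat[iu], y)[0, 1]
```

Because the synthetic judgments are generated exactly from the weighted-inner-product model, the fitted weights reproduce them almost perfectly; with real human data, the resulting correlation is the correspondence measure, and the reweighted features are the improved proxy representation.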
