eScholarship
Open Access Publications from the University of California

UC Berkeley

UC Berkeley Electronic Theses and Dissertations

Emergence of structured representations in neural network learning

Abstract

Learning is one of the hallmarks of human intelligence. It reflects a flexibility and adaptability to new information that no artificial model has yet achieved. This remarkable ability makes it possible to accomplish a wide range of cognitive tasks without requiring vast amounts of data for any single task. In this thesis, we describe emergent phenomena that arise as artificial neural network models learn. First, we observe how learning well-defined tasks can lead to the emergence of structured representations. This emergent structure appears at multiple levels within these models: from semantic factors of variation appearing in the hidden units of an autoencoder, to physical structure appearing at the sensory input of an attention model, learning appears to influence all parts of a model. With this in mind, we develop a new method to guide this learning process when acquiring multiple tasks within a single model. Such methods endow neural networks with greater flexibility to adapt to new environments without sacrificing the emergent structures acquired through prior learning.
