Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Communicative Learning: A Unified Learning Formalism


The uniqueness and superiority of the cognitive infrastructure underlying human cooperative communication have been widely acknowledged and systematically analyzed in cognitive science over the past decade. Pedagogy and learning, two of the most common forms of human communication, also benefit from the sophistication of this infrastructure. Recently, the efficiency of human learning has motivated researchers to combine cooperative pedagogy with the booming field of machine learning. This pedagogical insight facilitates the adoption of data sources beyond random sampling, e.g. intentional messages given by a helpful teacher. In this dissertation, we propose a communicative learning (CL) formalism rooted in human cooperative communication that unifies existing machine learning paradigms, e.g. passive learning, active learning, and algorithmic teaching, and enlightens new ones. In this formalism, a teacher and a student communicate with each other in the process of teaching and learning certain knowledge. Each agent has a mind, comprising the agent's knowledge, utility, and mental dynamics. To communicate effectively, each agent must also maintain an estimate of its partner's mind. We argue that this modeling is necessary for the development of general human-like intelligence, and we justify this necessity with a prototypical human-machine collaboration task. We rigorously define the CL formalism, use it to survey existing learning algorithms through a unifying lens, and show that they are special cases of this formalism. We prove theoretical guarantees of the CL formalism over non-CL algorithms and illustrate how efficient learning protocols can emerge between agents conducting CL. We also verify the practicality and scalability of this learning formalism with a generic human-robot interaction task.
Last but not least, we illustrate that CL allows learning protocols to go beyond Shannon's communication limit, and we pose the halting problem of learning to discuss the implications of CL.
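The abstract's core structure — each agent holds a mind (knowledge, utility, mental dynamics) together with an estimate of its partner's mind, which the teacher uses to choose informative messages — can be sketched minimally in code. The sketch below is illustrative only: the class names, the Bayesian belief-update rule standing in for "mental dynamics", and the message-selection heuristic are assumptions, not the dissertation's actual formalism.

```python
from dataclasses import dataclass

# Hypothetical sketch of one CL exchange. A Mind holds beliefs (a
# distribution over hypotheses) and a Bayesian update rule standing in
# for the agent's mental dynamics. All names are illustrative.

@dataclass
class Mind:
    beliefs: dict  # hypothesis -> probability

    def update(self, message, likelihood):
        # Reweight each hypothesis by how likely it makes the message.
        post = {h: p * likelihood(message, h) for h, p in self.beliefs.items()}
        z = sum(post.values()) or 1.0
        self.beliefs = {h: p / z for h, p in post.items()}

@dataclass
class Agent:
    own_mind: Mind
    partner_model: Mind  # this agent's estimate of its partner's mind

def teach_step(teacher, student, target, candidate_messages, likelihood):
    # The teacher simulates each candidate message on its MODEL of the
    # student and sends the one that most raises the student's predicted
    # belief in the target hypothesis.
    def predicted_target_belief(msg):
        sim = Mind(dict(teacher.partner_model.beliefs))
        sim.update(msg, likelihood)
        return sim.beliefs[target]

    msg = max(candidate_messages, key=predicted_target_belief)
    student.own_mind.update(msg, likelihood)
    teacher.partner_model.update(msg, likelihood)  # keep the estimate in sync
    return msg
```

With a uniform prior over two hypotheses and a likelihood that favors the message matching the true hypothesis, the teacher picks the message naming the target, and the student's belief in it rises above the prior — a toy instance of the teacher exploiting its model of the student's mind.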
