
A Computational Account Of Self-Supervised Visual Learning From Egocentric Object Play

Licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

Research in child development has shown that embodied experience of handling physical objects contributes to many cognitive abilities, including visual learning. One characteristic of such experience is that the learner sees the same object from several different viewpoints. In this paper, we study how learning signals that equate different viewpoints (e.g., by assigning similar representations to different views of a single object) can support robust visual learning. We use the Toybox dataset, which contains egocentric videos of humans manipulating different objects, and conduct experiments using a computer vision framework for self-supervised contrastive learning. We find that representations learned by equating different physical viewpoints of an object improve downstream image classification accuracy. Further experiments show that this improvement is robust to variations in the gaps between viewpoints, and that the benefits transfer to several different image classification tasks.
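To make the idea of "equating viewpoints" concrete, the sketch below shows one common way such an objective is implemented: a SimCLR-style NT-Xent contrastive loss in which two embeddings of different physical viewpoints of the same object form a positive pair and all other pairings in the batch serve as negatives. This is an illustrative assumption, not the paper's exact setup; the encoder, the sample_viewpoint_pairs helper, and the temperature value are hypothetical placeholders.

# Minimal sketch of viewpoint-based contrastive learning (PyTorch).
# Hypothetical: encoder, batch construction, and hyperparameters are
# illustrative, not the paper's exact configuration.
import torch
import torch.nn.functional as F

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent (SimCLR-style) loss over a batch of positive pairs.

    z_a[i] and z_b[i] are embeddings of two different physical
    viewpoints of the same object; every other embedding in the
    batch is treated as a negative.
    """
    batch_size = z_a.shape[0]
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                          # scaled cosine similarities
    # Mask self-similarity so an embedding is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # For row i in [0, N) the positive sits at column i + N, and vice versa.
    targets = torch.cat([
        torch.arange(batch_size, 2 * batch_size),
        torch.arange(0, batch_size),
    ])
    return F.cross_entropy(sim, targets)

# Usage: embed two viewpoints of each object and pull them together.
# encoder = ...  # e.g., a ResNet backbone with a projection head
# view_a, view_b = sample_viewpoint_pairs(batch)  # hypothetical helper
# loss = nt_xent_loss(encoder(view_a), encoder(view_b))

Under this objective, widening or narrowing the viewpoint gap between view_a and view_b is what the robustness experiments described above would vary.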
