
Catastrophic interference in neural network models is mitigated when the training data reflect a power-law environmental structure

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Sequential learning in artificial neural networks is known to trigger catastrophic interference (CI), in which previously learned skills are forgotten after new skills are learned. This stands in direct contrast to humans’ ability to learn increasingly complex skills across the lifespan without major instances of CI. The present work builds on techniques for mitigating CI proposed in prior work. Anderson and Schooler (1991) first documented that the memory environment has a lawful structure. Following from their observation, we constructed a training environment in which previously mastered tasks (Boolean functions) decrease in frequency over time according to a power law. It was predicted that training in this environment would (1) mitigate CI, (2) reproduce human-like learning curves that follow the power law of practice, and (3) promote positive transfer of training to new skills, all without the need to posit additional mechanisms. The present results support all three predictions.
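As a rough illustration only, and not the authors' implementation, the sketch below shows one way a training environment of the kind described in the abstract might be constructed: each previously mastered task is rehearsed with a probability that decays as a power function of the time since it was mastered, while the task currently being acquired is sampled at full strength. The number of tasks, the decay exponent, and the block sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed parameters, not the paper's code) of a power-law
# rehearsal schedule: once a task is mastered, its sampling frequency decays
# as t^(-alpha), where t counts training blocks since mastery.

rng = np.random.default_rng(0)

ALPHA = 1.0           # assumed power-law decay exponent
N_TASKS = 5           # e.g., five Boolean functions learned in sequence
BLOCKS_PER_TASK = 20  # blocks devoted to introducing each new task
TRIALS_PER_BLOCK = 50

schedule = []  # sequence of task indices to present to the learner
for block in range(N_TASKS * BLOCKS_PER_TASK):
    current_task = block // BLOCKS_PER_TASK  # task being acquired now
    # Unnormalized sampling weights: the current task gets full weight;
    # each previously mastered task decays as a power law of the number
    # of blocks elapsed since it was mastered.
    weights = np.zeros(N_TASKS)
    weights[current_task] = 1.0
    for old_task in range(current_task):
        blocks_since_mastery = block - (old_task + 1) * BLOCKS_PER_TASK + 1
        weights[old_task] = blocks_since_mastery ** (-ALPHA)
    probs = weights / weights.sum()
    schedule.extend(rng.choice(N_TASKS, size=TRIALS_PER_BLOCK, p=probs))

# `schedule` now interleaves new-task trials with power-law-decaying
# rehearsal of earlier tasks, the environmental structure the abstract
# predicts will mitigate catastrophic interference.
```

Under this kind of schedule, old tasks are never dropped abruptly; their rehearsal frequency simply thins out over time, mirroring the lawful decline in environmental demand that Anderson and Schooler (1991) documented.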
