Random Replaying Consolidated Knowledge in the Continual Learning Model

Abstract

A continual learning (CL) model is designed to solve the catastrophic forgetting problem, in which new knowledge overwrites previous knowledge and degrades the performance of neural networks. The fundamental cause of this problem is that previous data are not available when training on new data in the CL setting. Memory-based CL methods address this by using a memory buffer to store a limited subset of previous data for replay, and most methods of this type adopt random storage and replay strategies. In the human brain, the hippocampus replays consolidated knowledge from the neocortex in a random manner, e.g., random dreaming. Inspired by this memory mechanism, we propose a memory-based method that replays more consolidated memory data while maintaining randomness. Our work highlights that random replay is important for CL models, which supports the effectiveness of random dreaming in the human brain.
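As an illustration only (the abstract does not specify the paper's exact storage and replay rules), the sketch below shows a common memory-buffer pattern that matches the described idea: reservoir sampling for random storage, and a random replay draw biased toward more consolidated examples. The class and the per-example `consolidation` scores are hypothetical, not the authors' implementation.

```python
import random


class ReplayBuffer:
    """Minimal sketch of a memory buffer for continual learning:
    reservoir sampling for storage, consolidation-weighted random replay."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []        # stored (input, label) pairs
        self.consolidation = []   # hypothetical per-example consolidation score
        self.num_seen = 0         # total examples observed in the stream

    def store(self, example, score=1.0):
        """Reservoir sampling keeps a uniformly random subset of the stream."""
        if len(self.examples) < self.capacity:
            self.examples.append(example)
            self.consolidation.append(score)
        else:
            j = random.randint(0, self.num_seen)
            if j < self.capacity:
                self.examples[j] = example
                self.consolidation[j] = score
        self.num_seen += 1

    def sample(self, batch_size):
        """Random replay biased toward more consolidated examples;
        sampling is with replacement and remains stochastic, so every
        stored example can still be drawn."""
        k = min(batch_size, len(self.examples))
        return random.choices(self.examples, weights=self.consolidation, k=k)
```

In this sketch, replay stays random (every buffered example has nonzero probability of being drawn) while the weights tilt the draw toward memories marked as more consolidated, mirroring the hippocampus-inspired motivation in the abstract.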
