pith. machine review for the scientific record.

arxiv: 1705.08690 · v3 · submitted 2017-05-24 · 💻 cs.AI · cs.CV · cs.LG

Recognition: unknown

Continual Learning with Deep Generative Replay

Authors on Pith: no claims yet
classification 💻 cs.AI · cs.CV · cs.LG
keywords generative · data · deep · model · tasks · learning · memory · previous
Original abstract

Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and, even worse, is often infeasible in real-world applications where access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in the primate brain, we propose Deep Generative Replay, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.
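The core mechanic the abstract describes — sampling pseudo-data from the old generator, labeling it with the old solver, and interleaving it with real data from the new task — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `old_generator` and `old_solver` callables and the `replay_ratio` parameter are hypothetical stand-ins for the trained models and mixing schedule.

```python
import random

def build_training_batch(new_examples, old_generator, old_solver,
                         batch_size=8, replay_ratio=0.5):
    """Mix real (input, target) pairs from the current task with
    generated replay pairs for previous tasks.

    old_generator: callable producing a synthetic input (hypothetical API).
    old_solver: callable mapping an input to a target (hypothetical API).
    replay_ratio: fraction of the batch drawn from generative replay.
    """
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay

    # Real data for the new task.
    batch = random.sample(new_examples, n_new)

    # Pseudo-data for previous tasks: the frozen generator proposes
    # inputs, and the previous solver supplies their targets.
    for _ in range(n_replay):
        x = old_generator()
        y = old_solver(x)
        batch.append((x, y))

    random.shuffle(batch)
    return batch
```

In the full framework both the generator and the solver are then updated on such mixed batches, so that after training they can themselves serve as the "old" pair for the next task in the sequence.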

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Attention to task structure for cognitive flexibility

    cs.NE · 2026-04 · unverdicted · novelty 5.0

    Task connectivity in graph-structured multi-task environments enhances generalization and stability, with stronger benefits for attention models than MLPs.