pith. machine review for the scientific record.

arxiv: 1708.01547 · v11 · submitted 2017-08-04 · 💻 cs.LG

Recognition: unknown

Lifelong Learning with Dynamically Expandable Networks

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: network, dynamically, learning, lifelong, tasks, batch, capacity, deep
Original abstract

We propose a novel deep network architecture for lifelong learning, which we refer to as the Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, learning a compact, overlapping knowledge-sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining; it dynamically expands network capacity upon the arrival of each task with only the necessary number of units, and it effectively prevents semantic drift by splitting/duplicating units and timestamping them. We validate DEN on multiple public datasets under lifelong learning scenarios, where it not only significantly outperforms existing lifelong learning methods for deep networks but also matches the performance of its batch-trained counterparts with substantially fewer parameters. Further, the obtained network, fine-tuned on all tasks, achieves significantly better performance than the batch models, which shows that it can also be used to estimate an optimal network structure even when all tasks are available from the start.
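The expansion-and-timestamping loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class name `DENSketch`, the loss threshold `tau`, and the expansion size `k_expand` are invented here; selective retraining is abstracted away as a loss value passed in by the caller.

```python
import numpy as np

class DENSketch:
    """Toy model of DEN-style capacity growth: expand only when the
    current capacity cannot fit the new task, and timestamp new units."""

    def __init__(self, in_dim, hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(scale=0.1, size=(in_dim, hidden))  # input -> hidden weights
        self.timestamp = np.zeros(hidden, dtype=int)  # task id that created each unit
        self.task = 0

    def capacity(self):
        return self.W.shape[1]

    def expand(self, k):
        """Add k new hidden units, stamped with the current task id."""
        in_dim = self.W.shape[0]
        new_cols = self.rng.normal(scale=0.1, size=(in_dim, k))
        self.W = np.concatenate([self.W, new_cols], axis=1)
        self.timestamp = np.concatenate([self.timestamp, np.full(k, self.task)])

    def fit_task(self, loss_after_selective_retraining, tau=0.5, k_expand=2):
        """If selective retraining leaves the loss above tau, grow the network."""
        self.task += 1
        if loss_after_selective_retraining > tau:
            self.expand(k_expand)

net = DENSketch(in_dim=4, hidden=3)
net.fit_task(loss_after_selective_retraining=0.9)  # hard task -> expand by 2 units
net.fit_task(loss_after_selective_retraining=0.1)  # easy task -> reuse existing capacity
print(net.capacity())          # 5
print(net.timestamp.tolist())  # [0, 0, 0, 1, 1]
```

The timestamps are what let the real method confine later tasks to units that existed when they were learned, which is how DEN limits semantic drift; the split/duplicate step is omitted here for brevity.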

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read papers and Pith reviews without signing in.

Forward citations

Cited by 8 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

    cs.AI 2023-06 conditional novelty 8.0

    LIBERO is a new benchmark for lifelong robot learning that evaluates transfer of declarative, procedural, and mixed knowledge across 130 manipulation tasks with provided demonstration data.

  2. Unlocking Patch-Level Features for CLIP-Based Class-Incremental Learning

    cs.CV 2026-05 unverdicted novelty 7.0

    SPA unlocks patch-level features in CLIP for class-incremental learning via semantic-guided selection and optimal transport alignment with class descriptions, plus projectors and pseudo-feature replay to reduce forgetting.

  3. Beyond Single-Model Optimization: Preserving Plasticity in Continual Reinforcement Learning

    cs.LG 2026-04 unverdicted novelty 7.0

    TeLAPA maintains archives of behaviorally diverse yet competent policies aligned in a shared latent space to preserve plasticity and enable faster recovery after interference in continual reinforcement learning.

  4. NORACL: Neurogenesis for Oracle-free Resource-Adaptive Continual Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    NORACL dynamically grows network capacity via neurogenesis-inspired signals to achieve oracle-level continual learning performance without pre-specifying architecture size.

  5. Leveraging Complementary Embeddings for Replay Selection in Continual Learning with Small Buffers

    cs.LG 2026-04 unverdicted novelty 6.0

    MERS improves replay buffer selection in continual learning by integrating supervised and self-supervised embeddings via a graph-based approach, outperforming single-embedding baselines on CIFAR-100 and TinyImageNet i...

  6. Continual Few-shot Adaptation for Synthetic Fingerprint Detection

    cs.CV 2026-03 unverdicted novelty 6.0

    A continual few-shot adaptation method combining binary cross-entropy and supervised contrastive losses with replay achieves a good trade-off between fast adaptation to unseen synthetic fingerprint styles and retentio...

  7. HEDP: A Hybrid Energy-Distance Prompt-based Framework for Domain Incremental Learning

    cs.AI 2026-05 unverdicted novelty 5.0

    HEDP uses energy regularization inspired by Helmholtz free energy plus hybrid energy-distance weighting in prompts to improve domain selection and achieve a 2.57% accuracy gain on benchmarks like CORe50 while mitigati...

  8. A Domain Incremental Continual Learning Benchmark for ICU Time Series Model Transportability

    cs.LG 2026-05 unverdicted novelty 5.0

    Proposes a domain incremental continual learning benchmark for ICU time series model transportability across US regions and evaluates data replay and EWC methods.