pith. machine review for the scientific record.

arxiv: 1701.08734 · v1 · submitted 2017-01-30 · 💻 cs.NE · cs.LG

Recognition: unknown

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

Authors on Pith: no claims yet
classification 💻 cs.NE cs.LG
keywords: network, neural, task, learning, algorithm, learned, pathnet, tasks
Original abstract

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
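To make the selection mechanism in the abstract concrete, here is a minimal sketch of the tournament-selection loop over pathways. It assumes a toy setting: numpy only, a stubbed fitness function in place of actually training the active parameters, and hypothetical sizes (L layers, M modules per layer, K active modules per path per layer, mutation rate MUT_P). It illustrates the binary-tournament replicate-and-mutate step the paper describes, not DeepMind's implementation.

```python
# Toy PathNet-style evolution of pathways: a genotype picks K of M
# modules in each of L layers; a binary tournament overwrites the
# loser with a mutated copy of the winner. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, M, K = 3, 10, 4                 # layers, modules/layer, active modules/layer
POP, GENS, MUT_P = 16, 200, 1.0 / (L * K)

def random_path():
    # Genotype: for each layer, indices of the K modules this path routes through.
    return np.stack([rng.choice(M, size=K, replace=False) for _ in range(L)])

def fitness(path):
    # Stand-in for "train only the active parameters briefly, then measure
    # performance"; here we simply reward overlap with a fixed 'good' module set.
    target = np.arange(K)
    return sum(len(np.intersect1d(path[l], target)) for l in range(L))

def mutate(path):
    # Independently perturb each gene with probability MUT_P by a small
    # local shift of the module index (duplicates tolerated in this toy).
    child = path.copy()
    for l in range(L):
        for k in range(K):
            if rng.random() < MUT_P:
                child[l, k] = (child[l, k] + rng.integers(-2, 3)) % M
    return child

population = [random_path() for _ in range(POP)]
for gen in range(GENS):
    # Binary tournament: the loser's genotype is replaced by a mutated
    # copy of the winner's; the winner's parameters would keep training.
    i, j = rng.choice(POP, size=2, replace=False)
    if fitness(population[i]) < fitness(population[j]):
        i, j = j, i
    population[j] = mutate(population[i])

best = max(population, key=fitness)
print("best path (module indices per layer):\n", best)
```

In the actual method, fitness is task performance (accuracy or reward) after backpropagating through only the path's parameters, and transfer to task B works by freezing the parameters along task A's winning path and re-evolving a fresh population of paths over the same network.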

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MILE: Mixture of Incremental LoRA Experts for Continual Semantic Segmentation across Domains and Modalities

    cs.CV 2026-05 unverdicted novelty 6.0

    MILE combines incremental LoRA experts with prototype-guided gating to support continual semantic segmentation across domains and modalities while adding only a small number of parameters per task.

  2. Learning Without Losing Identity: Capability Evolution for Embodied Agents

    cs.RO 2026-04 unverdicted novelty 6.0

    Embodied agents maintain a persistent identity while evolving capabilities via modular ECMs, raising simulated task success from 32.4% to 91.3% over 20 iterations with zero policy drift or safety violations.

  3. Evidence of an Emergent "Self" in Continual Robot Learning

    cs.RO 2026-03 unverdicted novelty 6.0

    Continual learning robots form a significantly more stable invariant subnetwork than constant-task controls, and preserving it improves adaptation while damaging it hurts performance.

  4. MoRe: Modular Representations for Principled Continual Representation Learning on Sequential Data

    cs.LG 2026-05 unverdicted novelty 5.0

    MoRe decomposes representations into identifiable hierarchical modules to enable principled continual adaptation on sequential data.

  5. Incremental learning for audio classification with Hebbian Deep Neural Networks

    eess.AS 2026-04 unverdicted novelty 5.0

    A kernel plasticity approach in Hebbian DNNs for incremental sound classification achieves 76.3% accuracy over five steps on ESC-50, outperforming the 68.7% baseline without plasticity.