pith. machine review for the scientific record.

arxiv: 1707.03141 · v3 · submitted 2017-07-11 · 💻 cs.AI · cs.LG · cs.NE · stat.ML


A Simple Neural Attentive Meta-Learner

Authors on Pith: no claims yet
classification: 💻 cs.AI · cs.LG · cs.NE · stat.ML
keywords: meta-learner, tasks, meta-learning, neural, simple, architectures, attentive, data
Original abstract

Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.
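The abstract names SNAIL's two building blocks: causal temporal convolutions to aggregate past experience, and causal soft attention to pinpoint specific pieces of it. A minimal NumPy sketch of both primitives is below; the kernel size of 2, single attention head, and all shapes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal 1-D convolution with kernel size 2: the output at time t
    mixes x[t] and x[t - dilation], zero-padded at the sequence start.
    x: (T, D) sequence; w: (2, D, D_out) kernel weights."""
    shifted = np.zeros_like(x)
    shifted[dilation:] = x[:-dilation]          # x[t - dilation], or 0
    return x @ w[1] + shifted @ w[0]

def causal_attention(x, wq, wk, wv):
    """Single-head soft attention with a causal mask, so each timestep
    attends only to itself and earlier positions."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / np.sqrt(k.shape[1])
    T = x.shape[0]
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -1e9                        # mask out future positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v
```

Because both operations are causal, changing a later timestep cannot alter any earlier output, which is what lets a SNAIL-style model consume an episode of (observation, label) pairs as an ordinary sequence; the full architecture interleaves stacks of these blocks, concatenating their outputs channel-wise.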

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Solving Rubik's Cube with a Robot Hand

    cs.LG 2019-10 accept novelty 7.0

    Reinforcement learning models trained only in simulation using automatic domain randomization solve Rubik's cube with a real robot hand.

  2. Where to Bind Matters: Hebbian Fast Weights in Vision Transformers for Few-Shot Character Recognition

    cs.NE 2026-04 unverdicted novelty 4.0

    Placing one Hebbian fast-weight module after the final stage of Swin-Tiny achieves 96.2% accuracy on 5-way 1-shot Omniglot classification, outperforming the non-Hebbian baseline by 0.3 points.