pith. machine review for the scientific record.

arxiv: 1901.05761 · v2 · submitted 2019-01-17 · 💻 cs.LG · stat.ML

Recognition: unknown

Attentive Neural Processes

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: context, observed, conditioned, data, distribution, distributions, functions, input
read the original abstract

Neural Processes (NPs) (Garnelo et al., 2018a,b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
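The core change the abstract describes is replacing the NP's mean-pooling aggregator with cross-attention, so each target input attends over the context inputs when pooling context representations. A minimal NumPy sketch of that aggregation step (illustrative only, not the authors' implementation; the embedding and representation dimensions are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x_target, x_context, r_context):
    """Scaled dot-product cross-attention: target inputs are queries,
    context inputs are keys, context representations are values.
    Returns one target-specific representation per query, instead of
    the single mean-pooled representation used by vanilla NPs."""
    d = x_target.shape[-1]
    scores = x_target @ x_context.T / np.sqrt(d)   # (n_target, n_context)
    weights = softmax(scores, axis=-1)             # rows sum to 1
    return weights @ r_context                     # (n_target, r_dim)

# toy usage: 5 context points with 8-dim representations, 3 target inputs
rng = np.random.default_rng(0)
x_c = rng.standard_normal((5, 1))
r_c = rng.standard_normal((5, 8))
x_t = rng.standard_normal((3, 1))
r_t = cross_attention(x_t, x_c, r_c)
print(r_t.shape)  # (3, 8)
```

Because the pooled representation now depends on the query location, predictions at the observed context inputs can reproduce those observations closely, which is how attention addresses the underfitting the abstract identifies.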

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Personalized Multi-Interest Modeling for Cross-Domain Recommendation to Cold-Start Users

    cs.IR · 2026-04 · unverdicted · novelty 7.0

    NF-NPCDR enhances neural processes with normalizing flows to model personalized multi-interest preferences and uses a preference pool plus adaptive decoder to improve cross-domain recommendations for cold-start users.

  2. Spectral Transformer Neural Processes

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    STNPs extend TNPs with a spectral aggregator that estimates context spectra, forms spectral mixtures, and injects task-adaptive frequency features to better handle periodicity.

  3. Learning to Theorize the World from Observation

    cs.LG · 2026-05 · unverdicted · novelty 6.0

    NEO induces compositional latent programs as world theories from observations and executes them to enable explanation-driven generalization.