pith. machine review for the scientific record.

arxiv: 1506.00019 · v4 · submitted 2015-05-29 · 💻 cs.LG · cs.NE

Recognition: unknown

A Critical Review of Recurrent Neural Networks for Sequence Learning

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.NE
keywords: networks, learning, neural, recurrent, sequences, tasks, architectures, captioning
Original abstract

Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a self-contained explication of the state of the art together with a historical perspective and references to primary research.
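To make the recurrence concrete, here is a minimal sketch (ours, not the paper's) of the vanilla RNN update the abstract alludes to: the same weights are applied at every time step, and the hidden state is fed back in, which is the "cycle" that lets the network carry information from an arbitrarily long context window. All names and dimensions below are illustrative assumptions.

import numpy as np

# One step of a vanilla RNN. The hidden state h is fed back into the
# network at every step -- the recurrent "cycle" the abstract describes.
def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Illustrative dimensions (not from the paper): 8-dim inputs, 16-dim state.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 5
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                   # initial state
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # same weights reused at every step
print(h.shape)                             # (16,)

An LSTM replaces this single tanh update with gated updates to a memory cell, which is what made large-scale training over long sequences practical in the systems the abstract cites.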

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. SELF-EMO: Emotional Self-Evolution from Recognition to Consistent Expression

    cs.AI 2026-04 unverdicted novelty 7.0

    SELF-EMO lets LLMs bootstrap better emotion recognition and expression via self-play, data flywheel filtering with smoothed IoU rewards, and SELF-GRPO reinforcement learning, yielding SOTA gains on IEMOCAP, MELD, and ...

  2. Geometry-Induced Long-Range Correlations in Recurrent Neural Network Quantum States

    quant-ph 2026-04 conditional novelty 7.0

    Dilated RNN wave functions induce power-law correlations for the critical 1D transverse-field Ising model and the Cluster state, unlike the exponential decay of conventional RNN ansätze.

  3. Selective Correlation Based Knowledge Distillation for Ground Reaction Force Estimation

    cs.CV 2026-04 unverdicted novelty 5.0

    Selective Correlation Based Knowledge Distillation trains smaller models to accurately estimate ground reaction forces from wearable insole sensors by focusing on temporal features in correlation maps for efficient kn...

  4. Learning Invariant Modality Representation for Robust Multimodal Learning from a Causal Inference Perspective

    cs.LG 2026-04 unverdicted novelty 5.0

    CmIR uses causal inference to separate invariant causal representations from spurious ones in multimodal data, improving generalization under distribution shifts and noise via invariance, mutual information, and recon...