pith. machine review for the scientific record.

arxiv: 1507.06527 · v4 · submitted 2015-07-23 · 💻 cs.LG

Recognition: unknown

Deep Recurrent Q-Learning for Partially Observable MDPs

Authors on Pith: no claims yet
classification 💻 cs.LG
keywords observations, deep, recurrent, drqn, game, performance, recurrency, when
Original abstract

Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.
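
The architectural change the abstract describes is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering, assuming the standard DQN convolutional stack over single 84x84 grayscale frames; the class name, hidden size, and the flicker probability in the partial-observability wrapper are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a DRQN-style network: DQN convolutions over one frame per
# timestep, with the first post-convolutional fully-connected layer
# replaced by an LSTM that integrates information through time.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, num_actions: int, hidden_size: int = 512):
        super().__init__()
        # DQN-style convolutions over a single 84x84 grayscale frame.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # LSTM in place of the first fully-connected layer (64*7*7 = 3136 inputs).
        self.lstm = nn.LSTM(input_size=64 * 7 * 7, hidden_size=hidden_size,
                            batch_first=True)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 1, 84, 84) -- one frame per timestep.
        b, t = frames.shape[:2]
        feats = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        feats = feats.reshape(b, t, -1)
        out, hidden = self.lstm(feats, hidden)  # integrate over time
        return self.q_head(out), hidden         # Q-values per timestep

# Flickering-screen partial observability (probability p is an assumption
# for illustration): each frame is shown intact with probability p,
# otherwise it is blanked out.
def flicker(frames: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    mask = (torch.rand(frames.shape[:2]) < p).float()
    return frames * mask.view(*frames.shape[:2], 1, 1, 1)

if __name__ == "__main__":
    net = DRQN(num_actions=18)
    obs = flicker(torch.rand(2, 10, 1, 84, 84))  # batch=2, seq_len=10
    q, _ = net(obs)
    print(q.shape)  # torch.Size([2, 10, 18])
```

Per the abstract's evaluation protocol, the same trained network could be fed sequences generated with different flicker probabilities to probe how performance scales with observability.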

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Contextual Control without Memory Growth in a Context-Switching Task

    cs.AI · 2026-04 · unverdicted · novelty 7.0

    Intervention on a fixed-size recurrent state enables contextual control in sequential decisions without memory growth or direct context input.

  2. Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions

    cs.LG · 2026-04 · unverdicted · novelty 6.0

    ARL lifts states into signature-augmented manifolds and employs self-consistent proxies of future path-laws to enable deterministic expected-return evaluation while preserving contraction mappings in jump-diffusion en...

  3. ALFWorld: Aligning Text and Embodied Environments for Interactive Learning

    cs.CL · 2020-10 · conditional · novelty 6.0

    ALFWorld aligns text-based and embodied visual environments so agents can learn abstract policies in TextWorld that transfer to ALFRED tasks, outperforming visual-only training.

  4. Belief-State RWKV for Reinforcement Learning under Partial Observability

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    Belief-state RWKV maintains an uncertainty-aware recurrent state for RL policies in partial observability and shows modest gains over standard recurrent baselines in a pilot with observation noise.

  5. Deep Learning for Sequential Decision Making under Uncertainty: Foundations, Frameworks, and Frontiers

    math.OC · 2026-04 · unverdicted · novelty 2.0

    A tutorial framing deep learning as a complement to optimization for sequential decision-making under uncertainty, with applications in supply chains, healthcare, and energy.