pith. machine review for the scientific record.

arxiv: 1810.12894 · v1 · submitted 2018-10-30 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Exploration by Random Network Distillation

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · stat.ML
keywords network · bonus · exploration · game · deep · distillation · first · introduce
Original abstract

We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.
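The bonus described in the abstract is compact enough to sketch in code. Below is a minimal, illustrative PyTorch sketch, not the paper's exact architecture: the helper `make_net`, the layer sizes, and the reward coefficients are assumptions chosen for brevity. A fixed, randomly initialized target network maps observations to features; a predictor network is trained to match those features; the per-observation prediction error serves as the intrinsic reward, which is large for unfamiliar observations and shrinks as the predictor catches up.

```python
# Minimal RND sketch (illustrative; not the paper's exact architecture).
import torch
import torch.nn as nn

def make_net(obs_dim: int, feat_dim: int) -> nn.Module:
    # Hypothetical small MLP; the paper uses convolutional networks on Atari frames.
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

obs_dim, feat_dim = 64, 32
target = make_net(obs_dim, feat_dim)      # fixed, randomly initialized; never trained
for p in target.parameters():
    p.requires_grad_(False)
predictor = make_net(obs_dim, feat_dim)   # trained to mimic the target's features
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs: torch.Tensor) -> torch.Tensor:
    # Bonus = error of the predictor on the fixed random features.
    # Novel observations are predicted poorly, so they earn a larger bonus.
    with torch.no_grad():
        target_feat = target(obs)
    return (predictor(obs) - target_feat).pow(2).mean(dim=-1)

def update_predictor(obs: torch.Tensor) -> None:
    # Training the predictor on visited observations drives their bonus toward zero.
    loss = intrinsic_reward(obs).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Combining reward streams: the weighted sum below is an assumed simplification;
# the coefficients are illustrative, not the paper's values.
ext_coef, int_coef = 2.0, 1.0
obs = torch.randn(8, obs_dim)    # a batch of hypothetical observations
r_ext = torch.zeros(8)           # extrinsic reward from the environment
r_total = ext_coef * r_ext + int_coef * intrinsic_reward(obs).detach()
update_predictor(obs)
```

The paper's "method to flexibly combine intrinsic and extrinsic rewards" is richer than this weighted sum (for example, the two streams can be valued separately); the sketch only illustrates why the prediction error works as a novelty signal.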

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 14 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Quality-Aware Exploration Budget Allocation for Cooperative Multi-Agent Reinforcement Learning

    cs.MA 2026-05 unverdicted novelty 7.0

    A quality-aware exploration method using return-conditioned sigmoid scheduling and per-agent RSQ metrics achieves top-tier returns on seven cooperative MARL benchmarks.

  2. Beyond Single-Model Optimization: Preserving Plasticity in Continual Reinforcement Learning

    cs.LG 2026-04 unverdicted novelty 7.0

    TeLAPA maintains archives of behaviorally diverse yet competent policies aligned in a shared latent space to preserve plasticity and enable faster recovery after interference in continual reinforcement learning.

  3. Dota 2 with Large Scale Deep Reinforcement Learning

    cs.LG 2019-12 accept novelty 7.0

    OpenAI Five achieved superhuman performance in Dota 2 by defeating the world champions using scaled self-play reinforcement learning.

  4. Solving Rubik's Cube with a Robot Hand

    cs.LG 2019-10 accept novelty 7.0

Reinforcement learning models trained only in simulation using automatic domain randomization solve a Rubik's Cube with a real robot hand.

  5. Learning What Matters: Adaptive Information-Theoretic Objectives for Robot Exploration

    cs.RO 2026-05 unverdicted novelty 6.0

    QOED selects identifiable parameter directions via Fisher matrix eigenspace analysis and modifies exploration objectives to approximate ideal information gain under bounded nuisance assumptions, yielding 21-35% perfor...

  6. Shaping Zero-Shot Coordination via State Blocking

    cs.LG 2026-05 unverdicted novelty 6.0

    SBC generates virtual environments via state blocking to expose agents to diverse suboptimal partner policies, yielding superior zero-shot coordination performance including with humans.

  7. Learning to Theorize the World from Observation

    cs.LG 2026-05 unverdicted novelty 6.0

    NEO induces compositional latent programs as world theories from observations and executes them to enable explanation-driven generalization.

  8. QHyer: Q-conditioned Hybrid Attention-mamba Transformer for Offline Goal-conditioned RL

    cs.LG 2026-05 unverdicted novelty 6.0

    QHyer achieves state-of-the-art results in offline goal-conditioned RL by replacing return-to-go with a state-conditioned Q-estimator and introducing a gated hybrid attention-mamba backbone for content-adaptive histor...

  9. QHyer: Q-conditioned Hybrid Attention-mamba Transformer for Offline Goal-conditioned RL

    cs.LG 2026-05 unverdicted novelty 6.0

    QHyer replaces return-to-go with a state-conditioned Q-estimator and adds a gated hybrid attention-mamba backbone to achieve state-of-the-art performance in offline goal-conditioned RL on both Markovian and non-Markov...

  10. Breaking the Computational Barrier: Provably Efficient Actor-Critic for Low-Rank MDPs

    cs.LG 2026-05 unverdicted novelty 6.0

    An actor-critic RL algorithm for low-rank MDPs achieves improved sample efficiency using solely a policy evaluation oracle.

  11. Distill-Belief: Closed-Loop Inverse Source Localization and Characterization in Physical Fields

    cs.AI 2026-04 unverdicted novelty 6.0

    Distill-Belief distills Bayesian information-gain signals from a particle-filter teacher into a compact student policy for fast closed-loop source localization and parameter estimation while avoiding reward hacking.

  12. Improving Zero-Shot Offline RL via Behavioral Task Sampling

    cs.AI 2026-04 unverdicted novelty 6.0

    Extracting task vectors from the offline dataset for policy training improves zero-shot offline RL performance by an average of 20% over random sampling baselines.

  13. Dual-Timescale Memory in a Spiking Neuron-Astrocyte Network for Efficient Navigation

    q-bio.QM 2026-04 unverdicted novelty 6.0

A neuron-astrocyte network with dual-timescale memory reduces median path lengths by up to sixfold in partially observable grid-world navigation tasks.

  14. Learning-Based Sparsification of Dynamic Graphs in Robotic Exploration Algorithms

    cs.RO 2026-04 unverdicted novelty 6.0

    A PPO-trained transformer policy sparsifies dynamic graphs during RRT frontier exploration, cutting size by up to 96% and yielding the most consistent exploration rates across environments.