pith. machine review for the scientific record.

arxiv: 1707.01495 · v3 · submitted 2017-07-05 · 💻 cs.LG · cs.AI · cs.NE · cs.RO

Recognition: unknown

Hindsight Experience Replay

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.NE · cs.RO
keywords: experience · hindsight · replay · rewards · task · binary · learning · sparse
Original abstract

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
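The core idea in the abstract — replaying each transition with goals the agent actually achieved later in the episode, so that sparse binary rewards become informative — can be sketched as a relabeling step on stored episodes. This is a minimal illustration, not the paper's implementation: the dictionary field names, the `reward_fn` signature, and the use of the "future" goal-sampling strategy are assumptions made for the example.

```python
import random

def her_relabel(episode, reward_fn, k=4, rng=random):
    """Augment an episode with hindsight goals (a sketch of the
    'future' strategy: sample achieved goals from later steps).

    episode: list of dicts with illustrative keys
             state, action, next_state, achieved_goal, goal.
    reward_fn(achieved_goal, goal): binary reward, e.g. 0 on
             success and -1 otherwise.
    k: number of relabeled copies stored per real transition.
    """
    out = []
    for t, tr in enumerate(episode):
        # Keep the original transition, scored against its original goal.
        out.append(dict(tr, reward=reward_fn(tr["achieved_goal"], tr["goal"])))
        # Add k copies whose goal is an achieved goal sampled from
        # this step or any later step of the same episode.
        future = episode[t:]
        for _ in range(k):
            new_goal = rng.choice(future)["achieved_goal"]
            out.append(dict(tr, goal=new_goal,
                            reward=reward_fn(tr["achieved_goal"], new_goal)))
    return out

# Toy 1-D example: success when within 0.1 of the goal.
reward = lambda ag, g: 0.0 if abs(ag - g) < 0.1 else -1.0
episode = [
    {"state": 0.0, "action": 1, "next_state": 0.3, "achieved_goal": 0.3, "goal": 1.0},
    {"state": 0.3, "action": 1, "next_state": 0.6, "achieved_goal": 0.6, "goal": 1.0},
]
buffer = her_relabel(episode, reward, k=2)
```

Every original transition here earns reward -1 (the arm never reached the goal), but some relabeled copies earn reward 0, which is exactly the learning signal HER injects; any off-policy algorithm can then train on `buffer` as usual.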

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Multi-scale Predictive Representations for Goal-conditioned Reinforcement Learning

    cs.LG 2026-05 unverdicted novelty 6.0

    Ms.PR applies multi-scale predictive supervision to enforce goal-directed alignment in latent spaces for offline GCRL, yielding improved representation quality and performance on vision and state-based tasks.

  2. Hindsight-Anchored Policy Optimization: Turning Failure into Feedback in Sparse Reward Settings

    cs.LG 2026-03 unverdicted novelty 6.0

    HAPO adds a hindsight-anchored SSI operator with Thompson gating to GRPO-style RLVR, achieving asymptotic consistency that recovers unbiased on-policy gradients as the policy improves.

  3. Gymnasium: A Standard Interface for Reinforcement Learning Environments

    cs.LG 2024-07 accept novelty 5.0

    Gymnasium establishes a standardized API for RL environments to improve interoperability, reproducibility, and ease of development in reinforcement learning.