Pith: machine review for the scientific record

arxiv: 1811.12530 · v1 · submitted 2018-11-29 · 💻 cs.LG · stat.ML

Recognition: unknown

Learning Finite State Representations of Recurrent Policy Networks

Authors on Pith: no claims yet
Classification: cs.LG, stat.ML
Keywords: finite representations, memory, policy, features, learning, networks, policies

Original abstract:

Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability.
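The core idea — inserting a bottleneck whose activations are quantized to a small discrete set — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the weight names (`W_enc`, `W_dec`) are hypothetical, training is omitted (the paper trains through the quantizer with a straight-through-style gradient), and only the forward pass of a 3-level quantized bottleneck over a continuous memory vector is shown.

```python
import numpy as np

def quantize_3level(x):
    # Snap tanh-range activations to the discrete set {-1, 0, 1}.
    # (Forward pass only; a straight-through gradient estimator
    # would be used during training, which is omitted here.)
    return np.round(np.clip(x, -1.0, 1.0))

def quantized_bottleneck(h, W_enc, W_dec):
    # Encode the continuous hidden state into a low-dimensional
    # bottleneck code, quantize it, then decode back to a
    # reconstructed hidden state the rest of the policy can use.
    b = np.tanh(h @ W_enc)       # continuous bottleneck code
    b_q = quantize_3level(b)     # discrete code in {-1, 0, 1}^k
    h_hat = np.tanh(b_q @ W_dec) # reconstructed hidden state
    return b_q, h_hat

# Toy usage with random weights: a 4-dim hidden state squeezed
# through a 3-unit quantized bottleneck.
rng = np.random.default_rng(0)
b_q, h_hat = quantized_bottleneck(
    np.ones(4), rng.normal(size=(4, 3)) * 0.5, rng.normal(size=(3, 4)) * 0.5
)
```

Because `b_q` ranges over at most 3^k distinct codes, enumerating the codes the trained policy actually visits yields the finite memory-state set the abstract refers to (e.g. the 3 memory states reported for Pong).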

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Emergent Neural Automaton Policies: Learning Symbolic Structure from Visuomotor Trajectories

    cs.RO · 2026-03 · unverdicted · novelty 6.0

    ENAP extracts an emergent Mealy automaton from visuomotor trajectories to act as a high-level planner for a low-level residual policy, yielding up to 27% higher success than end-to-end VLA policies in low-data regimes.