pith. machine review for the scientific record.

arxiv: 1702.08892 · v3 · submitted 2017-02-28 · 💻 cs.AI · cs.LG · stat.ML

Recognition: unknown

Bridging the Gap Between Value and Policy Based Reinforcement Learning

Authors on Pith: no claims yet
classification: 💻 cs.AI · cs.LG · stat.ML
keywords: policy · action · consistency · learning · softmax · value · actor-critic · along
0 comments
read the original abstract

We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both on- and off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.
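The soft consistency error the abstract describes can be sketched numerically. The snippet below is a minimal illustration (names like `path_consistency_error` are hypothetical, not from the paper's code): along a d-step sub-trajectory, the error combines the value estimates at the endpoints with the discounted, entropy-regularized rewards in between, and PCL drives the squared error toward zero.

```python
import numpy as np

def path_consistency_error(values, rewards, log_pis, gamma, tau):
    """Soft consistency error along a d-step sub-trajectory.

    values:  length-(d+1) array of state-value estimates V(s_i), ..., V(s_{i+d})
    rewards: length-d array of rewards r(s_j, a_j)
    log_pis: length-d array of log pi(a_j | s_j) under the current policy
    gamma:   discount factor
    tau:     entropy-regularization temperature
    """
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    # C = -V(s_i) + gamma^d * V(s_{i+d}) + sum_j gamma^j * (r_j - tau * log_pi_j)
    return (-values[0]
            + gamma ** d * values[-1]
            + np.sum(discounts * (np.asarray(rewards) - tau * np.asarray(log_pis))))

# PCL minimizes 0.5 * C**2 summed over sub-trajectories drawn from both
# on-policy and off-policy (replay-buffer) traces.
err = path_consistency_error(
    values=np.array([1.0, 2.0]),   # V(s_i), V(s_{i+1})
    rewards=[0.5],
    log_pis=[0.0],
    gamma=0.9,
    tau=1.0,
)
print(err)  # -1.0 + 0.9*2.0 + 0.5 = 1.3
```

For a deterministic (tau → 0) one-step case this reduces to the ordinary Bellman residual, which is one way to see the abstract's claim that PCL generalizes Q-learning-style updates.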

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Multi-Agent Decision-Focused Learning via Value-Aware Sequential Communication

    cs.LG 2026-04 unverdicted novelty 7.0

    SeqComm-DFL uses value-aware sequential messages with Stackelberg conditioning to achieve 4-6x higher rewards and over 13% better win rates in multi-agent tasks under partial observability.

  2. Multi-Agent Decision-Focused Learning via Value-Aware Sequential Communication

    cs.LG 2026-04 unverdicted novelty 6.0

    SeqComm-DFL generates value-aware sequential messages via Stackelberg conditioning and trains them end-to-end with decision-focused learning and QMIX to deliver four-to-six times higher rewards on healthcare and SMAC ...