1 Pith paper cites this work; polarity classification is still indexing.

Fields: cs.LG
Year: 2026
Verdicts: unverdicted (1 representative citing paper)
Reflective Prompted Policy Optimization: Trajectory-Grounded Revision and Salience Bias
Reflective Prompted Policy Optimization uses a Critic-LLM to inspect full trajectories and propose grounded revisions, yielding higher mean best rewards, faster near-optimal performance, and greater stability than scalar-reward baselines across ten environments.
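The loop the abstract describes can be sketched at a high level: run the prompted policy, let a critic inspect the full trajectory (not just a scalar reward), and apply its proposed revision. This is a minimal illustrative sketch only; the environment, the `run_episode`/`critic_revise` names, and the stubbed critic are all hypothetical stand-ins, not the paper's implementation.

```python
# Hedged sketch of a trajectory-grounded revision loop in the spirit of
# Reflective Prompted Policy Optimization. The toy environment and all
# function names here are hypothetical, not taken from the paper.
import random


def run_episode(prompt: str, seed: int) -> tuple[list, float]:
    """Toy environment: reward is high only when the prompt mentions 'explore'."""
    rng = random.Random(seed)
    trajectory = [(f"obs{t}", f"act{t}") for t in range(3)]
    reward = (1.0 if "explore" in prompt else 0.2) + rng.uniform(0.0, 0.1)
    return trajectory, reward


def critic_revise(prompt: str, trajectory: list, reward: float) -> str:
    """Stub Critic-LLM: inspects the full trajectory and proposes a grounded
    revision. A real critic would be an LLM call conditioned on the trajectory."""
    if reward < 0.5:
        return prompt + " Remember to explore before committing."
    return prompt


def rppo_loop(init_prompt: str, iters: int = 5) -> tuple[str, float]:
    """Track the best (prompt, reward) seen while the critic revises the prompt."""
    best_prompt, best_reward = init_prompt, float("-inf")
    prompt = init_prompt
    for i in range(iters):
        trajectory, reward = run_episode(prompt, seed=i)
        if reward > best_reward:
            best_prompt, best_reward = prompt, reward
        prompt = critic_revise(prompt, trajectory, reward)  # trajectory-grounded step
    return best_prompt, best_reward


best_prompt, best_reward = rppo_loop("Solve the task.")
print(best_reward > 0.9)  # the revised prompt reaches the high-reward regime
```

The contrast with scalar-reward baselines is that `critic_revise` sees the whole trajectory, so its revision can be grounded in what actually happened rather than in a single number.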