pith. machine review for the scientific record.

arxiv: 2605.14712 · v1 · submitted 2026-05-14 · 💻 cs.RO · cs.AI · cs.CL · cs.CV

Recognition: unknown

IntentVLA: Short-Horizon Intent Modeling for Aliased Robot Manipulation

Authors on Pith: no claims yet
classification 💻 cs.RO · cs.AI · cs.CL · cs.CV
keywords short-horizon · different · intentvla · across · aliasbench · chunk · data · intent
0 comments
abstract

Robot imitation data are often multimodal: similar visual-language observations may be followed by different action chunks because human demonstrators act with different short-horizon intents, task phases, or recent context. Existing frame-conditioned VLA policies infer each chunk from the current observation and instruction alone, so under partial observability they may resample different intents across adjacent replanning steps, leading to inter-chunk conflict and unstable execution. We introduce IntentVLA, a history-conditioned VLA framework that encodes recent visual observations into a compact short-horizon intent representation and uses it to condition chunk generation. We further introduce AliasBench, a 12-task ambiguity-aware benchmark on RoboTwin2 with matched training data and evaluation environments that isolate short-horizon observation aliasing. Across AliasBench, SimplerEnv, LIBERO, and RoboCasa, IntentVLA improves rollout stability and outperforms strong VLA baselines.
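The failure mode and fix described in the abstract can be illustrated with a toy replanning loop: a frame-conditioned policy sees the same aliased observation at two adjacent replanning steps and may flip intents, while conditioning on a short observation history pins the intent down. This is a minimal sketch of the idea only; every name here (`HistoryConditionedPolicy`, `infer_intent`, the observation strings) is an illustrative assumption, not the paper's actual architecture or API.

```python
from collections import deque

def infer_intent(history):
    # Toy disambiguation: the aliased observation "at_center" alone does not
    # determine the short-horizon intent, but the recent history does.
    if "saw_left_target" in history:
        return "reach_left"
    if "saw_right_target" in history:
        return "reach_right"
    return "reach_left"  # arbitrary tie-break under full aliasing

class HistoryConditionedPolicy:
    """Keeps a rolling window of recent observations and conditions each
    action chunk on the intent inferred from that window."""

    def __init__(self, horizon=3):
        self.history = deque(maxlen=horizon)

    def act(self, obs):
        self.history.append(obs)
        intent = infer_intent(self.history)
        # Emit a short action chunk tied to a single, stable intent.
        return [f"{intent}_step{i}" for i in range(2)]

policy = HistoryConditionedPolicy()
policy.act("saw_left_target")       # earlier, disambiguating frame
chunk1 = policy.act("at_center")    # replanning step 1 (aliased obs)
chunk2 = policy.act("at_center")    # replanning step 2 (same aliased obs)
# Both chunks carry the same intent because the history disambiguates;
# a frame-conditioned policy seeing only "at_center" could flip between
# "reach_left" and "reach_right" across these two steps.
```

The point of the sketch is the invariant, not the lookup: as long as the disambiguating frame stays inside the history window, adjacent chunks cannot disagree on intent.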

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.