RoboMamba: Efficient vision-language-action model for robotic reasoning and manipulation, 2024
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.CV (1) · Years: 2026 (1) · Verdicts: UNVERDICTED (1)

Representative citing paper
Gaze-Regularized Vision-Language-Action Models for Robotic Manipulation
Gaze regularization aligns VLA attention with human visual patterns via KL divergence on patch distributions, yielding 4-12% gains on manipulation benchmarks.
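The summary above describes the regularizer as a KL divergence between the model's attention over image patches and a human gaze distribution. A minimal sketch of such a loss is given below, assuming a PyTorch setup; the KL direction, the average-pooling of the gaze heatmap to patch resolution, and the weight `lambda_gaze` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a gaze-regularization KL loss (assumed PyTorch setup).
# The KL direction, gaze pooling, and `lambda_gaze` are illustrative choices,
# not taken from the paper.
import torch
import torch.nn.functional as F

def gaze_kl_loss(attn_logits: torch.Tensor,
                 gaze_heatmap: torch.Tensor,
                 patch: int = 14) -> torch.Tensor:
    """KL divergence between a human-gaze distribution and the model's
    attention distribution over image patches.

    attn_logits:  (B, N) unnormalized attention scores over N patches.
    gaze_heatmap: (B, H, W) gaze saliency map; H and W divisible by `patch`.
    """
    # Pool the pixel-level gaze map to one value per patch, then normalize
    # it into a probability distribution over the N patches.
    gaze_patches = F.avg_pool2d(gaze_heatmap.unsqueeze(1), kernel_size=patch)
    gaze_dist = gaze_patches.flatten(1)
    gaze_dist = gaze_dist / gaze_dist.sum(dim=1, keepdim=True).clamp_min(1e-8)

    # Model attention over the same patches, as log-probabilities.
    attn_log_dist = F.log_softmax(attn_logits, dim=1)

    # KL(gaze || attention): penalizes attention mass far from human fixations.
    return F.kl_div(attn_log_dist, gaze_dist, reduction="batchmean")

if __name__ == "__main__":
    attn_logits = torch.randn(2, 256)   # e.g. a 16x16 patch grid
    gaze = torch.rand(2, 224, 224)      # synthetic stand-in for a gaze heatmap
    lambda_gaze = 0.1                   # illustrative regularization weight
    task_loss = torch.tensor(0.0)       # placeholder for the VLA action loss
    total = task_loss + lambda_gaze * gaze_kl_loss(attn_logits, gaze, patch=14)
    print(float(total))
```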