pith. machine review for the scientific record.

arxiv: 2603.04676 · v2 · submitted 2026-03-04 · 💻 cs.CV · cs.AI


Decoding the Pulse of Reasoning VLMs in Multi-Image Understanding Tasks

keywords: attention, reasoning, multi-image, VLMs, focus, gating, image, images
abstract

Multi-image reasoning remains a significant challenge for vision-language models (VLMs). We investigate a previously overlooked phenomenon: during chain-of-thought (CoT) generation, the text-to-image (T2I) attention of reasoning VLMs exhibits diffuse "pulses": sporadic and unfocused attention patterns that fail to concentrate on task-relevant images. We further reveal a systematic positional bias in attention allocation across images. Motivated by these observations, we propose PulseFocus, a training-free, inference-time method that structures CoT reasoning into interleaved plan/focus blocks with soft attention gating. By forcing the model to explicitly plan which image to examine and then gating decode-time attention to the referenced image, PulseFocus sharpens attention focus and yields consistent improvements on multi-image benchmarks such as BLINK (+3.7%) and MuirBench (+1.07%).
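The abstract does not spell out PulseFocus's gating formula. The sketch below shows one plausible reading of "soft attention gating": down-weighting, in log-space, the pre-softmax attention scores of image tokens that do not belong to the image the plan block references. The function name, the `gate` parameter, and the log-space formulation are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def soft_gate_attention(attn_logits, token_image_ids, focus_image, gate=0.5):
    """Hypothetical decode-time soft attention gating for one query token.

    attn_logits:     (num_keys,) pre-softmax attention scores.
    token_image_ids: (num_keys,) image index per key token (-1 for text tokens).
    focus_image:     index of the image the plan block says to examine.
    gate:            factor in (0, 1] down-weighting non-focused image tokens.
    """
    logits = attn_logits.astype(float).copy()
    # Gate only image tokens that belong to a *different* image;
    # text tokens (-1) and focused-image tokens are left untouched.
    off_focus = (token_image_ids >= 0) & (token_image_ids != focus_image)
    logits[off_focus] += np.log(gate)  # soft gate applied in log-space
    # Renormalize with a numerically stable softmax.
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()

# Example: 4 key tokens — one text token, one token from image 0,
# two tokens from image 1; the plan says to focus on image 1.
w = soft_gate_attention(np.zeros(4), np.array([-1, 0, 1, 1]), focus_image=1)
```

With `gate=0.5`, the single image-0 token ends up with half the attention mass of each image-1 token, while text tokens are unaffected; `gate=1.0` recovers ordinary softmax attention, so the gating is soft rather than a hard mask.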

This paper has not been read by Pith yet.
