VLRS-Bench: A Vision-Language Reasoning Benchmark for Remote Sensing

VLRS-Bench is the first benchmark dedicated to complex vision-language reasoning in remote sensing, comprising 2,000 QA pairs across 14 tasks spanning cognition, decision, and prediction dimensions.

2 Pith papers cite this work.

Fields: cs.CV · 2026

Citing papers:
- Multi-encoder ConvNeXt Network with Smooth Attentional Feature Fusion for Multispectral Semantic Segmentation: MeCSAFNet reports mIoU gains of 4.8-19.6% over U-Net and SegFormer baselines on the FBP and Potsdam datasets by processing spectral channels separately and fusing features with CBAM attention.
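The fusion idea in the MeCSAFNet entry (separate per-spectral-channel encoders whose features are merged and re-weighted with CBAM-style attention) can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the function name `cbam`, the sum-based fusion, the random stand-in weights, and the feature shapes are all hypothetical, and the spatial-attention step substitutes a simple sum for the 7x7 convolution used in the original CBAM design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feat, w1, w2):
    """CBAM-style channel-then-spatial attention on feat of shape (C, H, W).

    w1 (r, C) and w2 (C, r) form the shared channel-attention MLP.
    """
    # Channel attention: shared MLP over global average- and max-pooled vectors.
    avg = feat.mean(axis=(1, 2))  # (C,)
    mx = feat.max(axis=(1, 2))    # (C,)
    ch_att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                     + w2 @ np.maximum(w1 @ mx, 0.0))  # (C,)
    feat = feat * ch_att[:, None, None]
    # Spatial attention: combine channel-wise average and max maps
    # (a simple sum stands in for CBAM's 7x7 convolution).
    sp_att = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * sp_att[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
# Stand-in features from two spectral-channel-group encoders.
f_rgb = rng.standard_normal((C, H, W))
f_nir = rng.standard_normal((C, H, W))
r = C // 2  # reduction ratio for the channel-attention MLP
w1 = rng.standard_normal((r, C))
w2 = rng.standard_normal((C, r))
fused = cbam(f_rgb + f_nir, w1, w2)  # fuse by sum, then re-weight with CBAM
print(fused.shape)
```

The sketch only shows the attention re-weighting; in a real multi-encoder segmentation network the fused features would feed a decoder that produces per-pixel class logits.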