Vision-based tactile sensing: From performance parameters to device design
1 Pith paper cites this work. Polarity classification is still indexing.
1 Pith paper citing it
Citation-role summary: background (1)
Citation-polarity summary: still indexing
Fields: cs.CV (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Roles: background (1)
Polarities: background (1)

Representative citing papers:
VitaTouch: Property-Aware Vision-Tactile-Language Model for Robotic Quality Inspection in Manufacturing
VitaTouch combines vision-tactile encoders with a dual Q-Former and contrastive alignment to an LLM, achieving 88.89% hardness and 75.13% roughness accuracy on a new 186-object dataset plus 94% success in robotic sorting trials.
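As a rough illustration of the "contrastive alignment" mentioned in the summary, the sketch below shows a generic CLIP-style symmetric InfoNCE loss between vision and tactile embeddings. It is not the VitaTouch implementation; the encoder dimensions, projection size, and temperature are assumptions chosen only for the example.

```python
# Minimal sketch of contrastive alignment between vision and tactile features.
# All dimensions and the temperature are illustrative assumptions, not values
# from the VitaTouch paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveAligner(nn.Module):
    def __init__(self, vision_dim=768, tactile_dim=512, embed_dim=256, temperature=0.07):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.vision_proj = nn.Linear(vision_dim, embed_dim)
        self.tactile_proj = nn.Linear(tactile_dim, embed_dim)
        # Learnable temperature, stored in log space for stability.
        self.log_temp = nn.Parameter(torch.tensor(float(temperature)).log())

    def forward(self, vision_feats, tactile_feats):
        # L2-normalize so the dot product is a cosine similarity.
        v = F.normalize(self.vision_proj(vision_feats), dim=-1)
        t = F.normalize(self.tactile_proj(tactile_feats), dim=-1)
        # Pairwise similarities scaled by temperature; matched pairs lie on the diagonal.
        logits = v @ t.T / self.log_temp.exp()
        targets = torch.arange(v.size(0), device=v.device)
        # Symmetric InfoNCE loss over both retrieval directions.
        loss_v2t = F.cross_entropy(logits, targets)
        loss_t2v = F.cross_entropy(logits.T, targets)
        return (loss_v2t + loss_t2v) / 2


if __name__ == "__main__":
    aligner = ContrastiveAligner()
    vision = torch.randn(8, 768)   # e.g. pooled visual features per object
    tactile = torch.randn(8, 512)  # e.g. pooled tactile-sensor features per object
    print(aligner(vision, tactile).item())
```

In this kind of setup, the aligned embeddings (rather than the raw sensor features) would then be handed to downstream components such as a Q-Former or an LLM; how VitaTouch actually wires this up is described in the paper itself.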