Emerging cross-lingual structure in pretrained language models
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.CL (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Representative citing papers:
- Beyond Language: Format-Agnostic Reasoning Subspaces in Large Language Models
LLMs contain a format-agnostic reasoning subspace (FARS) in middle layers that captures concept structure across symbolic forms while suppressing format information, with 10 dimensions preserving 90-96% of cross-form output.