In-context learning creates task vectors
1 Pith paper cites this work. Polarity classification is still indexing.

Fields: cs.CL (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)

Representative citing paper:
Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models
LLMs dynamically construct and causally rely on structured conceptual subspaces in middle-to-late layers for in-context inference.