LLMs dynamically construct and causally rely on structured conceptual subspaces in middle-to-late layers for in-context inference.
Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., and Zettlemoyer, L. Rethinking the role of demonstrations: What makes in-context learning work? In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
1 Pith paper cites this work. Polarity classification is still indexing.
fields: cs.CL (1)
years: 2026 (1)
verdicts: UNVERDICTED (1)
representative citing papers:
- Emergent Structured Representations Support Flexible In-Context Inference in Large Language Models