pith. machine review for the scientific record.

Can Aha Moments Be Fake? Identifying True and Decorative Thinking Steps in Chain-of-Thought

2 Pith papers cite this work. Polarity classification is still indexing.
abstract

Large language models can generate long chain-of-thought (CoT) reasoning, but it remains unclear whether the verbalized steps reflect the models' internal thinking. In this work, we propose a True Thinking Score (TTS) to quantify the causal contribution of each step in CoT to the model's final prediction. Our experiments show that LLMs often alternate between true-thinking steps (which are genuinely used to compute the final output) and decorative-thinking steps (which give the appearance of reasoning but have minimal causal influence). We reveal that only a small subset of the total reasoning steps causally drives the model's prediction: e.g., on AIME, only an average of 2.3% of reasoning steps in CoT have a TTS >= 0.7 (range: 0-1) for Qwen-2.5. Furthermore, we find that LLMs can be steered to internally follow or disregard specific steps in their verbalized CoT using the identified TrueThinking direction. We highlight that self-verification steps in CoT (i.e., aha moments) can be decorative, while steering along the TrueThinking direction can force internal reasoning over these steps. Overall, our work reveals that LLMs often verbalize reasoning steps without performing them internally, calling into question both the efficiency of LLM reasoning and the trustworthiness of CoT.
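To make the idea concrete, here is a minimal sketch of a TTS-style score. It assumes TTS is measured as the relative drop in the model's final-answer probability when a single CoT step is ablated; the paper's exact definition, step segmentation, prompt template, and the 0-1 normalization below are all illustrative assumptions, not the authors' method.

```python
# Hedged sketch: causal contribution of one CoT step via step ablation.
# All details (checkpoint, prompt format, normalization) are assumptions.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-7B-Instruct"  # assumed checkpoint; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()


@torch.no_grad()
def answer_logprob(question: str, steps: list[str], answer: str) -> float:
    """Log-probability of `answer` given the question and a (possibly ablated) CoT."""
    prompt = question + "\n" + "\n".join(steps) + "\nAnswer: "
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    answer_ids = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, answer_ids], dim=1)
    logprobs = model(ids).logits.log_softmax(-1)
    # The answer token at position p is predicted by the logits at position p - 1.
    preds = logprobs[0, prompt_ids.shape[1] - 1 : -1]
    return preds.gather(-1, answer_ids[0].unsqueeze(-1)).sum().item()


def true_thinking_score(question: str, steps: list[str], answer: str, i: int) -> float:
    """Relative answer-probability drop from ablating step i, clipped to [0, 1].
    This normalization is an assumption chosen to match the 0-1 range in the abstract."""
    full = answer_logprob(question, steps, answer)
    ablated = answer_logprob(question, steps[:i] + steps[i + 1 :], answer)
    return min(1.0, max(0.0, 1.0 - math.exp(ablated - full)))
```

Under this proxy, a step whose removal barely moves the answer probability scores near 0 and would count as decorative, matching the abstract's observation that only a small fraction of steps clear a 0.7 threshold.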

fields: cs.CL (2)

years: 2026 (2)

verdicts: unverdicted (2)

representative citing papers

When Chain-of-Thought Fails, the Solution Hides in the Hidden States

cs.CL · 2026-04-25 · unverdicted · novelty 7.0

Activation patching shows individual CoT tokens encode sufficient task-relevant information to recover correct answers on GSM8K, often outperforming both direct prompting and the original (sometimes incorrect) CoT trace.
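A rough activation-patching sketch in the spirit of that finding: cache the residual-stream state at one CoT token position from a run that includes the CoT, splice it into a direct-prompting run at the same position, and read off the next-token logits. The layer index, position alignment, and readout are illustrative assumptions, not the cited paper's setup; `model` is reused from the sketch above and assumed to expose Llama/Qwen-style `model.model.layers`.

```python
# Hedged sketch: patch one CoT token's hidden state into a direct-prompting run.
import torch

LAYER = 12  # assumed mid-depth layer


@torch.no_grad()
def patch_cot_token(cot_ids: torch.Tensor, direct_ids: torch.Tensor, pos: int):
    """Next-token logits for `direct_ids` with the hidden state at `pos` replaced
    by the one from the CoT run (prompts assumed token-aligned up to `pos`)."""
    cache = {}

    def save(_module, _inputs, out):  # record the CoT-run activation
        cache["h"] = out[0][:, pos].clone()

    def patch(_module, _inputs, out):  # overwrite it in the direct run
        out[0][:, pos] = cache["h"]
        return out

    handle = model.model.layers[LAYER].register_forward_hook(save)
    model(cot_ids)
    handle.remove()

    handle = model.model.layers[LAYER].register_forward_hook(patch)
    logits = model(direct_ids).logits[0, -1]
    handle.remove()
    return logits
```

If the patched run's top logit matches the correct answer while the unpatched direct run's does not, the patched token carried task-relevant information, which is the kind of evidence the citing paper reports on GSM8K.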
