
arxiv: 2509.25835 · v4 · submitted 2025-09-30 · 💻 cs.AI


Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search

classification 💻 cs.AI
keywords: tree, BN-DP, reasoning, search, BN-SC, chain-in-tree, inference, LITS
Abstract

Test-time scaling improves large language models (LLMs) on long-horizon reasoning tasks by allocating more compute at inference. LLM inference via tree search (LITS) achieves strong performance but is highly inefficient. We propose Chain-in-Tree (CiT), a plug-in framework that decides when to branch during search instead of expanding at every step. CiT introduces lightweight Branching Necessity (BN) evaluations, including BN-DP (direct prompting) and BN-SC (self-consistency). Integrated into Tree of Thoughts, ReST-MCTS, and RAP, BN-DP reduces token generation, model calls, and runtime by 75-85% on GSM8K and Math500, often with negligible or no accuracy loss. BN-SC typically yields substantial savings (up to 80%) but shows instability in 1-4 out of 14 settings, caused by a small subset of examples that produce extremely long reasoning steps. We theoretically prove that BN-DP never increases policy invocations and release unified implementations applicable across LITS frameworks. The full codebase is publicly available at https://github.com/xinzhel/chain_in_tree.
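The mechanism the abstract describes (branch only when a Branching Necessity check fires, otherwise continue a single chain) can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the LLM protocol, the names bn_dp, bn_sc, and cit_search, the prompt wording, and the thresholds (k=5 samples, 0.6 agreement, width 3) are all hypothetical stand-ins; the real code is in the linked repository.

```python
# Minimal sketch of the Chain-in-Tree idea from the abstract: expand
# several children only when a lightweight Branching Necessity (BN)
# check says the next step is uncertain; otherwise extend one chain.
# Every name here (LLM, generate, bn_dp, bn_sc, cit_search, prompts,
# thresholds) is a hypothetical stand-in, not the authors' API.
from collections import Counter
from typing import Protocol


class LLM(Protocol):
    def generate(self, prompt: str, max_tokens: int = 256,
                 temperature: float = 0.0) -> str: ...


def bn_dp(llm: LLM, partial_solution: str) -> bool:
    """BN-DP: one direct-prompting call asking whether the next step
    is uncertain enough to warrant exploring alternatives."""
    prompt = ("Is the next step of this partial solution uncertain enough "
              "to require exploring multiple alternatives? Answer yes or no.\n\n"
              + partial_solution)
    return llm.generate(prompt, max_tokens=4).strip().lower().startswith("yes")


def bn_sc(llm: LLM, partial_solution: str, k: int = 5) -> bool:
    """BN-SC: sample k candidate next steps and branch only when they
    disagree (low self-consistency). k and the 0.6 agreement threshold
    are illustrative choices, not values from the paper."""
    steps = [llm.generate(partial_solution, temperature=0.8) for _ in range(k)]
    agreement = Counter(steps).most_common(1)[0][1] / k
    return agreement < 0.6


def cit_search(llm: LLM, root: str, max_depth: int, width: int = 3) -> list[str]:
    """Breadth-first LITS loop with the CiT plug-in: each node spawns
    `width` children only when the BN check fires, else one child."""
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            n_children = width if bn_dp(llm, node) else 1
            for _ in range(n_children):
                step = llm.generate(node, temperature=0.7)
                next_frontier.append(node + "\n" + step)
        frontier = next_frontier
    return frontier
```

A plausible reading of the efficiency results: BN-DP adds only one short judgment call per node, consistent with the abstract's claim that it never increases policy invocations, while BN-SC's cost scales with the sampled steps, which matches the reported instability on examples that produce extremely long reasoning steps.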



Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Your Model Diversity, Not Method, Determines Reasoning Strategy

cs.AI · 2026-04 · unverdicted · novelty 5.0

    The optimal reasoning strategy for LLMs depends on the model's diversity profile rather than the exploration method itself.