pith. machine review for the scientific record.

arxiv: 2605.02241 · v3 · submitted 2026-05-04 · 💻 cs.AI · cs.CL · cs.ET

Recognition: unknown

Zero-Shot Confidence Estimation for Small LLMs: When Supervised Baselines Aren't Worth Training

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 16:29 UTC · model grok-4.3

classification 💻 cs.AI · cs.CL · cs.ET
keywords zero-shot confidence · small LLMs · token log-probability · model routing · self-assessment · out-of-distribution performance · AUROC evaluation · retrieval augmentation

The pith

Small LLMs can estimate their own correctness with a zero-shot signal that needs no training data and beats supervised baselines out of distribution.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that average token log-probability provides a reliable zero-shot measure of whether a small language model's output is correct. This signal matches supervised methods trained on labeled examples when queries resemble the training distribution and outperforms them when queries differ. The result matters for local-to-cloud routing systems because it removes the need to collect and label thousands of examples for each model before deployment. The authors also show that a retrieval-conditional self-assessment variant can improve basic self-assessment while running at much lower latency than full generation.

Core claim

Average token log-probability, computed directly from a model's generation, serves as an effective zero-shot confidence estimator for small LLMs. Across three 7-8B model families and two datasets, it achieves AUROC values of 0.650-0.714 in-distribution, comparable to supervised baselines at 0.644-0.676, and 0.717-0.833 out-of-distribution, where supervised methods drop to 0.512-0.564. The paper further introduces retrieval-conditional self-assessment, which selectively adds retrieved knowledge before generation when similarity is high and improves AUROC by up to 0.069 at 3-10x lower latency than log-probability calculation. A supervised baseline trained on 1,000 labeled examples never exceeds the zero-shot signal.

What carries the argument

Average token log-probability as a direct measure of generation uncertainty that reflects the model's internal state rather than query distribution.
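The abstract pins this signal down well enough to sketch. Below is a minimal illustration of the computation using Hugging Face transformers; the model name and decoding settings are illustrative assumptions, not the paper's reported configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative 7B model, not necessarily one the paper used
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")

def avg_token_logprob(prompt: str, max_new_tokens: int = 128) -> float:
    """Average log-probability of the generated tokens (higher = more confident)."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,                 # greedy decoding for a deterministic score
        output_scores=True,
        return_dict_in_generate=True,
    )
    # Log-probability of each emitted token under the model's own distribution.
    logprobs = model.compute_transition_scores(
        out.sequences, out.scores, normalize_logits=True
    )[0]
    # Length-normalized sum; whether the EOS token is counted is exactly the
    # detail the referee asks the authors to pin down (minor comment 1).
    return logprobs.sum().item() / logprobs.shape[-1]
```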

If this is right

  • Local-to-cloud query routing becomes feasible for new models without any supervised training step.
  • Zero-shot signals maintain accuracy when query topics shift, unlike methods that overfit to training distributions.
  • A supervised baseline trained on 1,000 examples never surpasses the zero-shot log-probability signal.
  • Retrieval-conditional self-assessment offers a practical low-latency alternative that can be applied before generation (a hedged reconstruction is sketched after this list).
  • The performance gap widens in out-of-distribution settings, favoring zero-shot approaches for open-ended use.
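This review only sees the abstract's one-line description of retrieval-conditional self-assessment, so the following is a reconstruction rather than the authors' method: retrieve, inject context only when retrieval similarity clears a threshold, then ask the model to judge answerability before generating. The retriever interface, threshold, and prompt wording are all assumptions.

```python
SIM_THRESHOLD = 0.75  # assumed cutoff; the paper's value is not given in this review

def retrieval_conditional_self_assessment(p_yes, retriever, query: str) -> float:
    """Pre-generation confidence: P('yes') to 'can you answer this?'."""
    doc, sim = retriever.top1(query)  # hypothetical retriever returning (text, similarity)
    if sim >= SIM_THRESHOLD:
        # High similarity: inject the retrieved knowledge before self-assessment.
        prompt = (f"Context: {doc}\n\nQuestion: {query}\n"
                  "Can you answer this question correctly? Answer yes or no:")
    else:
        # Low similarity: fall back to bare self-assessment.
        prompt = (f"Question: {query}\n"
                  "Can you answer this question correctly? Answer yes or no:")
    # p_yes: a callable scoring P("yes") from one forward pass over the prompt.
    # No answer is generated, which is where the 3-10x latency advantage over
    # full-generation log-probability would come from.
    return p_yes(prompt)
```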

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Routing systems could be deployed across many small models simultaneously without maintaining separate labeled datasets for each.
  • Combining log-probability with retrieval-conditional self-assessment might produce even stronger pre-generation filters at acceptable speed.
  • The approach invites direct tests on whether AUROC improvements translate to measurable dollar savings in production inference budgets.
  • If the pattern holds on larger query volumes, it reduces the barrier to using multiple specialized small models instead of a single large one.

Load-bearing premise

The two chosen evaluation datasets and their out-of-distribution splits are representative enough of real user queries that the measured AUROC gaps will translate to actual routing cost savings.

What would settle it

Running a live routing experiment on a stream of diverse real-user queries and checking whether zero-shot log-probability routing yields lower total inference cost than supervised routing or no routing.
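A skeleton of that settling experiment, under assumed relative costs and a placeholder threshold; `answer_with_logprob`, `answer`, and the correctness check are hypothetical interfaces, not the paper's released code.

```python
LOCAL_COST, CLOUD_COST = 1.0, 10.0  # assumed relative per-query costs
THRESHOLD = -0.9                    # avg token log-prob cutoff; would need tuning

def run_routing_stream(queries, local_model, cloud_model):
    """Tally total cost and accuracy for zero-shot log-prob routing."""
    total_cost, n_correct = 0.0, 0
    for q in queries:
        answer, conf = local_model.answer_with_logprob(q.text)  # hypothetical API
        total_cost += LOCAL_COST                 # local generation always runs first
        if conf < THRESHOLD:                     # low confidence: escalate
            answer = cloud_model.answer(q.text)  # hypothetical API
            total_cost += CLOUD_COST
        n_correct += int(q.check(answer))        # needs ground truth or a judge
    return total_cost, n_correct / len(queries)

# Compare against the all-local and all-cloud policies on the same stream to
# see whether routing actually lowers cost at comparable accuracy.
```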

Figures

Figures reproduced from arXiv: 2605.02241 by Luong N. Nguyen.

Figure 2: RouteLLM learning curve (solid) vs. log-prob zero-shot AUROC. (view at source ↗)
Original abstract

How reliably can a small language model estimate its own correctness? The answer determines whether local-to-cloud routing (escalating queries a cheap local model cannot handle) can work without supervised training data. As inference costs dominate large language model (LLM) deployment budgets, routing most queries to a cheap local model while reserving expensive cloud calls for hard cases is an increasingly common cost-control strategy. We compare zero-shot confidence signals against RouteLLM-style supervised baselines across three 7-8B model families and two datasets (1,000 and 500 queries per model, respectively). Average token log-probability, which requires no training data, matches or exceeds supervised baselines in-distribution (Area Under the Receiver Operating Characteristic curve (AUROC) 0.650-0.714 vs. 0.644-0.676) and substantially outperforms them out-of-distribution (0.717-0.833 vs. 0.512-0.564), because it measures a property of the model's generation rather than the query distribution. This paper further proposes retrieval-conditional self-assessment, a pre-generation signal that selectively injects retrieved knowledge when similarity is high, improving over bare self-assessment by up to +0.069 AUROC at 3-10x lower latency than log-probability. A supervised baseline trained on 1,000 labeled examples never exceeds the zero-shot signal. We release all code, data, and experiment logs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript evaluates zero-shot confidence estimation for small (7-8B) LLMs on two datasets (1,000 and 500 queries), comparing average token log-probability and a proposed retrieval-conditional self-assessment against RouteLLM-style supervised baselines. It reports that the zero-shot log-probability signal matches supervised AUROC in-distribution (0.650-0.714 vs. 0.644-0.676) and substantially outperforms it out-of-distribution (0.717-0.833 vs. 0.512-0.564), concluding that supervised training on labeled data is not worth the effort for local-to-cloud routing applications. The work releases all code, data, and experiment logs.

Significance. If the reported AUROC results hold under broader validation, the findings have clear practical significance for efficient LLM deployment: they indicate that simple zero-shot signals derived from the model's own generation can provide reliable confidence estimates without collecting supervision, especially in out-of-distribution settings where supervised models degrade. This could lower barriers to implementing cost-saving routing systems. The explicit release of code, data, and logs is a strength that supports reproducibility and enables independent verification or extension.

major comments (2)
  1. [Abstract and evaluation sections] The headline claim that supervised baselines 'aren't worth training' for routing is not fully supported by the presented evidence. While AUROC values are reported, the manuscript contains no end-to-end routing simulation, threshold selection analysis, or cost modeling that would demonstrate actual reductions in inference cost or escalation rates at fixed accuracy. AUROC quantifies ranking but does not specify operating points, the fraction of queries sent to cloud, or sensitivity to cost ratios between local and cloud inference.
  2. [Datasets and OOD splits] The evaluation relies on modest dataset sizes (1,000 and 500 queries) with specific in- vs. out-of-distribution partitions. It is unclear whether these splits are representative of real user query streams or whether the observed OOD gains would persist under different distribution shifts; additional sensitivity checks or larger-scale OOD tests would be needed to support the generalization argument.
minor comments (2)
  1. [Methods] Clarify the precise computation of average token log-probability (e.g., whether normalized by length, handling of EOS tokens) in the methods section to aid replication.
  2. [Results] Include error bars or statistical tests on the AUROC differences in tables or figures to indicate whether the reported gaps are significant (one possible test is sketched after this list).
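One way to act on the second minor comment, assuming per-query labels and confidence scores are available as arrays: a paired bootstrap over queries that puts a confidence interval on the AUROC gap between the zero-shot and supervised signals.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_gap_ci(labels, zs_scores, sup_scores, n_boot=10_000, seed=0):
    """95% bootstrap CI for AUROC(zero-shot) - AUROC(supervised)."""
    labels, zs_scores, sup_scores = map(np.asarray, (labels, zs_scores, sup_scores))
    rng = np.random.default_rng(seed)
    n, gaps = len(labels), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample queries with replacement
        if np.unique(labels[idx]).size < 2:  # AUROC needs both classes present
            continue
        gaps.append(roc_auc_score(labels[idx], zs_scores[idx])
                    - roc_auc_score(labels[idx], sup_scores[idx]))
    return np.percentile(gaps, [2.5, 97.5])  # gap is significant if CI excludes 0
```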

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment point by point below, indicating where revisions have been made to the manuscript.

Point-by-point responses
  1. Referee: [Abstract and evaluation sections] The headline claim that supervised baselines 'aren't worth training' for routing is not fully supported by the presented evidence. While AUROC values are reported, the manuscript contains no end-to-end routing simulation, threshold selection analysis, or cost modeling that would demonstrate actual reductions in inference cost or escalation rates at fixed accuracy. AUROC quantifies ranking but does not specify operating points, the fraction of queries sent to cloud, or sensitivity to cost ratios between local and cloud inference.

    Authors: We agree that an explicit end-to-end routing simulation with cost modeling would provide stronger support for the practical claim. AUROC is the established metric for confidence estimation quality because it evaluates ranking of query difficulty, which is the core requirement for any threshold-based routing policy. In the revised version we add a dedicated subsection that derives operating-point estimates from the reported AUROC curves, showing the implied fraction of queries escalated to the cloud and the resulting cost-accuracy trade-offs under a range of local-to-cloud cost ratios. We have also moderated the abstract language from 'aren't worth training' to 'may not be worth the effort' to align more precisely with the evidence presented. revision: partial

  2. Referee: [Datasets and OOD splits] The evaluation relies on modest dataset sizes (1,000 and 500 queries) with specific in- vs. out-of-distribution partitions. It is unclear whether these splits are representative of real user query streams or whether the observed OOD gains would persist under different distribution shifts; additional sensitivity checks or larger-scale OOD tests would be needed to support the generalization argument.

    Authors: The chosen sizes enable complete release of every query, label, and log for full reproducibility while remaining large enough for stable AUROC estimation. The OOD partitions are constructed from clearly distinct domains, and the performance gap is consistent across two independent datasets and three model families. We have added an explicit limitations paragraph that acknowledges the scope of the tested shifts and calls for future validation on larger, more varied query streams. Because all code and data are publicly released, readers can readily perform additional sensitivity checks or scale the experiments. revision: yes
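The operating-point analysis promised in response 1 could look like the following sketch: sweep thresholds on the confidence score, and report the escalation fraction, system accuracy, and expected cost under a range of local-to-cloud cost ratios. The cloud-accuracy constant and the cost model are assumptions, not figures from the paper.

```python
import numpy as np

def operating_points(scores, local_ok, cloud_acc=0.95, cost_ratios=(2, 5, 10)):
    """Escalation fraction, accuracy, and expected cost per query at each threshold."""
    scores, local_ok = np.asarray(scores), np.asarray(local_ok, dtype=float)
    rows = []
    for thr in np.quantile(scores, np.linspace(0.05, 0.95, 19)):
        escalate = scores < thr                  # low confidence goes to cloud
        frac = escalate.mean()
        acc = np.where(escalate, cloud_acc, local_ok).mean()
        for r in cost_ratios:                    # cloud costs r x local
            # Local generation already ran to produce the log-prob score, so
            # escalated queries pay both: 1 (local) + r (cloud).
            cost = 1.0 + frac * r
            rows.append((thr, frac, acc, r, cost))
    return rows
```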

Circularity Check

0 steps flagged

No circularity: empirical AUROC measurements on held-out labels

full rationale

The paper is an empirical comparison of zero-shot signals (average token log-probability, retrieval-conditional self-assessment) versus supervised baselines across fixed datasets of 500-1000 queries. Reported AUROCs are computed from model generations and external correctness labels; no quantity is defined in terms of itself, no fitted parameter is relabeled as a prediction, and no load-bearing premise reduces to a self-citation or ansatz. The central results are direct measurements against independent ground truth and remain self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper rests on standard machine-learning evaluation assumptions rather than new postulates; no free parameters or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption AUROC is an appropriate scalar summary for comparing confidence estimation methods intended for routing decisions
    Used throughout the reported comparisons.

pith-pipeline@v0.9.0 · 5561 in / 1287 out tokens · 40581 ms · 2026-05-09T16:29:32.504233+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

27 extracted references · 20 canonical work pages · 5 internal anchors

  1. [1]

    RouteLLM: Learning to Route LLMs with Preference Data

    I. Ong, A. Almahairi, V. Wu, W.-L. Chiang, T. Wu, J. E. Gonzalez, M. W. Kadous, and I. Stoica, “RouteLLM: Learning to route LLMs with preference data,” arXiv preprint arXiv:2406.18665, 2024.

  2. [2]

    Adaptive LLM Routing under Budget Constraints

    P. Panda et al., “Adaptive LLM routing under budget constraints,” in Findings of EMNLP, 2025. arXiv:2508.21141.

  3. [3]

    BaRP: Bandit-Feedback Routing with Preferences for Multi-LLM Inference

    W. Wang, T. Yang, H. Chen, Y. Zhao, F. Dernoncourt, R. A. Rossi, and H. Eldardiry, “Learning to route LLMs from bandit feedback,” arXiv preprint arXiv:2510.07429, 2025.

  4. [4]

    LLM Routing with Dueling Feedback

    C.-K. Chiang, T. Ishida, and M. Sugiyama, “LLM routing with dueling feedback,” arXiv preprint arXiv:2510.00841, 2025.

  5. [5]

    ParetoBandit: Budget-Paced Adaptive Routing for Non-Stationary LLM Serving

    A. Taberner-Miller et al., “ParetoBandit: Budget-paced adaptive routing for non-stationary LLM serving,” arXiv preprint arXiv:2604.00136, 2026.

  6. [6]

    Doing More with Less – Implementing Routing Strategies in Large Language Model-Based Systems: An Extended Survey

    R. Gomes et al., “Doing more with less: Implementing routing strategies in large language model-based systems,” arXiv preprint arXiv:2502.00409, 2025.

  7. [7]

    Language Models (Mostly) Know What They Know

    S. Kadavath, T. Conerly, A. Askell, T. Henighan, D. Drain, E. Perez, N. Schiefer, Z. Hatfield-Dodds, N. DasSarma, E. Tran-Johnson et al., “Language models (mostly) know what they know,” arXiv preprint arXiv:2207.05221, 2022.

  8. [8]

    On Calibration of Modern Neural Networks

    C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of modern neural networks,” in International Conference on Machine Learning (ICML), 2017.

  9. [9]

    Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models

    H. Huang et al., “Log probabilities are a reliable estimate of semantic plausibility in base and instruction-tuned language models,” arXiv preprint arXiv:2403.14859, 2024.

  10. [10]

    Self-Evaluation Improves Selective Generation in Large Language Models

    A. Ren et al., “Self-evaluation improves selective generation in large language models,” arXiv preprint arXiv:2312.09300, 2023.

  11. [11]

    When Can We Trust LLM Graders? Calibrating Confidence for Automated Assessment

    Z. Chen et al., “When can we trust LLM graders? Calibrating confidence for automated assessment,” arXiv preprint arXiv:2603.29559, 2024.

  12. [12]

    UQLM: A Python Package for Uncertainty Quantification in Large Language Models

    D. Bouchard, M. S. Chauhan, D. Skarbrevik, H.-K. Ra, V. Bajaj, and Z. Ahmad, “UQLM: A Python package for uncertainty quantification in large language models,” Journal of Machine Learning Research, vol. 27, no. 13, pp. 1–10, 2026.

  13. [13]

    Uncertainty Quantification for Language Models: A Suite of Black-Box, White-Box, LLM Judge, and Ensemble Scorers

    D. Bouchard and M. S. Chauhan, “Uncertainty quantification for language models: A suite of black-box, white-box, LLM judge, and ensemble scorers,” Transactions on Machine Learning Research, 2025.

  14. [14]

    Muse: Multi-Signal Uncertainty Estimation for LLMs

    J. Geng et al., “Muse: Multi-signal uncertainty estimation for LLMs,” arXiv preprint arXiv:2507.07236, 2025.

  15. [15]

    Confidence-Aware Routing for Large Language Model Reliability Enhancement

    N. Mukkunnoth et al., “Confidence-aware routing for large language model reliability enhancement,” arXiv preprint arXiv:2510.01237, 2025.

  16. [16]

    Self-Consistency Improves Chain of Thought Reasoning in Language Models

    X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou, “Self-consistency improves chain of thought reasoning in language models,” arXiv preprint arXiv:2203.11171, 2022.

  17. [17]

    Confident or Seek Stronger: Exploring Uncertainty-Based On-Device LLM Routing

    Y.-N. Chuang et al., “Confident or seek stronger: Exploring uncertainty-based on-device LLM routing from benchmarking to generalization,” arXiv preprint arXiv:2502.04428, 2025.

  18. [18]

    Leveraging Uncertainty Estimation for Efficient LLM Routing

    T. Zhang et al., “Leveraging uncertainty estimation for efficient LLM routing,” arXiv preprint arXiv:2502.11021, 2025.

  19. [19]

    LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations

    W. Lugoloobi, T. Foster, W. Bankes, and C. Russell, “LLMs encode their failures: Predicting success from pre-generation activations,” arXiv preprint arXiv:2602.09924, 2026.

  20. [20]

    Ensemble Methods in Machine Learning

    T. G. Dietterich, “Ensemble methods in machine learning,” in International Workshop on Multiple Classifier Systems (MCS). Springer, 2000, pp. 1–15.

  21. [21]

    Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation

    L. Kuhn, Y. Gal, and S. Farquhar, “Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation,” Nature, vol. 630, pp. 625–630, 2023.

  22. [22]

    Robust and Cheap Hallucination Detection in LLMs

    S. Farquhar et al., “Robust and cheap hallucination detection in LLMs,” arXiv preprint arXiv:2406.15927, 2024.

  23. [23]

    Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models

    K. Tian et al., “Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models,” arXiv preprint arXiv:2305.14975, 2023.

  24. [24]

    Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs

    M. Xiong et al., “Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs,” in International Conference on Learning Representations (ICLR), 2024.

  25. [25]

    A Survey on Uncertainty Quantification of Large Language Models

    J. Geng et al., “A survey on uncertainty quantification of large language models,” arXiv preprint arXiv:2412.05563, 2024.

  26. [26]

    Uncertainty Quantification and Confidence Calibration in Large Language Models

    Y. Huang et al., “Uncertainty quantification and confidence calibration in large language models,” arXiv preprint arXiv:2503.15850, 2025.

  27. [27]

    Competing Biases Underlie Overconfidence and Underconfidence in LLMs

    D. Kumaran, S. M. Fleming, L. Markeeva, J. Heyward, A. Banino, M. Mathur, R. Pascanu, S. Osindero, B. De Martino, P. Veličković, and V. Patraucean, “Competing biases underlie overconfidence and underconfidence in LLMs,” Nature Machine Intelligence, vol. 8, pp. 614–627, 2026.