pith. machine review for the scientific record.

arxiv: 2510.18196 · v2 · submitted 2025-10-21 · 💻 cs.CL · cs.AI


Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge

classification 💻 cs.CL cs.AI
keywords: score bias · challenge · range · contrastive decoding · judge models
abstract

Large Language Models (LLMs) are commonly used as evaluators in various applications, but the reliability of the outcomes remains a challenge. One such challenge is using LLMs-as-judges for direct assessment, i.e., assigning scores from a specified range without any references. Focusing on summarization, we first show that this challenge stems from LLM judge outputs being associated with score range bias, i.e., LLM judge outputs are highly sensitive to pre-defined score ranges. We also show that similar biases exist among models from the same family. We then mitigate this bias through contrastive decoding, achieving up to 11.7% relative improvement on average in Spearman correlation with human judgments across different score ranges.
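The abstract does not give implementation details, but the general idea of contrastive decoding can be sketched: when the judge must emit a score token from a pre-defined range, contrast the log-probabilities of a stronger (expert) model against a weaker (amateur) model, keeping only tokens the expert finds plausible. The function below is a minimal illustrative sketch, not the paper's method; the models, log-probability values, and the `alpha`/`beta` hyperparameters are all assumptions for illustration.

```python
import math

def contrastive_score_choice(expert_logprobs, amateur_logprobs,
                             alpha=1.0, beta=0.1):
    """Pick a judge score token via a generic contrastive-decoding rule.

    expert_logprobs / amateur_logprobs: dicts mapping candidate score
    tokens (e.g. "1".."5") to log-probabilities from two models.
    Standard contrastive decoding keeps only tokens within a
    plausibility cutoff of the expert's best token, then maximizes
    expert log-prob minus (scaled) amateur log-prob.
    """
    # Plausibility cutoff relative to the expert's most likely token.
    max_lp = max(expert_logprobs.values())
    candidates = {t: lp for t, lp in expert_logprobs.items()
                  if lp >= max_lp + math.log(beta)}
    # Contrastive objective: prefer tokens the expert likes but the
    # amateur does not, dampening shared (bias-driven) preferences.
    return max(candidates,
               key=lambda t: candidates[t] - alpha * amateur_logprobs[t])

# Toy example with made-up log-probabilities over a 1-5 score range.
expert = {"1": -4.0, "2": -2.5, "3": -0.7, "4": -1.0, "5": -3.0}
amateur = {"1": -3.5, "2": -2.0, "3": -0.8, "4": -2.5, "5": -3.2}
print(contrastive_score_choice(expert, amateur))  # → 4
```

Here the expert alone would pick "3", but contrasting with the amateur shifts the choice to "4", the token where the expert most disagrees with the amateur's (bias-like) preference.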

This paper has not been read by Pith yet.

discussion (0)
