pith. machine review for the scientific record.

arxiv: 2508.05452 · v7 · submitted 2025-08-07 · 💻 cs.CL

Recognition: unknown

LLMEval-Fair: A Large-Scale Longitudinal Study on Robust and Fair Evaluation of Large Language Models

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: evaluation, llmeval-fair, data, llms, models, benchmarks, capabilities, contamination
original abstract

Existing evaluation of Large Language Models (LLMs) on static benchmarks is vulnerable to data contamination and leaderboard overfitting, critical issues that obscure true model capabilities. To address this, we introduce LLMEval-Fair, a framework for dynamic evaluation of LLMs. LLMEval-Fair is built on a proprietary bank of 220k graduate-level questions, from which it dynamically samples unseen test sets for each evaluation run. Its automated pipeline ensures integrity via contamination-resistant data curation, a novel anti-cheating architecture, and a calibrated LLM-as-a-judge process achieving 90% agreement with human experts, complemented by a relative ranking system for fair comparison. A 30-month longitudinal study of nearly 60 leading models reveals a performance ceiling on knowledge memorization and exposes data contamination vulnerabilities undetectable by static benchmarks. The framework demonstrates exceptional robustness in ranking stability and consistency, providing strong empirical validation for the dynamic evaluation paradigm. LLMEval-Fair offers a robust and credible methodology for assessing the true capabilities of LLMs beyond leaderboard scores, promoting the development of more trustworthy evaluation standards. Our code and data are publicly available at https://github.com/llmeval/LLMEval-Fair.
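The abstract's core mechanism is drawing a fresh, previously unseen test set from the 220k-question bank for each evaluation run, so no fixed benchmark ever leaks into training data. Below is a minimal sketch of that idea; all names and parameters (the persisted served_ids set, the test-set size k, the fingerprint audit) are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of per-run dynamic sampling: each run draws k questions
# no earlier run has served, which is one way to resist contamination.
# Function names, the persisted served_ids set, and k are assumptions.
import hashlib
import random

def sample_unseen_test_set(question_bank, served_ids, run_seed, k=500):
    """Draw k questions that no earlier run has served.

    question_bank : list of (question_id, question_text) pairs
    served_ids    : set of ids used by previous runs (persisted across runs)
    run_seed      : per-run seed so each evaluation gets a distinct sample
    k             : test-set size (illustrative; the abstract does not state one)
    """
    rng = random.Random(run_seed)
    unseen = [q for q in question_bank if q[0] not in served_ids]
    if len(unseen) < k:
        raise RuntimeError("question bank exhausted; curate new items")
    test_set = rng.sample(unseen, k)
    served_ids.update(qid for qid, _ in test_set)
    return test_set

def fingerprint(test_set):
    """Content hash of a sampled set, e.g. to audit that two runs
    never reused the same items."""
    digest = hashlib.sha256()
    for qid, text in sorted(test_set):
        digest.update(f"{qid}:{text}".encode())
    return digest.hexdigest()[:16]

if __name__ == "__main__":
    bank = [(i, f"question {i}") for i in range(220_000)]  # 220k-item bank per abstract
    served = set()
    run1 = sample_unseen_test_set(bank, served, run_seed=1)
    run2 = sample_unseen_test_set(bank, served, run_seed=2)
    assert not {q[0] for q in run1} & {q[0] for q in run2}  # disjoint by construction
    print(fingerprint(run1), fingerprint(run2))
```

The persisted served_ids set is the key design choice in this sketch: making runs disjoint by construction means a model cannot overfit to a leaderboard test set, because no test set is ever reused.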

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Coordinates of Capability: A Unified MTMM-Geometric Framework for LLM Evaluation

cs.CL · 2026-05 · unverdicted · novelty 6.0

    A new MTMM-geometric framework unifies LLM evaluation metrics into three latent dimensions to separate method variance from true capabilities.

  2. DynT2I-Eval: A Dynamic Evaluation Framework for Text-to-Image Models

cs.CV · 2026-05 · unverdicted · novelty 6.0

    DynT2I-Eval creates fresh prompts via dimension decomposition and dynamic sampling to evaluate text-to-image models on text alignment, quality, and aesthetics while maintaining a stable leaderboard.

  3. Coordinates of Capability: A Unified MTMM-Geometric Framework for LLM Evaluation

cs.CL · 2026-05 · unverdicted · novelty 5.0

    A systematization of knowledge unifies nine LLM metrics into three orthogonal latent dimensions via an MTMM-geometric framework to improve construct validity in evaluation.