pith. machine review for the scientific record.

arxiv: 2511.21285 · v3 · submitted 2025-11-26 · 💻 cs.CL

Recognition: unknown

PEFT-Bench: A Parameter-Efficient Fine-Tuning Methods Benchmark

Authors on Pith: no claims yet
classification 💻 cs.CL
keywords: peft, methods, account, benchmark, datasets, despite, fine-tuning, inference
0 comments
read the original abstract

Despite the state-of-the-art performance of Large Language Models (LLMs) on many tasks, their massive scale often leads to high computational and environmental costs, limiting their accessibility. Parameter-Efficient Fine-Tuning (PEFT) methods address this challenge by reducing the number of trainable parameters while maintaining strong downstream performance. Despite these advances, current PEFT evaluations remain limited (in terms of evaluated models and datasets) and difficult to reproduce. To bridge this gap, we introduce PEFT-Bench, a unified end-to-end benchmark for evaluating diverse PEFT methods on autoregressive LLMs. We demonstrate its usage across 27 NLP datasets and 7 PEFT methods. To account for different PEFT training and inference factors, we also introduce the PEFT Soft Cost Penalties (PSCP) metric, which takes trainable parameters, inference speed, and training memory usage into account.
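The abstract names the three cost factors PSCP combines but does not state its formula. A minimal sketch of how such a soft cost penalty *could* weight a task score by normalized costs — the function name, reference costs, and weighting scheme are all assumptions for illustration, not the authors' definition:

```python
def pscp(task_score, trainable_params, tokens_per_sec, train_mem_gb,
         ref_params, ref_tokens_per_sec, ref_mem_gb,
         weights=(0.4, 0.3, 0.3)):
    """Hypothetical soft cost penalty: discount a downstream task score
    by trainable-parameter count, inference speed, and training memory,
    each normalized against a reference method (illustrative only)."""
    w_params, w_speed, w_mem = weights
    # Each ratio equals 1.0 at the reference cost; more parameters,
    # slower inference, or more memory all increase the penalty.
    penalty = (w_params * trainable_params / ref_params
               + w_speed * ref_tokens_per_sec / tokens_per_sec
               + w_mem * train_mem_gb / ref_mem_gb)
    return task_score / penalty

# A method matching the reference costs keeps its raw score unchanged.
baseline = pscp(80.0, 1e6, 50.0, 10.0, 1e6, 50.0, 10.0)
```

With the default weights summing to 1.0, a method whose costs equal the reference values receives a penalty of exactly 1.0, so its score passes through unchanged; cheaper methods are rewarded and costlier ones discounted.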

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Low-Data Supervised Adaptation Outperforms Prompting for Cloud Segmentation Under Domain Shift

    cs.CV 2026-04 unverdicted novelty 5.0

    Supervised fine-tuning with 0.1% labeled data outperforms all 60 tested prompt variants for CLIPSeg cloud segmentation on satellite imagery under domain shift.