Task-Dependent Evaluation of LLM Output Homogenization: A Taxonomy-Guided Framework
Large language models often generate homogeneous outputs, but whether this is problematic depends on the specific task. For objective math tasks, responses may vary in problem-solving strategy but should arrive at the same verifiable answer. For creative writing tasks, by contrast, we often expect variation in key narrative components (e.g., plot, setting) beyond mere vocabulary diversity. Prior work on homogenization rarely conceptualizes diversity in a task-dependent way. We address this gap with four contributions: (1) a task taxonomy with distinct notions of functional diversity -- whether a user would perceive two responses as meaningfully different for a given task; (2) a small user study validating that the taxonomy aligns with human perception of functional diversity; (3) a task-dependent sampling technique that increases diversity only where homogenization is undesired; (4) evidence challenging the perceived diversity-quality trade-off, showing it may stem from mis-conceptualizing both diversity and quality in a task-agnostic way.
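The abstract does not spell out how the task-dependent sampling technique works. As a minimal illustrative sketch (not the paper's actual method), the core idea can be approximated by routing decoding parameters on a task classification, raising sampling diversity only for tasks where homogenization is undesired; all function names and parameter values below are hypothetical:

```python
# Hypothetical sketch of task-dependent sampling. The paper's actual
# technique is not described in this abstract; this only illustrates
# choosing decoding parameters by task type so that diversity is
# increased only where homogenization is undesired.

def sampling_params(task_type: str) -> dict:
    """Map a task type to decoding parameters (illustrative values only)."""
    if task_type == "objective":
        # e.g. math: strategies may vary, but the verifiable answer
        # should not, so keep sampling conservative.
        return {"temperature": 0.2, "top_p": 0.9}
    if task_type == "creative":
        # e.g. story writing: encourage variation in plot, setting, etc.
        return {"temperature": 1.1, "top_p": 1.0}
    # Default for tasks that fall between the two poles.
    return {"temperature": 0.7, "top_p": 0.95}


if __name__ == "__main__":
    for task in ("objective", "creative"):
        print(task, sampling_params(task))
```

Under this sketch, the creative-task temperature exceeds the objective-task temperature, mirroring the paper's claim that diversity should be increased only where it is functionally desired.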
Forward citations
Cited by 1 Pith paper
Where does output diversity collapse in post-training?
Diversity collapse in post-trained LLMs is driven by data composition during training, occurs at stages like supervised fine-tuning, and is embedded in model weights rather than imposed by generation format.