pith. machine review for the scientific record.

arxiv: 2402.17245 · v1 · submitted 2024-02-27 · 💻 cs.CV · cs.AI

Recognition: no theorem link

Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation

Authors on Pith · no claims yet

Pith reviewed 2026-05-15 16:36 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords text-to-image generation · diffusion models · aesthetic quality · noise schedule · aspect ratios · human preference alignment · Playground v2.5

The pith

Three targeted changes to diffusion training produce text-to-image outputs with better color, contrast, and human details than prior open and closed models.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to show that aesthetic quality in text-to-image diffusion models can be lifted substantially by three specific adjustments rather than by scaling data or compute alone. First, altering the noise schedule during training improves color fidelity and overall realism. Second, training on a dataset whose aspect ratios are deliberately balanced prevents quality drops on non-square images. Third, further training to match human preference ratings sharpens faces, hands, and other fine details. If these changes work as described, researchers gain a short, practical checklist for raising visual appeal without rebuilding entire pipelines.
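
The noise-schedule insight has a concrete, well-known instance. The paper's exact schedule parameters are not reproduced in this review, but one widely cited fix in this family is the zero-terminal-SNR rescaling of Lin et al., "Common diffusion noise schedules and sample steps are flawed" (reference [21] below): the schedule is rescaled so the final training timestep has exactly zero signal-to-noise ratio, which is what lets a model commit to properly dark or bright images. A minimal sketch, assuming a standard discrete beta schedule:

```python
import torch

def rescale_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    """Rescale a discrete beta schedule so the last timestep has zero SNR,
    following the published algorithm of Lin et al. [21]. This is one
    representative fix of the kind the paper discusses, not necessarily
    the schedule Playground v2.5 trains with."""
    alphas = 1.0 - betas
    alphas_bar_sqrt = torch.cumprod(alphas, dim=0).sqrt()

    a_first = alphas_bar_sqrt[0].clone()
    a_last = alphas_bar_sqrt[-1].clone()

    # Shift so sqrt(alpha_bar_T) = 0, then rescale so the first timestep
    # keeps its original value.
    alphas_bar_sqrt = (alphas_bar_sqrt - a_last) * a_first / (a_first - a_last)

    # Convert back to per-step betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```

With a zero-terminal-SNR schedule the model is actually trained on pure noise at the final step, so mean brightness and contrast no longer leak in from the initial latent.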

Core claim

The authors demonstrate that a carefully chosen noise schedule during diffusion training increases realism and visual fidelity, that a balanced bucketed dataset allows consistent generation quality across aspect ratios, and that additional alignment with human preference data improves fine-grained human-centric details; together these steps produce Playground v2.5, which the authors report outperforms SDXL, Playground v2, DALL-E 3, and Midjourney v5.2 on aesthetic quality across varied conditions.

What carries the argument

The three insights—noise-schedule adjustment for color and contrast, balanced bucketed datasets for multi-aspect-ratio handling, and human-preference alignment for fine details—carry the performance gains.
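
Concretely, a "balanced bucketed dataset" means grouping images into resolution buckets by nearest aspect ratio and drawing batches so that wide and tall buckets are sampled as often as square ones. The sketch below is a minimal illustration under assumed bucket resolutions; the paper's actual buckets and proportions are not reproduced in this review:

```python
import random
from collections import defaultdict

# Hypothetical bucket resolutions, not the paper's values.
BUCKETS = [(1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216)]

def nearest_bucket(w, h):
    # Assign an image to the bucket whose aspect ratio is closest.
    ar = w / h
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ar))

def build_buckets(images):
    # images: iterable of (path, width, height) tuples.
    table = defaultdict(list)
    for path, w, h in images:
        table[nearest_bucket(w, h)].append(path)
    return table

def balanced_batches(table, batch_size, n_batches):
    # Pick each bucket with equal probability so non-square aspect
    # ratios are seen as often as square ones during training.
    buckets = list(table)
    for _ in range(n_batches):
        b = random.choice(buckets)
        yield b, random.choices(table[b], k=batch_size)
```

Each batch is drawn from a single bucket, so all tensors in it share one resolution; the balancing lives entirely in how often each bucket is picked.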

If this is right

  • Diffusion models trained with the revised noise schedule generate images with measurably higher color accuracy and contrast.
  • Models trained on balanced aspect-ratio buckets maintain quality when asked to produce wide or tall images.
  • Preference-aligned fine-tuning reduces visible artifacts in faces, hands, and other human elements (one possible form of such fine-tuning is sketched after this list).
  • The open-sourced model supplies a concrete reference point for testing whether the same three steps improve other diffusion architectures.
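
As flagged in the list above, the paper's alignment procedure is not detailed in this review, so the following is only one common recipe for the third insight, not the authors' method: weight the ordinary denoising loss by a human-preference reward, e.g. from a preference model trained on data such as Pick-a-Pic (reference [18] below). The `model`, `scheduler`, and `reward` interfaces here are hypothetical stand-ins, not Playground v2.5's API:

```python
import torch

def preference_weighted_loss(model, scheduler, x0, cond, reward):
    """Hedged sketch of reward-weighted denoising. `model(xt, t, cond)`
    predicts the added noise, `scheduler` holds the diffusion schedule,
    and `reward` scores each sample under a human-preference model.
    All three interfaces are assumptions for illustration."""
    t = torch.randint(0, scheduler.num_steps, (x0.size(0),), device=x0.device)
    noise = torch.randn_like(x0)
    xt = scheduler.add_noise(x0, noise, t)  # forward-diffuse the clean images
    per_sample = ((model(xt, t, cond) - noise) ** 2).flatten(1).mean(dim=1)
    # Normalize rewards into weights with mean 1 so the loss scale is stable;
    # higher-preference samples contribute more to the gradient.
    weights = torch.softmax(reward, dim=0) * reward.numel()
    return (weights * per_sample).mean()
```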

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The noise-schedule change may transfer to video or 3D diffusion models that also rely on progressive denoising.
  • Standardized public benchmarks with fixed prompts and seeds would be needed to confirm the reported ranking against commercial systems.
  • The bucket-balancing approach could be extended to other conditioning variables such as style or content type.

Load-bearing premise

The three listed changes are the main cause of the reported quality gains rather than differences in total training data or compute.

What would settle it

A controlled experiment that applies only the three changes to an existing baseline model such as SDXL and measures no improvement in human preference scores or aesthetic metrics would undermine the claim.
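
To make "measures no improvement" operational, one standard instrument is a paired preference study with a bootstrap confidence interval on the win rate: if the interval for the modified baseline straddles 0.5, the three changes did not move human preference. A minimal sketch, assuming binary per-prompt preference labels:

```python
import random

def bootstrap_winrate(prefs, n_boot=10_000, seed=0):
    """prefs: 1 if raters preferred the modified model on a prompt, else 0.
    Returns the observed win rate and a 95% bootstrap confidence interval."""
    rng = random.Random(seed)
    n = len(prefs)
    # Resample the per-prompt outcomes with replacement n_boot times.
    draws = sorted(sum(rng.choices(prefs, k=n)) / n for _ in range(n_boot))
    ci = (draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)])
    return sum(prefs) / n, ci

rate, (lo, hi) = bootstrap_winrate([1, 0, 1, 1, 0, 1, 0, 1])
print(f"win rate {rate:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```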

read the original abstract

In this work, we share three insights for achieving state-of-the-art aesthetic quality in text-to-image generative models. We focus on three critical aspects for model improvement: enhancing color and contrast, improving generation across multiple aspect ratios, and improving human-centric fine details. First, we delve into the significance of the noise schedule in training a diffusion model, demonstrating its profound impact on realism and visual fidelity. Second, we address the challenge of accommodating various aspect ratios in image generation, emphasizing the importance of preparing a balanced bucketed dataset. Lastly, we investigate the crucial role of aligning model outputs with human preferences, ensuring that generated images resonate with human perceptual expectations. Through extensive analysis and experiments, Playground v2.5 demonstrates state-of-the-art performance in terms of aesthetic quality under various conditions and aspect ratios, outperforming both widely-used open-source models like SDXL and Playground v2, and closed-source commercial systems such as DALLE 3 and Midjourney v5.2. Our model is open-source, and we hope the development of Playground v2.5 provides valuable guidelines for researchers aiming to elevate the aesthetic quality of diffusion-based image generation models.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces Playground v2.5, a text-to-image diffusion model, and claims that three insights—optimizing the noise schedule for better color/contrast and realism, preparing a balanced bucketed dataset to handle multiple aspect ratios, and aligning outputs with human preferences—enable state-of-the-art aesthetic quality. Through extensive experiments, it reports outperforming open-source baselines (SDXL, Playground v2) and closed-source systems (DALL-E 3, Midjourney v5.2) across various conditions and aspect ratios, with the model released open-source to provide guidelines for diffusion-based image generation.

Significance. If the empirical gains are reproducible under matched conditions, the work offers practical, actionable insights for improving visual fidelity in diffusion models, particularly the emphasis on noise scheduling and aspect-ratio bucketing. The open-source release strengthens its utility for the community by allowing direct replication and extension.

major comments (3)
  1. [Experiments / SOTA comparisons] Experiments section (around the SOTA comparisons): the headline claim of outperforming DALL-E 3 and Midjourney v5.2 rests on preference scores, but the manuscript does not document the exact prompt sets, inference steps, guidance scales, or post-processing steps used for the closed-source models. Without these matched conditions, the observed differences could arise from evaluation protocol rather than the three claimed insights.
  2. [Section 3.1] Section 3.1 (noise schedule): while the paper demonstrates impact on realism, the specific schedule parameters are presented as tuned values without an ablation isolating their contribution relative to the other two insights or to standard schedules (e.g., the linear vs. cosine schedules in prior work). This makes it hard to confirm they are load-bearing for the reported gains.
  3. [Section 3.2] Section 3.2 (bucketed dataset): the balanced bucket proportions are listed among the free parameters, yet no quantitative analysis shows how much the aspect-ratio coverage alone improves metrics versus simply increasing total data volume or using standard padding/cropping.
minor comments (2)
  1. [Figures] Figure captions and axis labels in the qualitative comparison figures could be clarified to indicate whether images are cherry-picked or randomly sampled from the same prompt set.
  2. [Section 3.3] The human-preference alignment section would benefit from citing the exact preference dataset size and annotation protocol to allow readers to assess potential biases.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below, agreeing where revisions strengthen the paper and outlining specific changes.

read point-by-point responses
  1. Referee: [Experiments / SOTA comparisons] Experiments section (around the SOTA comparisons): the headline claim of outperforming DALL-E 3 and Midjourney v5.2 rests on preference scores, but the manuscript does not document the exact prompt sets, inference steps, guidance scales, or post-processing steps used for the closed-source models. Without these matched conditions, the observed differences could arise from evaluation protocol rather than the three claimed insights.

    Authors: We agree that full documentation of the evaluation protocol is necessary for reproducibility. The comparisons used a fixed set of 100 prompts spanning diverse categories and styles; our model was run with 50 inference steps and guidance scale 7.5, while closed-source models were queried via their public interfaces using default parameters and no custom post-processing. We will add a new subsection to the Experiments section that lists the prompt set (with examples), all inference hyperparameters for Playground v2.5, and explicit statements of the defaults applied to DALL-E 3 and Midjourney v5.2. revision: yes

  2. Referee: [Section 3.1] Section 3.1 (noise schedule): while the paper demonstrates impact on realism, the specific schedule parameters are presented as tuned values without an ablation isolating their contribution relative to the other two insights or to standard schedules (e.g., the linear vs. cosine schedules in prior work). This makes it hard to confirm they are load-bearing for the reported gains.

    Authors: Section 3.1 shows visual and quantitative differences when the optimized schedule is used versus the training schedule of Playground v2. We acknowledge that an isolated ablation would make the contribution clearer. In the revision we will add an ablation that holds the balanced dataset and preference alignment fixed while varying only the noise schedule, directly comparing our parameters against the linear schedule of DDPM and the cosine schedule of Improved DDPM (both baseline schedules are sketched after this list). revision: yes

  3. Referee: [Section 3.2] Section 3.2 (bucketed dataset): the balanced bucket proportions are listed among the free parameters, yet no quantitative analysis shows how much the aspect-ratio coverage alone improves metrics versus simply increasing total data volume or using standard padding/cropping.

    Authors: The current experiments keep total training samples constant across bucket configurations. We agree that an explicit comparison to padding/cropping baselines would isolate the benefit of balanced aspect-ratio coverage. We will add quantitative results in the revised Section 3.2 that train otherwise identical models on the same data volume using (i) standard padding/cropping and (ii) unbalanced bucket sampling, reporting aesthetic scores and aspect-ratio fidelity metrics for each. revision: yes
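
For reference, the two baseline schedules named in the second response are standard and easy to reproduce. The sketch below builds both with their published defaults (not values taken from this paper) and reports the terminal signal-to-noise ratio that Lin et al. [21] identify as the flaw in common schedules:

```python
import math
import torch

def linear_betas(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear schedule from DDPM (Ho et al. [12]), published defaults.
    return torch.linspace(beta_start, beta_end, T)

def cosine_betas(T=1000, s=0.008, max_beta=0.999):
    # Cosine schedule from Improved DDPM (Nichol & Dhariwal [23]).
    f = lambda t: math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2
    return torch.tensor([min(1 - f(t + 1) / f(t), max_beta) for t in range(T)])

def terminal_snr(betas):
    # SNR at the final timestep; a nonzero value means training never
    # reaches pure noise, the train/test mismatch Lin et al. [21] flag.
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    return (alphas_bar[-1] / (1.0 - alphas_bar[-1])).item()

print(terminal_snr(linear_betas()))   # small but nonzero
print(terminal_snr(cosine_betas()))   # likewise nonzero
```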

Circularity Check

0 steps flagged

No circularity: empirical tuning of standard diffusion components

full rationale

The paper reports three practical insights (noise schedule effects, bucketed aspect-ratio training data, and human-preference alignment) validated through experiments and side-by-side comparisons. No equations, predictions, or first-principles derivations are presented that reduce claimed performance gains to quantities defined by the same fitted parameters or by self-citation chains. The central SOTA claim is supported by empirical evaluation rather than any self-definitional loop or renamed known result. This is a standard empirical engineering paper whose claims are checked against external benchmarks rather than against its own definitions.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The central claims rest on empirical observations from training runs rather than derivations; the noise schedule, bucket construction rules, and preference alignment procedure are treated as tunable engineering choices whose optimal values are discovered experimentally.

free parameters (2)
  • noise schedule parameters
    Specific functional form and hyperparameters of the noise schedule are selected to improve realism and fidelity.
  • bucket proportions
    The relative sizes of aspect-ratio buckets in the training dataset are chosen to achieve balance.
axioms (2)
  • domain assumption Adjusting the noise schedule during diffusion training materially changes output realism and visual fidelity
    Invoked in the first insight as a foundational lever for aesthetic quality.
  • domain assumption A balanced bucketed dataset is required to support high-quality generation across multiple aspect ratios
    Invoked in the second insight as the solution to aspect-ratio accommodation.

pith-pipeline@v0.9.0 · 5523 in / 1378 out tokens · 43592 ms · 2026-05-15T16:36:42.742709+00:00 · methodology

discussion (0)


Forward citations

Cited by 19 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Drift-AR: Single-Step Visual Autoregressive Generation via Anti-Symmetric Drifting

    cs.CV 2026-03 unverdicted novelty 7.0

    Drift-AR achieves a 3.8-5.5x speedup in AR-diffusion image models through entropy-informed speculative decoding and single-step (1-NFE) anti-symmetric drifting decoding.

  2. WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation

    cs.CV 2025-03 unverdicted novelty 7.0

    Text-to-image models show significant limitations in integrating world knowledge, as measured by the new WISE benchmark and WiScore metric across 20 models.

  3. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation

    cs.CV 2024-06 conditional novelty 7.0

    Scaled vanilla autoregressive models based on Llama achieve 2.18 FID on ImageNet 256x256 image generation, beating popular diffusion models without visual inductive biases.

  4. Extending One-Step Image Generation from Class Labels to Text via Discriminative Text Representation

    cs.CV 2026-04 unverdicted novelty 6.0

    By requiring and using highly discriminative LLM text features, the work enables the first effective one-step text-conditioned image generation with MeanFlow.

  5. IncreFA: Breaking the Static Wall of Generative Model Attribution

    cs.CV 2026-04 unverdicted novelty 6.0

    IncreFA uses hierarchical constraints with learnable orthogonal priors and a latent memory bank to enable continual adaptation for attributing images to new generative models, reporting SOTA accuracy and 98.93% unseen...

  6. TwoHamsters: Benchmarking Multi-Concept Compositional Unsafety in Text-to-Image Models

    cs.CR 2026-04 unverdicted novelty 6.0

    TwoHamsters benchmark shows T2I models like FLUX generate unsafe multi-concept images at 99.52% rate while defenses like LLaVA-Guard achieve only 41.06% recall.

  7. Self-Adversarial One Step Generation via Condition Shifting

    cs.CV 2026-04 unverdicted novelty 6.0

    APEX derives self-adversarial gradients from condition-shifted velocity fields in flow models to achieve high-fidelity one-step generation, outperforming much larger models and multi-step teachers.

  8. Nucleus-Image: Sparse MoE for Image Generation

    cs.CV 2026-04 unverdicted novelty 6.0

    A 17B-parameter sparse MoE diffusion transformer activates 2B parameters per pass and reaches competitive quality on image generation benchmarks without post-training.

  9. BiasIG: Benchmarking Multi-dimensional Social Biases in Text-to-Image Models

    cs.CY 2026-04 conditional novelty 6.0

    BiasIG is a multi-dimensional benchmark for social biases in T2I models that shows debiasing interventions frequently cause confounding discrimination effects.

  10. SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers

    cs.CV 2024-10 unverdicted novelty 6.0

    Sana-0.6B produces high-resolution images with strong text alignment at 20x smaller size and 100x higher throughput than Flux-12B by combining 32x image compression, linear DiT blocks, and a decoder-only LLM text encoder.

  11. Emu3: Next-Token Prediction is All You Need

    cs.CV 2024-09 unverdicted novelty 6.0

    Emu3 shows that next-token prediction on a unified discrete token space for text, images, and video lets a single transformer outperform task-specific models such as SDXL and LLaVA-1.6 in multimodal generation and perception.

  12. Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

    cs.CV 2025-11 unverdicted novelty 5.0

    Z-Image is an efficient 6B-parameter foundation model for image generation that rivals larger commercial systems in photorealism and bilingual text rendering through a new single-stream diffusion transformer and strea...

  13. Qwen-Image Technical Report

    cs.CV 2025-08 unverdicted novelty 5.0

    Qwen-Image is a foundation model that reaches state-of-the-art results in image generation and editing by combining a large-scale text-focused data pipeline with curriculum learning and dual semantic-reconstructive en...

  14. Emerging Properties in Unified Multimodal Pretraining

    cs.CV 2025-05 unverdicted novelty 5.0

    BAGEL is a unified decoder-only model that develops emerging complex multimodal reasoning abilities after pretraining on large-scale interleaved data and outperforms prior open-source unified models.

  15. BLIP3-o: A Family of Fully Open Unified Multimodal Models-Architecture, Training and Dataset

    cs.CV 2025-05 conditional novelty 5.0

    BLIP3-o uses a diffusion transformer to generate CLIP image features and a sequential pretraining strategy to build open models that perform strongly on both image understanding and generation benchmarks.

  16. Adaptive Forensic Feature Refinement via Intrinsic Importance Perception

    cs.CV 2026-04 unverdicted novelty 4.0

    I2P adaptively selects the most discriminative layers from visual foundation models for synthetic image detection and constrains task updates to low-sensitivity parameter subspaces to improve specificity without harmi...

  17. Training-Free Object-Background Compositional T2I via Dynamic Spatial Guidance and Multi-Path Pruning

    cs.CV 2026-04 unverdicted novelty 4.0

    A training-free method with time-dependent attention gating and trajectory pruning enhances object-background balance in diffusion-based image synthesis.

  18. Show-o2: Improved Native Unified Multimodal Models

    cs.CV 2025-06 unverdicted novelty 4.0

    Show-o2 unifies text, image, and video understanding and generation in a single autoregressive-plus-flow-matching model built on 3D causal VAE representations.

  19. Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling

    cs.AI 2025-01 conditional novelty 3.0

    Scaling data, model size, and training optimization on the Janus architecture yields better multimodal understanding and more stable, instruction-following text-to-image generation.

Reference graph

Works this paper leans on

33 extracted references · 33 canonical work pages · cited by 19 Pith papers · 1 internal anchor

  1. [1]

    Introducing stable cascade

    Stability AI. Introducing stable cascade. https://stability.ai/news/introducing-stable-cascade, 2024. Accessed: 2024-02-20

  2. [2]

    Improving image generation with better captions

    James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf, 2(3):8, 2023

  3. [3]

    PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis

    Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. PixArt-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023

  4. [4]

    On the importance of noise scheduling for diffusion models, 2023

    Ting Chen. On the importance of noise scheduling for diffusion models, 2023

  5. [5]

    Emu: Enhancing image generation models using photogenic needles in a haystack, 2023

    Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, Matthew Yu, Abhishek Kadian, Filip Radenovic, Dhruv Mahajan, Kunpeng Li, Yue Zhao, Vladan Petrovic, Mitesh Kumar Singh, Simran Motwani, Yi Wen, Yiwen Song, Roshan Sumbaly, Vignesh Ramanathan, Zijian He, Peter Vajda, and ...

  6. [6]

    Diffusion models beat gans on image synthesis, 2021

    Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis, 2021

  7. [7]

    Generative adversarial networks, 2014

    Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014

  8. [8]

    Diffusion with offset noise

    Nicholas Guttenberg. Diffusion with offset noise. https://www.crosslabs.org/blog/diffusion-with-offset-noise, 2023. Accessed: 2024-02-20

  9. [9]

    Deep residual learning for image recognition, 2015

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015

  10. [10]

    Clipscore: A reference-free evaluation metric for image captioning, 2022

    Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning, 2022

  11. [11]

    Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018

    Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018

  12. [12]

    Denoising diffusion probabilistic models, 2020

    Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models, 2020

  13. [13]

    Simple diffusion: End-to-end diffusion for high resolution images, 2023

    Emiel Hoogeboom, Jonathan Heek, and Tim Salimans. Simple diffusion: End-to-end diffusion for high resolution images, 2023

  14. [14]

    Elucidating the design space of diffusion-based generative models, 2022

    Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models, 2022

  15. [15]

    A style-based generator architecture for generative adversarial networks, 2019

    Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks, 2019

  16. [16]

    Analyzing and improving the image quality of stylegan, 2020

    Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan, 2020

  17. [17]

    Variational diffusion models, 2023

    Diederik P. Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models, 2023

  18. [18]

    Pick-a-pic: An open dataset of user preferences for text-to-image generation, 2023

    Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation, 2023

  19. [19]

    Generalization and network design strategies

    Yann LeCun et al. Generalization and network design strategies. Connectionism in perspective, 19(143-155):18, 1989

  20. [20]

    Playground v2

    Daiqing Li, Aleks Kamko, Ali Sabet, Ehsan Akhgari, Linmiao Xu, and Suhail Doshi. Playground v2

  21. [21]

    Common diffusion noise schedules and sample steps are flawed, 2024

    Shanchuan Lin, Bingchen Liu, Jiashi Li, and Xiao Yang. Common diffusion noise schedules and sample steps are flawed, 2024

  22. [22]

    How much more data do I need? Estimating requirements for downstream tasks

    Rafid Mahmood, James Lucas, David Acuna, Daiqing Li, Jonah Philion, Jose M. Alvarez, Zhiding Yu, Sanja Fidler, and Marc T. Law. How much more data do I need? Estimating requirements for downstream tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 275–284, June 2022

  23. [23]

    Improved denoising diffusion probabilistic models, 2021

    Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models, 2021

  24. [24]

    Novelai improvements on stable diffusion

    NovelAI. Novelai improvements on stable diffusion. https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac, 2022. Accessed: 2024-02-20

  25. [25]

    Training language models to follow instructions with human feedback, 2022

    Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022

  26. [26]

    Attributes for classifier feedback

    Amar Parkash and Devi Parikh. Attributes for classifier feedback. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part III 12 , pages 354–368. Springer, 2012

  27. [27]

    Scalable diffusion models with transformers, 2023

    William Peebles and Saining Xie. Scalable diffusion models with transformers, 2023

  28. [28]

    Sdxl: Improving latent diffusion models for high-resolution image synthesis, 2023

    Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis, 2023

  29. [29]

    High-resolution image synthesis with latent diffusion models, 2022

    Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2022

  30. [30]

    Laion-5b: An open large-scale dataset for training next generation image-text models, 2022

    Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Laion-5b: An open large-scale dataset for training next generation image-text models, 2022

  31. [31]

    Score-based generative modeling through stochastic differential equations, 2021

    Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations, 2021

  32. [32]

    Less: Selecting influential data for targeted instruction tuning, 2024

    Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. Less: Selecting influential data for targeted instruction tuning, 2024

  33. [33]

    Lima: Less is more for alignment, 2023

    Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. Lima: Less is more for alignment, 2023