Is Monotonic Sampling Necessary in Diffusion Models?
Pith reviewed 2026-05-13 07:31 UTC · model grok-4.3
The pith
Monotonic sampling schedules survive a systematic stress test: no nonmonotonic variant outperforms the monotonic baseline in any of 90 tested configurations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
No tested nonmonotonic schedule improves on the monotonic baseline in any of the 90 configurations. The magnitude of the resulting penalty is a stable property of the trained denoiser: largest in DDPM, intermediate in Flow Matching, and indistinguishable from zero in EDM. The paper formalizes this property as the Schedule Sensitivity Coefficient, a cheap diagnostic that signals incomplete convergence to the Bayes-optimal denoiser at critical noise levels.
What carries the argument
The Schedule Sensitivity Coefficient, which quantifies the performance degradation a trained denoiser experiences when the noise schedule is perturbed away from monotonicity.
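The page does not reproduce an explicit formula for the coefficient (a gap the referee flags below), so the following is a hypothetical sketch of one natural reading: the average degradation of a sample-quality score under local nonmonotonic perturbations of a schedule, normalized by the monotonic baseline. Everything here, `quality_fn` and the swap-based perturbation included, is an illustrative assumption rather than the paper's definition.

```python
# Hypothetical Schedule Sensitivity Coefficient. The paper's exact formula
# is not reproduced on this page; quality_fn, the swap perturbation, and
# the normalization are illustrative assumptions.
import numpy as np

def schedule_sensitivity(quality_fn, sigmas, n_perturbations=8, seed=0):
    """Average degradation of a quality metric (lower is better, e.g. FID)
    when adjacent noise levels of a monotonic schedule are swapped,
    normalized by the monotonic baseline score."""
    rng = np.random.default_rng(seed)
    sigmas = np.asarray(sigmas, dtype=float)
    baseline = quality_fn(sigmas)            # score of the monotonic schedule
    penalties = []
    for _ in range(n_perturbations):
        perturbed = sigmas.copy()
        i = int(rng.integers(0, len(sigmas) - 1))  # swap position
        perturbed[i], perturbed[i + 1] = perturbed[i + 1], perturbed[i]
        penalties.append(quality_fn(perturbed) - baseline)
    return float(np.mean(penalties)) / max(abs(baseline), 1e-12)
```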
If this is right
- Current practice of using monotonic schedules incurs no measurable performance cost in the models examined.
- Differences in schedule sensitivity across architectures point to varying degrees of convergence to the Bayes-optimal denoiser.
- The Schedule Sensitivity Coefficient supplies a new, low-cost probe of model quality that is complementary to FID.
- Efforts to relax monotonicity are unlikely to yield gains until denoisers reach better convergence at critical noise levels.
Where Pith is reading between the lines
- Models trained to stricter convergence might eventually benefit from or even require nonmonotonic trajectories.
- Training objectives could be modified to reduce schedule sensitivity and thereby enlarge the space of usable samplers.
- The same diagnostic could be applied to continuous-time formulations to check whether the monotonicity preference persists beyond discrete schedules.
Load-bearing premise
The four families of nonmonotonic schedules and the 42-cell ablation are representative enough that any practically useful nonmonotonic trajectory would have appeared in the experiments.
What would settle it
Finding even one nonmonotonic schedule that produces higher sample quality than the monotonic baseline on the same trained model, dataset, and NFE budget would falsify the central claim.
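A minimal harness for that criterion, with `sample` and `fid` as injected stand-ins for a real sampler and FID computation (both are assumptions, not the paper's code); a single strict win on a matched NFE budget is what would overturn the claim:

```python
from typing import Callable, Sequence

def falsifies_claim(sample: Callable[[Sequence[float]], object],
                    fid: Callable[[object], float],
                    monotonic: Sequence[float],
                    nonmonotonic: Sequence[float]) -> bool:
    """True if the nonmonotonic schedule strictly beats the monotonic one
    under an identical trained model, dataset, and NFE budget
    (lower FID wins)."""
    if len(monotonic) != len(nonmonotonic):
        raise ValueError("comparison requires an equal NFE budget")
    return fid(sample(nonmonotonic)) < fid(sample(monotonic))
```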
Original abstract
Diffusion models generate samples by iteratively denoising a Gaussian prior, traversing a sequence of noise levels that, in every published sampler, decreases monotonically. Six years of intensive work has refined nearly every aspect of this recipe, including the corruption operator, the training objective, the schedule shape, the architecture, and the ODE solver. Yet the assumption of monotonicity itself has never been systematically tested. Here we ask whether monotonic sampling is load-bearing or merely conventional. We design four families of structured nonmonotonic schedules and apply them to three architecturally distinct generative models, DDPM, EDM, and Flow Matching, across NFE budgets ranging from 10 to 200 function evaluations, plus a 42-cell hyperparameter ablation, on CIFAR-10. Across all 90 tested configurations, no tested nonmonotonic schedule improves on the monotonic baseline. The magnitude of the penalty, however, spans nearly three orders of magnitude: persistent and substantial in DDPM, intermediate in Flow Matching, and indistinguishable from zero in EDM. We show that this variation is not noise but a structural property of each trained denoiser, and we formalize it as the Schedule Sensitivity Coefficient, a cheap, architecture-agnostic diagnostic that provides evidence of non-convergence to the Bayes-optimal denoiser at the critical noise level. Our findings justify the field's tacit reliance on monotonic schedules and supply a new probe of diffusion model quality complementary to sample-quality metrics such as Fréchet Inception Distance.
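For concreteness, one generic way to produce a structured nonmonotonic schedule is to insert a temporary re-noising bounce into a standard decreasing schedule while holding the step count fixed. The sketch below illustrates what nonmonotonicity means at the schedule level; the EDM-style shape and all parameter values are assumptions, not a reconstruction of the paper's four families.

```python
import numpy as np

def decreasing_schedule(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """A standard monotonically decreasing noise schedule (EDM-style shape)."""
    t = np.linspace(0.0, 1.0, n)
    return (sigma_max ** (1 / rho)
            + t * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

def with_bounce(sigmas, at, height=1.5):
    """Break monotonicity at step `at` by re-raising the noise level,
    keeping the number of steps (the NFE budget) unchanged."""
    out = sigmas.copy()
    out[at] = sigmas[at - 1] * height   # height > 1 forces an upward jump
    return out

sigmas = decreasing_schedule(18)
nonmono = with_bounce(sigmas, at=9)
assert np.all(np.diff(sigmas) < 0)       # baseline is strictly decreasing
assert not np.all(np.diff(nonmono) < 0)  # bounce variant is not
```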
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript empirically tests whether monotonic noise schedules are necessary for diffusion models by introducing four families of structured nonmonotonic schedules and evaluating them on three distinct models (DDPM, EDM, Flow Matching) across NFE budgets from 10 to 200 on CIFAR-10, plus a 42-cell hyperparameter ablation. Across all 90 configurations, no nonmonotonic schedule outperforms the monotonic baseline, with performance penalties varying substantially by architecture (large in DDPM, intermediate in Flow Matching, near-zero in EDM). The authors attribute this variation to a structural property formalized as the Schedule Sensitivity Coefficient, a diagnostic for denoiser quality relative to the Bayes-optimal solution.
Significance. If the empirical results hold under broader testing, the work supplies direct evidence justifying the field's longstanding use of monotonic schedules and introduces a lightweight, architecture-agnostic probe (the Schedule Sensitivity Coefficient) that complements FID and other sample-quality metrics for diagnosing incomplete convergence in trained denoisers. This could guide both sampler design and future training objectives.
major comments (3)
- [Results] Results section: The headline claim that no tested nonmonotonic schedule improves on the monotonic baseline across 90 configurations is supported only by summary statements; the absence of full per-configuration tables, error bars, and statistical tests (e.g., paired t-tests or confidence intervals) leaves the reported three-order-of-magnitude variation in penalties difficult to assess for robustness (a statistical-test sketch follows this list).
- [Section 3] Section 3 (schedule families): The four structured nonmonotonic families plus the 42-cell ablation within those families do not address whether qualitatively different trajectories (e.g., learned nonmonotonic paths or schedules explicitly matched to per-model sensitivity) could still yield gains, especially on EDM where the observed penalty is already indistinguishable from zero; this coverage gap is load-bearing for any implication that monotonicity is necessary rather than merely sufficient for the tested cases.
- [Schedule Sensitivity Coefficient] Definition of Schedule Sensitivity Coefficient: The coefficient is presented as a cheap diagnostic of non-convergence to the Bayes-optimal denoiser, yet the manuscript provides no explicit formula, derivation, or controlled validation (e.g., on synthetic data where the optimal denoiser is known) showing that it is independent of the particular monotonic baseline chosen.
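On the first comment, the requested statistics are cheap once per-seed scores exist. A minimal sketch, assuming `fid_mono` and `fid_nonmono` hold FID for the same configuration under the two schedules, matched seed for seed:

```python
import numpy as np
from scipy import stats

def paired_schedule_test(fid_mono, fid_nonmono, alpha=0.05):
    """fid_mono, fid_nonmono: FID over matched random seeds for one
    configuration under the monotonic and nonmonotonic schedules."""
    diffs = np.asarray(fid_nonmono, float) - np.asarray(fid_mono, float)
    t_stat, p_value = stats.ttest_rel(fid_nonmono, fid_mono)
    mean = diffs.mean()
    # Two-sided confidence interval for the mean penalty.
    half = stats.t.ppf(1 - alpha / 2, len(diffs) - 1) * stats.sem(diffs)
    return {"mean_penalty": float(mean),
            "ci": (float(mean - half), float(mean + half)),
            "t": float(t_stat), "p": float(p_value)}
```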
minor comments (3)
- [Figures] Figure captions and legends should explicitly state whether error bars represent standard deviation over seeds or over NFE runs; several plots appear to omit them.
- [Related Work] The manuscript cites prior schedule work but omits direct comparison to recent nonmonotonic or adaptive sampling methods outside the four families (e.g., learned schedulers).
- [Preliminaries] Notation for noise levels and step indices should be unified across equations and pseudocode to avoid ambiguity between continuous and discrete formulations.
Simulated Author's Rebuttal
We thank the referee for their detailed and constructive feedback on our manuscript. We address each of the major comments below and outline the revisions we will make to strengthen the paper.
Point-by-point responses
Referee: [Results] Results section: The headline claim that no tested nonmonotonic schedule improves on the monotonic baseline across 90 configurations is supported only by summary statements; the absence of full per-configuration tables, error bars, and statistical tests (e.g., paired t-tests or confidence intervals) leaves the reported three-order-of-magnitude variation in penalties difficult to assess for robustness.
Authors: We agree that including full per-configuration tables, error bars from multiple random seeds, and statistical tests would improve the robustness assessment. In the revised version, we will add comprehensive tables showing FID or relevant metrics for all 90 configurations with error bars, and include paired t-tests or confidence intervals to quantify the significance of the observed differences.
Revision: yes
Referee: [Section 3] Section 3 (schedule families): The four structured nonmonotonic families plus the 42-cell ablation within those families do not address whether qualitatively different trajectories (e.g., learned non-monotonic paths or schedules explicitly matched to per-model sensitivity) could still yield gains, especially on EDM where the observed penalty is already indistinguishable from zero; this coverage gap is load-bearing for any implication that monotonicity is necessary rather than merely sufficient for the tested cases.
Authors: Our work systematically tests four families of structured nonmonotonic schedules to probe whether monotonicity is necessary under standard sampling practices. We do not claim that no conceivable nonmonotonic schedule could ever improve performance, particularly for models like EDM where the penalty is negligible. We will revise the manuscript to explicitly state the scope: our results show that monotonic schedules are sufficient and that the tested structured nonmonotonic variants do not yield improvements. We acknowledge that exploring learned or sensitivity-matched schedules is a valuable future direction but lies outside the current study's focus on structured families.
Revision: partial
Referee: [Schedule Sensitivity Coefficient] Definition of Schedule Sensitivity Coefficient: The coefficient is presented as a cheap diagnostic of non-convergence to the Bayes-optimal denoiser, yet the manuscript provides no explicit formula, derivation, or controlled validation (e.g., on synthetic data where the optimal denoiser is known) showing that it is independent of the particular monotonic baseline chosen.
Authors: We thank the referee for pointing out this omission. We will include the explicit mathematical formula for the Schedule Sensitivity Coefficient, a step-by-step derivation, and a controlled validation on synthetic data where the Bayes-optimal denoiser is known, demonstrating independence from the choice of monotonic baseline.
Revision: yes
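The proposed synthetic validation is feasible precisely because the Bayes-optimal denoiser has a closed form for Gaussian-mixture data: with y drawn from an isotropic mixture and x = y + sigma * eps, the optimal denoiser is the posterior mean E[y | x]. A minimal sketch under that assumed setup (all names here are illustrative, not the paper's):

```python
import numpy as np

def bayes_denoiser(x, means, weights, tau, sigma):
    """Posterior mean E[y | x] for y ~ sum_k w_k N(mu_k, tau^2 I) and
    x = y + sigma * eps. Shapes: x is (d,), means is (K, d)."""
    var = tau ** 2 + sigma ** 2
    # Component responsibilities under the marginal x ~ N(mu_k, var * I),
    # computed in log space for numerical stability.
    log_r = np.log(weights) - np.sum((x - means) ** 2, axis=1) / (2 * var)
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    # Each component's posterior mean, then mix by responsibility.
    post_means = (tau ** 2 * x + sigma ** 2 * means) / var
    return r @ post_means

means = np.array([[-2.0, 0.0], [2.0, 0.0]])
weights = np.array([0.5, 0.5])
print(bayes_denoiser(np.array([0.5, 0.1]), means, weights, tau=0.3, sigma=1.0))
```

Comparing a trained denoiser against this posterior mean across noise levels would yield a ground-truth non-convergence curve to set against the coefficient.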
Circularity Check
No significant circularity: purely empirical comparison with post-hoc diagnostic
Full rationale
The paper reports direct experimental results from applying four families of structured nonmonotonic schedules to DDPM, EDM, and Flow Matching models across 90 configurations on CIFAR-10. The headline claim (no tested nonmonotonic schedule beats the monotonic baseline) follows immediately from the measured FID or equivalent metrics and does not reduce to any fitted parameter, self-referential definition, or self-citation chain. The Schedule Sensitivity Coefficient is introduced as a derived diagnostic computed from the observed variation in penalties across noise levels; it is not used to derive the main result and carries no load-bearing assumption that loops back to the experimental inputs. No uniqueness theorems, ansatzes smuggled via citation, or renamings of known results appear in the derivation chain. The study is therefore self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: the three tested models (DDPM, EDM, Flow Matching) are representative of current diffusion architectures.
- Domain assumption: the four families of structured nonmonotonic schedules adequately sample the space of possible nonmonotonic trajectories.
invented entities (1)
- Schedule Sensitivity Coefficient: no independent evidence.