pith. machine review for the scientific record.

arxiv: 2605.06829 · v1 · submitted 2026-05-07 · 💻 cs.LG · cs.CV · cs.ET · cs.IT · cs.NE · math.IT

Recognition: 2 Lean theorem links

A Unified Measure-Theoretic View of Diffusion, Score-Based, and Flow Matching Generative Models

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 01:08 UTC · model grok-4.3

classification 💻 cs.LG · cs.CV · cs.ET · cs.IT · cs.NE · math.IT
keywords diffusion models · score-based generative models · flow matching · unified framework · continuity equation · Fokker-Planck equation · generative modeling · vector field

The pith

Diffusion models, score-based models, and flow matching all reduce to learning one time-dependent vector field on probability measures.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that diffusion models, score-based generative models, and flow matching are different ways of learning a time-dependent vector field. This field transports a simple reference distribution to the data distribution by generating a continuous family of intermediate probability measures that obey the continuity and Fokker-Planck equations. A single view matters because the three approaches have been converging in practice while their separate notations and derivations have hidden common structure and the real tradeoffs in sampling speed, numerical stability, and computational cost. The framework then derives reverse-time sampling procedures, shows that the deterministic probability-flow ODE matches the stochastic process marginals and connects to normalizing flows, and recasts flow matching as direct velocity regression under an interpolation path.

Core claim

We present a unified framework in which diffusion models, score-based generative models, and flow matching are instances of learning a time-dependent vector field that induces a family of marginals (ρ_t) governed by continuity and Fokker-Planck equations. Within this framework we derive reverse-time sampling for diffusion and score-based models as controlled stochastic dynamics, show that the probability flow ODE yields identical marginals and connects diffusion to likelihood-based normalizing flows, and interpret flow matching as direct regression of the velocity field under a chosen interpolation.

What carries the argument

A time-dependent vector field that induces a family of marginals (ρ_t) governed by the continuity and Fokker-Planck equations.
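
In the notation of the abstract, the two governing equations can be written out. A minimal statement, assuming the standard SDE form $dX_t = f(X_t, t)\,dt + g(t)\,dW_t$ (the symbols $f$, $g$, and $v_t$ follow the general stochastic-calculus convention, not notation extracted from this paper):

$$\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0 \qquad \text{(continuity equation, deterministic transport by } v_t)$$

$$\partial_t \rho_t = -\nabla \cdot \big(\rho_t\, f(\cdot, t)\big) + \tfrac{1}{2}\, g(t)^2\, \Delta \rho_t \qquad \text{(Fokker-Planck equation)}$$

The unification hinges on the standard observation that the Fokker-Planck equation is itself a continuity equation for the effective velocity $v_t(x) = f(x, t) - \tfrac{1}{2} g(t)^2 \nabla \log \rho_t(x)$, which is precisely the probability-flow ODE field discussed below.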

If this is right

  • Reverse-time sampling for diffusion and score-based models follows directly as controlled stochastic dynamics inside the same vector-field picture.
  • The probability flow ODE produces exactly the same marginals as the stochastic process and links diffusion models to normalizing flows.
  • Flow matching reduces to regressing the velocity field along a chosen interpolation path and coincides with score-based training only for specific paths; a minimal training sketch follows this list.
  • Objectives, sampling schemes, and discretization errors of all three families can be compared and bounded under identical notation.
  • Connections appear to Schrödinger bridges and entropic optimal transport through the shared continuity-equation structure.
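
To make the velocity-regression reading concrete, here is a minimal flow matching training sketch in PyTorch. It assumes a linear interpolation path and the common convention $x_0 \sim$ reference, $x_1 \sim$ data (the paper indexes $\rho_0$ as data and $\rho_1$ as reference, so its time direction may run the other way); VelocityNet and the toy target distribution are illustrative stand-ins, not taken from the paper.

    import torch
    import torch.nn as nn

    class VelocityNet(nn.Module):
        # Learns v_theta(x, t): the time-dependent vector field of the unified view.
        def __init__(self, dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, x, t):
            return self.net(torch.cat([x, t], dim=-1))

    def flow_matching_loss(model, x1):
        # Linear path x_t = (1 - t) x0 + t x1 has conditional velocity x1 - x0,
        # so training is direct regression of the velocity field.
        x0 = torch.randn_like(x1)          # reference sample
        t = torch.rand(x1.shape[0], 1)     # uniform time in [0, 1]
        xt = (1 - t) * x0 + t * x1
        return ((model(xt, t) - (x1 - x0)) ** 2).mean()

    model = VelocityNet(dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(1000):
        x1 = 0.5 * torch.randn(256, 2) + 2.0   # toy stand-in for the data distribution
        loss = flow_matching_loss(model, x1)
        opt.zero_grad(); loss.backward(); opt.step()

Sampling then integrates $dx/dt = v_\theta(x, t)$ from a reference draw at $t = 0$ to $t = 1$, e.g. with a plain Euler loop.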

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Stability or discretization techniques developed for one family can be transferred to the others by working directly with the shared vector field and continuity equation; compare the two samplers sketched after this list.
  • New generative procedures could be built by varying the interpolation path or adding controls inside the same framework rather than inventing separate derivations.
  • Uniform error analysis across discretization schemes becomes possible once all methods are expressed as approximations to the same underlying continuity equation.
  • Scalability questions can be reframed as questions about how well the vector field can be approximated while preserving the marginal evolution.
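
A sketch of the transfer point above: once a score network is in hand, the reverse SDE and the probability-flow ODE are two integrators of the same learned field, so step-size control or stabilization tuned on one carries over to the other. The update rules below are the standard reverse-time dynamics for $dX_t = f\,dt + g\,dW_t$; score_fn, f, and g are placeholders, not an API from the paper.

    import torch

    def reverse_sde_step(x, t, dt, f, g, score_fn):
        # Euler-Maruyama step of the reverse SDE (dt < 0 when integrating backwards):
        #   dx = [f(x, t) - g(t)^2 * score(x, t)] dt + g(t) dW
        drift = f(x, t) - g(t) ** 2 * score_fn(x, t)
        return x + drift * dt + g(t) * (abs(dt) ** 0.5) * torch.randn_like(x)

    def probability_flow_step(x, t, dt, f, g, score_fn):
        # Euler step of the probability-flow ODE, which shares the SDE's marginals:
        #   dx/dt = f(x, t) - (1/2) g(t)^2 * score(x, t)
        drift = f(x, t) - 0.5 * g(t) ** 2 * score_fn(x, t)
        return x + drift * dt

The two steppers differ only in the factor of one half and the noise term; every other ingredient, including the learned score itself, is shared.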

Load-bearing premise

The distinct methods can be rewritten exactly as learning this vector field without losing their separate training objectives or practical strengths.

What would settle it

An explicit example in which the sampling trajectories or marginal evolution of one method cannot be reproduced by any vector field satisfying the continuity equation while matching that method's original training loss.

Figures

Figures reproduced from arXiv: 2605.06829 by Aditya Ranganath, Mukesh Singhal.

Figure 1
Figure 1: Unified view of diffusion, score-based, and flow-matching generative models as probability transport between a data distribution ρ0 and a reference distribution ρ1. [PITH_FULL_IMAGE:figures/full_fig_p004_1.png]
read the original abstract

We survey continuous-time generative modeling methods based on transporting a simple reference distribution to a data distribution via stochastic or deterministic dynamics. We present a unified framework in which diffusion models, score-based generative models, and flow matching are instances of learning a time-dependent vector field that induces a family of marginals $(\rho_t)_{t \in [0,1]}$ governed by continuity and Fokker-Planck equations. Such a unified theory is timely because these methods are converging methodologically, yet fragmented notation and competing derivations continue to obscure their shared structure and the practical tradeoffs governing sampling, stability, and computation. Within this framework, we (i) derive reverse-time sampling for diffusion and score-based models as controlled stochastic dynamics, (ii) show that the probability flow ODE yields identical marginals and connects diffusion to likelihood-based normalizing flows, and (iii) interpret flow matching as direct regression of the velocity field under a chosen interpolation, clarifying when it coincides with or differs from score-based training. We compare objectives, sampling schemes, and discretization errors under unified notation, discuss connections to Schrödinger bridges and entropic optimal transport, and summarize theoretical guarantees and open problems on approximation, stability, and scalability.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript surveys continuous-time generative modeling methods that transport a simple reference distribution to a data distribution via stochastic or deterministic dynamics. It presents a unified measure-theoretic framework in which diffusion models, score-based generative models, and flow matching are instances of learning a time-dependent vector field inducing a family of marginals (ρ_t) governed by the continuity and Fokker-Planck equations. Within this view the paper derives reverse-time sampling as controlled stochastic dynamics, shows that the probability-flow ODE produces identical marginals and links diffusion to likelihood-based normalizing flows, interprets flow matching as direct velocity-field regression under a chosen interpolation, compares objectives and discretization errors in unified notation, and discusses connections to Schrödinger bridges and entropic optimal transport together with theoretical guarantees and open problems.

Significance. If the unification is accurate, the work supplies a clarifying perspective on methods that are converging methodologically yet remain fragmented in notation. The explicit links to optimal transport and Schrödinger bridges, the side-by-side comparison of sampling schemes and error sources, and the enumeration of open problems on approximation, stability, and scalability could help researchers design hybrid algorithms and diagnose practical trade-offs more systematically.

minor comments (3)
  1. [Abstract and §3] The abstract states that the framework 'clarifies when [flow matching] coincides with or differs from score-based training,' yet the manuscript does not appear to supply a concise table or corollary that lists the precise conditions (e.g., choice of interpolation, noise schedule) under which the two objectives become identical; adding such a summary would strengthen the comparative claim. One candidate identity is sketched after this list.
  2. [Section on discretization errors] In the discussion of discretization errors, the text compares schemes under unified notation but does not report explicit convergence rates or numerical examples that quantify how the error scales with step size for each method; a short table of leading-order error terms would make the practical trade-offs easier to assess.
  3. [Discussion of Schrödinger bridges] The connections to Schrödinger bridges and entropic optimal transport are mentioned as future directions, but the manuscript does not indicate whether the unified vector-field view immediately recovers the dynamic formulation of the Schrödinger bridge or requires additional assumptions; a brief remark clarifying this point would avoid potential reader confusion.
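
On the first minor comment, the kind of corollary requested exists in the flow matching literature for Gaussian interpolations. A hedged sketch, with the schedule $(\alpha_t, \sigma_t)$ being generic notation rather than the paper's:

$$\rho_t(\cdot \mid x_1) = \mathcal{N}(\alpha_t x_1, \sigma_t^2 I) \;\Longrightarrow\; v_t(x) = \frac{\dot\alpha_t}{\alpha_t}\, x + \Big(\frac{\dot\alpha_t}{\alpha_t}\,\sigma_t^2 - \sigma_t \dot\sigma_t\Big)\, \nabla \log \rho_t(x).$$

Because the marginal velocity is then an affine function of the score, velocity regression and denoising score matching target the same object up to a time-dependent reweighting, and the two objectives genuinely diverge once the interpolation leaves this Gaussian family.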

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive and accurate summary of our manuscript, which correctly identifies the unified measure-theoretic perspective on diffusion, score-based, and flow matching models as instances of learning time-dependent vector fields under continuity and Fokker-Planck dynamics. We appreciate the recognition of the links to Schrödinger bridges, entropic optimal transport, and the enumeration of open problems. Given the recommendation for minor revision and the absence of specific major comments, we will incorporate minor editorial improvements to enhance clarity and presentation in the revised version.

Circularity Check

0 steps flagged

No significant circularity; unification synthesizes external results

full rationale

The paper surveys and reframes diffusion, score-based, and flow-matching models as instances of learning a time-dependent vector field inducing marginals (ρ_t) via the continuity and Fokker-Planck equations. All load-bearing steps—derivation of reverse-time sampling as controlled dynamics, equivalence of the probability-flow ODE to the same marginals, and interpretation of flow matching as velocity regression—rely on standard stochastic calculus and measure-theoretic identities rather than self-referential definitions, fitted parameters renamed as predictions, or self-citation chains. Connections to Schrödinger bridges and entropic optimal transport are invoked as external context, not as load-bearing premises justified only by the authors' prior work. The framework is presented as a clarifying perspective with explicit comparisons of objectives and discretization errors, remaining self-contained against independent benchmarks in stochastic processes.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Relies on standard measure-theoretic and PDE assumptions from probability theory and prior generative modeling literature; no new free parameters or invented entities introduced in the abstract.

axioms (1)
  • domain assumption Marginal distributions (ρ_t) are governed by continuity and Fokker-Planck equations.
    Central to the unified framework as stated in the abstract.

pith-pipeline@v0.9.0 · 5529 in / 1122 out tokens · 36828 ms · 2026-05-11T01:08:38.906106+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

111 extracted references · 111 canonical work pages · 11 internal anchors

  1. Denoising Diffusion Probabilistic Models. arXiv:2006.11239, 2020.
  2. Score-Based Generative Modeling through Stochastic Differential Equations. arXiv:2011.13456, 2020.
  3. Flow Matching for Generative Modeling. arXiv:2210.02747, 2022.
  4. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research, 2005.
  5. A Connection Between Score Matching and Denoising Autoencoders. Neural Computation, 2011.
  6. Neural Ordinary Differential Equations. Advances in Neural Information Processing Systems, 2018.
  7. Reverse-Time Diffusion Equation Models. Stochastic Processes and their Applications, 1982.
  8. Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling. 2021.
  9. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. International Conference on Machine Learning, 2015.
  10. Improved Denoising Diffusion Probabilistic Models. arXiv:2102.09672, 2021.
  11. FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models. arXiv:1810.01367, 2018.
  12. Sliced Score Matching: A Scalable Approach to Density and Score Estimation. arXiv:1905.07088, 2019.
  13. Improved Techniques for Training Score-Based Generative Models. arXiv:2006.09011, 2020.
  14. Denoising Diffusion Implicit Models. arXiv:2010.02502, 2020.
  15. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. arXiv:2209.03003, 2022.
  16. Elucidating the Design Space of Diffusion-Based Generative Models. arXiv:2206.00364, 2022.
  17. Auto-Encoding Variational Bayes. arXiv:1312.6114, 2013.
  18. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. arXiv:1401.4082, 2014.
  19. Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2014.
  20. NICE: Non-linear Independent Components Estimation. International Conference on Learning Representations, 2015.
  21. Density Estimation using Real NVP. 2017.
  22. Towards Principled Methods for Training Generative Adversarial Networks. arXiv:1701.04862, 2017.
  23. Which Training Methods for GANs Do Actually Converge? 2018.
  24. Improved Techniques for Training GANs. 2016.
  25. Deep Learning. 2016.
  26. Denoising Diffusion Implicit Models. International Conference on Learning Representations, 2021.
  27. Improved Denoising Diffusion Probabilistic Models. International Conference on Machine Learning, 2021.
  28. Variational Diffusion Models. arXiv:2107.00630, 2021.
  29. Normalizing Flows for Probabilistic Modeling and Inference. Journal of Machine Learning Research, 2021.
  30. Sliced Score Matching: A Scalable Approach to Density and Score Estimation. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence, 2020.
  31. Robust Compressed Sensing MRI with Deep Generative Priors. Advances in Neural Information Processing Systems, 2021.
  32. Score-based Diffusion Models for Accelerated MRI. Medical Image Analysis, 2022.
  33. Denoising Diffusion Restoration Models. Advances in Neural Information Processing Systems, 2022.
  34. Diffusion Posterior Sampling for General Noisy Inverse Problems. International Conference on Learning Representations, 2023.
  35. A Survey of the Schrödinger Problem and Some of Its Connections with Optimal Transport. Discrete and Continuous Dynamical Systems - Series A, 2014.
  36. Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. arXiv:2208.09392, 2022.
  37. Stochastic Interpolants: A Unifying Framework for Flows and Diffusions. arXiv:2303.08797, 2023.
  38. Bayesian Flow Networks. arXiv:2308.07037, 2023.
  39. Consistency Models. arXiv:2303.01469, 2023.
  40. Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion. arXiv:2310.02279, 2023.
  41. Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution. arXiv:2310.16834, 2023.
  42. Towards a Unified Framework for Guided Diffusion Models. arXiv:2512.04985, 2025.
  43. Score-based Constrained Generative Modeling via Langevin Diffusions with Boundary Conditions. arXiv:2510.23985, 2025.
  44. Stochastic Interpolants: A Unifying Framework for Flows and Diffusions. Journal of Machine Learning Research, 2025.
  45. Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion. International Conference on Learning Representations, 2024.
  46. Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution. Proceedings of the 41st International Conference on Machine Learning, 2024.
  47. Flow Matching Guide and Code. arXiv:2412.06264, 2024.
  48. Flow Map Matching with Stochastic Interpolants: A Mathematical Framework for Consistency Models. Transactions on Machine Learning Research.
  49. On the Guidance of Flow Matching. Proceedings of the 42nd International Conference on Machine Learning, 2025.
  50. Consistency Diffusion Bridge Models. Advances in Neural Information Processing Systems, 2024.
  51. Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations. arXiv:2404.15766, 2024.
  52. Exploring Diffusion and Flow Matching Under Generator Matching. arXiv:2412.11024, 2024.
  53. Progressive Distillation for Fast Sampling of Diffusion Models. International Conference on Learning Representations, 2022.
  54. Gentiloni Silveri, Conforti, and Durmus. Theoretical Guarantees in …, 2024.
  55. Generator Matching: Generative Modeling with Arbitrary Markov Processes. International Conference on Learning Representations, 2025.
  56. Flow Matching Achieves Almost Minimax Optimal Convergence. International Conference on Learning Representations, 2025.
  57. Score-Based Generative Modeling with Critically-Damped Langevin Diffusion. International Conference on Learning Representations, 2022.
  58. Autoregressive Diffusion Models. International Conference on Learning Representations, 2022.
  59. A Continuous Time Framework for Discrete Denoising Models. Advances in Neural Information Processing Systems, 2022.
  60. Poisson Flow Generative Models. Advances in Neural Information Processing Systems, 2022.
  61. PFGM++: Unlocking the Potential of Physics-Inspired Generative Models. Proceedings of the 40th International Conference on Machine Learning, 2023.
  62. Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning. International Conference on Learning Representations, 2023.
  63. Simplified and Generalized Masked Diffusion for Discrete Data. Advances in Neural Information Processing Systems, 2024.
  64. Categorical Flow Matching on Statistical Manifolds. Advances in Neural Information Processing Systems, 2024.
  65. Functional Flow Matching. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 2024.
  66. Wasserstein Flow Matching: Generative Modeling Over Families of Distributions. Proceedings of the 42nd International Conference on Machine Learning, 2025.
  67. Diffusion Models are Minimax Optimal Distribution Estimators. Proceedings of the 40th International Conference on Machine Learning, 2023.
  68. Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data. Proceedings of the 40th International Conference on Machine Learning, 2023.
  69. Adaptivity of Diffusion Models to Manifold Structures. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 2024.
  70. Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions. Proceedings of the 41st International Conference on Machine Learning, 2024.
  71. Diffusion Models Encode the Intrinsic Dimension of Data Manifolds. Proceedings of the 41st International Conference on Machine Learning, 2024.
  72. One-Step Diffusion Distillation through Score Implicit Matching. Advances in Neural Information Processing Systems, 2024.
  73. EM Distillation for One-step Diffusion Models. Advances in Neural Information Processing Systems, 2024.
  74. Gaussian Mixture Flow Matching Models. Proceedings of the 42nd International Conference on Machine Learning, 2025.
  75. Structured Denoising Diffusion Models in Discrete State-Spaces. Advances in Neural Information Processing Systems, 2021.
  76. Star-Shaped Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems, 2023.
  77. Discrete Flow Matching. Advances in Neural Information Processing Systems, 2024.
  78. Fisher Flow Matching for Generative Modeling over Discrete Data. Advances in Neural Information Processing Systems, 2024.
  79. Variational Flow Matching for Graph Generation. Advances in Neural Information Processing Systems, 2024.
  80. Simple and Effective Masked Diffusion Language Models. Advances in Neural Information Processing Systems, 2024.
Showing first 80 references.