pith. machine review for the scientific record.

arxiv: 2512.16768 · v3 · submitted 2025-12-18 · 📊 stat.ML · cs.LG · math.PR

Recognition: 2 theorem links · Lean Theorem

On The Hidden Biases of Flow Matching Samplers

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 21:07 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · math.PR
keywords flow matching · finite-sample bias · plug-in estimation · affine conditional flows · kernel mixture estimator · flux-null corrections · kinetic energy tails

The pith

Replacing the target distribution with finite-sample surrogates in flow matching introduces three coupled biases that alter learned paths and dynamics.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines flow matching by viewing it as a finite-sample plug-in problem, where the target law is replaced by an empirical measure or smoothed estimator. For affine conditional flows it derives the exact empirical minimizer and shows that a smoothed plug-in regime produces a terminal law that is precisely a kernel-mixture estimator. It further shows that the empirical minimizer is generally not a gradient field, and that a fixed empirical marginal path admits multiple particle dynamics, since vector fields whose probability flux is divergence-free can be added without changing the path. The base distribution is identified as the main control on the upper tails of kinetic energy, with Gaussian bases giving exponential bounds and polynomially tailed bases giving polynomial bounds.
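To make the affine setting concrete: for the linear path z_t = (1 − t)z₀ + t·x₁ with a standard Gaussian base, the exact empirical minimizer has a standard closed form — a Gaussian-kernel-weighted average of the per-sample velocities (xᵢ − z)/(1 − t). A minimal numpy sketch of that closed form (our own illustration of the construction; the data array is hypothetical):

```python
import numpy as np

def empirical_fm_velocity(t, z, data):
    """Closed-form empirical minimizer for the linear path
    z_t = (1 - t) z0 + t x1, with z0 ~ N(0, I) and x1 drawn from the
    empirical measure over `data` (shape: n_samples x dim).

    Conditional on x1 = x_i, z_t ~ N(t x_i, (1 - t)^2 I), so the
    posterior weights over the data points are Gaussian kernels and
    v*(t, z) = sum_i w_i(t, z) (x_i - z) / (1 - t).
    """
    s = 1.0 - t
    # log N(z; t x_i, s^2 I) up to a constant shared by all i
    logits = -np.sum((z - t * data) ** 2, axis=1) / (2.0 * s ** 2)
    w = np.exp(logits - logits.max())  # subtract max for stability
    w /= w.sum()
    return (w[:, None] * (data - z)).sum(axis=0) / s

rng = np.random.default_rng(0)
data = rng.normal(size=(8, 2))  # hypothetical training sample
v = empirical_fm_velocity(0.5, np.zeros(2), data)
```

Near a point t·xᵢ the weights concentrate on sample i and the field points straight at xᵢ, which is the memorization behavior the plug-in analysis makes precise.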

Core claim

For affine conditional flows the exact empirical minimizer is derived, and in the smoothed plug-in regime the terminal law equals a kernel-mixture estimator. Fixed empirical marginal paths do not determine unique dynamics: explicit families of flux-null corrections can be added while preserving the marginal path. The source distribution supplies the primary mechanism for upper-tail control on instantaneous and integrated kinetic energy.

What carries the argument

Affine conditional flows under a hierarchy of plug-in estimators, which permit explicit derivation of the empirical minimizer and construction of flux-null vector-field corrections.

If this is right

  • The statistical target learned by empirical flow matching differs from the intended target distribution.
  • The learned vector field need not be a gradient even when each conditional flow is.
  • Multiple distinct particle dynamics can realize the same marginal paths via addition of flux-null fields.
  • Gaussian base distributions yield exponential upper-tail bounds on kinetic energy while polynomially tailed bases yield polynomial bounds.
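The non-uniqueness bullet can be verified numerically. For a standard 2D Gaussian marginal p and the rotational field w(x, y) = (−y, x), the probability flux p·w has zero divergence, so adding w to a velocity field leaves that Gaussian marginal unchanged. A finite-difference check of this flux-null property (our own toy example, not the paper's explicit correction families):

```python
import numpy as np

def p(x, y):
    # standard 2D Gaussian density
    return np.exp(-(x**2 + y**2) / 2.0) / (2.0 * np.pi)

def flux_divergence(x, y, h=1e-5):
    # flux F = p * w with the rotational field w(x, y) = (-y, x);
    # central differences approximate div F = d(p*(-y))/dx + d(p*x)/dy
    dFx = (p(x + h, y) * (-y) - p(x - h, y) * (-y)) / (2 * h)
    dFy = (p(x, y + h) * x - p(x, y - h) * x) / (2 * h)
    return dFx + dFy

xs, ys = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
max_div = np.max(np.abs(flux_divergence(xs, ys)))
# max_div is numerically zero: the rotational flux never moves mass
# across level sets of p, so the marginal path is untouched
```

Analytically, div(p·w) = −y·∂ₓp + x·∂ᵧp = xy·p − xy·p = 0, so any multiple of w can be added to the dynamics without changing this Gaussian marginal.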

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Training routines may need auxiliary regularization to select among the non-unique dynamics that share the same marginal path.
  • The choice of base distribution could be used deliberately to enforce desired kinetic-energy tail behavior in generated trajectories.
  • The plug-in bias analysis suggests that similar hidden corrections may exist for non-affine flows once suitable approximations are introduced.
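The second editorial bullet is easy to probe empirically: for the linear path the conditional velocity is x₁ − x₀, so the instantaneous kinetic energy ‖x₁ − x₀‖² inherits the tails of the base. A quick Monte Carlo comparison (illustrative only, with a Student-t base standing in for a generic polynomially tailed choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n, x1 = 200_000, 0.0  # fixed target point; linear-path velocity is x1 - x0

# instantaneous kinetic energy |x1 - x0|^2 under two base distributions
ke_gauss = (x1 - rng.normal(size=n)) ** 2            # Gaussian base: chi-square tail
ke_heavy = (x1 - rng.standard_t(df=3, size=n)) ** 2  # t(3) base: polynomial tail

q = 0.999
tail_gauss = np.quantile(ke_gauss, q)  # ~11 for a chi-square(1)
tail_heavy = np.quantile(ke_heavy, q)  # an order of magnitude larger
```

The extreme quantiles of the heavy-tailed base dominate those of the Gaussian base by a wide margin, matching the exponential-versus-polynomial upper-tail dichotomy in the claim.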

Load-bearing premise

The derivations assume that conditional flows are affine and that replacing the target by an empirical or smoothed surrogate is the appropriate finite-sample stand-in.

What would settle it

Generate samples from a smoothed empirical flow-matching model with affine paths and check whether the resulting terminal distribution exactly equals the kernel-mixture estimator built from the same samples.
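One way to run that check in one dimension, assuming the smoothed regime amounts to stopping the linear-path ODE at t = 1 − h so the predicted terminal law is the kernel mixture (1/n) Σᵢ N((1 − h)xᵢ, h²) (a hedged reading; the paper's exact smoothing scheme may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
data = np.array([-2.0, -0.5, 1.0, 2.5])  # hypothetical 1D training sample
h = 0.1                                   # smoothing bandwidth

def velocity(t, z):
    # exact empirical minimizer for the linear path z_t = (1 - t) z0 + t x1
    # with z0 ~ N(0, 1) and x1 drawn uniformly from `data`
    s = 1.0 - t
    logits = -((z[:, None] - t * data[None, :]) ** 2) / (2 * s**2)
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return (w * (data[None, :] - z[:, None])).sum(axis=1) / s

# Euler-integrate the sampler ODE from t = 0 to t = 1 - h
z = rng.normal(size=20_000)
ts = np.linspace(0.0, 1.0 - h, 1601)
for t0, t1 in zip(ts[:-1], ts[1:]):
    z = z + (t1 - t0) * velocity(t0, z)

# predicted terminal law: the kernel mixture (1/n) sum_i N((1 - h) x_i, h^2)
idx = rng.integers(len(data), size=20_000)
ref = (1 - h) * data[idx] + h * rng.normal(size=20_000)
```

If the kernel-mixture claim holds, the integrated samples `z` and the direct mixture samples `ref` should agree in distribution up to discretization and Monte Carlo error.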

Original abstract

Flow matching (FM) constructs continuous-time ODE samplers by prescribing probability paths between a base distribution and a target distribution. In this note, we study FM through the lens of finite-sample plug-in estimation. In addition to replacing population expectations by sample averages, one may replace the target distribution itself by a finite-sample surrogate, ranging from the empirical measure to a smoothed estimator. This viewpoint yields a natural hierarchy of empirical FM models. For affine conditional flows, we derive the exact empirical minimizer and identify a smoothed plug-in regime in which the terminal law is exactly a kernel-mixture estimator. This plug-in perspective clarifies several coupled finite-sample biases of empirical FM. First, replacing the target law by a finite-sample surrogate changes the statistical target. Second, the empirical minimizer is generally not a gradient field, even when each conditional flow is. Third, a fixed empirical marginal path does not determine a unique particle dynamics: one may add extra vector fields whose probability flux has zero divergence without changing the marginal path. For Gaussian affine conditional paths, we give explicit families of such flux-null corrections. Finally, the source distribution provides a primary mechanism controlling upper tails of kinetic energy. In particular, Gaussian bases yield exponential upper-tail bounds for instantaneous and integrated kinetic energies, whereas polynomially tailed bases yield corresponding polynomial upper-tail bounds.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper analyzes flow matching (FM) samplers through finite-sample plug-in estimation, replacing population quantities with empirical or smoothed surrogates. For affine conditional flows it derives the exact empirical minimizer and shows that a smoothed plug-in regime produces a terminal law that is exactly a kernel-mixture estimator. It identifies three coupled biases: the statistical target changes when the target law is replaced by a finite-sample surrogate; the empirical minimizer is generally not a gradient field; and a fixed empirical marginal path does not determine unique particle dynamics, because vector fields whose probability flux has zero divergence can be added without altering the marginals. Explicit families of such flux-null corrections are supplied for Gaussian affine paths. Finally, the source distribution controls upper tails of kinetic energy, with Gaussian bases yielding exponential bounds and polynomially tailed bases yielding polynomial bounds.

Significance. If the derivations hold, the work supplies a precise account of finite-sample biases that arise in practical FM implementations. The explicit empirical minimizer, kernel-mixture terminal law, and flux-null corrections give concrete tools for diagnosing and potentially correcting these biases. The tail-bound results further link the choice of base distribution to kinetic-energy control, which is directly relevant to sampler stability and efficiency. These contributions strengthen the theoretical foundation of FM and may inform the design of more robust training and sampling procedures.

major comments (2)
  1. The central derivations for the exact empirical minimizer and the kernel-mixture terminal law are stated to hold under the affine conditional-flow assumption; the manuscript should make explicit whether the same closed-form results extend to non-affine flows or whether the bias analysis is restricted to this subclass.
  2. The flux-null corrections for Gaussian affine paths are presented as preserving the marginal path; it is unclear from the given claims whether these corrections also preserve the optimality property of the empirical minimizer or whether they introduce additional finite-sample variance that must be quantified.
minor comments (3)
  1. The plug-in hierarchy (empirical measure to smoothed estimator) is introduced in the abstract but should be given a compact formal definition early in the text, with explicit notation for each level.
  2. The tail-bound statements would benefit from a short remark on how the exponential versus polynomial bounds translate into practical recommendations for base-distribution choice.
  3. A brief comparison, even qualitative, between the derived empirical minimizer and the population FM objective would help readers gauge the magnitude of the identified biases.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the careful review and the recommendation for minor revision. We appreciate the positive summary of our contributions and address each major comment below.

Point-by-point responses
  1. Referee: The central derivations for the exact empirical minimizer and the kernel-mixture terminal law are stated to hold under the affine conditional-flow assumption; the manuscript should make explicit whether the same closed-form results extend to non-affine flows or whether the bias analysis is restricted to this subclass.

    Authors: We agree that the derivations for the exact empirical minimizer and the kernel-mixture terminal law, as well as the associated bias analysis, are restricted to the affine conditional-flow setting. This scope is indicated in the abstract and main text, but we will revise the introduction and conclusion to state explicitly that the closed-form results do not extend to non-affine flows in general and that the bias analysis is specific to this subclass. revision: yes

  2. Referee: The flux-null corrections for Gaussian affine paths are presented as preserving the marginal path; it is unclear from the given claims whether these corrections also preserve the optimality property of the empirical minimizer or whether they introduce additional finite-sample variance that must be quantified.

    Authors: The flux-null corrections are added to the empirical minimizer and preserve the marginal path by construction (zero divergence flux). However, because they are nonzero vector fields, the resulting dynamics do not preserve the optimality property with respect to the empirical loss. We have not quantified any additional finite-sample variance introduced by these corrections, as the focus of the note was on existence and explicit construction. We will add a clarifying sentence noting that optimality is not preserved and that variance quantification lies beyond the current scope. revision: partial
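The rebuttal's point that optimality is not preserved follows from the least-squares structure of the CFM objective: the minimizer is a conditional expectation, so adding any nonzero field w that depends only on (t, z_t) raises the loss by exactly E‖w‖². A Monte Carlo sketch of this penalty (our own illustration; a generic rotational field stands in for the paper's flux-null corrections, since the E‖w‖² penalty applies to any added field):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(6, 2))  # hypothetical 2D training sample

def v_star(t, z):
    # empirical CFM minimizer for the linear path with N(0, I) base;
    # t has shape (n,), z has shape (n, 2)
    s = (1.0 - t)[:, None]
    diff = z[:, None, :] - t[:, None, None] * data[None, :, :]
    logits = -np.sum(diff**2, axis=2) / (2 * s**2)
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    per_sample = (data[None, :, :] - z[:, None, :]) / s[:, :, None]
    return np.einsum('nk,nkd->nd', w, per_sample)

def cfm_loss(field, n=100_000):
    # Monte Carlo estimate of E || field(t, z_t) - (x1 - z0) ||^2
    t = rng.uniform(0.05, 0.95, size=n)  # stay away from the endpoints
    z0 = rng.normal(size=(n, 2))
    x1 = data[rng.integers(len(data), size=n)]
    zt = (1 - t)[:, None] * z0 + t[:, None] * x1
    return np.mean(np.sum((field(t, zt) - (x1 - z0)) ** 2, axis=1))

rot = lambda z: np.stack([-z[:, 1], z[:, 0]], axis=1)  # rotational direction

loss_opt = cfm_loss(v_star)
loss_cor = cfm_loss(lambda t, z: v_star(t, z) + 0.5 * rot(z))
# loss_cor exceeds loss_opt by roughly 0.25 * E||z_t||^2: the correction
# preserves marginals (when flux-null) but strictly worsens the objective
```

The gap is exactly the mean squared norm of the added field, so marginal preservation and loss optimality come apart, as the rebuttal states.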

Circularity Check

0 steps flagged

No significant circularity; derivations are self-contained under explicit assumptions

Full rationale

The paper's central results consist of explicit derivations of the empirical minimizer, kernel-mixture terminal law, and flux-null corrections for affine conditional flows under a defined plug-in hierarchy. These follow directly from replacing population measures with sample surrogates and imposing the affine path assumption, without any reduction to fitted parameters, self-referential definitions, or load-bearing self-citations. The abstract and stated claims present the statistical target change, non-uniqueness of dynamics, and tail bounds as consequences of the model construction itself, with no steps that equate outputs to inputs by construction or import uniqueness via prior author work. The derivation chain remains internally consistent and independent of external fitted results.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The analysis rests on standard assumptions of flow matching and finite-sample statistics without introducing new entities or fitted parameters beyond the choice of plug-in estimator.

axioms (2)
  • domain assumption Conditional flows are affine for the exact minimizer derivation
    Invoked to obtain closed-form empirical minimizer and kernel-mixture terminal law
  • standard math Plug-in replaces population quantities by sample averages or smoothed surrogates
    Core modeling choice that generates the hierarchy of empirical FM models

pith-pipeline@v0.9.0 · 5529 in / 1244 out tokens · 27666 ms · 2026-05-16T21:07:49.314302+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Is Flow Matching Just Trajectory Replay for Sequential Data?

stat.ML · 2026-02 · unverdicted · novelty 7.0

    Flow matching on time series targets a closed-form nonparametric velocity field that is a similarity-weighted mixture of observed transition velocities, making neural models approximations to an ideal memory-augmented...

Reference graph

Works this paper leans on

51 extracted references · 51 canonical work pages · cited by 1 Pith paper · 7 internal anchors

  1. Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: a unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023.
  2. Michael S. Albergo and Eric Vanden-Eijnden. Learning to sample better. Journal of Statistical Mechanics: Theory and Experiment, 2024(10):104014, 2024.
  3. Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. Springer, 2005.
  4. Jacob Bamberger, Iolo Jones, Dennis Duncan, Michael M. Bronstein, Pierre Vandergheynst, and Adam Gosztolai. Carré du champ flow matching: better quality-generalisation tradeoff in generative models. arXiv preprint arXiv:2510.05930, 2025.
  5. Ricardo Baptista, Agnimitra Dasgupta, Nikola B. Kovachki, Assad Oberai, and Andrew M. Stuart. Memorization and regularization in generative diffusion models. arXiv preprint arXiv:2501.15785, 2025.
  6. Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375–393, 2000.
  7. Quentin Bertrand, Anne Gagneux, Mathurin Massias, and Rémi Emonet. On the closed-form of flow matching: generalization does not arise from target stochasticity. arXiv preprint arXiv:2506.03719, 2025.
  8. Zachary Charles and Keith Rush. Iterated vector fields and conservatism, with applications to federated learning. In International Conference on Algorithmic Learning Theory, pages 130–147. PMLR, 2022.
  9. Yifan Chen, Eric Vanden-Eijnden, and Jiawei Xu. Lipschitz-guided design of interpolation schedules in generative models. arXiv preprint arXiv:2509.01629, 2025.
  10. Zhengdao Chen. On the interpolation effect of score smoothing. arXiv preprint arXiv:2502.19499, 2025.
  11. Julie Delon and Agnes Desolneux. A Wasserstein-type distance in the space of Gaussian mixture models. SIAM Journal on Imaging Sciences, 13(2):936–970, 2020.
  12. N. Benjamin Erichson, Vinicius Mikuni, Dongwei Lyu, Yang Gao, Omri Azencot, Soon Hoe Lim, and Michael W. Mahoney. FLEX: a backbone for diffusion-based modeling of spatio-temporal physical systems. arXiv preprint arXiv:2505.17351, 2025.
  13. Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024.
  14. Ruiqi Feng, Chenglei Yu, Wenhao Deng, Peiyan Hu, and Tailin Wu. On the guidance of flow matching. arXiv preprint arXiv:2502.02150, 2025.
  15. Anne Gagneux, Ségolène Martin, Rémi Gribonval, and Mathurin Massias. The generation phases of flow matching: a denoising perspective. arXiv preprint arXiv:2510.24830, 2025.
  16. Weiguo Gao and Ming Li. How do flow matching models memorize and generalize in sample data subspaces? arXiv preprint arXiv:2410.23594, 2024.
  17. Johannes Hertrich, Antonin Chambolle, and Julie Delon. On the relation between rectified flows and optimal transport. arXiv preprint arXiv:2505.19712, 2025.
  18. Christian Horvat and Jean-Pascal Pfister. On gauge freedom, conservativity and intrinsic dimensionality estimation in diffusion models. arXiv preprint arXiv:2402.03845, 2024.
  19. Samuel Hurault, Matthieu Terris, Thomas Moreau, and Gabriel Peyré. From score matching to diffusion: a fine-grained error analysis in the Gaussian setting. arXiv preprint arXiv:2503.11615, 2025.
  20. Lea Kunkel. Distribution estimation via flow matching with Lipschitz guarantees. arXiv preprint arXiv:2509.02337, 2025.
  21. Lea Kunkel and Mathias Trabs. On the minimax optimality of flow matching through the connection to kernel density estimation. arXiv preprint arXiv:2504.13336, 2025.
  22. Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, and Stefano Ermon. The principles of diffusion models. arXiv preprint arXiv:2510.21890, 2025.
  23. Sixu Li, Shi Chen, and Qin Li. A good score does not lead to a good generative model. arXiv preprint arXiv:2401.04856, 2024.
  24. Yunchen Li, Shaohui Lin, and Zhou Yu. Generation properties of stochastic interpolation under finite training set. arXiv preprint arXiv:2509.21925, 2025.
  25. Ziyun Li, Ben Dai, Huancheng Hu, Henrik Boström, and Soon Hoe Lim. EnfoPath: energy-informed analysis of generative trajectories in flow matching. arXiv preprint arXiv:2511.19087, 2025.
  26. Soon Hoe Lim, Yijin Wang, Annan Yu, Emma Hart, Michael W. Mahoney, Xiaoye S. Li, and N. Benjamin Erichson. Elucidating the design choice of probability paths in flow matching for forecasting. arXiv preprint arXiv:2410.03229, 2024.
  27. Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.
  28. Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky T. Q. Chen, David Lopez-Paz, Heli Ben-Hamu, and Itai Gat. Flow matching guide and code. arXiv preprint arXiv:2412.06264, 2024.
  29. Qiang Liu. Rectified flow: a marginal preserving approach to optimal transport. arXiv preprint arXiv:2209.14577, 2022.
  30. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022.
  31. Yang Lyu, Tan Minh Nguyen, Yuchun Qian, and Xin T. Tong. Resolving memorization in empirical diffusion model for manifold data in high-dimensional spaces. arXiv preprint arXiv:2505.02508, 2025.
  32. Tudor Manole, Sivaraman Balakrishnan, Jonathan Niles-Weed, and Larry Wasserman. Plugin estimation of smooth optimal transport maps. The Annals of Statistics, 52(3):966–998, 2024.
  33. Gonzalo Mena, Arun Kumar Kuchibhotla, and Larry Wasserman. Statistical properties of rectified flow. arXiv preprint arXiv:2511.03193, 2025.
  34. Gabriel Peyré. Optimal and diffusion transports in machine learning. arXiv preprint arXiv:2512.06797, 2025.
  35. Jakiw Pidstrigach. Score-based generative models detect manifolds. Advances in Neural Information Processing Systems, 35:35852–35865, 2022.
  36. Teodora Reu, Sixtine Dromigny, Michael Bronstein, and Francisco Vargas. Gradient variance reveals failure modes in flow-based generative models. arXiv preprint arXiv:2510.18118, 2025.
  37. Christopher Scarvelis, Haitz Sáez de Ocáriz Borde, and Justin Solomon. Closed-form diffusion models. arXiv preprint arXiv:2310.12395, 2023.
  38. Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matthew Le, and Yaron Lipman. On kinetic optimal probability paths for generative models. In International Conference on Machine Learning, pages 30883–30907. PMLR, 2023.
  39. Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
  40. Dejan Stancevic, Florian Handke, and Luca Ambrogioni. Entropic time schedulers for generative diffusion models. arXiv preprint arXiv:2504.13612, 2025.
  41. Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. Improving and generalizing flow-based generative models with minibatch optimal transport. arXiv preprint arXiv:2302.00482, 2023.
  42. Cédric Villani. Topics in Optimal Transportation, volume 58. American Mathematical Soc., 2021.
  43. Christian Wald and Gabriele Steidl. Flow matching: Markov kernels, stochastic processes and transport plans. Variational and Information Flows in Machine Learning and Optimal Transport, 2025.
  44. Zhengchao Wan, Qingsong Wang, Gal Mishne, and Yusu Wang. Elucidating flow matching ODE dynamics via data geometry and denoisers. In Forty-second International Conference on Machine Learning.
  45. Zeqi Ye, Qijie Zhu, Molei Tao, and Minshuo Chen. Provable separations between memorization and generalization in diffusion models. arXiv preprint arXiv:2511.03202, 2025.
  46. Donggeun Yoon, Minseok Seo, Doyi Kim, Yeji Choi, and Donghyeon Cho. Deterministic guidance diffusion model for probabilistic weather forecasting. arXiv preprint arXiv:2312.02819, 2023.
  47. Huijie Zhang, Zijian Huang, Siyi Chen, Jinfan Zhou, Zekai Zhang, Peng Wang, and Qing Qu. Understanding generalization in diffusion models via probability flow distance. arXiv preprint arXiv:2505.20123, 2025.
  48. Yasi Zhang, Peiyu Yu, Yaxuan Zhu, Yingshan Chang, Feng Gao, Ying Nian Wu, and Oscar Leong. Flow priors for linear inverse problems via iterative corrupted trajectory matching. Advances in Neural Information Processing Systems, 37:57389–57417, 2024.

    Thus, for the term in the exponent ofM i: t˜m2 i + 4t2 ˜m2 i d2 i 2(1−2td 2 i ) =t˜m2 i 1 + 2td2 i 1−2td 2 i = t˜m2 i 1−2td 2 i . Since 1−2td 2 i ≥1/2, t˜m2 i 1−2td 2 i ≤2t˜m2 i = ˜m2 i 2ρ . Combining these, we have: Mi ≤ √ 2 exp ˜m2 i 2ρ . Therefore, E[etE]≤ dY i=1 √ 2e ˜m2 i 2ρ = 2d/2 exp P ˜m2 i 2ρ = 2d/2 exp ∥m1∥2 2ρ =:C. Finally, substituting this in...