pith. machine review for the scientific record.

arXiv:2603.14135 · v3 · submitted 2026-03-14 · 📊 stat.ML · cs.LG

Recognition: 2 Lean theorem links

Conditional flow matching for physics-constrained inverse problems with finite training data

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 10:53 UTC · model grok-4.3

classification 📊 stat.ML cs.LG
keywords: conditional flow matching · Bayesian inverse problems · probability flow ODE · finite training data · physics-constrained models · posterior sampling · velocity field

The pith

A neural network learns the velocity field of a conditional probability flow ODE that transports source samples directly to the measurement-conditioned posterior in physics inverse problems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces conditional flow matching as a way to solve Bayesian inverse problems when samples from the joint distribution of inferred variables and measurements are available but explicit prior and likelihood densities cannot be evaluated. A neural network is trained to approximate the velocity field of a probability flow ordinary differential equation that pushes draws from a chosen source distribution to the posterior conditioned on observations. The formulation works for nonlinear, high-dimensional, and non-differentiable forward models without restrictive assumptions on the noise model. With finite training data the learned field can produce degenerate outputs such as variance collapse and selective memorization, in which generated samples concentrate around training points with similar observations. Standard early stopping on test loss is shown to mitigate these degeneracies while preserving the ability to recover complex multimodal posteriors.

Core claim

Conditional flow matching trains a neural network to learn the velocity field of a probability flow ordinary differential equation that transports samples from a chosen source distribution directly to the posterior distribution conditioned on observed measurements, without requiring explicit evaluation of the prior and likelihood densities.

What carries the argument

The conditional velocity field of the probability flow ODE, parameterized by a neural network and trained via the flow-matching objective on joint samples.
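For concreteness, here is a minimal sketch of that objective under assumptions the page does not fix: a Gaussian source, linear (rectified-flow-style) interpolation paths, and a small fully connected network. `VelocityNet` and `cfm_loss` are hypothetical names; the paper's actual architecture and interpolant may differ.

```python
# Sketch: conditional flow matching loss on joint samples (x, y), with a
# Gaussian source and linear interpolation paths. Architecture, widths, and
# the interpolant are illustrative assumptions, not the paper's choices.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """v_theta(x_t, y, t): velocity field conditioned on the measurement y."""
    def __init__(self, dim_x, dim_y, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y + 1, width), nn.SiLU(),
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, dim_x),
        )

    def forward(self, x_t, y, t):
        return self.net(torch.cat([x_t, y, t], dim=-1))

def cfm_loss(v_theta, x1, y):
    """Regress v_theta onto the interpolant's velocity; no densities needed."""
    x0 = torch.randn_like(x1)            # draw from the Gaussian source
    t = torch.rand(x1.shape[0], 1)       # time uniform on [0, 1]
    x_t = (1.0 - t) * x0 + t * x1        # linear path from source to data
    target = x1 - x0                     # constant velocity along that path
    return ((v_theta(x_t, y, t) - target) ** 2).mean()
```

Training then reduces to stochastic gradient descent on `cfm_loss` over minibatches of joint (x, y) pairs, with the held-out value of the same loss monitored for the early stopping the paper relies on.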

If this is right

  • The approach accommodates nonlinear and non-differentiable physics forward models.
  • Multimodal posteriors are recovered without explicit density evaluations.
  • Early stopping on held-out test loss prevents variance collapse and selective memorization.
  • Both Gaussian and data-informed source distributions can be used as starting points for transport (see the sampling sketch after this list).
  • Computational cost remains low compared with repeated forward-model evaluations inside traditional sampling methods.
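Once the velocity field is trained, transport is a single ODE solve per batch of source draws, which is where the amortized-cost bullet above comes from. A sketch, assuming the hypothetical `VelocityNet` from the previous block and a plain Euler integrator (the paper's solver choice is not specified here):

```python
# Sketch: transport source samples to the y*-conditioned posterior by
# integrating dx/dt = v_theta(x, y*, t) from t = 0 to t = 1 with Euler steps.
import torch

@torch.no_grad()
def sample_posterior(model, y_obs, dim_x, n_samples=1000, n_steps=100):
    # y_obs: tensor of shape (1, dim_y) holding the observed measurement y*.
    x = torch.randn(n_samples, dim_x)          # Gaussian source draws
    y = y_obs.expand(n_samples, -1)            # condition every path on y*
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = torch.full((n_samples, 1), k * dt)
        x = x + dt * model(x, y, t)            # Euler step along the ODE
    return x                                   # approximate posterior samples
```

A data-informed source would replace the `torch.randn` draw with samples built from the training marginals; the integration loop is unchanged.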

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same transport mechanism could be applied to other conditional sampling tasks where only paired data are observed.
  • Selective memorization implies that performance will degrade on test measurements far from the training distribution unless the source is chosen to cover the relevant range.
  • The method offers a route to amortize posterior sampling once the network is trained, enabling rapid inference on new measurements.

Load-bearing premise

Samples from the joint distribution of inferred variables and measurements are available.

What would settle it

Run the trained conditional flow model on a synthetic inverse problem whose true posterior is known analytically or by exhaustive sampling and check whether the generated samples reproduce the correct multimodal structure and marginal variances.
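One way to set up such a check, with all specifics ours rather than the paper's: a scalar forward model y = x² plus Gaussian noise under a standard-normal prior yields a posterior that is bimodal near ±√y*, and a reference sample can be drawn by self-normalized importance sampling.

```python
# Sketch: reference posterior for a toy problem with known structure
# (y = x^2 + Gaussian noise, standard-normal prior => bimodal posterior),
# to compare against conditional-flow-matching draws. All choices are ours.
import numpy as np

def reference_posterior(y_star, sigma=0.05, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                      # prior draws
    logw = -0.5 * ((y_star - x**2) / sigma) ** 2    # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    return rng.choice(x, size=10_000, p=w / w.sum())  # weighted resample

ref = reference_posterior(y_star=0.64)              # modes near +/- 0.8
# gen = sample_posterior(model, y_obs, dim_x=1).numpy().ravel()  # CFM draws
# Compare marginal variance and mode occupancy:
# print(ref.var(), gen.var(), (gen > 0).mean())     # mode split should be ~0.5
```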

Figures

Figures reproduced from arXiv:2603.14135 by Agnimitra Dasgupta, Ali Fardisi, Assad Oberai, Brianna Binder, Bryan Shaddy, Mehrnegar Aminy, Saeed Moazami.

Figure 1: Visualizing Case 1 corresponding to Eq. (32).
Figure 2: Visualizing Case 2, where the ϕj are non-overlapping indicator functions.
Figure 3: (a) Train and test data for the toy example used to illustrate effects of overfitting. (b) Train and test loss …
Figure 4: Kernel density estimates of the conditional distribution …
Figure 5: Mean and one-standard-deviation interval of the conditional distribution …
Figure 6: Training dataset for the spiral problem.
Figure 8: Histograms of samples generated using the trained velocity field compared to the samples from the true …
Figure 9: Training loss, test loss, and the moving average of the test loss for the velocity network trained on the …
Figure 10: Particles from the prior ρX (•) and posterior ρX|Y (•), and observation ŷ (–) on the (x1, x3) and (x2, x3) planes for the one-step data assimilation problem. First row: reference solution obtained using the SIR filter with 100,000 particles. Second and third rows: 1,000 particles from ρX (part of the training dataset) and samples from ρX|Y generated by the conditional flow matching model with a Gaussian …
Figure 11: A realization from the advection-diffusion-reaction dataset. (a) Piecewise-constant top and bottom wall …
Figure 12: Training and test loss for the advection-diffusion-reaction problem with 400 training samples.
Figure 13: Posterior mean and standard deviation of the inferred flux and the true flux for a test case with data size …
Figure 14: Training and test loss for the advection-diffusion-reaction problem with 4,000 training samples.
Figure 15: Five realizations of X and Y sampled from the joint distribution of the training dataset for the synthetic quasi-static elastography application. The first row shows the shear modulus field, and the second row shows the corresponding noisy vertical displacement measurements.
Figure 16: Training loss, test loss, and moving average of the test loss for the synthetic quasi-static elastography …
Figure 17: Posterior statistics estimated using the MCS and trained velocity network on two test samples for the …
Figure 18: Posterior statistics estimated using a severely over-trained velocity network …
Figure 19: Five typical realizations of X and Y sampled from the training dataset for the quasi-static elastography application with experimental data. The first row shows the spatial distribution of the shear modulus field, and the second row shows the corresponding measurements of the noisy vertical displacement field.
Figure 20: Training loss, test loss, and the moving average of the test loss for the velocity network in the quasi-static …
Figure 21: Posterior statistics estimated using the trained velocity network on two test samples for the quasi-static …
Figure 22: Posterior statistics estimated using a severely over-trained velocity network on two test samples shown in …
Figure 23: Posterior statistics estimated using the trained velocity network on experimental data for the quasi-static …
Figure 24: Experimental setup in the tumor spheroid application (adapted from […]).
Figure 25: Five realizations of X and Y sampled from the joint distribution forming the training dataset for the tumor spheroid application. In the first row are instances of the log-normalized Young's modulus fields, and in the second row are corresponding instances of the noisy measurements. All values have been normalized to [-1, 1].
Figure 26: Training and test losses for the velocity network used in the tumor spheroid application, with moving …
Figure 27: Posterior statistics estimated using the trained velocity network for select synthetic and experimental cases …
Original abstract

This study presents a conditional flow matching framework for solving physics-constrained Bayesian inverse problems. In this setting, samples from the joint distribution of inferred variables and measurements are assumed available, while explicit evaluation of the prior and likelihood densities is not required. We derive a simple and self-contained formulation of both the unconditional and conditional flow matching algorithms, tailored specifically to inverse problems. In the conditional setting, a neural network is trained to learn the velocity field of a probability flow ordinary differential equation that transports samples from a chosen source distribution directly to the posterior distribution conditioned on observed measurements. This black-box formulation accommodates nonlinear, high-dimensional, and potentially non-differentiable forward models without restrictive assumptions on the noise model. We further analyze the behavior of the learned velocity field in the regime of finite training data. Under mild architectural assumptions, we show that overtraining can induce degenerate behavior in the generated conditional distributions, including variance collapse and a phenomenon termed selective memorization, wherein generated samples concentrate around training data points associated with similar observations. A simplified theoretical analysis explains this behavior, and numerical experiments confirm it in practice. We demonstrate that standard early-stopping criteria based on monitoring test loss effectively mitigate such degeneracy. The proposed method is evaluated on several physics-based inverse problems. We investigate the impact of different choices of source distributions, including Gaussian and data-informed priors. Across these examples, conditional flow matching accurately captures complex, multimodal posterior distributions while maintaining computational efficiency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper presents a conditional flow matching framework for physics-constrained Bayesian inverse problems. Assuming joint samples of inferred variables and measurements are available (but not explicit prior/likelihood densities), it derives unconditional and conditional flow-matching objectives, trains a neural network to learn the velocity field of a probability-flow ODE that transports source samples directly to the conditional posterior p(x|y*), analyzes finite-data degeneracy (selective memorization and variance collapse) under mild architectural assumptions, shows that early-stopping on held-out test loss mitigates these effects, and validates the approach on several physics-based inverse problems with different source distributions.

Significance. If the finite-data analysis and mitigation hold, the method supplies a practical, black-box sampler for multimodal posteriors in high-dimensional nonlinear inverse problems without density evaluations or differentiability assumptions on the forward model. This could be useful for settings with limited joint training pairs and complex physics simulators.

major comments (1)
  1. [Finite-data analysis] (abstract and associated section): the assertion that early-stopping on unconditional held-out test loss reliably prevents selective memorization for out-of-sample y* is load-bearing for the central finite-data claim. The test loss does not directly penalize mismatch between the generated conditional law and the true posterior; a concrete diagnostic (e.g., posterior predictive checks or mode-recovery metrics on held-out y*) is needed to confirm the stopped network recovers modes rather than merely interpolating training pairs.
minor comments (2)
  1. Abstract and methods: no equations for the conditional velocity field, no network architecture details, and no description of how the velocity network is optimized are provided; these must be added for reproducibility.
  2. Experiments: report error bars or multiple runs when claiming accurate capture of multimodal posteriors; clarify quantitative metrics used to assess posterior fidelity.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive and insightful review. The major comment on the finite-data analysis raises a valid point about the indirect nature of the test loss, and we address it directly below while outlining the revisions we will make.

Point-by-point responses
  1. Referee: [Finite-data analysis] (abstract and associated section): the assertion that early-stopping on unconditional held-out test loss reliably prevents selective memorization for out-of-sample y* is load-bearing for the central finite-data claim. The test loss does not directly penalize mismatch between the generated conditional law and the true posterior; a concrete diagnostic (e.g., posterior predictive checks or mode-recovery metrics on held-out y*) is needed to confirm the stopped network recovers modes rather than merely interpolating training pairs.

    Authors: We agree with the referee that the unconditional held-out test loss serves as an indirect proxy and does not explicitly measure fidelity of the generated conditional distribution to the true posterior for unseen y*. Our theoretical analysis (under the stated mild architectural assumptions) demonstrates that overtraining induces selective memorization and variance collapse by driving the velocity field to map source samples toward training pairs with similar observations. The early-stopping rule is motivated by the fact that the minimum of the unconditional test loss occurs before this degeneracy sets in, and our numerical experiments across multiple physics inverse problems show that the resulting models recover multimodal structure for out-of-sample y*. Nevertheless, to make this claim more robust, we will add in the revised manuscript explicit diagnostics on held-out y* values: posterior predictive checks (comparing simulated measurements from generated samples against observed y*) and quantitative mode-recovery metrics (e.g., number of recovered modes via clustering and Wasserstein-2 distance to reference posterior samples obtained by long-run MCMC). These additions will directly confirm that the stopped network captures posterior modes rather than merely interpolating training data. revision: yes
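Minimal stand-ins for the two promised diagnostics, sketched here for the one-dimensional case; the quantile-based Wasserstein-2 estimate and the predictive coverage statistic are illustrative choices, not the authors' implementations.

```python
# Sketch: two diagnostics on held-out y*: a 1D Wasserstein-2 distance between
# generated and reference posterior samples (matched-quantile form), and a
# posterior predictive coverage check through a user-supplied forward model.
import numpy as np

def w2_1d(a, b, n_q=1000):
    """W2 between two 1D sample sets via matched quantiles."""
    q = np.linspace(0.0, 1.0, n_q)
    return float(np.sqrt(np.mean((np.quantile(a, q) - np.quantile(b, q)) ** 2)))

def predictive_coverage(gen_x, y_star, forward, sigma, seed=1):
    """Fraction of simulated measurements within 2*sigma of the observed y*."""
    rng = np.random.default_rng(seed)
    y_sim = forward(gen_x) + sigma * rng.standard_normal(len(gen_x))
    return float(np.mean(np.abs(y_sim - y_star) < 2.0 * sigma))
```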

Circularity Check

0 steps flagged

Derivation self-contained from joint samples; no reduction to fitted inputs or self-citation chains

Full rationale

The paper derives the unconditional and conditional flow matching objectives directly from available joint samples of (x, y) pairs, without explicit prior/likelihood densities. The velocity field is regressed to the conditional vector field implied by the probability flow ODE, and the finite-data degeneracy analysis (variance collapse, selective memorization) follows from the regression objective under mild architectural assumptions. Early-stopping is justified by monitoring the same test loss that defines the training objective, with numerical confirmation on physics examples. No load-bearing step reduces by construction to a fitted parameter renamed as prediction, nor relies on self-citation for uniqueness or ansatz. The formulation is presented as black-box and self-contained, keeping the central claim independent of its inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Central claim rests on availability of joint samples and on the neural network's ability to approximate the conditional velocity field; no explicit free parameters, invented entities, or additional axioms are stated in the abstract.

axioms (1)
  • Domain assumption: Joint samples of inferred variables and measurements are available. Explicitly stated as the setting in which the method operates.

pith-pipeline@v0.9.0 · 5581 in / 1314 out tokens · 56652 ms · 2026-05-15T10:53:31.202245+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

57 extracted references · 57 canonical work pages · 6 internal anchors

  1. A. M. Stuart, Inverse problems: a Bayesian perspective, Acta Numerica 19 (2010) 451–559.
  2. D. Calvetti, E. Somersalo, Inverse problems: From regularization to Bayesian inference, Wiley Interdisciplinary Reviews: Computational Statistics 10 (2018) e1427.
  3. L. Tierney, J. B. Kadane, Accurate approximations for posterior moments and marginal densities, Journal of the American Statistical Association 81 (1986) 82–86.
  4. S. Brooks, Markov chain Monte Carlo method and its application, Journal of the Royal Statistical Society: Series D (The Statistician) 47 (1998) 69–100.
  5. R. M. Neal, et al., MCMC using Hamiltonian dynamics, Handbook of Markov Chain Monte Carlo 2 (2011) 2.
  6. A. G. Dimakis, A. Bora, D. Van Veen, A. Jalal, S. Vishwanath, E. Price, Deep generative models and inverse problems, Mathematical Aspects of Deep Learning 400 (2022).
  7. S. Lunz, O. Öktem, C.-B. Schönlieb, Adversarial regularizers in inverse problems, Advances in Neural Information Processing Systems 31 (2018).
  8. D. Ray, H. Ramaswamy, D. V. Patel, A. A. Oberai, The efficacy and generalizability of conditional GANs for posterior inference in physics-based inverse problems, arXiv preprint arXiv:2202.07773 (2022).
  9. M. Duff, N. D. Campbell, M. J. Ehrhardt, Regularising inverse problems with generative machine learning models, Journal of Mathematical Imaging and Vision 66 (2024) 37–56.
  10. D. V. Patel, D. Ray, A. A. Oberai, Solution of physics-based Bayesian inverse problems with deep generative priors, Computer Methods in Applied Mechanics and Engineering 400 (2022) 115428.
  11. D. Ray, J. Murgoitio-Esandi, A. Dasgupta, A. A. Oberai, Solution of physics-based inverse problems using conditional generative adversarial networks with full gradient penalty, Computer Methods in Applied Mechanics and Engineering 417 (2023) 116338.
  12. J. Whang, E. Lindgren, A. Dimakis, Composing normalizing flows for inverse problems, in: International Conference on Machine Learning, PMLR, 2021, pp. 11158–11169.
  13. P. Hagemann, J. Hertrich, G. Steidl, Stochastic normalizing flows for inverse problems: A Markov chains viewpoint, SIAM/ASA Journal on Uncertainty Quantification 10 (2022) 1162–1190.
  14. A. Dasgupta, D. V. Patel, D. Ray, E. A. Johnson, A. A. Oberai, A dimension-reduced variational approach for solving physics-based inverse problems using generative adversarial network priors and normalizing flows, Computer Methods in Applied Mechanics and Engineering 420 (2024) 116682.
  15. G. Daras, H. Chung, C.-H. Lai, Y. Mitsufuji, J. C. Ye, P. Milanfar, A. G. Dimakis, M. Delbracio, A survey on diffusion models for inverse problems, arXiv preprint arXiv:2410.00083 (2024).
  16. H. Chung, B. Sim, D. Ryu, J. C. Ye, Improving diffusion models for inverse problems using manifold constraints, Advances in Neural Information Processing Systems 35 (2022) 25683–25696.
  17. H. Wang, X. Zhang, T. Li, Y. Wan, T. Chen, J. Sun, DMPlug: A plug-in method for solving inverse problems with diffusion models, Advances in Neural Information Processing Systems 37 (2024) 117881–117916.
  18. G. Batzolis, J. Stanczuk, C.-B. Schönlieb, C. Etmann, Conditional image generation with score-based diffusion models, arXiv preprint arXiv:2111.13606 (2021).
  19. C. Jacobsen, Y. Zhuang, K. Duraisamy, COCOGEN: Physically consistent and conditioned score-based generative models for forward and inverse problems, SIAM Journal on Scientific Computing 47 (2025) C399–C425.
  20. A. Dasgupta, H. Ramaswamy, J. Murgoitio-Esandi, K. Y. Foo, R. Li, Q. Zhou, B. F. Kennedy, A. A. Oberai, Conditional score-based diffusion models for solving inverse elasticity problems, Computer Methods in Applied Mechanics and Engineering 433 (2025) 117425.
  21. A. Dasgupta, A. Marciano da Cunha, A. Fardisi, M. Aminy, B. Binder, B. Shaddy, A. A. Oberai, Unifying and extending diffusion models through PDEs for solving inverse problems, Computer Methods in Applied Mechanics and Engineering 448 (2026) 118431.
  22. Y. Zhang, P. Yu, Y. Zhu, Y. Chang, F. Gao, Y. N. Wu, O. Leong, Flow priors for linear inverse problems via iterative corrupted trajectory matching, Advances in Neural Information Processing Systems 37 (2024) 57389–57417.
  23. M. Pourya, B. E. Rawas, M. Unser, FLOWER: A flow-matching solver for inverse problems, arXiv preprint arXiv:2509.26287 (2025).
  24. J. Tauberschmidt, S. Fellenz, S. J. Vollmer, A. B. Duncan, Physics-constrained fine-tuning of flow-matching models for generation and inverse problems, arXiv preprint arXiv:2508.09156 (2025).
  25. U. Utkarsh, P. Cai, A. Edelman, R. Gomez-Bombarelli, C. V. Rackauckas, Physics-constrained flow matching: Sampling generative models with hard constraints, arXiv preprint arXiv:2506.04171 (2025).
  26. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, Advances in Neural Information Processing Systems 27 (2014).
  27. M. Arjovsky, S. Chintala, L. Bottou, Wasserstein generative adversarial networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 214–223.
  28. I. Kobyzev, S. J. Prince, M. A. Brubaker, Normalizing flows: An introduction and review of current methods, IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (2020) 3964–3979.
  29. L. Dinh, D. Krueger, Y. Bengio, NICE: Non-linear independent components estimation, arXiv preprint arXiv:1410.8516 (2014).
  30. L. Dinh, J. Sohl-Dickstein, S. Bengio, Density estimation using Real NVP, arXiv preprint arXiv:1605.08803 (2016).
  31. R. T. Chen, Y. Rubanova, J. Bettencourt, D. K. Duvenaud, Neural ordinary differential equations, Advances in Neural Information Processing Systems 31 (2018).
  32. J. Ho, A. Jain, P. Abbeel, Denoising diffusion probabilistic models, Advances in Neural Information Processing Systems 33 (2020) 6840–6851.
  33. J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, S. Ganguli, Deep unsupervised learning using nonequilibrium thermodynamics, in: International Conference on Machine Learning, PMLR, 2015, pp. 2256–2265.
  34. Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, B. Poole, Score-based generative modeling through stochastic differential equations, arXiv preprint arXiv:2011.13456 (2020).
  35. Y. Lipman, R. T. Chen, H. Ben-Hamu, M. Nickel, M. Le, Flow matching for generative modeling, arXiv preprint arXiv:2210.02747 (2022).
  36. X. Liu, C. Gong, Q. Liu, Flow straight and fast: Learning to generate and transfer data with rectified flow, arXiv preprint arXiv:2209.03003 (2022).
  37. M. S. Albergo, E. Vanden-Eijnden, Building normalizing flows with stochastic interpolants, arXiv preprint arXiv:2209.15571 (2022).
  38. M. H. Parikh, Y. Chen, J.-X. Wang, D-Flow SGLD: Source-space posterior sampling for scientific inverse problems with flow matching, arXiv preprint arXiv:2602.21469 (2026).
  39. J. Wildberger, M. Dax, S. Buchholz, S. Green, J. H. Macke, B. Schölkopf, Flow matching for scalable simulation-based inference, Advances in Neural Information Processing Systems 36 (2023) 16837–16864.
  40. L. Lu, P. Jin, G. Pang, Z. Zhang, G. E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators, Nature Machine Intelligence 3 (2021) 218–229.
  41. D. Ray, O. Pinti, A. A. Oberai, Deep Learning and Computational Physics, Springer, 2024.
  42. R. Baptista, A. Dasgupta, N. B. Kovachki, A. Oberai, A. M. Stuart, Memorization and regularization in generative diffusion models, arXiv preprint arXiv:2501.15785 (2025).
  43. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing System...
  44. D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, International Conference on Learning Representations (ICLR) (2015).
  45. P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen,...
  46. M. Cuturi, Sinkhorn distances: Lightspeed computation of optimal transport, Advances in Neural Information Processing Systems 27 (2013) 2292–2300.
  47. G. Evensen, The ensemble Kalman filter: Theoretical formulation and practical implementation, Ocean Dynamics 53 (2003) 343–367.
  48. A. Doucet, S. Godsill, C. Andrieu, On sequential Monte Carlo sampling methods for Bayesian filtering, Statistics and Computing 10 (2000) 197–208.
  49. E. N. Lorenz, Deterministic nonperiodic flow, Journal of Atmospheric Sciences 20 (1963) 130–141.
  50. K.-Y. Lam, S. Liu, Y. Lou, Selected topics on reaction-diffusion-advection models from spatial ecology, Mathematics in Applied Sciences and Engineering 1 (2020) 91–206. doi:10.5206/mase/10644. URL: https://ojs.lib.uwo.ca/index.php/mase/article/view/10644
  51. A. Logg, K.-A. Mardal, G. N. Wells, et al., Automated Solution of Differential Equations by the Finite Element Method, Springer, 2012. doi:10.1007/978-3-642-23099-8.
  52. P. E. Barbone, A. A. Oberai, A review of the mathematical and computational foundations of biomechanical imaging, Computational Modeling in Biomechanics (2009) 375–408.
  53. T. Z. Pavan, E. L. Madsen, G. R. Frank, J. Jiang, A. A. Carneiro, T. J. Hall, A nonlinear elasticity phantom containing spherical inclusions, Physics in Medicine & Biology 57 (2012) 4787.
  54. B. F. Kennedy, K. M. Kennedy, D. D. Sampson, A review of Optical Coherence Elastography: Fundamentals, Techniques and Prospects, IEEE Journal of Selected Topics in Quantum Electronics 20 (2014) 272–288.
  55. B. F. Kennedy, R. A. McLaughlin, K. M. Kennedy, L. Chin, A. Curatolo, A. Tien, B. Latham, C. M. Saunders, D. D. Sampson, Optical coherence micro-elastography: mechanical-contrast imaging of tissue microstructure, Biomedical Optics Express 5 (2014) 2113–2124.
  56. K. Y. Foo, B. Shaddy, J. Murgoitio-Esandi, M. S. Hepburn, J. Li, A. Mowla, D. Vahala, S. E. Amos, Y. S. Choi, A. A. Oberai, B. F. Kennedy, Tumor spheroid elasticity estimation using mechano-microscopy combined with a conditional generative adversarial network, Computer Methods and Programs in Biomedicine (2024) 108362.
  57. E. R. Ferreira, A. A. Oberai, P. E. Barbone, Uniqueness of the elastography inverse problem for incompressible nonlinear planar hyperelasticity, Inverse Problems 28 (2012) 065008.