pith · machine review for the scientific record

arxiv: 2605.08328 · v2 · submitted 2026-05-08 · 💻 cs.LG · cs.CV

Recognition: 2 theorem links · Lean Theorem

P-Flow: Proxy-gradient Flows for Linear Inverse Problems

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 07:09 UTC · model grok-4.3

classification: 💻 cs.LG · cs.CV
keywords: flow matching · inverse problems · proxy gradient · generative models · image restoration · linear inverse problems · stability

The pith

P-Flow updates the source point via a proxy gradient to stabilize flow-matching reconstructions for linear inverse problems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Flow-matching models produce straight trajectories useful for inverse problems, yet existing methods must differentiate through the entire unrolled sampling path. This creates numerical instability and excessive memory cost. P-Flow replaces the true gradient with a proxy that updates the initial source point directly. A subsequent Gaussian spherical projection step, justified by concentration of measure, restores consistency with the prior distribution. The paper supplies a Bayesian and Lipschitz analysis and shows competitive results on restoration tasks, particularly when the degradation is severe or noise is high.
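As a concreteness aid, here is a minimal sketch of that cycle in PyTorch-style Python. Everything below is an assumption layered on the abstract's description: the function names, the explicit Euler integrator, the plain least-squares proxy gradient, and the step sizes are hypothetical stand-ins, not the paper's implementation.

```python
import torch

def pflow_restore(velocity, A, y, x0, n_outer=100, n_ode=8, lr=1.0):
    """Hypothetical sketch of the P-Flow cycle (not the paper's code).

    velocity(x, t): pretrained flow-matching velocity field
    A: linear forward operator, y: measurements, x0: Gaussian source point
    """
    d = x0.numel()
    x = x0
    for _ in range(n_outer):
        # (i) integrate the flow ODE forward WITHOUT building an autograd
        #     graph: no differentiation through the unrolled path
        with torch.no_grad():
            x = x0.clone()
            for k in range(n_ode):
                t = k / n_ode
                x = x + velocity(x, t) / n_ode  # explicit Euler step

        # (ii) proxy-gradient update: apply the data-fidelity gradient,
        #      evaluated at the endpoint x, directly to the source point x0
        proxy_grad = A.T @ (A @ x - y)   # grad of 0.5 * ||A x - y||^2 at x
        x0 = x0 - lr * proxy_grad

        # (iii) Gaussian spherical projection: a standard Gaussian in d
        #       dimensions concentrates at radius sqrt(d), so rescale x0
        #       back onto that sphere to stay consistent with the prior
        x0 = x0 * (d ** 0.5) / x0.norm()
    return x
```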

Core claim

P-Flow replaces full differentiation through an unrolled flow path with a proxy gradient that directly updates the source point. The update is followed by a Gaussian spherical projection that preserves consistency with the data prior via high-dimensional concentration of measure. The resulting procedure is supported by a Bayesian interpretation and Lipschitz continuity arguments, eliminating the instability and memory overhead of long-chain back-propagation while delivering reconstructions that remain accurate under strong ill-posedness and measurement noise.

What carries the argument

Proxy gradient update to the source point combined with Gaussian spherical projection to enforce prior consistency.
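The projection step leans on a standard fact rather than anything paper-specific: the norm of a d-dimensional standard Gaussian concentrates sharply around sqrt(d). A few lines verify this numerically (sample counts and dimensions chosen arbitrarily):

```python
import torch

# For z ~ N(0, I_d), ||z|| concentrates around sqrt(d) with relative
# fluctuations that shrink as d grows, which is what makes a spherical
# projection a mild correction at image-sized dimensions.
torch.manual_seed(0)
for d in (16, 1024, 49152):   # 49152 = 3 * 128 * 128, roughly an RGB image
    r = torch.randn(500, d).norm(dim=1)
    print(f"d={d:>6}  mean ||z|| = {r.mean():8.2f}  sqrt(d) = {d**0.5:8.2f}  "
          f"relative std = {(r.std() / r.mean()).item():.5f}")
```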

If this is right

  • Reconstruction quality remains stable for severely ill-posed linear operators.
  • Memory cost no longer grows with the number of flow steps.
  • Performance holds under high levels of additive measurement noise.
  • Bayesian and Lipschitz arguments guarantee consistency with the prior distribution.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same proxy-gradient idea may simplify other unrolled generative or optimization pipelines that currently suffer from long differentiation chains.
  • If the spherical projection generalizes, the framework could extend to certain non-linear inverse problems.
  • The concentration-of-measure justification suggests the method benefits from the high dimensionality typical of imaging data.

Load-bearing premise

The proxy gradient remains a close enough surrogate to the true gradient of the full unrolled path that it does not introduce bias large enough to degrade final reconstruction quality.
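The paper's Figure 5 reports cosine similarity between the true and proxy gradients, and the premise can be probed on a toy problem in a few lines. The linear `velocity` stand-in and every dimension below are assumptions chosen only so the script runs; actual alignment must be measured on the real model.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_steps = 512, 10
velocity = torch.nn.Linear(d, d)        # toy stand-in for the velocity field
A = torch.randn(d // 4, d)
y = torch.randn(d // 4)
x0 = torch.randn(d, requires_grad=True)

# true gradient: differentiate the loss through the unrolled Euler path
x = x0
for _ in range(n_steps):
    x = x + velocity(x) / n_steps
loss = 0.5 * (A @ x - y).pow(2).sum()
(true_grad,) = torch.autograd.grad(loss, x0)

# proxy gradient: the data-fidelity gradient at the endpoint, reused at x0
proxy_grad = A.T @ (A @ x.detach() - y)

print("cosine similarity:",
      F.cosine_similarity(true_grad, proxy_grad, dim=0).item())
```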

What would settle it

Compare reconstruction error and memory usage of P-Flow against a full-differentiation baseline on a linear inverse problem with known ground truth; divergence or instability only in the baseline would support the claim.
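A rough harness for the memory half of that experiment, under heavy assumptions: a toy linear `v` in place of the real velocity network, arbitrary sizes, and CUDA memory counters as the measurement. The expectation is that peak memory grows with `n_steps` only in the unrolled branch.

```python
import torch

d, dy, batch = 1024, 256, 256
dev = "cuda"  # the memory stats below require a GPU
v = torch.nn.Linear(d, d, device=dev)        # stand-in velocity field
A = torch.randn(dy, d, device=dev)
y = torch.randn(batch, dy, device=dev)

for n_steps in (10, 50, 200):
    for mode in ("unrolled", "proxy"):
        torch.cuda.empty_cache()
        torch.cuda.reset_peak_memory_stats()
        track = mode == "unrolled"
        x0 = torch.randn(batch, d, device=dev, requires_grad=track)
        with torch.enable_grad() if track else torch.no_grad():
            x = x0
            for _ in range(n_steps):         # Euler integration
                x = x + v(x) / n_steps
        if track:   # full differentiation: graph retains every step
            loss = 0.5 * (x @ A.T - y).pow(2).sum()
            loss.backward()
        else:       # proxy path: endpoint data-fidelity gradient only
            g = (x @ A.T - y) @ A
        peak = torch.cuda.max_memory_allocated() / 2**20
        print(f"{n_steps:>4} steps, {mode:>8}: {peak:7.1f} MiB peak")
```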

Figures

Figures reproduced from arXiv: 2605.08328 by Chongwen Huang, Fenghao Zhu, Xinquan Wang, Zehua Jiang, Zhaoyang Zhang.

Figure 1
Figure 1: Overview of the P-Flow framework. The restoration follows a three-step cycle: (i) … [PITH_FULL_IMAGE:figures/full_fig_p004_1.png] view at source ↗
Figure 2
Figure 2: Qualitative comparison of image restoration results on the FFHQ dataset. The PSNR … [PITH_FULL_IMAGE:figures/full_fig_p008_2.png] view at source ↗
Figure 3
Figure 3: Qualitative comparison of image restoration results on the AFHQ-Cat dataset. The PSNR … [PITH_FULL_IMAGE:figures/full_fig_p009_3.png] view at source ↗
Figure 4
Figure 4: Evolution of local and cumulative condition numbers during ODE integration. Each panel … [PITH_FULL_IMAGE:figures/full_fig_p013_4.png] view at source ↗
Figure 5
Figure 5: Empirical validation of gradient alignment. Cosine similarity between the true gradient … [PITH_FULL_IMAGE:figures/full_fig_p014_5.png] view at source ↗
Figure 6
Figure 6: Visualization of 4× SR results on the FFHQ dataset with parameter K. [PITH_FULL_IMAGE:figures/full_fig_p016_6.png] view at source ↗
Figure 7
Figure 7: Quantitative error distributions for denoising. The histograms illustrate the variation in … [PITH_FULL_IMAGE:figures/full_fig_p019_7.png] view at source ↗
Figure 8
Figure 8: Comparison of image restoration methods on the FFHQ dataset using the PSNR metric. [PITH_FULL_IMAGE:figures/full_fig_p020_8.png] view at source ↗
Figure 9
Figure 9: Comparison of image restoration methods on the AFHQ-Cat dataset using the PSNR metric. [PITH_FULL_IMAGE:figures/full_fig_p020_9.png] view at source ↗
Figure 10
Figure 10: Visual comparison for 8× SR on the FFHQ dataset. We compare the proposed P-Flow with PnP-Flow and Flower. The PSNR metric for each restoration is indicated in the bottom-right corner. [PITH_FULL_IMAGE:figures/full_fig_p021_10.png] view at source ↗
Figure 11
Figure 11: Visual comparison for 16× SR on the FFHQ dataset. We compare the proposed P-Flow with PnP-Flow and Flower. The PSNR metric for each restoration is indicated in the bottom-right corner. [PITH_FULL_IMAGE:figures/full_fig_p022_11.png] view at source ↗
Figure 12
Figure 12: Visual comparison for random inpainting with … [PITH_FULL_IMAGE:figures/full_fig_p023_12.png] view at source ↗
Figure 13
Figure 13: Visual comparison for random inpainting with … [PITH_FULL_IMAGE:figures/full_fig_p023_13.png] view at source ↗
Figure 14
Figure 14: Visual comparison for denoising σ = 1.0 on the FFHQ dataset. We compare the proposed P-Flow with PnP-Flow and Flower. The PSNR metric for each restoration is indicated in the bottom-right corner. [PITH_FULL_IMAGE:figures/full_fig_p024_14.png] view at source ↗
read the original abstract

Generative models based on flow matching have emerged as a powerful paradigm for inverse problems, offering straighter trajectories and faster sampling compared to diffusion models. However, existing approaches often necessitate differentiating through unrolled paths, leading to numerical instability and prohibitive computational overhead. To address this, we propose P-Flow, a framework that stabilizes the reconstruction process by leveraging a proxy gradient to update the source point. This approach effectively circumvents the numerical instability and memory overhead of long-chain differentiation. To ensure consistency with the prior distribution, we employ a Gaussian spherical projection motivated by the concentration of measure phenomenon in high-dimensional spaces. We further provide a theoretical analysis for P-Flow based on Bayesian theory and Lipschitz continuity. Experiments across diverse restoration tasks demonstrate that P-Flow delivers competitive performance, especially under extreme degradations such as severely ill-posed conditions and high measurement noise.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces P-Flow, a framework for solving linear inverse problems with flow-matching generative models. It proposes using a proxy gradient to update the source point, thereby avoiding the numerical instability and memory costs of differentiating through long unrolled trajectories. Consistency with the prior is maintained via a Gaussian spherical projection motivated by high-dimensional concentration of measure. Theoretical support is claimed from Bayesian analysis and Lipschitz continuity, and experiments on restoration tasks (including severely ill-posed cases and high noise) are said to show competitive performance.

Significance. If the proxy-gradient construction and projection step can be shown to preserve reconstruction quality without introducing bias, the method would address a genuine practical bottleneck in applying flow models to inverse problems, enabling more stable and memory-efficient inference. The combination of a new algorithmic device with Bayesian/Lipschitz theory and empirical results on extreme degradations would constitute a useful contribution to the generative-modeling-for-inverse-problems literature.

major comments (2)
  1. [Abstract and §3, method] The central claim that the proxy gradient serves as an adequate surrogate for the true gradient of the unrolled flow path is asserted but not accompanied by any derivation, error bound, or bias analysis. Without these, it is impossible to verify that the approximation does not degrade reconstruction quality under the ill-posed regimes highlighted in the experiments.
  2. [§4, theoretical analysis] The appeal to Bayesian theory and Lipschitz continuity is mentioned, but no explicit statements, assumptions, or proof sketches are supplied in the provided text. This prevents assessment of whether the claimed guarantees actually support the proxy-gradient construction or the spherical-projection step.
minor comments (1)
  1. [Abstract] The abstract refers to 'diverse restoration tasks' and 'extreme degradations' but does not specify the exact datasets, degradation operators, or baseline methods used; these details are needed for reproducibility.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which highlight opportunities to strengthen the theoretical foundations of P-Flow. We address each major point below and will incorporate the requested clarifications and analyses in the revised manuscript.

read point-by-point responses
  1. Referee: [Abstract and §3, method] The central claim that the proxy gradient serves as an adequate surrogate for the true gradient of the unrolled flow path is asserted but not accompanied by any derivation, error bound, or bias analysis. Without these, it is impossible to verify that the approximation does not degrade reconstruction quality under the ill-posed regimes highlighted in the experiments.

    Authors: We agree that additional rigor is warranted. In the revision we will augment §3 with an explicit derivation showing that the proxy gradient equals the true gradient of the reconstruction loss minus a term controlled by the flow's Jacobian; we will also supply an error bound derived from the Lipschitz continuity of the velocity field (with constant L) and a bias analysis for the Gaussian spherical projection, proving that the bias is O(1/√d) in high dimensions d by concentration of measure. These additions will directly confirm stability under severe ill-posedness and high noise. revision: yes

  2. Referee: [§4, theoretical analysis] The appeal to Bayesian theory and Lipschitz continuity is mentioned, but no explicit statements, assumptions, or proof sketches are supplied in the provided text. This prevents assessment of whether the claimed guarantees actually support the proxy-gradient construction or the spherical-projection step.

    Authors: We acknowledge that the current §4 is too concise. The revised version will state the assumptions explicitly (L-Lipschitz velocity field, flow-matching objective matching the prior, linear forward operator) and include short proof sketches: one establishing that the proxy gradient approximates the unrolled gradient with error bounded by L·Δt, and another showing that the spherical projection yields samples whose distribution is close to the Bayesian posterior in total variation, again via high-dimensional concentration. These sketches will clarify the support for both algorithmic components. revision: yes
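For reference, the Lipschitz-stability step the rebuttal points to has the classical Grönwall shape, which the paper's appendix (proof of Proposition 1) follows. With $v_t$ the velocity field, $L_v$ its Lipschitz constant, and $x_t, x'_t$ two trajectories started at $x_0, x'_0$:

```latex
\begin{aligned}
\|x_t - x'_t\|
  &= \Big\| \, x_0 - x'_0 + \int_0^t \big(v_s(x_s) - v_s(x'_s)\big)\,ds \, \Big\|
     &&\text{(integral form of the ODE)} \\
  &\le \|x_0 - x'_0\| + \int_0^t \big\|v_s(x_s) - v_s(x'_s)\big\|\,ds
     &&\text{(triangle inequality)} \\
  &\le \|x_0 - x'_0\| + L_v \int_0^t \|x_s - x'_s\|\,ds
     &&\text{(Lipschitz property of } v_t\text{)} \\
  &\le \|x_0 - x'_0\| \exp(L_v t)
     &&\text{(Gr\"onwall's inequality).}
\end{aligned}
```

At $t = 1$ this yields $\|\psi(x_0) - \psi(x'_0)\| \le \exp(L_v)\,\|x_0 - x'_0\|$ for the flow map $\psi$, i.e. $\psi$ is Lipschitz with constant $L_\psi = \exp(L_v)$.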

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper presents P-Flow as a novel construction that replaces long-chain differentiation with a proxy-gradient update and adds a Gaussian spherical projection motivated by concentration of measure. The abstract and description ground the approach in external Bayesian theory and Lipschitz continuity, without equations or steps that reduce by definition to fitted parameters, self-citations, or prior results from the same authors. No load-bearing self-referential definitions, renamed empirical patterns, or ansatzes smuggled via citation appear; the framework is evaluated against external benchmarks and does not force its central claims by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Review conducted from abstract only; no explicit free parameters, new entities, or detailed axioms are stated. The reliance on Bayesian theory and Lipschitz continuity is treated as domain assumptions.

axioms (2)
  • domain assumption Bayesian theory provides a valid framework for analyzing the reconstruction consistency
    Invoked for theoretical analysis of the method
  • domain assumption Lipschitz continuity holds for the relevant flow and gradient functions
    Basis for stability claims
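The second axiom is at least cheaply falsifiable in practice: a local Lipschitz constant of any candidate velocity field can be probed empirically. A sketch with an arbitrary stand-in network (the layer, sizes, and perturbation scale are all assumptions):

```python
import torch

torch.manual_seed(0)
d = 512
velocity = torch.nn.Linear(d, d)         # stand-in for the flow's v_t
x = torch.randn(2000, d)
dx = 1e-3 * torch.randn_like(x)

# ||v(x + dx) - v(x)|| / ||dx|| lower-bounds the Lipschitz constant;
# its max over many random pairs is a crude empirical estimate of L_v
with torch.no_grad():
    ratio = (velocity(x + dx) - velocity(x)).norm(dim=1) / dx.norm(dim=1)
print(f"empirical local Lipschitz estimate: {ratio.max().item():.3f}")
```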

pith-pipeline@v0.9.0 · 5451 in / 1301 out tokens · 40719 ms · 2026-05-13T07:09:55.119005+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

42 extracted references · 42 canonical work pages

  1. [1] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.

  2. [2] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.

  3. [3] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016.

  4. [4] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.

  5. [5] Vegard Antun, Francesco Renna, Clarice Poon, Ben Adcock, and Anders C. Hansen. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proceedings of the National Academy of Sciences, 117(48):30088–30095, 2020.

  6. [6] Singanallur V. Venkatakrishnan, Charles A. Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013.

  7. [7] Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by denoising (RED). SIAM Journal on Imaging Sciences, 10(4):1804–1844, 2017.

  8. [8] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

  9. [9] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations (ICLR), 2021.

  10. [10] Hyungjin Chung, Jeongsol Kim, Michael Thompson McCann, Marc Louis Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In International Conference on Learning Representations (ICLR), 2023.

  11. [11] Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In International Conference on Learning Representations (ICLR), 2023.

  12. [12] Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

  13. [13] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In International Conference on Learning Representations (ICLR), 2023.

  14. [14] Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, and Ricky T. Q. Chen. Multisample flow matching: Straightening flows with minibatch couplings. In International Conference on Machine Learning (ICML), 2023.

  15. [15] Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, and Brian Karrer. Training-free linear image inverses via flows. Transactions on Machine Learning Research, 2024.

  16. [16] Ségolène Tiffany Martin, Anne Gagneux, Paul Hagemann, and Gabriele Steidl. PnP-Flow: Plug-and-play image restoration with flow matching. In International Conference on Learning Representations (ICLR), 2025.

  17. [17] Mehrsa Pourya, Bassam El Rawas, and Michael Unser. Flower: A flow-matching solver for inverse problems. In International Conference on Learning Representations (ICLR), 2026.

  18. [18] Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman. D-Flow: Differentiating through flows for controlled generation. In International Conference on Machine Learning (ICML), 2024.

  19. [19] Gabriel Peyré and Marco Cuturi. Computational Optimal Transport: With Applications to Data Science. Now Foundations and Trends, 2019.

  20. [20] Alexander Tong, Nikolay Malkin, Guillaume Huguet, Yanlei Zhang, Jarrid Rector-Brooks, Kilian Fatras, Guy Wolf, and Yoshua Bengio. Conditional flow matching for generative modeling. In International Conference on Machine Learning (ICML), 2023.

  21. [21] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.

  22. [22] Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, and Cynthia Rudin. PULSE: Self-supervised photo upsampling via latent space exploration of generative models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

  23. [23] Hamadi Chihaoui, Abdelhak Lemkhenter, and Paolo Favaro. Blind image restoration via fast diffusion inversion. In Advances in Neural Information Processing Systems (NeurIPS), 2024.

  24. [24] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In International Conference on Learning Representations (ICLR), 2023.

  25. [25] Razvan Pascanu, Tomás Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning (ICML), 2013.

  26. [26] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations (ICLR), 2014.

  27. [27] David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning (ICML), 2017.

  28. [28] Bingliang Zhang, Wenda Chu, Julius Berner, Chenlin Meng, Anima Anandkumar, and Yang Song. Improving diffusion inverse problem solving with decoupled noise annealing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20895–20905, 2025.

  29. [29] Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, and Luc Van Gool. Denoising diffusion models for plug-and-play image restoration. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023.

  30. [30] Samuel Hurault, Arthur Leclaire, and Nicolas Papadakis. Gradient step denoiser for convergent plug-and-play. In International Conference on Learning Representations (ICLR), 2022.

  31. [31] Yasi Zhang, Peiyu Yu, Yaxuan Zhu, Yingshan Chang, Feng Gao, Ying Nian Wu, and Oscar Leong. Flow priors for linear inverse problems via iterative corrupted trajectory matching. In Advances in Neural Information Processing Systems (NeurIPS), 2024.

  32. [32] Tomer Garber and Tom Tirer. Image restoration by denoising diffusion models with iteratively preconditioned guidance. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

  33. [33] Hyungjin Chung, Suhyeon Lee, and Jong Chul Ye. Decomposed diffusion sampler for accelerating large-scale inverse problems. In International Conference on Learning Representations (ICLR), 2024.

  34. [34] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  35. [35] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

  36. [36] Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

  37. [37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.

  38. [38] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  39. [39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

  40. [40] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

  41. [41] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. ILVR: Conditioning method for denoising diffusion probabilistic models. In IEEE/CVF International Conference on Computer Vision (ICCV), 2021.

  42. [42] Julián Tachella, Matthieu Terris, Samuel Hurault, Andrew Wang, Dongdong Chen, Minh-Hai Nguyen, Maxime Song, Thomas Davies, Leo Davy, Jonathan Dong, et al. DeepInverse: A Python package for solving imaging inverse problems with deep learning. arXiv preprint arXiv:2505.20160, 2025.
