pith. machine review for the scientific record.

arxiv: 2605.10642 · v1 · submitted 2026-05-11 · 💻 cs.LG · cond-mat.stat-mech

Recognition: 2 theorem links


Composing diffusion priors with explicit physical context via generative Gibbs sampling

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 04:37 UTC · model grok-4.3

classification 💻 cs.LG cond-mat.stat-mech
keywords diffusion models · Gibbs sampling · augmented state space · physical context · replica exchange · generative priors · sampling

The pith

A Gibbs sampler in an augmented state space composes pretrained diffusion priors with explicit physical context and stays exact for quadratic interactions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a training-free framework that treats the combination of learned partial priors from diffusion models and additional physical rules as a single joint sampling problem in an expanded state space. A Gibbs sampler is derived for this joint target, shown to converge to the correct distribution as diffusion time shrinks to zero, and proven to match the target exactly even at finite diffusion times when the interactions are quadratic. Replica exchange across different diffusion times speeds up mixing. Tests on double-well potentials, lattice field models, and molecular systems demonstrate that the approach reproduces how physical context alters the overall distribution and produces collective effects using only the original partial models.

Core claim

By casting the composition of diffusion priors and physical context as inference over a joint target in an augmented state space, the authors derive a Gibbs sampler that is asymptotically exact as diffusion time approaches zero and remains exact at finite diffusion times whenever the interactions are quadratic. Replica exchange over diffusion time further accelerates convergence, allowing pretrained partial priors to be reused for context-modified distributions without retraining.
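The finite-time exactness mechanism can be checked numerically in the simplest quadratic case. The sketch below is our own construction, not the paper's GG-PA code: a Gaussian N(0, 1) stands in for the learned prior, V(x) = k·x²/2 is the explicit context, and the augmented variable y is a noisy replica of x at diffusion time t. Because every factor is quadratic, the joint's x-marginal equals the context-tilted target at any finite t.

```python
import numpy as np

# Augmented-state Gibbs sketch (illustrative; Gaussian stand-in for a
# diffusion prior). Target: p(x) ∝ N(x; 0, 1) * exp(-k * x**2 / 2).
# Augment with y = x + sqrt(t) * eps and alternate the two exact conditionals:
#   y | x ~ N(x, t)
#   x | y ~ N((y / t) / prec, 1 / prec),  prec = 1 + k + 1 / t
# Since all factors are quadratic, the x-marginal is exact at any finite t.
rng = np.random.default_rng(1)
k, t = 2.0, 0.5                      # context strength, finite diffusion time
x, y = 0.0, 0.0
xs = []
for step in range(100_000):
    y = x + np.sqrt(t) * rng.standard_normal()
    prec = 1.0 + k + 1.0 / t
    x = (y / t) / prec + rng.standard_normal() / np.sqrt(prec)
    if step >= 1000:                 # discard burn-in
        xs.append(x)
xs = np.asarray(xs)
# Exact context-tilted variance is 1 / (1 + k); t does not bias it.
print(xs.var(), 1.0 / (1.0 + k))
```

Rerunning with different t changes the mixing speed but not the stationary marginal, which is the content of the quadratic-case exactness claim.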

What carries the argument

The Gibbs sampler on the joint target distribution over the augmented state space, in which auxiliary diffusion variables let each pretrained model contribute its partial prior while the explicit physical context is enforced directly.
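As a cartoon of this machinery (again our own minimal construction, with closed-form Gaussians standing in for pretrained diffusion models): two partial priors p1(x1) = p2(x2) = N(0, 1) are composed with an explicit quadratic context V(x1, x2) = c·x1·x2, and a Gibbs sweep alternates the exact conditionals.

```python
import numpy as np

# Two "pretrained" partial priors N(0, 1) composed with a hypothetical
# quadratic context V(x1, x2) = c * x1 * x2 standing in for physical rules.
# The joint target ∝ p1(x1) * p2(x2) * exp(-V), and each Gibbs conditional
# is Gaussian: x_i | x_j ~ N(-c * x_j, 1).
rng = np.random.default_rng(0)
c = 0.8                              # |c| < 1 keeps the joint normalizable
x1, x2 = 0.0, 0.0
samples = []
for step in range(200_000):
    x1 = -c * x2 + rng.standard_normal()   # x1 | x2
    x2 = -c * x1 + rng.standard_normal()   # x2 | x1
    if step >= 1000:                       # discard burn-in
        samples.append(x1)
samples = np.asarray(samples)
# The context reshapes the marginal: Var(x1) = 1 / (1 - c**2), not the prior's 1.
print(samples.mean(), samples.var())
```

Neither prior alone knows about the broadened marginal; it emerges from the composition, which is the behavior the paper reports at much larger scale.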

If this is right

  • The sampler recovers distribution shifts caused by added physical context in systems with interactions.
  • It produces emergent collective behavior from the interplay of partial priors and explicit rules.
  • The approach applies directly to double-well, lattice, and atomistic peptide models without retraining.
  • Replica exchange over diffusion time improves mixing while preserving the exactness properties.
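The replica-exchange ingredient can be sketched generically. The toy below runs ordinary parallel tempering over temperature on a double well; the paper instead exchanges over diffusion time, so this is an analogy for the swap mechanics, not the authors' algorithm. Swaps use the standard Metropolis criterion min(1, exp((β_i − β_j)(U_i − U_j))).

```python
import numpy as np

# Generic parallel-tempering sketch of the replica-exchange idea (our gloss;
# the paper's ladder runs over diffusion time rather than temperature).
# Two Metropolis walkers on the double well U(x) = (x^2 - 1)^2 at inverse
# temperatures betas, with periodic configuration swaps.
rng = np.random.default_rng(2)
U = lambda x: (x * x - 1.0) ** 2
betas = np.array([8.0, 1.0])         # cold and hot replicas
xs = np.array([-1.0, 1.0])           # start in opposite wells
cold_trace = []
for step in range(50_000):
    for i in range(2):               # local Metropolis move per replica
        prop = xs[i] + 0.5 * rng.standard_normal()
        if rng.random() < np.exp(min(0.0, -betas[i] * (U(prop) - U(xs[i])))):
            xs[i] = prop
    if step % 10 == 0:               # attempt a replica swap
        log_acc = (betas[0] - betas[1]) * (U(xs[0]) - U(xs[1]))
        if rng.random() < np.exp(min(0.0, log_acc)):
            xs = xs[::-1].copy()
    cold_trace.append(xs[0])
cold_trace = np.asarray(cold_trace)
# Thanks to swaps with the hot replica, the cold chain visits both wells,
# which it would almost never do on its own at beta = 8.
print((cold_trace > 0).mean())
```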

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The method could be applied to non-quadratic systems by taking diffusion time small enough that the asymptotic guarantee becomes practically exact.
  • Subsystem priors trained separately could be assembled into larger physical models once their interaction potentials are written explicitly.
  • Replica exchange across diffusion times may improve convergence in other diffusion-based samplers beyond this specific setting.

Load-bearing premise

The pretrained diffusion models supply sufficiently accurate partial priors on the components, and the Gibbs procedure on the augmented joint introduces negligible bias from the finite diffusion approximation.

What would settle it

Sampling the GG-PA joint at finite diffusion time for a quadratic-interaction system and comparing the resulting marginal distribution against an independent exact reference sampler on the true target; any statistically significant mismatch would refute the finite-time exactness claim.
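Such a comparison could be scored with a histogram estimate of the Jensen-Shannon divergence, the metric the paper itself reports. The helper below is our sketch (the function name and the Gaussian placeholders are hypothetical); in practice one would feed it GG-PA samples and samples from the exact reference.

```python
import numpy as np

# Histogram-based Jensen-Shannon divergence between two 1D sample sets,
# a generic sketch of the proposed consistency check (names are ours).
def js_divergence(a, b, bins=50):
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum()                  # normalize counts to probabilities
    q = q / q.sum()
    m = 0.5 * (p + q)                # mixture distribution
    def kl(u, v):                    # KL(u || v) over the support of u
        mask = u > 0
        return np.sum(u[mask] * np.log(u[mask] / v[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(3)
same = js_divergence(rng.standard_normal(100_000), rng.standard_normal(100_000))
off = js_divergence(rng.standard_normal(100_000), rng.standard_normal(100_000) + 1.0)
# A matched sampler should sit near zero; a shifted one should not.
print(same, off)
```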

Figures

Figures reproduced from arXiv: 2605.10642 by Aaron R. Dinner, Jonathan Weare, Weizhou Wang.

Figure 1. Coupled double-well system. (a) The environment induces an asymmetric potential. (b) Marginal density of x_sys. GG-PA (blue solid) and GG-PA-RE (purple solid) capture the asymmetry despite their symmetric prior (Direct Diffusion; green dotted). (c) Jensen-Shannon divergence versus t. Below the diffusion time bound (t ≤ 0.28), GG-PA-RE remains consistently near the minimum observed error, whereas fixed-t GG-…
Figure 2. 2D Ginzburg-Landau φ⁴ model across the phase transition. (a–c) Representative field configurations at h = 0. (d–e) Zero-field thermodynamic observables. GG-PA tracks the MC phase transition. (f) Integrated autocorrelation time τ_int. GG-PA-RE yields orders-of-magnitude speedups close to J_c (dotted line). (g–i) Scaling and universal data collapse, confirming that GG-PA reproduces the expected critical b…
Figure 3. Alanine dipeptide systems. (a) AD–Na+: GG-PA captures the distribution shift associated with ion coordination. (b,c) AD dimer: two copies of the isolated-monomer prior are composed to form hydrogen-bonded parallel and anti-parallel dimers. In (c), the heatmaps show the conditional occupancy of combinations of the dominant torsional states of the prior (see text); p_U denotes the residual probability. GG-PA-…
read the original abstract

Pretrained diffusion models provide powerful learned priors, but in scientific sampling the target distribution often depends on physical context that is not fully represented by one generative model. We introduce Generative Gibbs for Physics-Aware Sampling (GG-PA), a training-free framework that formulates the composition of learned partial priors and explicit physical context as inference over a joint target distribution in an augmented state space. We derive a Gibbs sampler for this joint target, show that it is asymptotically exact as the diffusion time approaches zero, and prove that in settings with quadratic interactions it remains exact at finite diffusion times. We further introduce replica exchange over diffusion time to accelerate mixing. Experiments on a double-well system, a $\phi^4$ lattice model, and atomistic peptide systems show that GG-PA recovers context-induced distribution shifts and emergent collective behavior in interacting systems using partial priors without retraining. These results demonstrate GG-PA as a practical approach for combining pretrained generative priors with explicit physical context.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 4 minor

Summary. The manuscript introduces Generative Gibbs for Physics-Aware Sampling (GG-PA), a training-free method that composes pretrained diffusion priors with explicit physical context by treating the problem as inference over a joint target distribution in an augmented state space. It derives a Gibbs sampler for this joint, proves asymptotic exactness as diffusion time t approaches zero, establishes exact invariance at finite t under quadratic interactions, introduces replica exchange over diffusion time to improve mixing, and reports experiments on a double-well potential, a φ⁴ lattice field theory, and atomistic peptide systems that recover context-induced distribution shifts and collective behavior without retraining the diffusion models.

Significance. If the stated derivations and exactness proofs hold, the work provides a principled, training-free route to hybrid sampling that combines the flexibility of learned generative priors with explicit physical constraints. This is potentially significant for scientific applications in statistical mechanics and molecular simulation where full retraining is impractical. The explicit proofs of asymptotic and quadratic-case exactness, together with the replica-exchange acceleration and the experimental recovery of emergent shifts on interacting systems, constitute concrete strengths that distinguish the contribution from purely heuristic composition methods.

minor comments (4)
  1. [Abstract] The abstract and introduction would benefit from a brief, explicit statement of the precise conditions under which the finite-t exactness result holds (quadratic interactions only) and the form of the augmented joint target; this would help readers immediately assess the scope without waiting for the derivation section.
  2. [Experiments] In the experimental sections, the controls for diffusion-model approximation error versus Gibbs-sampling bias are not fully separated; adding a short ablation that varies the number of Gibbs steps while holding the pretrained prior fixed would strengthen the claim that observed shifts are due to the physical context rather than residual diffusion bias.
  3. [Methods] Notation for the augmented state space and the conditional distributions in the Gibbs step should be introduced with a single, self-contained table or diagram early in the methods; current inline definitions make it easy to lose track of which variables are conditioned on the physical context versus the diffusion prior.
  4. [Replica Exchange] The replica-exchange schedule over diffusion time is described at a high level; a short paragraph or pseudocode block showing the swap acceptance criterion and the specific time ladder used would improve reproducibility.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive summary of the GG-PA framework, the assessment of its significance for scientific sampling applications, and the recommendation for minor revision. We appreciate the recognition of the derivations, exactness results, replica-exchange acceleration, and experimental demonstrations.

Circularity Check

0 steps flagged

Derivation self-contained from joint target and diffusion properties

full rationale

The central claims consist of deriving a Gibbs sampler over an explicitly constructed joint target (pretrained diffusion priors composed with physical context in augmented state space), proving asymptotic exactness as diffusion time t approaches zero, and proving exact invariance at finite t under quadratic interactions. These steps follow from standard properties of diffusion processes, Gibbs sampling, and replica exchange; they do not reduce by construction to fitted parameters, self-defined quantities, or load-bearing self-citations. Experiments on double-well, phi^4, and peptide systems serve only as validation, not as the source of the claimed exactness results. No self-definitional, fitted-input, or ansatz-smuggling patterns appear in the derivation chain.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The framework assumes the existence of pretrained diffusion models that capture partial priors and relies on properties of diffusion processes and Gibbs sampling to derive the joint sampler. No explicit free parameters or new physical entities are introduced in the abstract.

axioms (2)
  • domain assumption Pretrained diffusion models provide accurate partial priors that can be composed with explicit physical context without retraining.
    This is the core premise enabling the training-free composition.
  • domain assumption The joint target distribution over the augmented state space admits an asymptotically exact Gibbs sampler as diffusion time approaches zero.
    Invoked to establish the correctness of the sampling procedure.
invented entities (1)
  • Augmented state space for joint inference over priors and physical context (no independent evidence)
    purpose: To reformulate the composition problem as sampling from a joint distribution amenable to Gibbs sampling.
    New construction introduced to enable the framework; no independent evidence is provided beyond the derivation.

pith-pipeline@v0.9.0 · 5465 in / 1537 out tokens · 34906 ms · 2026-05-12T04:37:17.129350+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

59 extracted references · 59 canonical work pages · 1 internal anchor

  1. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. International Conference on Learning Representations, 2021.
  2. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
  3. Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10850–10869, 2023.
  4. Robert L Baldwin. How Hofmeister ion interactions affect protein stability. Biophysical Journal, 71(4):2056–2063, 1996.
  5. Innocent Bekard and Dave E Dunstan. Electric field induced changes in protein conformation. Soft Matter, 10(3):431–437, 2014.
  6. Basile IM Wicky, Sarah L Shammas, and Jane Clarke. Affinity of IDPs to their targets is modulated by ion-specific changes in kinetics and residual structure. Proceedings of the National Academy of Sciences, 114(37):9882–9887, 2017.
  7. Jonathan Huihui, Taylor Firman, and Kingshuk Ghosh. Modulating charge patterning and ionic strength as a strategy to induce conformational changes in intrinsically disordered proteins. The Journal of Chemical Physics, 149(8), 2018.
  8. Spencer C Guo, Rong Shen, Benoît Roux, and Aaron R Dinner. Dynamics of activation in the voltage-sensing domain of Ciona intestinalis phosphatase Ci-VSP. Nature Communications, 15(1):1408, 2024.
  9. Tadeo Saldaño, Nahuel Escobedo, Julia Marchetti, Diego Javier Zea, Juan Mac Donagh, Ana Julia Velez Rueda, Eduardo Gonik, Agustina García Melani, Julieta Novomisky Nechcoff, Martín N Salas, et al. Impact of protein conformational diversity on AlphaFold predictions. Bioinformatics, 38(10):2742–2748, 2022.
  10. Jason Hu, Bowen Song, Jeffrey A Fessler, and Liyue Shen. Patch-based diffusion models beat whole-image models for mismatched distribution inverse problems. arXiv preprint arXiv:2410.11730, 2024.
  11. Nina M Gottschling, Vegard Antun, Anders C Hansen, and Ben Adcock. The troublesome kernel: On hallucinations, no free lunches, and the accuracy-stability tradeoff in inverse problems. SIAM Review, 67(1):73–104, 2025.
  12. Zhiye Guo, Jian Liu, Yanli Wang, Mengrui Chen, Duolin Wang, Dong Xu, and Jianlin Cheng. Diffusion models in bioinformatics and computational biology. Nature Reviews Bioengineering, 2(2):136–154, 2024.
  13. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021.
  14. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
  15. Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. International Conference on Learning Representations, 2022.
  16. Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. International Conference on Learning Representations, 2023.
  17. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
  18. Arnaud Doucet, Will Grathwohl, Alexander G Matthews, and Heiko Strathmann. Score-based diffusion meets annealed importance sampling. Advances in Neural Information Processing Systems, 35:21482–21494, 2022.
  19. Luhuan Wu, Brian Trippe, Christian Naesseth, David Blei, and John P Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36:31372–31403, 2023.
  20. Zhiyang Xun, Shivam Gupta, and Eric Price. Posterior sampling by combining diffusion models with annealed Langevin dynamics. arXiv preprint arXiv:2510.26324, 2025.
  21. Christophe Chipot and Andrew Pohorille. Free energy calculations, volume 86. Springer, 2007.
  22. Viktor Hornak, Robert Abel, Asim Okur, Bentley Strockbine, Adrian Roitberg, and Carlos Simmerling. Comparison of multiple Amber force fields and development of improved protein backbone parameters. Proteins: Structure, Function, and Bioinformatics, 65(3):712–725, 2006.
  23. Kresten Lindorff-Larsen, Stefano Piana, Kim Palmo, Paul Maragakis, John L Klepeis, Ron O Dror, and David E Shaw. Improved side-chain torsion potentials for the Amber ff99sb protein force field. Proteins: Structure, Function, and Bioinformatics, 78(8):1950–1958, 2010.
  24. Szilárd Páll, Artem Zhmurov, Paul Bauer, Mark Abraham, Magnus Lundborg, Alan Gray, Berk Hess, and Erik Lindahl. Heterogeneous parallelization and acceleration of molecular dynamics simulations in GROMACS. The Journal of Chemical Physics, 153(13), 2020.
  25. Peter Eastman, Raimondas Galvelis, Raúl P Peláez, Charlles RA Abreu, Stephen E Farr, Emilio Gallicchio, Anton Gorenko, Michael M Henry, Frank Hu, Jing Huang, et al. OpenMM 8: molecular dynamics simulation with machine learning potentials. The Journal of Physical Chemistry B, 128(1):109–116, 2023.
  26. Robert H Swendsen and Jian-Sheng Wang. Replica Monte Carlo simulation of spin glasses. Physical Review Letters, 57(21):2607–2609, 1986.
  27. Yuji Sugita and Yuko Okamoto. Replica-exchange molecular dynamics method for protein folding. Chemical Physics Letters, 314(1-2):141–151, 1999.
  28. Michael R Shirts and John D Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008.
  29. Jiajun He, Paul Jeha, Peter Potaptchik, Leo Zhang, José Miguel Hernández-Lobato, Yuanqi Du, Saifuddin Syed, and Francisco Vargas. CREPE: Controlling diffusion with REPlica exchange. In The Fourteenth International Conference on Learning Representations, 2026.
  30. A Milchev, DW Heermann, and K Binder. Finite-size scaling analysis of the φ4 field theory on the square lattice. Journal of Statistical Physics, 44(5):749–784, 1986.
  31. Douglas J Tobias and Charles L Brooks III. Conformational equilibrium in the alanine dipeptide in the gas phase and aqueous solution: A comparison of theoretical results. The Journal of Physical Chemistry, 96(9):3864–3870, 1992.
  32. Yikai Liu, Zongxin Yu, Richard J Lindsay, Guang Lin, Ming Chen, Abhilash Sahoo, and Sonya M Hanson. ExEnDiff: an experiment-guided diffusion model for protein conformational ensemble generation. PRX Life, 3(2):023013, 2025.
  33. Yanbin Wang and Ming Chen. Extrapolating foundation generative models with physics: A case study of exploring peptide conformations under protein–environment interactions. The Journal of Physical Chemistry Letters, 17(2):456–465, 2026.
  34. Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. In European Conference on Computer Vision, pages 423–439. Springer, 2022.
  35. Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Sussman Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and MCMC. In International Conference on Machine Learning, pages 8489–8510. PMLR, 2023.
  36. Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In 2013 IEEE Global Conference on Signal and Information Processing, pages 945–948. IEEE, 2013.
  37. Stanley H Chan, Xiran Wang, and Omar A Elgendy. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging, 3(1):84–98, 2016.
  38. Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10):6360–6376, 2021.
  39. Ulugbek S Kamilov, Charles A Bouman, Gregery T Buzzard, and Brendt Wohlberg. Plug-and-play methods for integrating physical and learned models in computational imaging: Theory, algorithms, and applications. IEEE Signal Processing Magazine, 40(1):85–97, 2023.
  40. Gregery T Buzzard, Stanley H Chan, Suhas Sreehari, and Charles A Bouman. Plug-and-play unplugged: Optimization-free reconstruction using consensus equilibrium. SIAM Journal on Imaging Sciences, 11(3):2001–2020, 2018.
  41. Rémi Laumont, Valentin De Bortoli, Andrés Almansa, Julie Delon, Alain Durmus, and Marcelo Pereyra. Bayesian imaging using plug & play priors: when Langevin meets Tweedie. SIAM Journal on Imaging Sciences, 15(2):701–737, 2022.
  42. Charles A Bouman and Gregery T Buzzard. Generative plug and play: Posterior sampling for inverse problems. In 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1–7. IEEE, 2023.
  43. Florentin Coeurdoux, Nicolas Dobigeon, and Pierre Chainais. Plug-and-play split Gibbs sampler: embedding deep generative priors in Bayesian inference. IEEE Transactions on Image Processing, 33:3496–3507, 2024.
  44. Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine L Bouman. Principled probabilistic imaging using diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems, 37:118389–118427, 2024.
  45. Alan E Gelfand. Gibbs sampling. Journal of the American Statistical Association, 95(452):1300–1304, 2000.
  46. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
  47. Maxime Vono, Nicolas Dobigeon, and Pierre Chainais. Split-and-augmented Gibbs sampler—application to large-scale inference problems. IEEE Transactions on Signal Processing, 67(6):1648–1661, 2019.
  48. Jianhua Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145–151, 1991.
  49. James J Binney, Nigel J Dowrick, Anthony J Fisher, and Mark EJ Newman. The theory of critical phenomena: an introduction to the renormalization group. Oxford University Press, 1992.
  50. Andrea Pelissetto and Ettore Vicari. Critical phenomena and renormalization-group theory. Physics Reports, 368(6):549–727, 2002.
  51. Nigel Goldenfeld. Lectures on phase transitions and the renormalization group. CRC Press, 2018.
  52. Hiroaki Fukunishi, Osamu Watanabe, and Shoji Takada. On the Hamiltonian replica exchange method for efficient sampling of biomolecular systems: Application to protein structure prediction. The Journal of Chemical Physics, 116(20):9058–9067, 2002.
  53. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning, pages 1096–1103, 2008.
  54. William L Jorgensen, Jayaraman Chandrasekhar, Jeffry D Madura, Roger W Impey, and Michael L Klein. Comparison of simple potential functions for simulating liquid water. The Journal of Chemical Physics, 79(2):926–935, 1983.
  55. Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, and Arnaud Doucet. Riemannian score-based generative modelling. Advances in Neural Information Processing Systems, 35:2406–2422, 2022.
  56. Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi Jaakkola. Torsional diffusion for molecular conformer generation. Advances in Neural Information Processing Systems, 35:24240–24253, 2022.
  57. Chin-Wei Huang, Milad Aghajohari, Joey Bose, Prakash Panangaden, and Aaron C Courville. Riemannian diffusion models. Advances in Neural Information Processing Systems, 35:2750–2761, 2022.
  58. Robert T McGibbon, Kyle A Beauchamp, Matthew P Harrigan, Christoph Klein, Jason M Swails, Carlos X Hernández, Christian R Schwantes, Lee-Ping Wang, Thomas J Lane, and Vijay S Pande. MDTraj: a modern open library for the analysis of molecular dynamics trajectories. Biophysical Journal, 109(8):1528–1532, 2015.
  59. Edward N Baker and Roderick E Hubbard. Hydrogen bonding in globular proteins. Progress in Biophysics and Molecular Biology, 44(2):97–179, 1984.