Recognition: 2 theorem links
Functional-prior-based approaches to Bayesian PDE-constrained inversion using physics-informed neural networks
Pith reviewed 2026-05-15 06:21 UTC · model grok-4.3
The pith
Priors specified directly in function space can be incorporated into Bayesian PINN inversion through two complementary methods that yield accurate posterior estimates.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors present fpBPINN, a framework with two complementary approaches: FPI-BPINN learns a weight-space prior consistent with a given functional prior and then performs Bayesian inference in weight space, while fParVI-PINN applies particle-based variational inference directly in function space. Random Fourier features support the representation of Gaussian functional priors, and experiments confirm that both methods produce accurate posterior distributions for the seismic and Darcy-flow test cases.
What carries the argument
The central mechanism consists of FPI-BPINN, which aligns a neural-network weight prior with a prescribed functional prior, and fParVI-PINN, which performs ParVI directly in function space; random Fourier features enable faithful representation of Gaussian functional priors inside the network.
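As a concrete illustration of the RFF mechanism, the sketch below draws approximate samples from a Gaussian functional prior with an RBF kernel using nothing more than a plain Gaussian weight prior on the feature coefficients. The length scale, amplitude, and feature count are illustrative choices, not the paper's actual settings:

```python
import numpy as np

def rff_features(x, omega, b):
    """Random Fourier features for a 1D input grid.

    x: (n,) inputs; omega: (D,) sampled frequencies; b: (D,) phases.
    Returns an (n, D) feature matrix whose inner products approximate
    the RBF kernel underlying the Gaussian functional prior.
    """
    D = omega.shape[0]
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, omega) + b)

rng = np.random.default_rng(0)
D, ell, sigma = 256, 0.2, 1.0          # feature count, length scale, amplitude (illustrative)
omega = rng.normal(0.0, 1.0 / ell, D)  # samples from the RBF kernel's spectral density
b = rng.uniform(0.0, 2.0 * np.pi, D)

x = np.linspace(0.0, 1.0, 100)
Phi = rff_features(x, omega, b)        # (100, D)

# A Gaussian weight prior w ~ N(0, sigma^2 I) turns Phi @ w into an
# approximate draw from GP(0, sigma^2 * k_RBF); check the covariance.
w = rng.normal(0.0, sigma, (D, 5000))
samples = Phi @ w                      # each column is one functional-prior draw
emp_cov = np.cov(samples)              # (100, 100) empirical covariance
true_cov = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * ell**2))
max_err = np.abs(emp_cov - true_cov).max()
```

For this linear-in-features model the weight-space prior matches the functional prior up to the RFF approximation error, which shrinks as the feature count grows; the general FPI-BPINN step must learn such a match for a full nonlinear network.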
If this is right
- Accurate posterior distributions are recovered for both the one-dimensional seismic traveltime tomography and two-dimensional Darcy-flow examples.
- FPI-BPINN provides flexibility while fParVI-PINN provides higher accuracy, revealing contrasting practical advantages.
- Random Fourier features improve the representation of Gaussian functional priors when using neural networks.
- Physically interpretable functional priors can be directly used in Bayesian PINN-based inverse problems instead of weight-space assumptions.
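The function-space inference in fParVI-PINN rests on a particle-based update. The review does not pin down which ParVI variant is used, so the following is a minimal sketch of Stein variational gradient descent (SVGD), a standard ParVI scheme, applied to a toy one-dimensional Gaussian target; bandwidth, step size, and particle count are illustrative:

```python
import numpy as np

def svgd_step(particles, grad_logp, h, eps):
    """One SVGD update: kernel-smoothed gradient ascent plus repulsion.

    particles: (n, d) current particle positions.
    grad_logp: (n, d) score of the target evaluated at each particle.
    h: RBF kernel bandwidth; eps: step size.
    """
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]   # (n, n, d), x_i - x_j
    K = np.exp(-(diff**2).sum(-1) / (2.0 * h**2))          # RBF kernel matrix
    # driving term pulls particles toward high density; repulsion spreads them out
    phi = (K @ grad_logp + (K[:, :, None] * diff).sum(axis=1) / h**2) / n
    return particles + eps * phi

# Toy check: transport particles onto a standard 1D Gaussian target,
# whose score is grad log p(x) = -x.
rng = np.random.default_rng(1)
parts = rng.normal(3.0, 0.5, (50, 1))   # start far from the target
for _ in range(1000):
    parts = svgd_step(parts, -parts, h=0.5, eps=0.1)
```

In fParVI-PINN the particles would be discretized functions (or their network representations) and the score would include the PDE-residual likelihood, but the attraction-plus-repulsion structure of the update is the same.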
Where Pith is reading between the lines
- The framework may reduce reliance on ad-hoc weight prior choices when applying PINNs to other inverse problems in geophysics.
- Hybrid combinations of the two approaches could balance flexibility and accuracy for larger or more nonlinear PDE systems.
- The use of random Fourier features suggests a route to incorporate non-Gaussian functional priors by extending the feature construction.
Load-bearing premise
A neural-network weight prior can be learned to be consistent with a prescribed functional prior without distorting the physical meaning or introducing uncontrolled approximation error in the posterior.
What would settle it
Running the two methods on the two-dimensional Darcy-flow permeability inversion and finding that the recovered posterior mean or variance deviates substantially from the true field, or violates the imposed functional-prior constraints, would falsify the accuracy claim.
Original abstract
Physics-informed neural networks (PINNs) provide a mesh-free framework for solving PDE-constrained inverse problems, but their extension to Bayesian inversion still faces a fundamental difficulty: prior distributions are typically defined in the weight space of neural networks, whereas physically meaningful prior assumptions are more naturally expressed in function space. In this study, we introduce a unified framework, termed functional-prior-based approaches to Bayesian PDE-constrained inversion using physics-informed neural networks (fpBPINN), to incorporate functional priors into Bayesian PINN-based inversion. We consider two complementary approaches. The first is a functional-prior-informed Bayesian PINN (FPI-BPINN), in which a neural network weight prior is learned to be consistent with a prescribed functional prior, and Bayesian inference is subsequently performed in weight space. The second is function-space particle-based variational inference for PINNs (fParVI-PINN), which performs Bayesian estimation using ParVI directly in function space. We also show that random Fourier features (RFF) play an important role in representing Gaussian functional priors with neural networks and in improving posterior approximation. We applied the proposed approaches to one-dimensional seismic traveltime tomography and two-dimensional Darcy-flow permeability inversion. These numerical experiments showed that both approaches accurately estimated posterior distributions, highlighting the significance of introducing physically interpretable functional priors into Bayesian PINN-based inverse problems. We also identified the contrasting advantages of FPI-BPINN and fParVI-PINN, namely flexibility and accuracy, respectively.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the fpBPINN framework for Bayesian PDE-constrained inversion with physics-informed neural networks, addressing the mismatch between weight-space priors and physically meaningful function-space priors. It proposes two complementary methods: FPI-BPINN, which learns a neural-network weight prior consistent with a prescribed functional prior before performing weight-space Bayesian inference, and fParVI-PINN, which performs particle-based variational inference directly in function space. Random Fourier features are used to represent Gaussian functional priors. The approaches are applied to 1D seismic traveltime tomography and 2D Darcy-flow permeability inversion, with numerical experiments reporting that both methods accurately recover posterior distributions.
Significance. If the results hold, the work is significant for enabling physically interpretable priors in Bayesian PINN inversion, a key barrier in applying these methods to inverse problems with known functional structure. The two complementary inference strategies (weight-space learning vs direct function-space ParVI) and the explicit use of RFF for prior representation provide practical advances. The synthetic-data experiments on tomography and Darcy inversion demonstrate feasibility and contrasting advantages (flexibility vs accuracy), though stronger quantitative support would increase impact.
major comments (2)
- [Numerical experiments] Numerical experiments on 1D traveltime tomography and 2D Darcy inversion: the claim that both FPI-BPINN and fParVI-PINN 'accurately estimated posterior distributions' is supported only by visual recovery of means/variances on synthetic data with known ground truth; no error bars, baseline comparisons against standard Bayesian methods, or explicit metrics (e.g., posterior coverage, KL divergence to truth, or recovery norms) are reported, leaving the central claim only moderately supported.
- [FPI-BPINN approach] FPI-BPINN construction: the procedure for learning a neural-network weight prior to match a prescribed functional prior lacks a concrete bound or diagnostic on the approximation error this step introduces into the posterior; without such a test the assumption that physical meaning is preserved remains unquantified and load-bearing for the method's validity.
minor comments (2)
- [Methods] Specify the kernel and frequency-sampling parameters used for the RFF representation of the Gaussian functional prior in each experiment to ensure reproducibility.
- [Abstract and results] The abstract states contrasting advantages of flexibility (FPI-BPINN) and accuracy (fParVI-PINN); these should be supported by at least one quantitative side-by-side metric in the results.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. These have helped us strengthen the quantitative support for our claims and add diagnostics to the FPI-BPINN procedure. We address each major comment below and indicate the corresponding revisions.
Point-by-point responses
- Referee: [Numerical experiments] Numerical experiments on 1D traveltime tomography and 2D Darcy inversion: the claim that both FPI-BPINN and fParVI-PINN 'accurately estimated posterior distributions' is supported only by visual recovery of means/variances on synthetic data with known ground truth; no error bars, baseline comparisons against standard Bayesian methods, or explicit metrics (e.g., posterior coverage, KL divergence to truth, or recovery norms) are reported, leaving the central claim only moderately supported.
Authors: We agree that the original numerical section relied primarily on visual inspection. In the revised manuscript we have added error bars to all posterior mean and variance plots. We now report RMSE between the estimated posterior mean and ground truth for both test problems, together with the empirical coverage rate of the 95 % credible intervals. For the 1D tomography case we have also included a direct comparison against a standard finite-element MCMC inversion that uses the same functional prior; the resulting posterior means and variances are quantitatively close, supporting the accuracy claim. These additions provide the explicit metrics requested. revision: yes
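The metrics named in this response (RMSE of the posterior mean against ground truth, and empirical coverage of 95% credible intervals) can be computed along the following lines. The sketch is hypothetical: the array shapes, noise level, and synthetic ground truth are illustrative, not the paper's data:

```python
import numpy as np

def posterior_metrics(samples, truth, level=0.95):
    """RMSE of the posterior mean and empirical credible-interval coverage.

    samples: (n_draws, n_points) posterior draws of the unknown field.
    truth: (n_points,) ground-truth field at the same points.
    """
    post_mean = samples.mean(axis=0)
    rmse = np.sqrt(np.mean((post_mean - truth)**2))
    alpha = 1.0 - level
    # pointwise empirical credible interval from the posterior draws
    lo, hi = np.quantile(samples, [alpha / 2, 1 - alpha / 2], axis=0)
    coverage = np.mean((truth >= lo) & (truth <= hi))
    return rmse, coverage

# Synthetic sanity check: a well-calibrated posterior should cover the
# truth at roughly the nominal 95% rate.
rng = np.random.default_rng(0)
n_pts = 2000
mu = np.sin(np.linspace(0.0, 2.0 * np.pi, n_pts))
samples = mu + rng.normal(0.0, 0.1, (1000, n_pts))  # hypothetical posterior draws
truth = mu + rng.normal(0.0, 0.1, n_pts)            # truth consistent with the posterior
rmse, cov = posterior_metrics(samples, truth)
```

A coverage rate far from the nominal level on the actual experiments would indicate a miscalibrated posterior even when the RMSE looks acceptable.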
- Referee: [FPI-BPINN approach] FPI-BPINN construction: the procedure for learning a neural-network weight prior to match a prescribed functional prior lacks a concrete bound or diagnostic on the approximation error this step introduces into the posterior; without such a test the assumption that physical meaning is preserved remains unquantified and load-bearing for the method's validity.
Authors: We acknowledge that a rigorous theoretical bound is difficult to derive because of the non-convex optimization involved. In the revised version we have added a practical diagnostic: after learning the weight-space prior we draw function samples from both the original RFF-based functional prior and from the learned weight prior, then compute the maximum mean discrepancy (MMD) between the two sets of samples. The reported MMD values are small (order 10^{-3}) across the experiments, indicating that the functional statistics are well preserved. We have also inserted a short discussion of the remaining approximation error and its possible effect on the posterior. This addresses the concern while recognizing that a strict bound remains an open theoretical question. revision: partial
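The MMD diagnostic described in this response can be sketched with a standard unbiased RBF-kernel estimator of squared MMD between two sample sets; the bandwidth, dimensionality, and sample counts below are illustrative:

```python
import numpy as np

def mmd2_unbiased(X, Y, h):
    """Unbiased estimate of squared MMD between sample sets X (n, d)
    and Y (m, d), using an RBF kernel with bandwidth h."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2.0 * h**2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    # diagonal terms are excluded so the estimator is unbiased
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

# Sanity check: near zero for matched distributions, clearly positive for a shift.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (500, 2))   # stand-in for functional-prior samples
Y = rng.normal(0.0, 1.0, (500, 2))   # stand-in for learned weight-prior samples
Z = rng.normal(1.0, 1.0, (500, 2))   # deliberately mismatched samples
mmd_match = mmd2_unbiased(X, Y, 1.0)
mmd_shift = mmd2_unbiased(X, Z, 1.0)
```

In the diagnostic proposed by the authors, X and Y would be function samples evaluated on a grid, drawn from the RFF-based functional prior and from the learned weight prior respectively; a small MMD then supports, but does not prove, that the functional statistics are preserved.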
Circularity Check
No significant circularity
Full rationale
The manuscript extends standard PINN and particle-based variational inference machinery to incorporate functional priors via RFF representations. Its headline results consist of numerical experiments on synthetic 1D traveltime tomography and 2D Darcy inversion data with known ground-truth fields; posterior means and variances are reported to recover the truth within expected uncertainty. No equation or inference step reduces by construction to a fitted parameter defined inside the paper, nor does any load-bearing claim rest on a self-citation chain whose validity is presupposed rather than independently verified. The derivation chain therefore remains self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
axioms (2)
- standard math: Standard Bayesian updating of priors to posteriors via likelihood
- domain assumption: Neural networks can represent functions sufficiently well for the target PDE problems
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean : washburn_uniqueness_aczel (unclear)
  unclear: the relation between the paper passage and the cited Recognition theorem is ambiguous.
  Passage: functional-prior-informed Bayesian PINN (FPI-BPINN) ... function-space particle-based variational inference for PINNs (fParVI-PINN) ... random Fourier features (RFF) ... Gaussian process ... RBF kernel
- IndisputableMonolith/Foundation/RealityFromDistinction.lean : reality_from_one_distinction (unclear)
  unclear: the relation between the paper passage and the cited Recognition theorem is ambiguous.
  Passage: 1D seismic traveltime tomography ... 2D Darcy-flow permeability inversion ... posterior mean and standard deviation
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Agata, R., Shiraishi, K., Fujie, G. Bayesian Seismic Tomography Based on Velocity-Space Stein Variational Gradient Descent for Physics-Informed Neural Network. IEEE Transactions on Geoscience and Remote Sensing 61, 1–17. doi:10.1109/TGRS.2023.3295414.
- [2] Duane, S., Kennedy, A.D., Pendleton, B.J., Roweth, D., 1987. Hybrid Monte Carlo. Physics Letters B 195, 216–222.
- Fukushima, R., Kano, M., Hirahara, K., Ohtani, M., Im, K., Avouac, J.P., 2025. Physics-informed deep learning for estimating the spatial distribution of frictional parameters in slow slip regions. Journal of Geophysical Research: Solid Earth 130, e2024JB030256.
- [3] Gallego, V., Insua, D.R., 2018. Stochastic gradient MCMC with repulsive forces. arXiv preprint arXiv:1812.00071.
- [4] DiffusionInv: Prior-enhanced Bayesian Full Waveform Inversion using Diffusion models. arXiv preprint arXiv:2505.03138.
- [5] Matsubara, T., Oates, C.J., Briol, F.X., 2021. The ridgelet prior: A covariance function approach to prior specification for Bayesian neural networks. Journal of Machine Learning Research 22, 1–57.
- [6] Misra, D., 2019. Mish: A self regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681.
- [7] Repulsive Ensembles for Bayesian Inference in Physics-informed Neural Networks. arXiv preprint arXiv:2505.17308.
- [8] PINNferring the Hubble function with uncertainties. arXiv preprint arXiv:2403.13899.
- [9] Uncertainty quantification in PINNs for turbulent flows: Bayesian inference and repulsive ensembles. arXiv preprint arXiv:2604.17156.
- [10] Smith, J.D., Azizzadenesheli, K., Ross, Z.E. EikoNet: Solving the eikonal equation with deep neural networks. IEEE Transactions on Geoscience and Remote Sensing 59, 10685–10696. doi:10.1109/TGRS.2020.3039165.
- [11] Functional variational Bayesian neural networks, in: International Conference on Learning Representations.
- Tancik, M., Srinivasan, P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J., Ng, R., 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems.
- [12] B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. Journal of Computational Physics 425, 109913.
- Yin, M., Zheng, X., Humphrey, J.D., Karniadakis, G.E., 2021. Non-invasive inference of thrombus material properties with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering 375, 113603.
- [13] Agata and Okazaki. Functional-prior-based Bayesian PDE-constrained inversion using PINNs. Preprint submitted to Elsevier. (The captured passage is a fragment of the paper's own Algorithm 2, fParVI-PINN, rather than an external reference.)