Physics-Informed Neural Networks for Joint Source and Parameter Estimation in Advection-Diffusion Equations
Recognition: 1 theorem link (Lean theorem)
Pith reviewed 2026-05-17 00:02 UTC · model grok-4.3
The pith
A weighted adaptive NTK-based PINN method with multiple networks jointly recovers the solution, source function, velocity and diffusion parameters in advection-diffusion equations from sparse measurements.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a weighted adaptive approach based on the neural tangent kernel (NTK) of PINNs, with separate networks for the solution, the source, and the parameters, enables simultaneous recovery of all these quantities for the advection-diffusion equation while remaining robust to noise in sparse measurements.
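The coupling that carries this claim can be made concrete. The paper's composite loss (its eq. (12)) ties the solution weights, the source weights, and the PDE parameters into a single objective; our reading of the subscripts (r: residual, b: boundary, z: measurements) is inferred from context:

```latex
\mathcal{L}(\boldsymbol{\Theta})
  = \lambda_r \mathcal{L}_r(\boldsymbol{\Theta})
  + \lambda_b \mathcal{L}_b(\boldsymbol{\theta}_u)
  + \lambda_z \mathcal{L}_z(\boldsymbol{\theta}_u),
\qquad
\boldsymbol{\Theta} = \{\boldsymbol{\theta}_u,\, \boldsymbol{\theta}_f,\, \gamma\},
```

trained under the gradient flow \( \mathrm{d}\boldsymbol{\Theta}(s)/\mathrm{d}s = -\nabla_{\boldsymbol{\Theta}} \mathcal{L}(\boldsymbol{\Theta}(s)) \). Only the residual term \(\mathcal{L}_r\) depends on all three blocks of unknowns, so it is the sole channel through which measurements of \(u\) constrain \(f\) and \(\gamma\).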
What carries the argument
Multiple separate neural networks coupled by the PDE residual loss with a weighted adaptive NTK strategy that balances training across the solution, source, and parameter networks.
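A minimal sketch of what such balancing can look like, assuming the trace rule used by NTK-weighted PINNs (ref. [38]): each loss term's weight is set so that terms whose NTK block carries little spectral mass, and would otherwise train slowly, are amplified. The function name and the exact rule are our illustration, not necessarily the paper's scheme.

```python
import numpy as np

def ntk_adaptive_weights(ntk_traces):
    """Trace-balancing weights: lambda_k = (sum_j tr(K_j)) / tr(K_k).

    ntk_traces: trace of the empirical NTK block for each loss term
    (e.g. residual, boundary, measurement). Terms with small trace,
    i.e. slow-training directions, receive larger weights.
    """
    traces = np.asarray(ntk_traces, dtype=float)
    return traces.sum() / traces

# Example: a residual term dominating the kernel is down-weighted,
# while the starved measurement term is boosted.
weights = ntk_adaptive_weights([10.0, 1.0, 0.1])
```

In a multi-network setting each network contributes its own Jacobian block to the kernel, so the same rule can re-balance per network rather than per loss term.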
If this is right
- The unknown source function can be estimated accurately along with the solution field.
- Velocity and diffusion parameters are recoverable from the same sparse data set.
- The joint recovery remains stable when additional noise is present in the measurements.
- The PDE constraint allows more efficient use of limited measurement information across all unknowns.
Where Pith is reading between the lines
- The multi-network NTK weighting could extend to other parabolic PDE inverse problems that involve several unknown functions.
- The approach may lower the density of sensors needed for practical source localization tasks.
- Similar joint training setups might apply to time-dependent or mildly nonlinear advection-diffusion variants.
Load-bearing premise
Multiple neural networks can be trained simultaneously to accurately represent the solution, source, and parameters under the PDE constraint despite severe ill-posedness and sparsity of the measurements.
What would settle it
Running the method on a known test advection-diffusion problem with sparse noisy measurements and finding large errors in the recovered source function or parameters would falsify the success claim.
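Such a settling experiment needs a fixed acceptance metric. The referee report below asks for relative L2 errors; a sketch of that metric (the function name is ours):

```python
import numpy as np

def relative_l2_error(recovered, true):
    """Relative L2 error ||recovered - true||_2 / ||true||_2, computed
    over grid samples of a field (e.g. the source f) or over a
    parameter vector (velocity v, diffusion D)."""
    recovered = np.asarray(recovered, dtype=float)
    true = np.asarray(true, dtype=float)
    return np.linalg.norm(recovered - true) / np.linalg.norm(true)
```

With this in hand the falsification condition becomes explicit: reject the success claim if the error for any recovered quantity exceeds a pre-registered tolerance on the known test problem.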
Original abstract
Recent studies have demonstrated the success of deep learning in solving forward and inverse problems in engineering and scientific computing domains, such as physics-informed neural networks (PINNs). Source inversion problems under sparse measurements for parabolic partial differential equations (PDEs) are particularly challenging to solve using PINNs, due to their severe ill-posedness and the multiple unknowns involved including the source function and the PDE parameters. Although the neural tangent kernel (NTK) of PINNs has been widely used in forward problems involving a single neural network, its extension to inverse problems involving multiple neural networks remains less explored. In this work, we propose a weighted adaptive approach based on the NTK of PINNs including multiple separate networks representing the solution, the unknown source, and the PDE parameters. The key idea behind our methodology is to simultaneously solve the joint recovery of the solution, the source function along with the unknown parameters thereby using the underlying partial differential equation as a constraint that couples multiple unknown functional parameters, leading to more efficient use of the limited information in the measurements. We apply our method on the advection-diffusion equation and we present various 2D and 3D numerical experiments using different types of measurement data that reflect practical engineering systems. Our proposed method is successful in estimating the unknown source function, the velocity and diffusion parameters as well as recovering the solution of the equation, while remaining robust to additional noise in the measurements.
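To fix notation for what is being inverted: in the steady 1-D reduction of the advection-diffusion equation, v u_x - D u_xx = f, a PINN drives this residual to zero via autodiff. The finite-difference stand-in below (our construction, for illustration only) verifies the residual on a manufactured solution:

```python
import numpy as np

def advection_diffusion_residual(u, f, v, D, dx):
    """Residual of steady 1-D advection-diffusion, v*u_x - D*u_xx - f,
    at interior grid points, using second-order central differences."""
    u_x = (u[2:] - u[:-2]) / (2.0 * dx)
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return v * u_x - D * u_xx - f[1:-1]

# Manufactured solution u = sin(x) with v = D = 1 requires
# f = cos(x) + sin(x); the residual is then ~0 up to O(dx^2).
x = np.linspace(0.0, np.pi, 201)
u = np.sin(x)
f = np.cos(x) + np.sin(x)
r = advection_diffusion_residual(u, f, v=1.0, D=1.0, dx=x[1] - x[0])
```

In the inverse setting f, v, and D are unknown outputs of their own networks, and the same residual couples them to the measured u.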
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes an extension of physics-informed neural networks (PINNs) that employs adaptive neural tangent kernel (NTK) weighting across multiple separate networks to jointly recover the solution u, unknown source function f, velocity v, and diffusion coefficient D for the advection-diffusion equation from sparse and noisy measurements. The PDE residual serves as the coupling constraint. The approach is tested on various 2D and 3D synthetic cases with different measurement types, claiming successful recovery and noise robustness.
Significance. If the joint inversion proves reliable, the method could advance practical inverse problems for advection-diffusion systems in engineering by making efficient use of limited data through the PDE constraint. The extension of NTK-based adaptive weighting to multi-network inverse settings is a clear technical contribution, and the reproducible numerical experiments on standard test problems add value.
major comments (2)
- [Abstract and §4] Abstract and §4 (Numerical Experiments): the central claim of successful estimation of the source, velocity, diffusion parameters, and solution with noise robustness is stated without any quantitative error metrics (e.g., relative L2 errors for f, v, D), baseline comparisons against standard PINNs or other inversion techniques, or details on how post-training validation was performed. This leaves the experimental support for the joint-recovery claim only partially substantiated.
- [§3.2] §3.2 (NTK weighting for multi-network PINNs): the adaptive NTK loss weighting is presented as producing a well-conditioned landscape that couples the unknowns and recovers the source despite ill-posedness. However, no ablation or conditioning analysis is provided to show that the weighting prevents the source term from absorbing errors in the parameter networks when measurements are sparse and noisy, which is load-bearing for the claim that the method remains robust under the severe ill-posedness described in the introduction.
minor comments (2)
- [§3] Notation for the separate networks (u-network, f-network, parameter networks) is introduced without a consolidated table of symbols, which would improve readability.
- [§4] Figure captions in §4 could explicitly state the noise level and number of measurement points for each panel to allow direct comparison across experiments.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and detailed comments on our manuscript. We have revised the paper to strengthen the quantitative support for our claims and to provide additional analysis of the NTK weighting. Below we address each major comment point by point.
read point-by-point responses
Referee: [Abstract and §4] Abstract and §4 (Numerical Experiments): the central claim of successful estimation of the source, velocity, diffusion parameters, and solution with noise robustness is stated without any quantitative error metrics (e.g., relative L2 errors for f, v, D), baseline comparisons against standard PINNs or other inversion techniques, or details on how post-training validation was performed. This leaves the experimental support for the joint-recovery claim only partially substantiated.
Authors: We agree that quantitative metrics and baselines would strengthen the experimental support. In the revised manuscript we have added tables of relative L2 errors for the recovered source f, velocity v, diffusion D, and solution u across all 2D and 3D cases. We have also included direct comparisons against a standard single-network PINN and a non-adaptive multi-network variant, together with explicit details on how validation errors were computed on held-out points and the exact noise levels and sparsity patterns used.
Revision: yes
Referee: [§3.2] §3.2 (NTK weighting for multi-network PINNs): the adaptive NTK loss weighting is presented as producing a well-conditioned landscape that couples the unknowns and recovers the source despite ill-posedness. However, no ablation or conditioning analysis is provided to show that the weighting prevents the source term from absorbing errors in the parameter networks when measurements are sparse and noisy, which is load-bearing for the claim that the method remains robust under the severe ill-posedness described in the introduction.
Authors: We acknowledge that an explicit ablation and conditioning analysis would better substantiate the role of adaptive NTK weighting under ill-posed conditions. We have added a new subsection in the revised §3.2 that reports (i) an ablation comparing recovery accuracy with and without adaptive weighting at increasing sparsity and noise levels, and (ii) the condition-number evolution of the multi-network NTK matrix, showing that the adaptive scheme keeps the loss landscape better balanced and reduces error absorption into the source network.
Revision: yes
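The promised conditioning analysis reduces to tracking the spectrum of the empirical NTK K = J J^T over training. A minimal sketch, assuming the stacked Jacobian J of all network outputs with respect to all trainable parameters is small enough to materialize (real runs would subsample):

```python
import numpy as np

def ntk_condition_number(jacobian):
    """Condition number of the empirical NTK K = J @ J.T.

    jacobian: (n_outputs, n_params) Jacobian stacked across the
    solution, source, and parameter networks. A value growing over
    epochs would signal the training imbalance the adaptive
    weighting is meant to suppress.
    """
    K = jacobian @ jacobian.T
    eig = np.linalg.eigvalsh(K)   # eigenvalues in ascending order
    eig = eig[eig > 1e-12]        # drop numerically zero modes
    return eig[-1] / eig[0]
```

Plotting this quantity per epoch, with and without adaptive weighting, is one concrete form the rebuttal's analysis could take.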
Circularity Check
No significant circularity: multi-network PINN-NTK approach validated empirically on standard test cases
full rationale
The paper extends established PINN and NTK weighting techniques to a multi-network setup for joint source-parameter estimation in advection-diffusion PDEs. The derivation consists of defining separate networks for u, f, and parameters, constructing a composite loss with PDE residual and data terms, and applying adaptive NTK-based reweighting; success is then shown via numerical experiments on 2D/3D advection-diffusion problems with sparse/noisy measurements. No load-bearing step reduces by construction to a fitted parameter renamed as prediction, nor does any central claim rest on a self-citation chain that is itself unverified. The reported recovery of source, velocity, and diffusion therefore remains an independent empirical outcome rather than a tautology.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: The advection-diffusion PDE is an accurate description of the physical system under study.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean : washburn_uniqueness_aczel (tag: unclear)
  The relation between the paper passage and the cited Recognition theorem is unclear. Matched passage: "weighted adaptive method based on the neural tangent kernel... multiple separate networks representing the solution, the unknown source, and the PDE parameters"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] M. Raissi, P. Perdikaris, and G. E. Karniadakis. A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys., 378:686–707, 2019.
- [2] George Em Karniadakis, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021.
- [3] Jeremy Yu, Lu Lu, Xuhui Meng, and George Em Karniadakis. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Computer Methods in Applied Mechanics and Engineering, 393:114823, 2022.
- [4] Zhengqi Zhang, Jing Li, and Bin Liu. Annealed adaptive importance sampling method in PINNs for solving high dimensional partial differential equations. Journal of Computational Physics, 521:113561, 2025.
- [5] Zheyuan Hu, Khemraj Shukla, George Em Karniadakis, and Kenji Kawaguchi. Tackling the curse of dimensionality with physics-informed neural networks. Neural Networks, 176:106369, 2024.
- [6] Geoffrey Ingram Taylor. I. Eddy motion in the atmosphere. Philosophical Transactions of the Royal Society of London, Series A, 215(523-537):1–26, 1915.
- [7] John H. Seinfeld and Spyros N. Pandis. Atmospheric Chemistry and Physics: From Air Pollution to Climate Change. John Wiley & Sons, 2016.
- [8] B. Addepalli, K. Sikorski, E. R. Pardyjak, and M. S. Zhdanov. Source characterization of atmospheric releases using stochastic search and regularized gradient optimization. Inverse Problems in Science and Engineering, 19(8):1097–1124, 2011.
- [9] Juarez Azevedo, Gildeberto S. Cardoso, and Leizer Schnitman. An adaptive Monte Carlo Markov chain method applied to the flow involving self-similar processes in porous media. Journal of Porous Media, 17(3), 2014.
- [10] Juan G. García, Bamdad Hosseini, and John M. Stockie. Simultaneous model calibration and source inversion in atmospheric dispersion models. Pure and Applied Geophysics, 178:757–776, 2021.
- [11] Bamdad Hosseini and John M. Stockie. Bayesian estimation of airborne fugitive emissions using a Gaussian plume model. Atmospheric Environment, 141:122–138, 2016.
- [12] Bamdad Hosseini and John M. Stockie. Estimating airborne particulate emissions using a finite-volume forward solver coupled with a Bayesian inversion approach. Computers & Fluids, 154:27–43, 2017.
- [13] Youngdeok Hwang, Hang J. Kim, Won Chang, Kyongmin Yeo, and Yongku Kim. Bayesian pollution source identification via an inverse physics model. Computational Statistics & Data Analysis, 134:76–92, 2019.
- [14] K. Shankar Rao. Source estimation methods for atmospheric dispersion. Atmospheric Environment, 41(33):6964–6973, 2007.
- [15] Ian G. Enting. Inverse Problems in Atmospheric Constituent Transport. Cambridge University Press, 2002.
- [16] Inanc Senocak, Nicolas W. Hengartner, Margaret B. Short, and W. Brent Daniel. Stochastic event reconstruction of atmospheric contaminant dispersion using Bayesian inference. Atmospheric Environment, 42(33):7718–7727, 2008.
- [17] Derek Wade and Inanc Senocak. Stochastic reconstruction of multiple source atmospheric contaminant dispersion events. Atmospheric Environment, 74:45–51, 2013.
- [18] Roseane A. S. Albani and Vinicius V. L. Albani. Tikhonov-type regularization and the finite element method applied to point source estimation in the atmosphere. Atmospheric Environment, 211:69–78, 2019.
- [19] Roseane A. S. Albani and Vinicius V. L. Albani. An accurate strategy to retrieve multiple source emissions in the atmosphere. Atmospheric Environment, 233:117579, 2020.
- [20] Roseane A. S. Albani, Vinicius V. L. Albani, Hélio S. Migon, and Antônio J. Silva Neto. Uncertainty quantification and atmospheric source estimation with a discrepancy-based and a state-dependent adaptative MCMC. Environmental Pollution, 290:118039, 2021.
- [21] Zihan Huang, Yuan Wang, Qi Yu, Weichun Ma, Yan Zhang, and Limin Chen. Source area identification with observation from limited monitor sites for air pollution episodes in industrial parks. Atmospheric Environment, 122:1–9, 2015.
- [22] Gerard T. Schuster, Yuqing Chen, and Shihang Feng. Review of physics-informed machine-learning inversion of geophysical data. Geophysics, 89(6):T337–T356, 2024.
- [23] Majid Rasht-Behesht, Christian Huber, Khemraj Shukla, and George Em Karniadakis. Physics-informed neural networks (PINNs) for wave propagation and full waveform inversions. Journal of Geophysical Research: Solid Earth, 127(5):e2021JB023120, 2022.
- [24] Ivan Depina, Saket Jain, Sigurdur Mar Valsson, and Hrvoje Gotovac. Application of physics-informed neural networks to inverse problems in unsaturated groundwater flow. Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 16(1):21–36, 2022.
- [25] Chao Song and Tariq A. Alkhalifah. Wavefield reconstruction inversion via physics-informed neural networks. IEEE Transactions on Geoscience and Remote Sensing, 60:1–12, 2021.
- [26] Xinquan Huang and Tariq A. Alkhalifah. Microseismic source imaging using physics-informed neural networks with hard constraints. IEEE Transactions on Geoscience and Remote Sensing, 62:1–11, 2024.
- [27] Dongjin Kim and Jaewook Lee. A review of physics informed neural networks for multiscale analysis and inverse problems. Multiscale Science and Engineering, 6(1):1–11, 2024.
- [28] Ameya D. Jagtap, Zhiping Mao, Nikolaus Adams, and George Em Karniadakis. Physics-informed neural networks for inverse problems in supersonic flows. Journal of Computational Physics, 466:111402, 2022.
- [29] Tarik Sahin, Max von Danwitz, and Alexander Popp. Solving forward and inverse problems of contact mechanics using physics-informed neural networks. Advanced Modeling and Simulation in Engineering Sciences, 11(1):11, 2024.
- [30] Zhao Chen, Yang Liu, and Hao Sun. Physics-informed learning of governing equations from scarce data. Nature Communications, 12(1):6136, 2021.
- [31] Yasamin Jalalian, Juan Felipe Osorio Ramirez, Alexander Hsu, Bamdad Hosseini, and Houman Owhadi. Data-efficient kernel methods for learning differential equations and their solution operators: Algorithms and error analysis. arXiv preprint arXiv:2503.01036, 2025.
- [32] Yasamin Jalalian, Mostafa Samir, Boumediene Hamzi, Peyman Tavallali, and Houman Owhadi. Data-efficient kernel methods for learning Hamiltonian systems. arXiv preprint arXiv:2509.17154, 2025.
- [33] Samuel H. Rudy, Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
- [34] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081, 2021.
- [35] Dehao Liu and Yan Wang. A dual-dimer method for training physics-constrained neural networks with minimax architecture. Neural Networks, 136:112–125, 2021.
- [36] Colby L. Wight and Jia Zhao. Solving Allen-Cahn and Cahn-Hilliard equations using the adaptive physics informed neural networks. arXiv preprint arXiv:2007.04542, 2020.
- [37] Levi McClenny and Ulisses Braga-Neto. Self-adaptive physics-informed neural networks using a soft attention mechanism. 2021.
- [38] Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why PINNs fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 449:110768, 2022.
- [39] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
- [40] Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 384:113938, 2021.
- [41] Salah A. Faroughi and Farinaz Mostajeran. Neural tangent kernel analysis to probe convergence in physics-informed neural solvers: PIKANs vs. PINNs. arXiv preprint arXiv:2506.07958, 2025.
- [42] Dietrich Braess. Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics. Cambridge University Press, 2001.
- [43] John C. Strikwerda. Finite Difference Schemes and Partial Differential Equations. SIAM, 2004.
- [44] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (Part I): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
- [45] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1):208–228, 2021.
- [46] John M. Stockie. The mathematics of atmospheric dispersion modeling. SIAM Review, 53(2):349–372, 2011.
- [47] Jin-Sheng Lin and Lynn M. Hildemann. Analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities. Atmospheric Environment, 30(2):239–254, 1996.
- [48] Bamdad Hosseini and John M. Stockie. Airborne contaminant source estimation using a finite-volume forward solver coupled with a Bayesian inversion approach. arXiv preprint arXiv:1607.03518, 2016.
- [49] S. Pal Arya et al. Air Pollution Meteorology and Dispersion, volume 310. Oxford University Press, New York, 1999.
discussion (0)