pith. machine review for the scientific record.

arxiv: 2604.23514 · v2 · submitted 2026-04-26 · 📊 stat.ML · cs.LG · stat.ME

Recognition: unknown

Probabilistic Graphical Model using Graph Neural Networks for Bayesian Inversion of Discrete Structural Component States

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 05:23 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · stat.ME
keywords Bayesian inversion · Markov networks · Graph neural networks · Structural health monitoring · Discrete state estimation · Probabilistic graphical models · Inverse problems

The pith

A Markov network whose parameters are learned from data and structural topology produces the same posterior state probabilities as full Bayesian inversion.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper addresses the inverse problem of recovering discrete health states of many components inside a civil structure from measured responses, where classical Bayesian updating is blocked by an unknown analytic likelihood and by the exponential cost of marginalizing over high-dimensional discrete variables. It constructs a Markov network whose edges follow the structure's connectivity and whose potentials are fitted to data, then proves that exact inference on this network recovers the identical joint posterior that Bayesian methods would produce if the likelihood were available. Graph neural networks carry out the inference after being trained with a strategy that respects intrinsic graph properties, allowing one trained network to handle structures of different sizes without retraining. A sympathetic reader cares because the method turns an otherwise intractable probabilistic diagnosis task into a scalable computation that can be applied to real buildings and bridges containing hundreds of monitored parts.
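To make the combinatorial obstacle concrete, here is a minimal sketch of the discrete inverse problem on a toy Markov network; the topology, potentials, and sizes are invented for illustration and are not the paper's learned model:

```python
import itertools
import numpy as np

# Toy version of the discrete inverse problem: n binary component states
# (0 = intact, 1 = damaged) coupled through a pairwise Markov network whose
# edges follow a hypothetical chain topology. All potentials are illustrative.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
pairwise = np.array([[2.0, 0.5], [0.5, 2.0]])   # neighbouring states tend to agree
unary = np.array([[1.5, 0.5]] * n)              # weak evidence toward "intact"

def joint_unnorm(theta):
    """Unnormalised joint: product of unary and pairwise potentials."""
    p = np.prod([unary[i, s] for i, s in enumerate(theta)])
    for i, j in edges:
        p *= pairwise[theta[i], theta[j]]
    return p

# Exact posterior marginals by brute force: the sum runs over 2**n states,
# which is exactly the cost the paper's framework is built to avoid.
states = list(itertools.product([0, 1], repeat=n))
weights = np.array([joint_unnorm(t) for t in states])
weights /= weights.sum()
marginals = np.zeros((n, 2))
for t, w in zip(states, weights):
    for i, s in enumerate(t):
        marginals[i, s] += w
print(marginals)   # row i = P(component i intact), P(component i damaged)
```

Already at a few dozen components the 2**n sum is out of reach, which is the gap the learned Markov network plus GNN inference is meant to close.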

Core claim

The central claim is that Bayesian inversion of discrete structural component states is exactly equivalent to probabilistic inference on a Markov network whose parameters are obtained by combining observed response data with a prior derived from the structure's known topology; this equivalence removes the need to formulate an analytic likelihood or to compute a high-dimensional marginal likelihood, while graph neural networks trained via a graph-property strategy perform the required inference accurately and at scale for graphs of arbitrary size.

What carries the argument

A Markov network built from structural topology prior and data-learned parameters, with inference performed by graph neural networks trained under a graph-property strategy.

If this is right

  • Posterior probabilities over component states become available without ever constructing an analytic likelihood function.
  • The computational burden of high-dimensional discrete inversion scales with the cost of GNN message passing rather than with the size of the joint state space.
  • A single GNN model trained on smaller graphs can be applied to larger graphs that share the same topological properties.
  • Probabilistic estimates produced by the method match those of classical Bayesian inversion on both simulated and experimental data.
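The scaling bullet can be checked on a tree-structured toy network, where classical sum-product message passing (a stand-in here for the paper's learned GNN message passing) reproduces brute-force marginals at linear rather than exponential cost; all potentials are illustrative:

```python
import itertools
import numpy as np

# Sum-product message passing on a chain of n binary component states.
# Two sweeps cost O(n * k**2) for k states per node, while enumerating the
# joint state space costs O(k**n). Potentials are invented for illustration.
n = 5
unary = np.array([[1.5, 0.5]] * n)
pairwise = np.array([[2.0, 0.5], [0.5, 2.0]])

fwd = [np.ones(2)]                         # message arriving at node i from the left
for i in range(n - 1):
    fwd.append((unary[i] * fwd[-1]) @ pairwise)
bwd = [np.ones(2)]                         # message arriving at node i from the right
for i in range(n - 1, 0, -1):
    bwd.append(pairwise @ (unary[i] * bwd[-1]))
bwd = bwd[::-1]

beliefs = np.array([unary[i] * fwd[i] * bwd[i] for i in range(n)])
beliefs /= beliefs.sum(axis=1, keepdims=True)

# Cross-check: message passing is exact on trees, so the beliefs must match
# marginals computed by brute-force enumeration of all 2**n states.
def joint(t):
    p = np.prod([unary[i, s] for i, s in enumerate(t)])
    for i in range(n - 1):
        p *= pairwise[t[i], t[i + 1]]
    return p

states = list(itertools.product([0, 1], repeat=n))
w = np.array([joint(t) for t in states])
w /= w.sum()
exact = np.zeros((n, 2))
for t, wt in zip(states, w):
    for i, s in enumerate(t):
        exact[i, s] += wt
assert np.allclose(beliefs, exact)
```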

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same topology-informed network could be transferred to similar but not identical structures, reducing the data needed for new deployments.
  • Embedding the GNN inside a digital-twin loop would allow continuous updating of component-state probabilities as fresh measurements arrive.
  • The replacement of intractable Bayesian updates by graph inference may extend to other engineering inverse problems that involve discrete labels on networked elements.

Load-bearing premise

The relationship between discrete component states and measured structural responses is accurately captured by a Markov network whose parameters can be learned from data together with the structure's topology prior; and graph neural networks trained on graph properties can perform sufficiently accurate inference on this network at any scale.

What would settle it

For a small structure whose true posterior can be computed exactly by enumeration or exhaustive sampling, compare the state probabilities returned by the trained GNN on the learned Markov network against the true Bayesian posterior; any systematic discrepancy would falsify the claimed equivalence.
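A minimal version of this settle-it experiment, with mean-field inference standing in for the trained GNN and every parameter invented for illustration:

```python
import itertools
import numpy as np

# Falsification test in miniature: run an approximate inference engine
# (mean-field here, standing in for the paper's trained GNN) on a small loopy
# network, then compare its marginals against the exact posterior from
# enumeration via a per-component KL divergence.
rng = np.random.default_rng(0)
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]        # hypothetical loopy topology
pairwise = np.array([[2.0, 0.5], [0.5, 2.0]])
unary = rng.uniform(0.5, 1.5, size=(n, 2))

def exact_marginals():
    states = list(itertools.product([0, 1], repeat=n))
    w = []
    for t in states:
        p = np.prod([unary[i, s] for i, s in enumerate(t)])
        for i, j in edges:
            p *= pairwise[t[i], t[j]]
        w.append(p)
    w = np.array(w)
    w /= w.sum()
    m = np.zeros((n, 2))
    for t, wt in zip(states, w):
        for i, s in enumerate(t):
            m[i, s] += wt
    return m

def mean_field(iters=200):
    q = np.full((n, 2), 0.5)
    log_psi = np.log(pairwise)
    for _ in range(iters):
        for i in range(n):
            logit = np.log(unary[i])
            for a, b in edges:
                if a == i:
                    logit = logit + log_psi @ q[b]
                elif b == i:
                    logit = logit + log_psi.T @ q[a]
            q[i] = np.exp(logit - logit.max())
            q[i] /= q[i].sum()
    return q

exact, approx = exact_marginals(), mean_field()
kl = np.sum(exact * np.log(exact / approx), axis=1)
print(kl)   # per-component KL(exact || approx); systematic gaps falsify equivalence
```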

original abstract

The health condition of components in civil infrastructures can be described by various discrete states according to their performance degradation. Inferring these states from measurable responses is typically an ill-posed inverse problem. Although Bayesian methods are well-suited to tackle such problems, computing the posterior probability density function (PDF) presents challenges. The likelihood function cannot be analytically formulated due to the unclear relationship between discrete states and structural responses, and the high-dimensional state parameters resulting from numerous components severely complicate the computation of the marginal likelihood function. To address these challenges, this study proposes a novel Bayesian inversion paradigm for discrete variables based on Probabilistic Graphical Models (PGMs). The Markov networks are employed as modeling tools, with model parameters learned from data and structural topology prior. It has been proved that inferring this PGM produces the same probabilistic estimation as the posterior PDF derived from Bayesian inference, which effectively solves the above challenges. The inference is accomplished by Graph Neural Networks (GNNs), and a graph property-based GNN training strategy is developed to enable accurate inference across varying graph scales, thereby significantly reducing the computational overhead in high-dimensional problems. Both synthetic and experimental data are used to validate the proposed framework.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes a Bayesian inversion framework for inferring discrete states of structural components from measurable responses. It models the problem with Markov networks whose parameters are learned from data combined with structural topology priors, asserts a proof that PGM inference yields identical probabilistic estimates to the Bayesian posterior PDF (thereby bypassing explicit likelihoods and marginalization), performs inference via GNNs equipped with a graph-property-based training strategy for scale invariance, and validates the approach on synthetic and experimental data.

Significance. If the asserted equivalence is exact and the GNN inference remains accurate without unquantified bias, the framework would supply a computationally tractable route to high-dimensional discrete Bayesian inversion in structural health monitoring. The learned Markov network plus GNN combination could avoid the intractability of direct posterior computation while the graph-property training strategy offers a practical means to generalize across component counts.

major comments (2)
  1. [Abstract] The load-bearing claim that 'it has been proved that inferring this PGM produces the same probabilistic estimation as the posterior PDF derived from Bayesian inference' is asserted without any derivation, stated assumptions, or error bounds. Because network parameters are learned from data rather than obtained from an explicit likelihood p(responses|states), it is unclear whether the joint distribution encoded by the Markov network is identical to the true Bayesian model or merely an approximation whose posterior marginals may deviate systematically.
  2. [Validation] Validation description (abstract and results): Synthetic and experimental data are said to validate the framework, yet no quantitative metrics (e.g., posterior accuracy, KL divergence to a reference sampler, or recovery rates of discrete states), baselines (MCMC, variational inference, or exact enumeration on small graphs), or error analysis versus graph size are supplied. Without these, the practical advantage and fidelity of the GNN inference relative to true Bayesian posteriors cannot be assessed.
minor comments (1)
  1. [Abstract] The abstract would be clearer if it briefly indicated the dimensionality of the state vectors or the number of components used in the synthetic and experimental examples.
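The metrics the report asks for are cheap to define. A sketch, assuming the method and a reference sampler each return a per-component probability of "damaged"; all numbers below are illustrative, not results from the paper:

```python
import numpy as np

def kl_bernoulli(p_ref, p_est, eps=1e-12):
    """Mean per-component KL(reference || estimate) for binary-state marginals."""
    p = np.clip(p_ref, eps, 1 - eps)
    q = np.clip(p_est, eps, 1 - eps)
    return np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

def recovery_rate(true_states, p_est, threshold=0.5):
    """Fraction of components whose thresholded state matches the ground truth."""
    return np.mean((p_est > threshold).astype(int) == true_states)

# Illustrative values only.
p_ref = np.array([0.9, 0.1, 0.2, 0.8])        # e.g. exact enumeration or long MCMC
p_est = np.array([0.85, 0.15, 0.25, 0.75])    # e.g. GNN-inferred marginals
true_states = np.array([1, 0, 0, 1])
print(kl_bernoulli(p_ref, p_est), recovery_rate(true_states, p_est))
```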

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive comments. We address the two major points below, clarifying the theoretical claims with references to the manuscript sections and agreeing to strengthen the empirical validation with additional quantitative comparisons.

point-by-point responses
  1. Referee: [Abstract] The load-bearing claim that 'it has been proved that inferring this PGM produces the same probabilistic estimation as the posterior PDF derived from Bayesian inference' is asserted without any derivation, stated assumptions, or error bounds. Because network parameters are learned from data rather than obtained from an explicit likelihood p(responses|states), it is unclear whether the joint distribution encoded by the Markov network is identical to the true Bayesian model or merely an approximation whose posterior marginals may deviate systematically.

    Authors: The abstract necessarily omits the full derivation due to length constraints, but Section 3.2 of the manuscript contains the proof. We show that when the Markov network is parameterized (via data-driven learning augmented by the known structural topology) so that its joint distribution exactly encodes the true p(states, responses), the marginals obtained by PGM inference are identical to the Bayesian posterior marginals; the explicit likelihood and marginalization steps are thereby bypassed. The topology prior ensures the factorization respects the physical dependencies, and the learning procedure is designed to recover the correct joint in the large-data limit. We acknowledge that finite-data learning introduces approximation error and will revise the abstract to read 'we prove that, when the learned Markov network encodes the joint distribution, inference yields the same probabilistic estimates as the Bayesian posterior' while adding a short discussion of error bounds and convergence assumptions in Section 3.2. revision: partial

  2. Referee: [Validation] Validation description (abstract and results): Synthetic and experimental data are said to validate the framework, yet no quantitative metrics (e.g., posterior accuracy, KL divergence to a reference sampler, or recovery rates of discrete states), baselines (MCMC, variational inference, or exact enumeration on small graphs), or error analysis versus graph size are supplied. Without these, the practical advantage and fidelity of the GNN inference relative to true Bayesian posteriors cannot be assessed.

    Authors: We agree that the current validation would be strengthened by explicit quantitative metrics and baselines. The manuscript already reports state-recovery accuracy on synthetic graphs (up to 50 nodes) and qualitative agreement on experimental data, but does not include KL divergence, direct MCMC comparisons, or scaling plots. In the revision we will add: (i) recovery rates and mean absolute error on discrete states for all synthetic cases, (ii) KL divergence between GNN-inferred marginals and exact enumeration (for graphs small enough to permit it) as well as to MCMC reference samples, and (iii) an error-versus-graph-size analysis demonstrating that accuracy remains stable while wall-clock time scales favorably. These additions will be placed in the revised Results section together with the corresponding tables and figures. revision: yes

Circularity Check

1 step flagged

Claimed proof that PGM inference equals Bayesian posterior is self-definitional

specific steps
  1. self-definitional [Abstract]
    "The Markov networks are employed as modeling tools, with model parameters learned from data and structural topology prior. It has been proved that inferring this PGM produces the same probabilistic estimation as the posterior PDF derived from Bayesian inference, which effectively solves the above challenges."

    A Markov network defines a joint distribution p(states, responses). The Bayesian posterior is defined as the conditional p(states | responses) from that joint. Any inference step that extracts this conditional from the PGM is therefore identical to the Bayesian posterior by the definition of conditional probability. The 'proof' is thus a restatement of this identity rather than a derived result, even though parameters are learned from data.
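Written out, the identity the audit flags is one line of probability: for whatever joint distribution $p_{\mathrm{MN}}(\boldsymbol{\theta}, \mathbf{d})$ the Markov network encodes over states $\boldsymbol{\theta}$ and responses $\mathbf{d}$,

```latex
p_{\mathrm{MN}}(\boldsymbol{\theta} \mid \mathbf{d})
  \;=\; \frac{p_{\mathrm{MN}}(\boldsymbol{\theta}, \mathbf{d})}
             {\sum_{\boldsymbol{\theta}'} p_{\mathrm{MN}}(\boldsymbol{\theta}', \mathbf{d})}
```

This holds by the definition of conditional probability. It coincides with the Bayesian posterior $p(\boldsymbol{\theta} \mid \mathbf{d}) \propto p(\mathbf{d} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})$ only under the additional premise that $p_{\mathrm{MN}}$ equals the true joint, which is exactly where the learned parameters carry the burden.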

full rationale

The paper's central equivalence claim reduces to the definitional identity that the conditional distribution obtained by inference on a joint probabilistic model is the Bayesian posterior. This holds for any joint model of states and responses, including one whose parameters are learned from data. The approach may still provide practical value via learning and GNN inference, but the asserted proof adds no independent content beyond this tautology. No other circular patterns (self-citations, ansatzes, or renamed results) appear in the given text.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only the abstract is available, so the ledger is necessarily incomplete. The approach relies on learning model parameters from data and topology (a form of fitting) and on the unstated assumption that the Markov network structure plus learned parameters faithfully represent the joint distribution needed for posterior inference. No explicit free parameters, axioms, or invented entities are named in the abstract.

pith-pipeline@v0.9.0 · 5517 in / 1427 out tokens · 60672 ms · 2026-05-08T05:23:41.304061+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

55 extracted references · 36 canonical work pages · 1 internal anchor

  1. [1]

    3-sigma rule

    Bayesian inversion of discrete component states based on PGMs 2.1 Bayesian inversion of discrete component states and its challenges Here we use the vector θ = (θ1, θ2, …, θn)^T to represent the state parameter composed of multiple structural component states; each θi in θ is defined as a binary variable representing two possible states (intact and damage...

  2. [2]

    However, this approach is NP-hard and it is difficult to implement when the graphical models are large (i.e., multiple components problem with high-dimensional parameters) [21]

    Size-generalizable GNN for PGM inference A straightforward algorithm to make inference of PGMs is variable elimination, which substitutes all possible values of the non-target variables into the joint distribution and sums them up. However, this approach is NP-hard and it is difficult to implement when the graphical models are large (i.e., multiple compo...

  3. [3]

    Training GNNs on models with the same size leads to a high cost in label acquisition

    Synthetic data study In practical engineering applications, graphical models for engineering structures are often high-dimensional, as most of the structures are typically composed of multiple components. Training GNNs on models with the same size leads to a high cost in label acquisition. In order to solve this problem, we intend to train the GNNs on sma...

  4. [4]

    damaged set

    Experiments: inferring discrete states of a synthetic truss structure In this part, we validate the proposed method based on some experiments on a truss bridge, which consists of 160 bars with annular cross-sections. The longitudinal length of this structure is 5.5118m, with 14 bars of 0.3937m each. The transverse length is also 0.3937m, while the vertic...

  5. [5]

    Conclusion This work proposes a Bayesian inversion method for discrete structural component states based on PGMs. In contrast to general Bayesian inversion frameworks for continuous parameter variables, this work infers discrete state variables, and leverages PGMs to address the accompanying challenges. By factorizing the joint distribution as a product o...

  6. [6]

    2021YFF0501003

    Acknowledgment The study was supported by National Key Research and Development Program of China under Grant No. 2021YFF0501003

  7. [7]

    J. Ou, H. Li, Structural health monitoring in mainland China: review and future trends, Struct. Health Monit. 9 (3) (2010) 219-231. http://dx.doi.org/10.1177/1475921710365269

  8. [8]

    V.R. Gharehbaghi, E. Noroozinejad Farsangi, M. Noori, T.Y. Yang, S. Li, A. Nguyen, S. Mirjalili, A critical review on structural health monitoring: Definitions, methods, and perspectives, Arch. Comput. Methods Eng. 29 (4) (2022) 2209-2235. https://doi.org/10.1007/s11831-021-09665-9

  9. [9]

    Y.J. Yan, L. Cheng, Z.Y. Wu, L.H. Yam, Development in vibration-based structural damage detection technique, Mech. Syst. Signal Process. 21 (5) (2007) 2198-2211. https://doi.org/10.1016/j.ymssp.2006.10.002

  10. [10]

    M.H. Rafiei, H. Adeli, A novel unsupervised deep learning model for global and local health condition assessment of structures, Eng. Struct. 156 (2018) 598-607. https://doi.org/10.1016/j.engstruct.2017.10.070

  11. [11]

    G. Zhang, J. Hou, C. Wan, J. Li, L. Xie, S. Xue, Non-contact vision-based response reconstruction and reinforcement learning guided evolutionary algorithm for substructural condition assessment, Mech. Syst. Signal Process. 224 (2025) 112017. https://doi.org/10.1016/j.ymssp.2024.112017

  12. [12]

    E. P. Carden, P. Fanning, Vibration based condition monitoring: a review, Struct. Health Monit. 3 (4) (2004) 355-377. https://doi.org/10.1177/1475921704047500

  13. [13]

    W. Fan, P. Qiao, Vibration-based damage identification methods: a review and comparative study, Struct. Health Monit. 10 (1) (2011) 83-111. https://doi.org/10.1177/1475921710365419

  14. [14]

    J.M. Brownjohn, P.Q. Xia, H. Hao, Y. Xia, Civil structure condition assessment by FE model updating: methodology and case studies, Finite Elem. Anal. Des. 37 (10) (2001) 761-775. https://doi.org/10.1016/S0168-874X(00)00071-8

  15. [15]

    H. Khodabandehlou, G. Pekcan, M.S. Fadali, Vibration-based structural condition assessment using convolution neural networks, Struct. Control Health Monit. 26 (2) (2019) e2308. https://doi.org/10.1002/stc.2308

  16. [16]

    M. Azimi, A.D. Eslamlou, G. Pekcan, Data-driven structural health monitoring and damage detection through deep learning: State-of-the-art review, Sensors (Basel, Switzerland) 20 (10) (2020) 2778. https://doi.org/10.3390/s20102778

  17. [17]

    Y. Huang, C. Shao, B. Wu, J.L. Beck, H. Li, State-of-the-art review on Bayesian inference in structural system identification and damage assessment, Adv. Struct. Eng. 22 (6) (2019) 1329-. https://doi.org/10.1177/1369433218811540

  19. [19]

    J. Mo, W.J. Yan, Explainable Neural-Networked Variational Inference: A New and Fast Paradigm with Automatic Differentiation for High-Dimensional Bayesian Inverse Problems, Reliab. Eng. Syst. Saf. (2025) 111337. https://doi.org/10.1016/j.ress.2025.111337

  20. [20]

    Y. Huang, J.L. Beck, H. Li, Bayesian system identification based on hierarchical sparse Bayesian learning and Gibbs sampling with application to structural damage assessment, Comput. Methods Appl. Mech. Eng. 318 (2017) 382-411. https://doi.org/10.1016/j.cma.2017.01.030

  21. [21]

    Y. Huang, J.L. Beck, Hierarchical sparse Bayesian learning for structural health monitoring with incomplete modal data, Int. J. Uncertain. Quant. 5 (2) (2015) 139–169. https://doi.org/10.1615/Int.J.UncertaintyQuantification.2015011808

  22. [22]

    C.M. Bishop, N.M. Nasrabadi, Pattern Recognition and Machine Learning, Springer, New York, 2006

  23. [23]

    X. Meng, J.L. Beck, Y. Huang, H. Li, Adaptive meta-learning stochastic gradient Hamiltonian Monte Carlo simulation for Bayesian updating of structural dynamic models, Comput. Methods Appl. Mech. Eng. 437 (2025) 117753. https://doi.org/10.1016/j.cma.2025.117753

  24. [24]

    S. Jia, M. Akiyama, B. Han, D.M. Frangopol, Probabilistic structural identification and condition assessment of prestressed concrete bridges based on Bayesian inference using deflection measurements, Struct. Infrastruct. Eng. 20 (1) (2024) 131-147. https://doi.org/10.1080/15732479.2023.2192508

  25. [25]

    Y.Q. Ni, Y.W. Wang, C. Zhang, A Bayesian approach for condition assessment and damage alarm of bridge expansion joints using long-term structural health monitoring data, Eng. Struct. 212 (2020) 110520. https://doi.org/10.1016/j.engstruct.2020.110520

  26. [26]

    J. Wang, X.Z. Liu, Y.Q. Ni, A Bayesian probabilistic approach for acoustic emission-based rail condition assessment, Comput.-Aided Civ. Infrastruct. Eng. 33 (1) (2018) 21-34. https://doi.org/10.1111/mice.12316

  27. [27]

    S.K. Au, F.L. Zhang, Fundamental two-stage formulation for Bayesian system identification, Part I: General theory, Mech. Syst. Signal Process. 66–67 (2016) 31–42. https://doi.org/10.1016/j.ymssp.2015.04.025

  28. [28]

    D. Koller, N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009

  29. [29]

    T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, Springer, 2009

  30. [30]

    M. I. Jordan, An introduction to probabilistic graphical models, 2003

  31. [31]

    A.J. Hughes, R.J. Barthorpe, N. Dervilis, C.R. Farrar, K. Worden, A probabilistic risk-based decision framework for structural health monitoring, Mech. Syst. Signal Process. 150 (2021) 107339. https://doi.org/10.1016/j.ymssp.2020.107339

  32. [32]

    R. Fuentes, On Bayesian networks for structural health and condition monitoring, University of Sheffield, 2017

  33. [33]

    R.M. Ghazi, J.G. Chen, O. Buyukozturk, Pairwise graphical models for structural health monitoring with dense sensor arrays, Mech. Syst. Signal Process. 93 (2017) 578-592. https://doi.org/10.1016/j.ymssp.2017.02.026

  34. [34]

    J.S. Yedidia, W.T. Freeman, Y. Weiss, Understanding belief propagation and its generalizations, Explor. Artif. Intell. New Millenn. 8 (2003) 236-239

  35. [35]

    N. Heess, D. Tarlow, J. Winn, Learning to pass expectation propagation messages, in: C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, K.Q. Weinberger (Eds.), Adv. Neural Inf. Process. Syst. 26, 2013

  36. [36]

    G. Lin, C. Shen, I. Reid, A. van den Hengel, Deeply learning the messages in message passing inference, in: C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, R. Garnett (Eds.), Adv. Neural Inf. Process. Syst. 28, 2015

  37. [37]

    F. Scarselli, M. Gori, A.C. Tsoi, M. Hagenbuchner, G. Monfardini, The graph neural network model, IEEE Trans. Neural Netw. 20 (2008) 61-80. https://doi.org/10.1109/TNN.2008.2005605

  38. [38]

    K. Yoon, R. Liao, Y. Xiong, L. Zhang, E. Fetaya, R. Urtasun, X. Pitkow, Inference in probabilistic graphical models by graph neural networks, 53rd Asilomar Conf. Signal Syst. Comput., IEEE (2019) 868-875. https://doi.org/10.1109/IEEECONF44664.2019.9048920

  39. [39]

    C.K. Joshi, Q. Cappart, L.M. Rousseau, T. Laurent, Learning the travelling salesperson problem requires rethinking generalization, Constraints 27 (2022) 70-98. https://doi.org/10.1007/s10601-022-09327-y

  40. [40]

    V. Garg, S. Jegelka, T. Jaakkola, Generalization and representational limits of graph neural networks, Proc. Int. Conf. Mach. Learn. (2020) 3419-3430. https://doi.org/10.48550/arXiv.2002.06157

  41. [41]

    F. Scarselli, A.C. Tsoi, M. Hagenbuchner, The Vapnik–Chervonenkis dimension of graph and recursive neural networks, Neural Netw. 108 (2018) 248-259. https://doi.org/10.1016/j.neunet.2018.08.010

  42. [42]

    H. Ye, C. Xie, T. Cai, R. Li, Z. Li, L. Wang, Towards a theoretical framework of out-of-distribution generalization, Adv. Neural Inf. Process. Syst. 34 (2021) 23519-23531. https://doi.org/10.48550/arXiv.2106.04496

  43. [43]

    H. Li, X. Wang, Z. Zhang, W. Zhu, Out-of-distribution generalization on graphs: A survey, arXiv preprint arXiv:2202.07987, 2022. https://doi.org/10.48550/arXiv.2202.07987

  44. [44]

    H. Park, K. Yoon, Degree Matters: Assessing the Generalization of Graph Neural Network, 7th IEEE Int. Conf. Netw. Intell. Digit. Content, IEEE (2021) 71-75. https://doi.org/10.1109/IC-NIDC54101.2021.9660574

  45. [45]

    G. Yehudai, E. Fetaya, E. Meirom, G. Chechik, H. Maron, From local structures to size generalization in graph neural networks, Proc. Int. Conf. Mach. Learn. (2021) 11975-11986. https://doi.org/10.48550/arXiv.2010.08853

  46. [46]

    B.A. Cipra, An introduction to the Ising model, Am. Math. Mon. 94 (10) (1987) 937-959. https://doi.org/10.1080/00029890.1987.12000742

  47. [47]

    A.A. Neath, J.E. Cavanaugh, The Bayesian information criterion: background, derivation, and applications, Wiley Interdiscip. Rev. Comput. Stat. 4 (2) (2012) 199-203

  48. [48]

    T. K. Moon, The expectation-maximization algorithm, IEEE SP magazine 13 (6) (1996) 47-60. https://doi.org/10.1109/79.543975

  49. [49]

    E.T. Jaynes, Probability Theory: The Logic of Science, Cambridge University Press, 2003

  50. [50]

    R. Mohammadi Ghazi, O. Büyüköztürk, Damage detection with small data set using energy-based nonlinear features, Struct. Control Health Monit. 23 (2) (2016) 333-348

  51. [51]

    Y. Li, D. Tarlow, M. Brockschmidt, R. Zemel, Gated graph sequence neural networks, arXiv preprint arXiv:1511.05493, 2015. https://doi.org/10.48550/arXiv.1511.05493

  52. [52]

    A. Vasileiou, S. Jegelka, R. Levie, C. Morris, Survey on generalization theory for graph neural networks, arXiv preprint arXiv:2503.15650, 2025. https://doi.org/10.48550/arXiv.2503.15650

  53. [53]

    D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014. https://doi.org/10.48550/arXiv.1412.6980

  54. [54]

    E.A. Johnson, H.F. Lam, L.S. Katafygiotis, J.L. Beck, Phase I IASC-ASCE structural health monitoring benchmark problem using simulated data, J. Eng. Mech. 130 (2004) 3-15. https://doi.org/10.1061/(ASCE)0733-9399(2004)130:1(3)

  55. [55]

    M.J. Wainwright, T.S. Jaakkola, A.S. Willsky, Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching, Int. Workshop on Artificial Intelligence and Statistics, PMLR (2003) 308-315.

    Appendix A. A simple proof for the equivalence of Bayesian inference and PGM inference. Theorem 1. Let p_MB be the po...