pith. machine review for the scientific record.

arxiv: 2602.23089 · v2 · submitted 2026-02-26 · 💻 cs.LG

Recognition: 2 theorem links


Physics-informed neural particle flow for the Bayesian update step


Pith reviewed 2026-05-15 18:38 UTC · model grok-4.3

classification 💻 cs.LG
keywords: physics-informed neural networks · particle flow · Bayesian update · continuity equation · unsupervised learning · density transport · amortized inference

The pith

Coupling the log-homotopy path to the continuity equation produces a master PDE that a neural network solves unsupervised to transport prior densities to posteriors.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that the Bayesian update can be reframed as a density transport problem whose governing PDE arises from linking the log-homotopy trajectory between prior and posterior with the continuity equation. A neural network is trained to output the required velocity field by treating this PDE as a hard constraint inside the loss, removing any requirement for ground-truth posterior samples. This unsupervised route avoids the stiffness of classical particle-flow ODEs and supplies an implicit regularizer that improves mode coverage on multimodal targets. A reader would care because high-dimensional nonlinear filtering has long been limited either by sampling cost or by unstable analytic flows; the new construction promises amortized inference that stays faithful to the exact finite-horizon geometry.
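The log-homotopy path itself is easy to make concrete. Below is a minimal 1-D sketch, assuming a standard-normal prior and a unit-noise Gaussian measurement z = 2 (an illustrative setup, not the paper's experiments): the interpolated density p_λ ∝ prior · likelihood^λ recovers the prior at λ = 0 and the exact posterior at λ = 1.

```python
import math

def homotopy_density(x, lam):
    """Normalized log-homotopy density p_lam(x) ∝ prior(x) * likelihood(x)**lam
    for a 1-D standard-normal prior and measurement z = 2 with unit noise.
    lam = 0 gives the prior; lam = 1 gives the exact posterior N(1, 0.5)."""
    prior = lambda t: math.exp(-t * t / 2.0)
    lik = lambda t: math.exp(-(2.0 - t) ** 2 / 2.0)   # unnormalized N(z=2; t, 1)
    unnorm = lambda t: prior(t) * lik(t) ** lam
    grid = [-6.0 + 0.01 * i for i in range(1401)]      # covers [-6, 8]
    # trapezoidal normalizing constant over the grid
    Z = sum(0.01 * 0.5 * (unnorm(a) + unnorm(b)) for a, b in zip(grid, grid[1:]))
    return unnorm(x) / Z

print(round(homotopy_density(0.0, 0.0), 4))  # → 0.3989, i.e. N(0; 0, 1)
print(round(homotopy_density(1.0, 1.0), 4))  # → 0.5642, i.e. N(1; 1, 0.5)
```

The transport problem the paper poses is to move particles so that their empirical density tracks this p_λ as λ runs from 0 to 1.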

Core claim

By embedding the master PDE, obtained from the log-homotopy trajectory of the prior-to-posterior density together with the continuity equation, as a physical constraint in the loss, a neural network learns the transport velocity field for the Bayesian update step, enabling purely unsupervised amortized inference that mitigates stiffness and reduces online cost.

What carries the argument

The master PDE derived by coupling the log-homotopy trajectory with the continuity equation, enforced as a loss constraint on the neural network that parameterizes the transport velocity field.
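For readers unfamiliar with the construction, this is how such a master PDE is typically assembled in the log-homotopy literature; the notation below (prior g, likelihood h, pseudo-time λ) is a hedged reconstruction from the abstract, not equations quoted from the paper.

```latex
% Log-homotopy path between prior g(x) and posterior (lambda from 0 to 1):
p(x,\lambda) = \frac{g(x)\,h(x)^{\lambda}}{K(\lambda)}, \qquad
K(\lambda) = \int g(x)\,h(x)^{\lambda}\,dx,
% differentiating in pseudo-time:
\frac{\partial p}{\partial \lambda}
  = p(x,\lambda)\left[\log h(x) - \frac{d \log K}{d\lambda}\right].
% Coupling this with the continuity equation for particles moving with
% velocity field v(x, lambda):
\frac{\partial p}{\partial \lambda} + \nabla \cdot \bigl(p\,v\bigr) = 0
% yields the (first-order, underdetermined) master PDE for v:
\nabla \cdot \bigl(p(x,\lambda)\,v(x,\lambda)\bigr)
  = -\,p(x,\lambda)\left[\log h(x) - \frac{d \log K}{d\lambda}\right].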

If this is right

  • The neural parameterization reduces numerical stiffness compared with analytic particle flows.
  • Online inference complexity drops because the velocity field is evaluated by a single forward pass rather than solving a stiff ODE.
  • Mode coverage improves on multimodal benchmarks relative to existing particle-flow and deep-learning baselines.
  • The method remains robust on challenging nonlinear estimation tasks without requiring posterior samples for training.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same PDE-constrained training pattern could be reused for other density-transport problems such as smoothing or sequential Monte Carlo proposals.
  • Because the network acts as an implicit regularizer, similar physics-informed losses might stabilize other finite-horizon probability flows that currently rely on asymptotic relaxation.
  • The unsupervised formulation opens the possibility of online adaptation when the observation model itself changes, provided the master PDE can be re-derived on the fly.

Load-bearing premise

The log-homotopy path from prior to posterior density, when combined with the continuity equation, yields a well-posed PDE that a neural network can approximate without adding new instabilities or systematic bias.

What would settle it

Generate a known multimodal posterior, run the trained network on the corresponding prior, and check whether the transported particle density places mass on the correct modes with accurate relative weights; mismatch would falsify the claim.
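The trained network is not available here, but the shape of this test can be sketched in the scalar linear-Gaussian case, where the classical Daum-Huang exact flow plays the role of the learned velocity field and the Kalman posterior supplies the ground truth (an illustrative stand-in, not the paper's method):

```python
import random
import statistics

# Scalar linear-Gaussian case: prior N(0, 1), measurement z = H x + v,
# H = 1, R = 1, observed z = 2. Kalman posterior: N(1, 0.5).
m0, P0, H, R, z = 0.0, 1.0, 1.0, 1.0, 2.0

def velocity(x, lam):
    """Daum-Huang exact-flow velocity dx/dlam = A(lam) x + b(lam), scalar form."""
    A = -0.5 * P0 * H / (lam * H * P0 * H + R) * H
    b = (1.0 + 2.0 * lam * A) * ((1.0 + lam * A) * P0 * H / R * z + A * m0)
    return A * x + b

random.seed(0)
particles = [random.gauss(m0, P0 ** 0.5) for _ in range(4000)]
steps = 400
dlam = 1.0 / steps
for k in range(steps):            # Euler integration of the flow in pseudo-time
    lam = k * dlam
    particles = [x + dlam * velocity(x, lam) for x in particles]

mean = statistics.fmean(particles)
var = statistics.pvariance(particles)
print(round(mean, 1), round(var, 1))  # close to the Kalman posterior (1.0, 0.5)
```

In the multimodal setting the paper targets, the analogous check would compare transported particle mass per mode against the known mixture weights rather than a single mean and variance.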

Figures

Figures reproduced from arXiv: 2602.23089 by Domonkos Csuzdi, Olivér Törő, Tamás Bécsi.

Figure 1. Physics-based Bayesian computation. Center: the estimation objective is a static target posterior and its discrete approximation via an …
Figure 2. We selected the sample whose ED and SWD values are closest to the mean values reported in Table 3.
Figure 2. Corner plot for a representative test inference task, selected as the one whose quantitative performance is closest to the mean …
Figure 3. The effect of different adaptive step thresholds ∆L and particle numbers N on the validation dataset in terms of SWD and computational time.
Figure 4. The likelihood landscape of the TDOA measurement. The nonlinear measurement equation creates a hyperbolic high-probability ridge.
Figure 5. Qualitative comparison on a sample with an informative prior. The analytic flows struggle to capture the "banana" shape (Mean Exact) …
Figure 6. Comparison on a sample with a distant prior. The Gaussian-based approximations (Exact flows) fail due to linearization errors. The …
Figure 7. Particle trajectories generated by PINPF for the sample in Fig. 6. The color gradient represents the flow time …
read the original abstract

The Bayesian update step poses significant computational challenges in high-dimensional nonlinear estimation. While log-homotopy particle flow filters offer an alternative to stochastic sampling, existing formulations usually yield stiff differential equations. Conversely, existing deep learning approximations typically treat the update as a black-box task or rely on asymptotic relaxation, neglecting the exact geometric structure of the finite-horizon probability transport. In this work, we propose a physics-informed neural particle flow, which is an amortized inference framework. To construct the flow, we couple the log-homotopy trajectory of the prior to posterior density function with the continuity equation describing the density evolution. This derivation yields a governing partial differential equation (PDE), referred to as the master PDE. By embedding this PDE as a physical constraint into the loss function, we train a neural network to approximate the transport velocity field. This approach enables purely unsupervised training, eliminating the need for ground-truth posterior samples. We demonstrate that the neural parameterization acts as an implicit regularizer, mitigating the numerical stiffness inherent to analytic flows and reducing online computational complexity. Experimental validation on multimodal benchmarks and a challenging nonlinear scenario confirms better mode coverage and robustness compared to state-of-the-art baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes a physics-informed neural particle flow for the Bayesian update step in high-dimensional nonlinear estimation. It couples the log-homotopy trajectory of the prior-to-posterior density with the continuity equation to derive a governing master PDE, which is then embedded as a physical constraint in the loss function to train a neural network approximating the transport velocity field. This enables purely unsupervised training without ground-truth posterior samples and is claimed to mitigate stiffness while improving mode coverage and robustness on multimodal benchmarks and nonlinear scenarios.

Significance. If the central derivation and approximation hold with sufficient accuracy, the method would provide an amortized, geometry-preserving alternative to stochastic sampling or black-box neural updates in particle flow filters, potentially reducing online complexity in challenging Bayesian inference tasks.

major comments (2)
  1. [Abstract] The derivation of the master PDE, obtained by coupling the log-homotopy trajectory with the continuity equation, is not shown, so it is impossible to verify well-posedness, including any required boundary conditions, uniqueness arguments, or regularity assumptions on the densities that would be needed for the finite-horizon transport map to be uniquely determined in high dimensions.
  2. [Abstract] No error bounds, residual estimates, or convergence analysis are supplied to confirm that the neural solution to the master PDE approximates the true velocity field closely enough to keep the Bayesian update consistent and unbiased, despite the claim that implicit regularization mitigates stiffness.
minor comments (1)
  1. The abstract refers to 'experimental validation on multimodal benchmarks' and 'state-of-the-art baselines' without naming the specific test cases, metrics, or quantitative improvements reported.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed comments on the abstract. We address the concerns regarding the missing derivation and lack of theoretical analysis below, and will incorporate revisions in the next version of the manuscript.

read point-by-point responses
  1. Referee: [Abstract] The derivation of the master PDE, obtained by coupling the log-homotopy trajectory with the continuity equation, is not shown, so it is impossible to verify well-posedness, including any required boundary conditions, uniqueness arguments, or regularity assumptions on the densities that would be needed for the finite-horizon transport map to be uniquely determined in high dimensions.

    Authors: The abstract summarizes the approach but does not include the full derivation steps. The complete manuscript derives the master PDE in Section 3 by substituting the log-homotopy density trajectory into the continuity equation, yielding a first-order PDE for the velocity field. We assume densities are positive, twice differentiable, and decay sufficiently fast at infinity to ensure the transport map is well-defined over the finite horizon; boundary conditions are taken as vanishing flux at spatial infinity. We will revise the abstract to include a concise outline of these steps and assumptions. revision: partial

  2. Referee: [Abstract] No error bounds, residual estimates, or convergence analysis are supplied to confirm that the neural solution to the master PDE approximates the true velocity field closely enough to keep the Bayesian update consistent and unbiased, despite the claim that implicit regularization mitigates stiffness.

    Authors: We agree that the manuscript does not supply explicit error bounds or convergence rates for the neural approximation of the velocity field. Empirical evidence from multimodal and nonlinear benchmarks demonstrates that the PDE-constrained training produces updates with improved mode coverage and reduced stiffness compared to baselines, supporting practical consistency. We will add a discussion of the PDE residual norm as an empirical proxy for approximation quality and acknowledge the absence of rigorous a priori bounds as a limitation to be addressed in future work. revision: yes
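The proposed residual proxy is straightforward to operationalize. A toy illustration, assuming a scalar linear-Gaussian model (standard-normal prior, unit-noise measurement z = 2, chosen for its closed-form homotopy density; none of this is taken from the manuscript): the exact-flow velocity drives the finite-difference continuity residual to near zero, while a wrong velocity does not.

```python
import math

# Scalar linear-Gaussian homotopy: prior N(0, 1), z = 2, H = R = 1.
# Along the log-homotopy path the density stays Gaussian with
# mean 2*lam/(1+lam) and variance 1/(1+lam).
def p(x, lam):
    m, var = 2.0 * lam / (1.0 + lam), 1.0 / (1.0 + lam)
    return math.exp(-(x - m) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def v_exact(x, lam):
    """Daum-Huang exact-flow velocity for this scalar model."""
    A = -0.5 / (1.0 + lam)
    b = (1.0 + 2.0 * lam * A) * (1.0 + lam * A) * 2.0   # prior mean is zero
    return A * x + b

def residual(v, x, lam, h=1e-4):
    """Central-difference residual of the continuity equation,
    dp/dlam + d(p v)/dx, usable pointwise as a loss term."""
    dp_dlam = (p(x, lam + h) - p(x, lam - h)) / (2.0 * h)
    flux = lambda t: p(t, lam) * v(t, lam)
    dflux_dx = (flux(x + h) - flux(x - h)) / (2.0 * h)
    return dp_dlam + dflux_dx

print(abs(residual(v_exact, 0.7, 0.5)) < 1e-5)            # True: near-zero residual
print(abs(residual(lambda x, lam: 0.0, 0.7, 0.5)) > 0.1)  # True: zero velocity fails
```

A small residual norm over sampled (x, λ) points is a necessary condition for a correct velocity field, though, as the rebuttal concedes, not a bound on posterior error.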

Circularity Check

0 steps flagged

No circularity: master PDE derived from log-homotopy and continuity equation

full rationale

The abstract presents the central step as coupling the log-homotopy trajectory of the prior-to-posterior density with the continuity equation to obtain a governing master PDE, which is then used as a loss constraint to train a neural network for the transport velocity field. This is a standard first-principles derivation of a transport PDE and does not reduce to any fitted parameter or self-referential definition within the provided text. No equations are supplied that would exhibit a reduction by construction, no self-citations are invoked as load-bearing, and the unsupervised training claim follows directly from enforcing the derived PDE residual rather than from any circular renaming or ansatz smuggling. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that the continuity equation plus the log-homotopy trajectory produces a PDE whose solution can be learned by a neural network without supervision or bias. No explicit free parameters are stated in the abstract; the master PDE itself is the one newly introduced object.

axioms (1)
  • domain assumption Log-homotopy trajectory of prior-to-posterior density can be coupled with the continuity equation to produce a governing master PDE for the transport velocity.
    Invoked to derive the PDE that is then embedded in the loss.
invented entities (1)
  • master PDE no independent evidence
    purpose: Governing partial differential equation that constrains the neural velocity field
    Introduced as the result of coupling log-homotopy and continuity; no independent verification outside the paper.

pith-pipeline@v0.9.0 · 5484 in / 1433 out tokens · 24645 ms · 2026-05-15T18:38:07.088699+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
