pith. machine review for the scientific record.

arxiv: 2604.02581 · v1 · submitted 2026-04-02 · 📊 stat.ML · cs.LG · cs.NA · math.NA

Recognition: no theorem link

Learning interacting particle systems from unlabeled data

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 19:57 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · cs.NA · math.NA
keywords interacting particle systems · unlabeled data · self-test loss · weak-form equation · potential estimation · parametric convergence · nonparametric regression

The pith

A trajectory-free self-test loss based on the weak-form evolution of the empirical distribution learns interaction potentials from unlabeled particle data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a method to recover the interaction potentials of particle systems when observations consist only of unlabeled positions at discrete times. It builds a loss directly from the weak-form stochastic evolution equation satisfied by the empirical distribution, avoiding any need to reconstruct particle trajectories through label matching. Because the loss is quadratic in the unknown potentials, standard regression techniques can be applied in both parametric and nonparametric settings and scale to high-dimensional systems with large data volumes. Numerical experiments show the approach outperforms trajectory-recovery baselines and remains accurate even for large observation time steps, while theory establishes convergence of the parametric estimators as the number of samples grows.

Core claim

We introduce a trajectory-free self-test loss function that leverages the weak-form stochastic evolution equation of the empirical distribution. The loss function is quadratic in potentials, supporting parametric and nonparametric regression algorithms for robust estimation that scale to large, high-dimensional systems with big data. Systematic numerical tests show that our method outperforms baseline methods that regress on trajectories recovered via label matching, tolerating large observation time steps. We establish the convergence of parametric estimators as the sample size increases.

What carries the argument

Trajectory-free self-test loss function derived from the weak-form stochastic evolution equation of the empirical distribution, which is quadratic in the potentials.
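To make the mechanism concrete, here is a hedged toy sketch, not the paper's algorithm or notation: for a 1D system with drift b(x) = −θx, the weak-form identity ⟨μ_{t+Δt} − μ_t, φ⟩ ≈ Δt[⟨μ_t, b φ′⟩ + (σ²/2)⟨μ_t, φ″⟩] gives residuals that are linear in θ, so the summed squared residual is quadratic and solvable by ordinary least squares, and no particle labels are used (each snapshot is sorted, which destroys identities). All names and parameter values below are invented for illustration.

```python
import numpy as np

# Toy illustration only: 1D particles with drift b(x) = -theta*x (all values invented).
rng = np.random.default_rng(0)
theta_star, sigma = 1.0, 0.5
N, dt_fine, Dt, n_obs = 4000, 1e-3, 0.05, 40
sub = int(round(Dt / dt_fine))

# Simulate with a fine step, but keep only UNLABELED snapshots:
# sorting each snapshot destroys particle identities.
x = rng.normal(0.0, 1.0, N)
snaps = [np.sort(x)]
for _ in range(n_obs):
    for _ in range(sub):
        x = x - theta_star * x * dt_fine + sigma * np.sqrt(dt_fine) * rng.normal(size=N)
    snaps.append(np.sort(x))

# Weak form with test functions phi(x) = x^m:
#   <mu_{t+Dt} - mu_t, phi>  ~=  Dt * ( <mu_t, b*phi'> + (sigma^2/2)*<mu_t, phi''> ).
# With b(x) = -theta*x the residual is LINEAR in theta, so the loss is quadratic.
A, y = [], []
for s0, s1 in zip(snaps[:-1], snaps[1:]):
    for m in (2, 4):
        lhs = np.mean(s1**m) - np.mean(s0**m)
        ito = 0.5 * sigma**2 * m * (m - 1) * np.mean(s0 ** (m - 2))
        A.append(-Dt * m * np.mean(s0**m))   # coefficient of theta in the residual
        y.append(lhs - Dt * ito)

sol, *_ = np.linalg.lstsq(np.array(A)[:, None], np.array(y), rcond=None)
theta_hat = sol.item()
print(f"true theta = {theta_star}, weak-form estimate = {theta_hat:.3f}")
```

The same structure would carry over when V′ and Φ′ are expanded in a basis: each coefficient enters the weak-form residual linearly, so the normal equations stay linear in the unknowns.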

If this is right

  • Parametric estimators converge to the true potentials as the number of observed snapshots increases.
  • The quadratic loss permits scalable regression for both parametric and nonparametric models on high-dimensional data.
  • Estimation remains accurate for observation time steps large enough to break conventional trajectory-matching methods.
  • The approach supports robust learning directly from big unlabeled datasets without intermediate trajectory reconstruction.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The method could be applied to privacy-constrained biological tracking where individual cell identities cannot be maintained across frames.
  • Nonparametric variants might recover interaction forms that deviate from standard parametric families used in physics models.
  • Sparse temporal sampling regimes common in experimental physics become tractable once large time steps are tolerated.
  • The same loss construction might extend to learning in related mean-field or stochastic differential equation settings.

Load-bearing premise

The observed data are generated by an interacting particle system whose empirical distribution satisfies the weak-form stochastic evolution equation used to construct the self-test loss.

What would settle it

Generate synthetic unlabeled snapshots from a known interacting particle system whose dynamics are altered so that the empirical measure no longer satisfies the assumed weak-form equation, then verify whether the estimator recovers the true potentials or produces inconsistent results.
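A hedged toy version of that falsification test, again with invented parameters rather than the paper's setup: generate unlabeled snapshots from a system carrying an unmodeled cubic drift term, so the assumed linear-drift weak-form equation is misspecified, and check that the same quadratic-loss estimator that recovers θ on well-specified data comes back visibly biased.

```python
import numpy as np

# Toy falsification sketch (invented setup, not the paper's experiments).
rng = np.random.default_rng(1)
theta_star, sigma = 1.0, 0.5
N, dt_fine, Dt, n_obs = 4000, 1e-3, 0.05, 40
sub = int(round(Dt / dt_fine))

def snapshots(extra_drift):
    """Unlabeled (sorted) snapshots from dX = (-theta*X + extra_drift(X)) dt + sigma dW."""
    x = rng.normal(0.0, 1.0, N)
    out = [np.sort(x)]
    for _ in range(n_obs):
        for _ in range(sub):
            x = (x + (-theta_star * x + extra_drift(x)) * dt_fine
                 + sigma * np.sqrt(dt_fine) * rng.normal(size=N))
        out.append(np.sort(x))
    return out

def fit_theta(snaps):
    """Quadratic weak-form loss that ASSUMES drift b(x) = -theta*x; test fns x^2, x^4."""
    A, y = [], []
    for s0, s1 in zip(snaps[:-1], snaps[1:]):
        for m in (2, 4):
            lhs = np.mean(s1**m) - np.mean(s0**m)
            ito = 0.5 * sigma**2 * m * (m - 1) * np.mean(s0 ** (m - 2))
            A.append(-Dt * m * np.mean(s0**m))
            y.append(lhs - Dt * ito)
    sol, *_ = np.linalg.lstsq(np.array(A)[:, None], np.array(y), rcond=None)
    return sol.item()

theta_ok = fit_theta(snapshots(lambda x: 0.0 * x))  # assumed weak-form equation holds
theta_bad = fit_theta(snapshots(lambda x: -x**3))   # unmodeled cubic term violates it
print(f"well specified: {theta_ok:.3f}, misspecified: {theta_bad:.3f}")
```

On the well-specified data the estimate should sit near the true θ; on the misspecified data the unmodeled term is absorbed into θ and the estimate drifts away, which is exactly the kind of inconsistency the proposed experiment would surface.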

Figures

Figures reproduced from arXiv: 2604.02581 by Fei Lu, Viska Wei.

Figure 1
Figure 1. Workflow of both estimation algorithms using the self-test loss function. [PITH_FULL_IMAGE:figures/full_fig_p014_1.png] view at source ↗
Figure 2
Figure 2. M-scaling on the reference model under Riemann-sum (left pair) and trapezoidal (right pair) time integrations, both with data generated at δt = 10⁻⁴ and integrated with the various Δt values. The Riemann sum has error bound O(Δt + M^(-1/2)), so the four Δt values track the O(M^(-1/2)) rate (green line) until they saturate at an O(Δt) floor (left pair), while the trapezoidal rule has error bound O((Δt)² + M^(-… view at source ↗
Figure 3
Figure 3. Convergence for the discrete-time model (i.e., [PITH_FULL_IMAGE:figures/full_fig_p021_3.png] view at source ↗
Figure 4
Figure 4. Condition numbers scaling with N for the normal matrix and its diagonal blocks (κ*, κ_VV, κ_ΦΦ). Slopes (italic numbers at N = 100) are log-log regression rates from [PITH_FULL_IMAGE:figures/full_fig_p021_4.png] view at source ↗
Figure 5
Figure 5. Non-radial potential recovery (d = 2). Percentages are relative L²(ρ) errors. view at source ↗
Figure 6
Figure 6. Typical estimators of Φ′(r) for the four boundary test models (d = 2, M = 2,000, σ = 1, δt = 10⁻⁴, Δt = 10⁻²). Gray shading indicates the approximate exploration measure ρ_Φ. Smoothness: smoothed indicator with sharp transitions at r = 0.5 and r = 1. Conditioning: slow-decaying 1/(r+1) interaction that challenges conditioning. Singularity: Lennard-Jones r⁻¹² − r⁻⁶ singularity; gradient diverges as r → 0. Smooth … view at source ↗
Figure 7
Figure 7. Typical estimators of V′ and Φ′ for the radial potentials when d = 2. Top left: Smoothness test. Top right: Conditioning test. Bottom left: Singularity test (LJ). Bottom right: Smooth control (Morse). Gray shading indicates the empirical density of observed distances. [PITH_FULL_IMAGE:figures/full_fig_p029_7.png] view at source ↗
read the original abstract

Learning the potentials of interacting particle systems is a fundamental task across various scientific disciplines. A major challenge is that unlabeled data collected at discrete time points lack trajectory information due to limitations in data collection methods or privacy constraints. We address this challenge by introducing a trajectory-free self-test loss function that leverages the weak-form stochastic evolution equation of the empirical distribution. The loss function is quadratic in potentials, supporting parametric and nonparametric regression algorithms for robust estimation that scale to large, high-dimensional systems with big data. Systematic numerical tests show that our method outperforms baseline methods that regress on trajectories recovered via label matching, tolerating large observation time steps. We establish the convergence of parametric estimators as the sample size increases, providing a theoretical foundation for the proposed approach.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces a trajectory-free self-test loss function derived from the weak-form stochastic evolution equation of the empirical distribution for learning interaction potentials in particle systems from unlabeled discrete-time snapshots. The loss is quadratic in the potentials, enabling scalable parametric and nonparametric regression. Numerical tests claim outperformance over label-matching baselines while tolerating large observation time steps, and a convergence result is stated for parametric estimators as sample size increases.

Significance. If the convergence result holds under the discrete large-Δt regime used in the experiments, the approach would offer a practical advance for unlabeled data settings common in scientific applications, with good scaling properties for high-dimensional systems.

major comments (2)
  1. [convergence theorem / §4] The convergence statement for parametric estimators (abstract and §4/Theorem on consistency): the analysis must be checked against the discrete-time setting with fixed large observation intervals Δt. If the proof relies on the infinitesimal generator or continuous-time limit, it does not cover the large-Δt regime advertised in the numerical tests; a concrete statement of the observation-time assumptions and the limit taken (N→∞ with Δt fixed) is required.
  2. [loss derivation / §3] Weak-form loss construction (abstract and §3): the derivation assumes the empirical measure satisfies the stated weak-form stochastic evolution equation exactly. Clarify the error introduced when this holds only approximately for finite-particle discrete observations, and whether the quadratic loss remains consistent without additional bias terms.
minor comments (1)
  1. [numerical experiments] Numerical results lack reported error bars or standard deviations across runs; add these to support the outperformance claims.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. We address each major point below and will revise the paper to improve clarity on the theoretical assumptions and approximation errors.

read point-by-point responses
  1. Referee: [convergence theorem / §4] The convergence statement for parametric estimators (abstract and §4/Theorem on consistency): the analysis must be checked against the discrete-time setting with fixed large observation intervals Δt. If the proof relies on the infinitesimal generator or continuous-time limit, it does not cover the large-Δt regime advertised in the numerical tests; a concrete statement of the observation-time assumptions and the limit taken (N→∞ with Δt fixed) is required.

    Authors: The convergence theorem in §4 is formulated directly for the discrete-time setting: we take N → ∞ with the observation interval Δt held fixed and positive (including large values). The proof works with the weak-form evolution equation evaluated at the discrete observation times and does not invoke the infinitesimal generator or any continuous-time limit. We will revise the statement of the theorem and the surrounding text in §4 (and the abstract) to explicitly record these assumptions, including that the result holds for any fixed Δt > 0 and that the numerical experiments operate inside this regime. revision: yes

  2. Referee: [loss derivation / §3] Weak-form loss construction (abstract and §3): the derivation assumes the empirical measure satisfies the stated weak-form stochastic evolution equation exactly. Clarify the error introduced when this holds only approximately for finite-particle discrete observations, and whether the quadratic loss remains consistent without additional bias terms.

    Authors: The weak-form equation holds exactly for the continuous-time empirical measure; for finite N and discrete observations the equation is satisfied only up to an O(1/√N) fluctuation term plus a discretization error controlled by the smoothness of the test functions. Because the loss is quadratic and the estimator is defined as its minimizer, these errors vanish in the N → ∞ limit for fixed Δt. Consequently the quadratic loss remains consistent (no persistent bias term appears in the large-sample limit). We will add a short paragraph in §3 that quantifies the approximation error and states the consistency result under the same assumptions used in the convergence theorem. revision: yes
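The O(1/√N) fluctuation invoked in the rebuttal can be checked in a toy setting with exact Ornstein-Uhlenbeck transitions (all parameter values invented): evaluating the weak-form residual at the true parameter across independent replicas, its standard deviation should shrink like N^(-1/2), i.e., fall by a factor of about 4 when the particle count grows by 16.

```python
import numpy as np

# Toy check of the O(1/sqrt(N)) fluctuation claim (exact OU transitions, invented values).
rng = np.random.default_rng(2)
theta, sigma, Dt, R = 1.0, 0.5, 0.01, 400
a = np.exp(-theta * Dt)                        # exact OU one-step decay
s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # exact transition noise std
var0 = sigma**2 / (2 * theta)                  # stationary variance

def residual_std(N):
    """Std over R replicas of the weak-form residual at the TRUE theta, phi(x) = x^2."""
    r = np.empty(R)
    for k in range(R):
        x0 = rng.normal(0.0, np.sqrt(var0), N)
        x1 = a * x0 + s * rng.normal(size=N)
        r[k] = (np.mean(x1**2) - np.mean(x0**2)
                - Dt * (-2.0 * theta * np.mean(x0**2) + sigma**2))
    return r.std()

ratio = residual_std(500) / residual_std(500 * 16)
print(f"std ratio for 16x more particles: {ratio:.2f}")  # ~4 if fluctuations are O(N^-1/2)
```

A ratio near 4 is consistent with the rebuttal's claim that the finite-N error in the weak-form equation is a vanishing fluctuation rather than a persistent bias.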

Circularity Check

0 steps flagged

No circularity: loss derived from external weak-form equation; convergence claim independent of fitted inputs

full rationale

The paper constructs its trajectory-free self-test loss directly from the weak-form stochastic evolution equation of the empirical distribution, an external mathematical relation not defined in terms of the paper's own fitted potentials or predictions. Parametric and nonparametric regression follow from the quadratic structure of this loss. The claimed convergence of parametric estimators as sample size increases is presented as a separate theoretical result without reduction to a self-citation chain or re-labeling of fitted quantities as predictions. No equations or sections in the provided text exhibit self-definitional loops, fitted-input predictions, or ansatz smuggling. This matches the reader's assessment of no evident circularity and keeps the derivation self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the weak-form stochastic evolution equation holding for the empirical measure; no free parameters or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption The empirical distribution of the particle system satisfies the weak-form stochastic evolution equation derived from the underlying SDE.
    This equation is the foundation for constructing the self-test loss without trajectories.

pith-pipeline@v0.9.0 · 5416 in / 1269 out tokens · 41336 ms · 2026-05-13T19:57:19.086603+00:00 · methodology

discussion (0)

