pith. machine review for the scientific record.

arxiv: 2604.08130 · v1 · submitted 2026-04-09 · 📡 eess.SY · cs.SY

Recognition: 3 Lean theorem links

Cognitive Flexibility as a Latent Structural Operator for Bayesian State Estimation

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:52 UTC · model grok-4.3

classification 📡 eess.SY cs.SY
keywords cognitive flexibility · bayesian filtering · latent structure selection · structural mismatch · innovation score · state estimation · belief recursion · adaptive filtering

The pith

Cognitive Flexibility acts as an online operator to select latent structures in Bayesian state estimation while preserving the filtering recursion.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Cognitive Flexibility as a representation-level operator for selecting latent structures in deep stochastic state-space models. It does this online using an innovation-based predictive score while keeping the Bayesian filtering recursion intact. Structural mismatch is defined as irreducible predictive inconsistency when the structure is fixed. The belief-structure recursion is proven well-posed with a structural descent property and finite switching, falling back to standard filtering when the model matches reality. Experiments confirm better accuracy under mismatch without harming performance in correct cases.
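The recursion described above lends itself to a compact sketch: maintain one filter per candidate structure, score each by the predictive density of its innovations, and select the structure with the lowest cumulative score. The toy implementation below is ours, not the paper's; the scalar linear-Gaussian structures, the names, and the noise levels are illustrative assumptions only.

```python
import numpy as np

# Hypothetical candidate latent structures: x_{t+1} = a * x_t + w.
STRUCTURES = {"lin": 0.9, "fast": 0.5}
Q, R, H = 0.1, 0.2, 1.0   # process noise, observation noise, observation map

def kalman_step(belief, a, y):
    """One predict/update step; returns the new belief plus innovation stats."""
    m, P = belief
    m_pred, P_pred = a * m, a * P * a + Q
    S = H * P_pred * H + R            # innovation covariance
    nu = y - H * m_pred               # innovation
    K = P_pred * H / S                # Kalman gain
    return (m_pred + K * nu, (1.0 - K * H) * P_pred), nu, S

def phi(nu, S):
    """Innovation-based predictive score: negative log predictive density."""
    return 0.5 * (np.log(2.0 * np.pi * S) + nu * nu / S)

rng = np.random.default_rng(0)
x = 0.0
beliefs = {k: (0.0, 1.0) for k in STRUCTURES}
cum = {k: 0.0 for k in STRUCTURES}
for _ in range(1000):
    x = 0.5 * x + rng.normal(0.0, np.sqrt(Q))   # truth matches "fast"
    y = H * x + rng.normal(0.0, np.sqrt(R))
    for k, a in STRUCTURES.items():
        beliefs[k], nu, S = kalman_step(beliefs[k], a, y)
        cum[k] += phi(nu, S)
selected = min(cum, key=cum.get)                # lowest cumulative score wins
print("selected structure:", selected)
```

Each per-structure filter runs the unmodified Bayesian recursion; only the selection layer sits on top, which is the sense in which the recursion is "preserved."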

Core claim

Cognitive Flexibility is introduced as a representation-level operator that selects latent structures online via an innovation-based predictive score, while preserving the Bayesian filtering recursion. Structural mismatch is formalized as irreducible predictive inconsistency under fixed structure. The resulting belief-structure recursion is shown to be well posed, to exhibit a structural descent property, and to admit finite switching, with reduction to standard Bayesian filtering under correct specification.

What carries the argument

Cognitive Flexibility (CF) as a representation-level operator that selects latent structures online via an innovation-based predictive score while preserving the Bayesian filtering recursion.

Load-bearing premise

The innovation-based predictive score can detect and resolve structural mismatches without creating new inconsistencies or violating the Bayesian recursion.

What would settle it

A demonstration that the operator produces infinitely many structure switches, or that the updated beliefs fail to satisfy the standard Bayesian update equations under mismatch, would falsify the central guarantees.
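The finite-switching half of that test can be probed numerically: track which structure a cumulative-score selector prefers at each step and count the changes; a switch count that kept growing with the horizon would refute the claim. A toy probe, with per-step scores drawn from assumed Gaussians rather than from any real filter:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 5000
# Assumed per-step innovation scores for two structures; under mismatch,
# structure "b" has the lower mean score and should eventually be kept.
score_a = rng.normal(1.0, 0.3, T)   # misspecified structure
score_b = rng.normal(0.8, 0.3, T)   # better-matched structure

cum_a = cum_b = 0.0
current, switches = "a", 0
for t in range(T):
    cum_a += score_a[t]
    cum_b += score_b[t]
    best = "a" if cum_a <= cum_b else "b"
    if best != current:
        current, switches = best, switches + 1

print(current, switches)
```

Because the cumulative-score difference is a random walk with drift, it crosses zero only finitely often almost surely, so the switch count stabilizes; an operator whose count did not stabilize would fail the criterion above.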

Figures

Figures reproduced from arXiv: 2604.08130 by Sudchai Boonto, Thanana Nuchkrua, Xiaoqi Liu.

Figure 1. The CF pipeline as a latent structural operator.
Figure 2. Experiment 4.1. Top: true state z_t and estimates from Fixed LIN, Fixed NL, IMM, and CF, initialised at s_0 = s_lin. Bottom: structure sequence s_t (LIN = 0, NL = 1) and innovation scores Φ_LIN, Φ_NL; CF commits to s_nl at t ≈ 5 and produces no further switches.
Figure 3. Experiment 4.2. Top: true state z_t and estimates from fixed QUAD, fixed SAT, and CF; the dashed line marks the change time τ. Bottom: selected structure s_t (QUAD = 0, SAT = 1) and scores Φ_QUAD, Φ_SAT.
Figure 4. Experiment 4.3 (negative control, no observation shift). Top: true state z_t and estimates from Fixed-QUAD, Fixed-SAT, and CF (proposed); CF overlaps with Fixed-QUAD throughout. Bottom: structure sequence s_t (QUAD = 0, SAT = 1) and innovation scores Φ_QUAD, Φ_SAT; the selected structure remains at QUAD for all t and Φ_QUAD stays below Φ_SAT throughout.
Figure 5. Experiment 4.4 (Z = ℝ², T = 200, N = 1000 particles, s_0 = s_lin). Panels 1–2: latent state estimates z_{t,1} and z_{t,2}; CF recovers the accuracy of Fixed NL in both dimensions after a single structural switch, while Fixed LIN remains persistently biased. Panel 3: structure sequence s_t; CF commits to s_nl at t ≈ 2 and produces no further switches. Panel 4: innovation scores Φ_t(s_nl) and Φ_t(s_lin).
Original abstract

Deep stochastic state-space models enable Bayesian filtering in nonlinear, partially observed systems but typically assume a fixed latent structure. When this assumption is violated, parameter adaptation alone may result in persistent belief inconsistency. We introduce Cognitive Flexibility (CF) as a representation-level operator that selects latent structures online via an innovation-based predictive score, while preserving the Bayesian filtering recursion. Structural mismatch is formalized as irreducible predictive inconsistency under fixed structure. The resulting belief–structure recursion is shown to be well posed, to exhibit a structural descent property, and to admit finite switching, with reduction to standard Bayesian filtering under correct specification. Experiments on latent-dynamics mismatch, observation-structure shifts, and well-specified regimes confirm that CF improves predictive accuracy under a mismatch while remaining non-intrusive when the model is correctly specified.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces Cognitive Flexibility (CF) as a representation-level operator that augments Bayesian filtering in deep stochastic state-space models by selecting latent structures online via an innovation-based predictive score. Structural mismatch is defined as irreducible predictive inconsistency under a fixed structure. The authors claim to establish that the resulting belief-structure recursion is well-posed, possesses a structural descent property, admits finite switching, and reduces exactly to standard Bayesian filtering under correct model specification. Experiments on latent-dynamics mismatch, observation-structure shifts, and well-specified regimes are reported to show improved predictive accuracy under mismatch while remaining non-intrusive otherwise.

Significance. If the claimed well-posedness, descent, and finite-switching properties are rigorously derived without hidden parameters or post-hoc fitting in the predictive score, the framework would provide a useful extension of Bayesian filtering to structurally uncertain models while preserving core recursion properties. The explicit reduction to the standard case and the non-intrusive behavior under correct specification are strengths that could aid adoption in adaptive state estimation. However, the overall significance hinges on whether the innovation-based score is independently defined and whether the descent property holds independently of data-dependent tuning.

major comments (2)
  1. [Abstract] The claims that the belief-structure recursion is well-posed, exhibits structural descent, and admits finite switching are asserted without any derivation, proof sketch, or reference to a specific theorem or section; this prevents verification that these properties follow from the CF operator definition rather than from additional assumptions.
  2. [Abstract] The innovation-based predictive score is presented as the mechanism for detecting and resolving structural mismatch, yet its exact functional form, any free parameters, and the procedure for its computation are not supplied; if the score involves fitting to the same data used for evaluation, the claimed descent property risks becoming circular rather than an independent guarantee.
minor comments (1)
  1. The term 'Cognitive Flexibility' is introduced as an invented entity without a clear mapping to existing adaptive-filtering constructs; a brief comparison table or paragraph distinguishing CF from prior structure-selection methods would improve clarity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed comments, which help clarify the presentation of our results. We address each major comment below and indicate planned revisions to the manuscript.

Point-by-point responses
  1. Referee: [Abstract] The claims that the belief-structure recursion is well-posed, exhibits structural descent, and admits finite switching are asserted without any derivation, proof sketch, or reference to a specific theorem or section; this prevents verification that these properties follow from the CF operator definition rather than from additional assumptions.

    Authors: We agree that the abstract, due to length constraints, does not include derivations or section references. The well-posedness of the belief-structure recursion is established in Theorem 3.1 (Section 3), the structural descent property in Theorem 3.2, and finite switching in Theorem 4.1 (Section 4), with full proofs in Appendix A. These results follow directly from the CF operator definition and the standard assumptions of Bayesian filtering, without additional hidden parameters. In the revised manuscript we will update the abstract to include explicit references, e.g., 'as shown in Theorems 3.1–4.1'. revision: yes

  2. Referee: [Abstract] The innovation-based predictive score is presented as the mechanism for detecting and resolving structural mismatch, yet its exact functional form, any free parameters, and the procedure for its computation are not supplied; if the score involves fitting to the same data used for evaluation, the claimed descent property risks becoming circular rather than an independent guarantee.

    Authors: The innovation-based predictive score is defined in Section 2.2, Equation (5), as the negative log predictive density of the innovation under the current structure, computed from the innovation covariance matrix with no free parameters or data-dependent tuning. Its computation uses only quantities available at the current time step and is independent of any evaluation data, ensuring the descent property in Theorem 3.2 is non-circular. We will revise the abstract to include a concise reference to this definition and its independence from fitting. revision: yes
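As described, the score admits a direct transcription: the negative log density of a zero-mean Gaussian with the innovation covariance, evaluated at the innovation. The function below is our rendering of that description; the name and the test values are ours, not the paper's.

```python
import numpy as np

def innovation_score(nu, S):
    """Negative log predictive density of an innovation nu ~ N(0, S),
    per the rebuttal's description of Eq. (5); no free parameters."""
    nu = np.atleast_1d(np.asarray(nu, dtype=float))
    S = np.atleast_2d(np.asarray(S, dtype=float))
    d = nu.shape[0]
    _, logdet = np.linalg.slogdet(S)            # log |S|
    maha = float(nu @ np.linalg.solve(S, nu))   # nu^T S^{-1} nu
    return 0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

# A well-calibrated structure (innovation small relative to S) scores
# lower than a mismatched one (large innovation under the same S).
S = np.array([[0.5, 0.1], [0.1, 0.4]])
small = innovation_score([0.1, -0.2], S)
large = innovation_score([2.0, -1.5], S)
print(small < large)
```

Every quantity used is available at the current filtering step, which is what makes the claimed non-circularity plausible: nothing in the score is fit to held-out evaluation data.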

Circularity Check

0 steps flagged

No significant circularity detected in derivation

Full rationale

The paper defines Cognitive Flexibility as an operator augmenting Bayesian state estimation with online structure selection via an innovation-based predictive score. It then asserts that the resulting belief-structure recursion is well-posed, exhibits structural descent, admits finite switching, and reduces to standard filtering under correct specification. These properties are presented as consequences of the recursive construction and the formalization of mismatch as irreducible inconsistency, without any visible reduction of the claimed predictions or descent property back to a fitted parameter or self-referential definition in the abstract. No load-bearing self-citations, ansatzes smuggled via prior work, or renaming of known results are evident that would collapse the central claims by construction. The derivation therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on the new operator definition, the formalization of mismatch as irreducible inconsistency, and the preservation of the Bayesian recursion; these are introduced without external benchmarks or independent evidence in the abstract.

axioms (2)
  • domain assumption Bayesian filtering recursion remains valid under the structural operator
    Stated directly in the abstract as a preserved property.
  • ad hoc to paper Structural mismatch equals irreducible predictive inconsistency under fixed structure
    Defined in the abstract as the basis for triggering the operator.
invented entities (1)
  • Cognitive Flexibility (CF) operator no independent evidence
    purpose: Online selection of latent structures via innovation-based score
    Newly introduced concept with no independent evidence or falsifiable handle provided in the abstract.

pith-pipeline@v0.9.0 · 5434 in / 1472 out tokens · 49294 ms · 2026-05-10T17:52:06.860434+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

65 extracted references · 35 canonical work pages

  1. [1]

     M. Sznaier, F. Allgower, A. C. B. de Oliveira, N. Ozay, E. Sontag, Tutorial: Data driven and learning enabled control, in: 2025 IEEE 64th Conference on Decision and Control (CDC), 2025, pp. 2858–2873. doi:10.1109/CDC57313.2025.11312267

  2. [2]

     K. Arulkumaran, M. P. Deisenroth, M. Brundage, A. A. Bharath, Deep reinforcement learning: A brief survey, IEEE Signal Processing Magazine 34 (6) (2017) 26–38. doi:10.1109/MSP.2017.2743240

  3. [3]

     B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications 9 (2018) 4950. doi:10.1038/s41467-018-07210-0

  4. [4]

     J. Miller, T. Dai, M. Sznaier, Data-driven superstabilizing control under quadratically-bounded errors-in-variables noise, IEEE Control Systems Letters 8 (2024) 1655–1660. doi:10.1109/LCSYS.2024.3410888

  5. [5]

     T. Nuchkrua, T. Leephakpreeda, Novel compliant control of a pneumatic artificial muscle driven by hydrogen pressure under a varying environment, IEEE Transactions on Industrial Electronics 69 (7) (2022) 7120–7129. doi:10.1109/TIE.2021.3102486

  6. [6]

     P. Derler, E. A. Lee, A. S. Vincentelli, Modeling cyber–physical systems, Proceedings of the IEEE 100 (1) (2012) 13–28. doi:10.1109/JPROC.2011.2160929

  7. [7]

     S. Lilge, Continuum robot state estimation using gaussian process models, The International Journal of Robotics Research (2022). doi:10.1177/02783649221128843

  8. [8]

     L. Ljung, System Identification: Theory for the User, Prentice-Hall, Upper Saddle River, NJ, 1999

  9. [9]

    S. J. Qin, T. A. Badgwell, A survey of industrial model predictive control technology, Control Engineering Practice 11 (7) (2003) 733–764

  10. [10]

    A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, New York, 1970

  11. [11]

    P. S. Maybeck, Stochastic Models, Estimation, and Control, Academic Press, New York, 1979

  12. [12]

    J. B. Rawlings, D. Q. Mayne, M. M. Diehl, Model Predictive Control: Theory, Computation, and Design, Nob Hill Publishing, 2017

  13. [13]

    L. P. Kaelbling, M. L. Littman, A. R. Cassandra, Planning and acting in partially observable stochastic domains, Artificial Intelligence 101 (1–2) (1998) 99–134

  14. [14]

     S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, MIT Press, 2005

  15. [15]

     L. Hewing, K. P. Wabersich, M. Menner, M. N. Zeilinger, Learning-based model predictive control: Toward safe learning in control, Annual Review of Control, Robotics, and Autonomous Systems 3 (2020) 269–296. doi:10.1146/annurev-control-090919-075625

  16. [16]

     F. Becker, L. Hewing, M. N. Zeilinger, Learning-based model predictive control with stochastic state-space models, IEEE Control Systems Letters 5 (2) (2021) 558–563

  17. [17]

S. Curi, F. Berkenkamp, A. Krause, Efficient model-based reinforcement learning through optimistic policy search and planning (2020). arXiv:2006.08684

  18. [18]

    C. A. Alonso, J. Sieber, M. N. Zeilinger, State space models as foundation models: A control theoretic overview, in: 2025 American Control Conference (ACC), 2025, pp. 146–153. doi:10.23919/ACC63710.2025.11107969

  19. [19]

     D. Gedon, N. Wahlström, T. B. Schön, L. Ljung, Deep state space models for nonlinear system identification, in: IFAC-PapersOnLine, Vol. 54, 2021, pp. 481–486. doi:10.1016/j.ifacol.2021.08.406

  20. [21]

    R. G. Krishnan, U. Shalit, D. Sontag, Deep Kalman filters, arXiv preprint arXiv:1511.05121 (2015)

  21. [22]

     M. Fraccaro, S. K. Sønderby, U. Paquet, O. Winther, Sequential neural models with stochastic layers, in: Advances in Neural Information Processing Systems (NeurIPS), 2017

  22. [23]

     M. Karl, M. S. Soelch, J. Bayer, P. van der Smagt, Deep variational bayes filters: Unsupervised learning of state space models from raw data, in: International Conference on Learning Representations (ICLR), 2017

  23. [24]

     D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, J. Davidson, Learning latent dynamics for planning from pixels, in: Proceedings of the International Conference on Machine Learning, 2019

  24. [25]

     C. Durkan, A. Bekasov, I. Murray, G. Papamakarios, Neural spline flows, Advances in Neural Information Processing Systems 33 (2020) 7509–7520

  25. [26]

     M. Forgione, D. Piga, DynoNet: A neural network architecture for learning dynamical systems, International Journal of Adaptive Control and Signal Processing 35 (4) (2021) 612–626. doi:10.1002/acs.3216

  26. [27]

    G. I. Beintema, R. Tóth, M. Schoukens, Nonlinear state-space identification using deep encoder networks, in: Proceedings of the 3rd Conference on Learning for Dynamics and Control (L4DC), Vol. 144 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 241–250

  27. [28]

     R. Soloperto, L. Hewing, J. Köhler, M. N. Zeilinger, Bayesian learning-based control of uncertain dynamical systems, IEEE Transactions on Automatic Control 68 (8) (2023) 4682–4697

  28. [29]

    J. Lin, G. Michailidis, Deep learning-based approaches for state space models: A selective review, arXiv:2412.11211 (2024)

  29. [30]

     Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, J. Snoek, Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift, in: Advances in Neural Information Processing Systems, Vol. 32, Curran Associates, Inc., 2019, pp. 13991–14002

  30. [31]

     C. Diehl, T. Sievernich, M. Krüger, F. Hoffmann, T. Bertram, Umbrella: Uncertainty-aware model-based offline reinforcement learning leveraging planning (2022). arXiv:2111.11097

  31. [32]

    J.-J. E. Slotine, W. Li, Applied Nonlinear Control, Prentice Hall, 1991

  32. [33]

    P. A. Ioannou, J. Sun, Robust Adaptive Control, Prentice Hall, 1996

  33. [34]

     A. Aswani, H. Gonzalez, S. S. Sastry, C. Tomlin, Provably safe and robust learning-based model predictive control, Automatica 49 (5) (2013) 1216–1226

  34. [35]

Y. Chow, M. Ghavamzadeh, L. Janson, M. Pavone, Risk-constrained reinforcement learning with percentile risk criteria, Journal of Machine Learning Research 18 (167) (2018) 1–51

  35. [36]

     F. Berkenkamp, M. Turchetta, A. Krause, A. P. Schoellig, Safe reinforcement learning: A survey, Annual Review of Control, Robotics, and Autonomous Systems 4 (2021) 1–26. doi:10.1146/annurev-control-062020-090810

  36. [37]

     B. Thananjeyan, A. Balakrishna, U. Rosolia, J. K. Lee, S. Levine, F. Borrelli, Safety augmented value estimation from demonstrations, in: Proceedings of Robotics: Science and Systems (RSS), Virtual Conference, 2021. doi:10.15607/RSS.2021.XVII.074

  37. [38]

    Y. Ju, B. Mu, L. Ljung, T. Chen, Asymptotic theory for regularized system identification part i: Empirical bayes hyperparameter estimator, IEEE Transactions on Automatic Control 68 (12) (2023) 7224–7239. doi:10.1109/TAC.2023.3259977

  38. [39]

     G. Pillonetto, L. Ljung, Full bayesian identification of linear dynamic systems using stable kernels, Proceedings of the National Academy of Sciences 120 (18) (2023) e2218197120. doi:10.1073/pnas.2218197120

  39. [40]

     G. Pillonetto, T. Chen, A. Chiuso, G. D. Nicolao, L. Ljung, Regularized System Identification: Learning Dynamic Models from Data, Springer Nature, Cham, 2022. doi:10.1007/978-3-030-77885-9

  40. [41]

     G. Besançon, Nonlinear Observers and Applications, Springer, Berlin, 2007

  41. [42]

    A. C. Oliveira, V. C. S. Campos, L. A. Mozelli, Less conservative adaptive gain-scheduling control for continuous-time systems with polytopic uncertainties (2025). arXiv:2506.12476

  42. [43]

     Y. Bar-Shalom, X. R. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, Boston, MA, 1993

  43. [44]

    K. S. Narendra, A. M. Annaswamy, Stable Adaptive Systems, Prentice-Hall, Englewood Cliffs, NJ, 1989

  44. [45]

     A. Balluchi, L. Benvenuti, M. D. Di Benedetto, A. Sangiovanni-Vincentelli, The design of dynamical observers for hybrid systems: Theory and application to an automotive control problem, Automatica 49 (4) (2013) 915–. doi:10.1016/j.automatica.2013.01.037

  46. [47]

N. J. Kong, J. J. Payne, G. Council, A. M. Johnson, The salted kalman filter: Kalman filtering on hybrid dynamical systems, Automatica 131 (2021) 109752. doi:10.1016/j.automatica.2021.109752

  47. [48]

    H. A. P. Blom, Y. Bar-Shalom, The interacting multiple model algorithm for systems with Markovian switching coefficients, IEEE Transactions on Automatic Control 33 (8) (1988) 780–783

  48. [49]

     A. Lavaei, S. Soudjani, A. Abate, M. Zamani, Automated verification and synthesis of stochastic hybrid systems: A survey, Automatica 146 (2022) 110617. doi:10.1016/j.automatica.2022.110617

  49. [50]

    N. J. Kong, J. Joe Payne, J. Zhu, A. M. Johnson, Saltation matrices: The essential tool for linearizing hybrid dynamical systems, Proceedings of the IEEE 112 (6) (2024) 585–608. doi:10.1109/JPROC.2024.3440211

  50. [51]

     G. Revach, N. Shlezinger, X. Ni, A. L. Escoriza, R. J. G. van Sloun, Y. C. Eldar, KalmanNet: Neural network aided kalman filtering for partially known dynamics, IEEE Transactions on Signal Processing 70 (2022) 1532–1547. doi:10.1109/TSP.2022.3158588

  51. [52]

     A. Chakrabarty, G. Wichern, C. R. Laughman, Meta-learning of neural state-space models using data from similar systems, in: IFAC-PapersOnLine, 2023. doi:10.1016/j.ifacol.2023.10.1843

  52. [53]

D. G. McClement, N. P. Lawrence, M. G. Forbes, P. D. Loewen, J. U. Backström, R. B. Gopaluni, Meta-reinforcement learning for adaptive control of second order systems, arXiv preprint arXiv:2209.09301 (2022)

  53. [54]

    W. A. Scott, Cognitive complexity and cognitive flexibility, Sociometry 25 (4) (1962) 405–414

  54. [55]

    A. G. E. Collins, M. J. Frank, Cognitive control over learning: Creating, clustering, and generalizing task-set structure, Psychological Review 120 (1) (2013) 190–229. doi:10.1037/a0030852

  55. [56]

    D. P. Bertsekas, Dynamic Programming and Optimal Control, 3rd Edition, Vol. 2, Athena Scientific, Belmont, MA, USA, 2005

  56. [57]

     T. Nuchkrua, S. Boonto, Robust cognitive-flexible filtering under noisy innovation scores, IEEE Control Systems Letters, submitted (2026)

  57. [58]

     T. Nuchkrua, S. Boonto, Cognitive-flexible control via latent model reorganization with predictive safety guarantees, arXiv preprint arXiv:2602.00812 (2026)

  58. [59]

S. Xu, A. Y. Zhang, A. Singer, Misspecified maximum likelihood estimation for non-uniform group orbit recovery, arXiv:2509.22945 (2025)

  59. [60]

B. D. O. Anderson, J. B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, 1979

  60. [61]

     A. Abate, M. Prandini, J. Lygeros, S. Sastry, Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems, Automatica 44 (11) (2008) 2724–2734. doi:10.1016/j.automatica.2008.03.027

  61. [62]

    M. S. Arulampalam, S. Maskell, N. Gordon, T. Clapp, A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking, IEEE Transactions on Signal Processing 50 (2) (2002) 174–188

  62. [63]

     D. Jha, A novel statistical particle filtering approach for non-linear and non-gaussian system identification, International Journal of Computer Applications (2012). doi:10.5120/9700-4147

  63. [64]

     G. Kitagawa, Monte carlo filter and smoother for non-gaussian nonlinear state space models, Journal of Computational and Graphical Statistics 5 (1) (1996) 1–25. doi:10.1080/10618600.1996.10474692

  64. [65]

    B. P. Carlin, N. G. Polson, D. S. Stoffer, A monte carlo approach to nonnormal and nonlinear state-space modeling, Journal of the American Statistical Association 87 (418) (1992) 493–500. doi:10.1080/01621459.1992.10475231

  65. [66]

N. J. Gordon, D. J. Salmond, A. F. M. Smith, Novel approach to nonlinear/non-gaussian bayesian state estimation, IEE Proceedings F 140 (2) (1993) 107–113