pith. machine review for the scientific record

arxiv: 2604.03508 · v1 · submitted 2026-04-03 · 📡 eess.SY · cs.SY

Recognition: no theorem link

Data-Driven Tensor Decomposition Identification of Homogeneous Polynomial Dynamical Systems

Can Chen, Joshua Pickard, Xin Mao


Pith reviewed 2026-05-13 18:47 UTC · model grok-4.3

classification 📡 eess.SY · cs.SY
keywords homogeneous polynomial dynamical systems · tensor decomposition · data-driven identification · alternating least squares · system identification · low-rank tensor · time-series data · networked systems

The pith

Low-rank tensor decompositions enable direct identification of homogeneous polynomial dynamical systems from time-series data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Homogeneous polynomial dynamical systems model higher-order interactions in networked systems such as ecological networks and multi-agent robotics, but their full tensor representation grows rapidly with dimension and degree. The paper shows that compact low-rank forms, including canonical polyadic, tensor train, and hierarchical Tucker decompositions, can represent these systems while drastically cutting the number of parameters. Rather than recovering the entire dynamic tensor, the method learns only the underlying factor tensors or matrices straight from input-output time series. Tailored alternating least-squares solvers recover the factors, and the approach includes explicit checks for noise robustness and conditions on data that guarantee unique recovery.
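To make the compression concrete, here is a minimal sketch, assuming a degree-2 HPDS ẋ = A·x·x whose third-order system tensor A has a rank-R canonical polyadic decomposition; the factor names (A_f, B_f, C_f), the forward-Euler rollout, and all constants are illustrative choices, not the paper's notation.

    import numpy as np

    # Rank-R CP factors of a hypothetical n x n x n system tensor:
    # A = sum_r a_r (outer) b_r (outer) c_r, so the vector field is
    # f(x) = sum_r a_r (b_r . x)(c_r . x), needing 3*n*R numbers, not n**3.
    rng = np.random.default_rng(0)
    n, R = 10, 3
    A_f = rng.standard_normal((n, R))
    B_f = rng.standard_normal((n, R))
    C_f = rng.standard_normal((n, R))

    def f_cp(x):
        # Evaluate f(x) directly from the factors, never forming the full tensor.
        return A_f @ ((B_f.T @ x) * (C_f.T @ x))

    # Forward-Euler rollout to generate a snapshot matrix of time-series data.
    x, dt, T = 0.1 * rng.standard_normal(n), 1e-2, 200
    X = np.empty((T, n))
    for t in range(T):
        X[t] = x
        x = x + dt * f_cp(x)
    # Parameter counts here: CP factors 3*n*R = 90 versus full tensor n**3 = 1000.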

Core claim

Homogeneous polynomial dynamical systems admit equivalent low-rank tensor representations, and the factor tensors in those representations can be identified directly from time-series data by alternating least-squares algorithms specialized to each decomposition format, producing accurate models with far fewer parameters than the full tensor.

What carries the argument

Low-rank tensor decompositions (canonical polyadic, tensor train, hierarchical Tucker) whose factor tensors or matrices are learned from data by decomposition-specific alternating least-squares iterations.
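As a concrete reading of the CP case, here is a minimal ALS sketch under the same degree-2 model as above: holding two factors fixed makes the residual linear in the third, so every update is an ordinary least-squares solve. The function name and design-matrix construction are our paraphrase, not the paper's tailored solver.

    import numpy as np

    def cp_als_identify(X, Y, R, n_iters=50, seed=0):
        # X: (T, n) state snapshots; Y: (T, n) estimates of x_dot (e.g., finite
        # differences). Fits x_dot ~ sum_r a_r (b_r . x)(c_r . x) by alternating
        # least squares over the three CP factor matrices.
        T, n = X.shape
        rng = np.random.default_rng(seed)
        A_f = rng.standard_normal((n, R))
        B_f = rng.standard_normal((n, R))
        C_f = rng.standard_normal((n, R))

        def bilinear_update(fixed):
            # With A_f and one factor fixed, Y[t] = (A_f * (x_t @ fixed)) @ F.T @ x_t
            # is linear in the remaining factor F; stack a kron design matrix over t.
            G = np.concatenate([np.kron(x, A_f * (x @ fixed)) for x in X])
            sol, *_ = np.linalg.lstsq(G, Y.reshape(-1), rcond=None)
            return sol.reshape(n, R)

        for _ in range(n_iters):
            Z = (X @ B_f) * (X @ C_f)                     # Z[t, r] = (b_r.x_t)(c_r.x_t)
            A_f = np.linalg.lstsq(Z, Y, rcond=None)[0].T  # A-update: plain least squares
            B_f = bilinear_update(C_f)                    # B-update, C_f held fixed
            C_f = bilinear_update(B_f)                    # C-update, B_f held fixed
        return A_f, B_f, C_f

In the noiseless, informative-data regime this should recover the vector field up to the usual CP scaling and permutation indeterminacies; the paper's TT and HT variants would replace these CP-specific subproblems with decomposition-specific ones.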

Load-bearing premise

The underlying homogeneous polynomial system must admit a sufficiently low-rank tensor decomposition and the collected time-series measurements must be informative enough to uniquely determine the factor tensors.
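One hedged way to read the informativity half of this premise in the degree-2 case: unique recovery of the (symmetric part of the) full system tensor from ẋ(t) = A(x(t) ⊗ x(t)) requires the lifted regressors along the trajectory to have full column rank. The rank test below is our paraphrase, not the paper's exact condition.

    import numpy as np

    def informative(X, tol=1e-10):
        # X: (T, n) snapshots. The symmetric part of a degree-2 system tensor is
        # pinned down by the data only if the distinct monomials x_i x_j (i <= j)
        # evaluated along the trajectory span all n(n+1)/2 dimensions.
        iu = np.triu_indices(X.shape[1])
        Phi = np.stack([np.outer(x, x)[iu] for x in X])   # (T, n(n+1)/2)
        return np.linalg.matrix_rank(Phi, tol=tol) == iu[0].size

If this test fails, the least-squares problem has a nontrivial null space, so no algorithm, ALS or otherwise, can single out the true tensor or its factors; that is why the premise pairs low rank with informative data.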

What would settle it

Apply the method to time-series data generated by a known homogeneous polynomial system whose tensor representation has high rank; the recovered low-rank model should then produce large prediction error on new trajectories even with abundant clean measurements.
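A sketch of that experiment, reusing np and cp_als_identify from the sketches above; a generic dense tensor generically has high CP rank, and all names and constants here are ours.

    # Generate data from a generic (hence typically high-CP-rank) degree-2 HPDS,
    # fit a deliberately low-rank model, and measure held-out one-step error.
    rng = np.random.default_rng(1)
    n = 10
    A_full = 0.1 * rng.standard_normal((n, n, n))

    def f_full(x):
        return np.einsum('ijk,j,k->i', A_full, x, x)

    x, dt, T = 0.1 * rng.standard_normal(n), 1e-2, 400
    X = np.empty((T, n))
    for t in range(T):
        X[t] = x
        x = x + dt * f_full(x)
    Y = (X[1:] - X[:-1]) / dt                  # finite-difference derivative estimates
    A_f, B_f, C_f = cp_als_identify(X[:300], Y[:300], R=3)
    pred = np.stack([A_f @ ((B_f.T @ xt) * (C_f.T @ xt)) for xt in X[300:T-1]])
    rel_err = np.linalg.norm(pred - Y[300:]) / np.linalg.norm(Y[300:])
    # Under the paper's premise, rel_err should stay large however much clean
    # data is added, because a rank-3 model cannot represent a high-rank A_full.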

Figures

Figures reproduced from arXiv: 2604.03508 by Can Chen, Joshua Pickard, Xin Mao.

Figure 1. The resulting sequential structure enables ef… [image: figures/full_fig_p003_1.png]
Figure 1. Illustration of the TT decomposition of a… [image: figures/full_fig_p004_1.png]
Figure 3. An example of the CPD of a third-order tensor. [image: figures/full_fig_p005_3.png]
Figure 5. Relative identification errors of the lifting-based meth… [image: figures/full_fig_p013_5.png]
Figure 6. Computation time comparison between the lift… [image: figures/full_fig_p014_6.png]
Original abstract

Homogeneous polynomial dynamical systems (HPDSs), which can be equivalently represented by tensors, are essential for modeling higher-order networked systems, including ecological networks, chemical reactions, and multi-agent robotic systems. However, identifying such systems from data is challenging due to the rapid growth in the number of parameters with increasing system dimension and polynomial degree. In this article, we adopt compact and scalable representations of HPDSs leveraging low-rank tensor decompositions, including tensor train, hierarchical Tucker, and canonical polyadic decompositions. These representations exploit the intrinsic multilinear structure of HPDSs and substantially reduce the dimensionality of the parameter space. Rather than identifying the full dynamic tensor, we develop a data-driven framework that directly learns the underlying factor tensors or matrices in the associated decompositions from time-series data. The resulting identification problem is solved using alternating least-squares algorithms tailored to each tensor decomposition, achieving both accuracy and computational efficiency. We further analyze the robustness of the proposed framework in the presence of measurement noise and characterize data informativity. Finally, we demonstrate the effectiveness of our framework with numerical examples.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper develops a data-driven identification method for homogeneous polynomial dynamical systems (HPDS) by representing the system tensor via low-rank decompositions (canonical polyadic, tensor train, hierarchical Tucker). Rather than recovering the full tensor, it directly estimates the factor tensors/matrices from time-series data using tailored alternating least-squares (ALS) solvers for each decomposition, claims accuracy and efficiency, provides a noise-robustness analysis, characterizes data informativity conditions, and validates the approach on numerical examples.

Significance. If the central claims hold, the framework offers a scalable route to identifying high-dimensional HPDS by exploiting intrinsic low-rank multilinear structure, substantially lowering the number of parameters relative to unstructured tensor identification. This is relevant for networked systems in ecology, chemistry, and robotics. The direct factor-learning approach and tailored ALS are computationally attractive, and the explicit data-informativity characterization is a positive step toward rigorous identification.

major comments (3)
  1. [ALS algorithm sections (around the tailored solvers for CP/TT/HT)] The robustness analysis (mentioned in the abstract and developed in the identification sections) supplies no quantitative error bounds, convergence rates, or initialization-independent recovery guarantees for the ALS iterates under additive measurement noise. Because the overall objective remains non-convex, the absence of such bounds leaves open the possibility that the algorithm returns spurious factors even when the underlying HPDS admits an exact low-rank representation and the data are informative.
  2. [Data informativity characterization] Data-informativity conditions are stated but are not linked to the basin of attraction of the ALS iterations or to uniqueness of the recovered factors. Without this link, the claim that the method “directly learns the underlying factor tensors” from time-series data is not fully supported when noise is present.
  3. [Numerical examples section] Numerical examples demonstrate effectiveness but do not report quantitative comparisons against full-tensor least-squares identification or against other tensor-based baselines, making it difficult to assess the claimed computational-efficiency gains in a controlled way.
minor comments (2)
  1. [Abstract] The abstract asserts “accuracy and computational efficiency” without defining the metrics or baselines used to support these adjectives.
  2. [Preliminaries / tensor decomposition definitions] Notation for the factor matrices/tensors in the three decompositions (CP, TT, HT) should be introduced with explicit index conventions and dimension statements to avoid ambiguity when the algorithms are described.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major comment point by point below, providing the strongest honest defense of the manuscript while acknowledging its limitations.

Point-by-point responses
  1. Referee: [ALS algorithm sections (around the tailored solvers for CP/TT/HT)] The robustness analysis (mentioned in the abstract and developed in the identification sections) supplies no quantitative error bounds, convergence rates, or initialization-independent recovery guarantees for the ALS iterates under additive measurement noise. Because the overall objective remains non-convex, the absence of such bounds leaves open the possibility that the algorithm returns spurious factors even when the underlying HPDS admits an exact low-rank representation and the data are informative.

    Authors: We acknowledge that the robustness analysis in the manuscript provides a perturbation-based bound on the estimation error under bounded noise but does not derive convergence rates or initialization-independent global recovery guarantees for the non-convex ALS iterations. This is a genuine limitation of the current work, as establishing such guarantees for general tensor decompositions remains an open challenge even in the broader tensor literature. The analysis instead shows that, when the ALS iterates reach a stationary point close to the true factors, the error scales linearly with the noise level. We have added a clarifying remark in the revised manuscript explicitly stating the local nature of the guarantees and the reliance on standard initialization heuristics that performed reliably in our experiments. revision: partial

  2. Referee: [Data informativity characterization] Data-informativity conditions are stated but are not linked to the basin of attraction of the ALS iterations or to uniqueness of the recovered factors. Without this link, the claim that the method “directly learns the underlying factor tensors” from time-series data is not fully supported when noise is present.

    Authors: The data informativity conditions derived in the paper guarantee that the regressor matrix has full column rank, ensuring unique recovery of the full system tensor in the noiseless case; factor uniqueness then follows from the standard uniqueness results for CP, TT, and HT decompositions. Under noise, the manuscript does not rigorously connect these conditions to the basin of attraction of ALS, which would indeed require additional analysis combining persistent excitation with non-convex optimization theory. We have revised the relevant section to clarify that the “direct learning” claim holds under the assumption that ALS converges to the correct stationary point (supported by the numerical evidence), and we note the gap as a direction for future work. revision: partial

  3. Referee: [Numerical examples section] Numerical examples demonstrate effectiveness but do not report quantitative comparisons against full-tensor least-squares identification or against other tensor-based baselines, making it difficult to assess the claimed computational-efficiency gains in a controlled way.

    Authors: We agree that quantitative comparisons are necessary to substantiate the efficiency claims. In the revised manuscript we have added new experiments in the numerical examples section that directly compare the proposed ALS solvers against full-tensor least-squares identification as well as against other low-rank tensor baselines, reporting both identification error and wall-clock runtime across increasing state dimensions, polynomial degrees, and noise levels. revision: yes

Circularity Check

0 steps flagged

No circularity: standard ALS applied to tensor factors from data

Full rationale

The paper's core method adopts known low-rank tensor decompositions (CP, TT, HT) for HPDS and solves for their factors via tailored alternating least-squares on time-series data. No equation reduces a claimed prediction to a fitted quantity defined by the same data, no self-citation supplies a uniqueness theorem that forces the result, and no ansatz is smuggled in. Data informativity is characterized separately from the recovery algorithm. The derivation remains self-contained against external tensor-algebra benchmarks and does not collapse to its inputs by construction.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Abstract-only review limits visibility into explicit parameters or axioms; the central claim rests on the domain assumption that HPDS possess exploitable low-rank multilinear structure.

free parameters (1)
  • tensor ranks
    Chosen to balance accuracy and efficiency; the selection procedure is not detailed in the abstract (a hedged selection sketch follows this ledger).
axioms (1)
  • domain assumption: Homogeneous polynomial dynamical systems admit low-rank tensor decompositions that capture their multilinear structure
    Invoked to justify the reduction from full tensor to factor tensors.
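
Since the rank-selection procedure is left open, one hedged default is a held-out sweep over candidate ranks, reusing cp_als_identify from the sketch above; the procedure and names are ours, and information criteria over rank would be a reasonable alternative.

    def pick_rank(X, Y, ranks=(1, 2, 3, 4, 5), holdout=0.2):
        # Fit each candidate rank on a training prefix and score one-step
        # prediction error on the held-out tail; return the best rank.
        s = int(X.shape[0] * (1 - holdout))
        errs = {}
        for R in ranks:
            A_f, B_f, C_f = cp_als_identify(X[:s], Y[:s], R)
            pred = np.stack([A_f @ ((B_f.T @ x) * (C_f.T @ x)) for x in X[s:]])
            errs[R] = np.linalg.norm(pred - Y[s:]) / np.linalg.norm(Y[s:])
        return min(errs, key=errs.get), errs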

pith-pipeline@v0.9.0 · 5487 in / 1221 out tokens · 35736 ms · 2026-05-13T18:47:24.526028+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

60 extracted references · 60 canonical work pages

  1. [1]

    High-order species interactions shape ecosystem diversity,

    E. Bairey, E. D. Kelsic, and R. Kishony, “High-order species interactions shape ecosystem diversity,” Nature Communications, vol. 7, no. 1, p. 12285, 2016

  2. [2]

    Modeling and analysis of mass-action kinetics,

    V. Chellaboina, S. P. Bhat, W. M. Haddad, and D. S. Bernstein, “Modeling and analysis of mass-action kinetics,” IEEE Control Systems Magazine, vol. 29, no. 4, pp. 60–78, 2009

  3. [3]

    Higher-order interactions stabilize dynamics in competitive network models,

    J. Grilli, G. Barabás, M. J. Michalska-Smith, and S. Allesina, “Higher-order interactions stabilize dynamics in competitive network models,” Nature, vol. 548, no. 7666, pp. 210–213, 2017

  4. [4]

    Temporal properties of higher-order interactions in social networks,

    G. Cencetti, F. Battiston, B. Lepri, and M. Karsai, “Temporal properties of higher-order interactions in social networks,” Scientific Reports, vol. 11, no. 1, p. 7028, 2021

  5. [5]

    Polynomial dynamical systems, reaction networks, and toric differential inclusions,

    G. Craciun, “Polynomial dynamical systems, reaction networks, and toric differential inclusions,” SIAM Journal on Applied Algebra and Geometry, vol. 3, no. 1, pp. 87–106, 2019

  6. [6]

    Model reduction of homogeneous polynomial dynamical systems via tensor decomposition,

    X. Mao and C. Chen, “Model reduction of homogeneous polynomial dynamical systems via tensor decomposition,” IEEE Transactions on Automatic Control, 2026

  7. [7]

    Homogeneous polynomial forms for robustness analysis of uncertain systems,

    G. Chesi, A. Garulli, A. Tesi, and A. Vicino, Homogeneous polynomial forms for robustness analysis of uncertain systems. Springer Science & Business Media, 2009, vol. 390

  8. [8]

    Stability properties of autonomous homogeneous polynomial differential systems,

    N. Samardzija, “Stability properties of autonomous homogeneous polynomial differential systems,” Journal of Differential Equations, vol. 48, no. 1, pp. 60–70, 1983

  9. [9]

    Model order reduction in fluid dynamics: challenges and perspectives,

    T. Lassila, A. Manzoni, A. Quarteroni, and G. Rozza, “Model order reduction in fluid dynamics: challenges and perspectives,” Reduced Order Methods for Modeling and Computational Reduction, pp. 235–273, 2014

  10. [10]

    The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows,

    K. Carlberg, C. Farhat, J. Cortial, and D. Amsallem, “The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows,” Journal of Computational Physics, vol. 242, pp. 623–647, 2013

  11. [11]

    Mathematical models in epidemiology,

    F. Brauer, C. Castillo-Chavez, Z. Feng et al., Mathematical models in epidemiology. Springer, 2019, vol. 32

  12. [12]

    Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields,

    J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer Science & Business Media, 2013, vol. 42

  13. [13]

    Reconstructing higher-order interactions in coupled dynamical systems,

    F. Malizia, A. Corso, L. V. Gambuzza, G. Russo, V. Latora, and M. Frasca, “Reconstructing higher-order interactions in coupled dynamical systems,” Nature Communications, vol. 15, no. 1, p. 5184, 2024

  14. [14]

    Applications of polynomial systems,

    D. A. Cox, Applications of polynomial systems. American Mathematical Soc., 2020, vol. 134

  15. [15]

    Polynomial theory of complex systems,

    A. G. Ivakhnenko, “Polynomial theory of complex systems,” IEEE Transactions on Systems, Man, and Cybernetics, no. 4, pp. 364–378, 2007

  16. [16]

    Uncertainty quantification and polynomial chaos techniques in computational fluid dynamics,

    H. N. Najm, “Uncertainty quantification and polynomial chaos techniques in computational fluid dynamics,” Annual Review of Fluid Mechanics, vol. 41, no. 1, pp. 35–52, 2009

  17. [17]

    Subspace Identification for Linear Systems: Theory—Implementation—Applications,

    P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory—Implementation—Applications. Springer Science & Business Media, 2012

  18. [18]

    On consistency of subspace methods for system identification,

    M. Jansson and B. Wahlberg, “On consistency of subspace methods for system identification,” Automatica, vol. 34, no. 12, pp. 1507–1519, 1998

  19. [19]

    Identification of ARX-models subject to missing data,

    A. J. Isaksson, “Identification of ARX-models subject to missing data,” IEEE Transactions on Automatic Control, vol. 38, no. 5, pp. 813–819, 2002

  20. [20]

    Subspace identification and ARX modeling,

    M. Jansson, “Subspace identification and ARX modeling,” IFAC Proceedings Volumes, vol. 36, no. 16, pp. 1585–1590, 2003

  21. [21]

    System identification,

    L. Ljung, System identification. Springer, 1998

  22. [22]

    Linear least squares regression,

    G. S. Watson, “Linear least squares regression,” The Annals of Mathematical Statistics, pp. 1679–1699, 1967

  23. [23]

    Recursive identification for nonlinear ARX systems based on stochastic approximation algorithm,

    W.-X. Zhao, H.-F. Chen, and W. X. Zheng, “Recursive identification for nonlinear ARX systems based on stochastic approximation algorithm,” IEEE Transactions on Automatic Control, vol. 55, no. 6, pp. 1287–1299, 2010

  24. [24]

    Stabilizing predictive control of nonlinear ARX models,

    G. De Nicolao, L. Magni, and R. Scattolini, “Stabilizing predictive control of nonlinear ARX models,” Automatica, vol. 33, no. 9, pp. 1691–1697, 1997

  25. [25]

    A tensor network Kalman filter with an application in recursive MIMO Volterra system identification,

    K. Batselier, Z. Chen, and N. Wong, “A tensor network Kalman filter with an application in recursive MIMO Volterra system identification,” Automatica, vol. 84, pp. 17–25, 2017

  26. [26]

    Volterra series and geometric control theory,

    R. W. Brockett, “Volterra series and geometric control theory,” Automatica, vol. 12, no. 2, pp. 167–176, 1976

  27. [27]

    Sparse high-dimensional regression,

    D. Bertsimas and B. Van Parys, “Sparse high-dimensional regression,” The Annals of Statistics, vol. 48, no. 1, pp. 300–323, 2020

  28. [28]

    Sparse high-dimensional regression,

    ——, “Sparse high-dimensional regression,” The Annals of Statistics, vol. 48, no. 1, pp. 300–323, 2020

  29. [29]

    Modelling using polynomial regression,

    E. Ostertagová, “Modelling using polynomial regression,” Procedia Engineering, vol. 48, pp. 500–506, 2012

  30. [30]

    Constructing least-squares polynomial approximations,

    L. Guo, A. Narayan, and T. Zhou, “Constructing least-squares polynomial approximations,” SIAM Review, vol. 62, no. 2, pp. 483–508, 2020

  31. [31]

    Discovering governing equations from data by sparse identification of nonlinear dynamical systems,

    S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proceedings of the National Academy of Sciences, vol. 113, no. 15, pp. 3932–3937, 2016

  32. [32]

    Hypergraph reconstruction from dynamics,

    R. Delabays, G. De Pasquale, F. Dörfler, and Y. Zhang, “Hypergraph reconstruction from dynamics,” Nature Communications, vol. 16, no. 1, p. 2691, 2025

  33. [33]

    Identification of hypergraph dynamics via physics-informed neural networks,

    X. Mao, A. Dong, and C. Chen, “Identification of hypergraph dynamics via physics-informed neural networks,” IEEE Control Systems Letters, vol. 9, pp. 2525–2530, 2025

  34. [34]

    Dynamic mode decomposition with control,

    J. L. Proctor, S. L. Brunton, and J. N. Kutz, “Dynamic mode decomposition with control,” SIAM Journal on Applied Dynamical Systems, vol. 15, no. 1, pp. 142–161, 2016

  35. [35]

    Spatially adaptive sparse grids for high-dimensional data-driven problems,

    D. Pflüger, B. Peherstorfer, and H.-J. Bungartz, “Spatially adaptive sparse grids for high-dimensional data-driven problems,” Journal of Complexity, vol. 26, no. 5, pp. 508–522, 2010

  36. [36]

    Controllability and observability of temporal hypergraphs,

    A. Dong, X. Mao, R. Vasudevan, and C. Chen, “Controllability and observability of temporal hypergraphs,” IEEE Control Systems Letters, 2024

  37. [37]

    Low-rank tensor decompositions for nonlinear system identification: A tutorial with examples,

    K. Batselier, “Low-rank tensor decompositions for nonlinear system identification: A tutorial with examples,” IEEE Control Systems Magazine, vol. 42, no. 1, pp. 54–74, 2022

  38. [38]

    Explicit solutions and stability properties of homogeneous polynomial dynamical systems,

    C. Chen, “Explicit solutions and stability properties of homogeneous polynomial dynamical systems,” IEEE Transactions on Automatic Control, vol. 68, no. 8, pp. 4962–4969, 2022

  39. [39]

    Controllability of hypergraphs,

    C. Chen, A. Surana, A. M. Bloch, and I. Rajapakse, “Controllability of hypergraphs,” IEEE Transactions on Network Science and Engineering, vol. 8, no. 2, pp. 1646–1657, 2021

  40. [40]

    High-dimensional stochastic optimal control using continuous tensor decompositions,

    A. Gorodetsky, S. Karaman, and Y. Marzouk, “High-dimensional stochastic optimal control using continuous tensor decompositions,” The International Journal of Robotics Research, vol. 37, no. 2-3, pp. 340–377, 2018

  41. [41]

    Observability of hypergraphs,

    J. Pickard, A. Surana, A. Bloch, and I. Rajapakse, “Observability of hypergraphs,” 2023 62nd IEEE Conference on Decision and Control (CDC), pp. 2445–2451, 2023

  42. [42]

    Geometric aspects of observability of hypergraphs,

    J. Pickard, C. Stansbury, A. Surana, I. Rajapakse, and A. Bloch, “Geometric aspects of observability of hypergraphs,” IFAC-PapersOnLine, vol. 58, no. 6, pp. 321–326, 2024

  43. [43]

    Kronecker product of tensors and hypergraphs: structure and dynamics,

    J. Pickard, C. Chen, C. Stansbury, A. Surana, A. M. Bloch, and I. Rajapakse, “Kronecker product of tensors and hypergraphs: structure and dynamics,” SIAM Journal on Matrix Analysis and Applications, vol. 45, no. 3, pp. 1621–1642, 2024

  44. [44]

    On discrete-time polynomial dynamical systems on hypergraphs,

    S. Cui, G. Zhang, H. Jardón-Kojakhmetov, and M. Cao, “On discrete-time polynomial dynamical systems on hypergraphs,” IEEE Control Systems Letters, vol. 8, pp. 1078–1083, 2024

  45. [45]

    Analysis of higher-order Lotka-Volterra models: Application of S-tensors and the polynomial complementarity problem,

    S. Cui, Q. Zhao, G. Zhang, H. Jardón-Kojakhmetov, and M. Cao, “Analysis of higher-order Lotka-Volterra models: Application of S-tensors and the polynomial complementarity problem,” IEEE Transactions on Automatic Control, 2025

  46. [46]

    Tensor-train decomposition,

    I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011

  47. [47]

    Breaking the curse of dimensionality, or how to use SVD in many dimensions,

    I. V. Oseledets and E. E. Tyrtyshnikov, “Breaking the curse of dimensionality, or how to use SVD in many dimensions,” SIAM Journal on Scientific Computing, vol. 31, no. 5, pp. 3744–3759, 2009

  48. [48]

    Dynamical approximation by hierarchical Tucker and tensor-train tensors,

    C. Lubich, T. Rohwedder, R. Schneider, and B. Vandereycken, “Dynamical approximation by hierarchical Tucker and tensor-train tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 2, pp. 470–494, 2013

  49. [49]

    Hierarchical singular value decomposition of tensors,

    L. Grasedyck, “Hierarchical singular value decomposition of tensors,” SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 4, pp. 2029–2054, 2010

  50. [50]

    CANDECOMP/PARAFAC decomposition of high-order tensors through tensor reshaping,

    A.-H. Phan, P. Tichavský, and A. Cichocki, “CANDECOMP/PARAFAC decomposition of high-order tensors through tensor reshaping,” IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4847–4860, 2013

  51. [51]

    Generalized canonical polyadic tensor decomposition,

    D. Hong, T. G. Kolda, and J. A. Duersch, “Generalized canonical polyadic tensor decomposition,” SIAM Review, vol. 62, no. 1, pp. 133–163, 2020

  52. [52]

    Tensor-based large-scale blind system identification using segmentation,

    M. Boussé, O. Debals, and L. De Lathauwer, “Tensor-based large-scale blind system identification using segmentation,” IEEE Transactions on Signal Processing, vol. 65, no. 21, pp. 5770–5784, 2017

  53. [53]

    Nonlinear system identification via tensor completion,

    N. Kargas and N. D. Sidiropoulos, “Nonlinear system identification via tensor completion,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, pp. 4420–4427, 2020

  54. [54]

    Stable low-rank tensor decomposition for compression of convolutional neural network,

    A.-H. Phan, K. Sobolev, K. Sozykin, D. Ermilov, J. Gusak, P. Tichavský, V. Glukhov, I. Oseledets, and A. Cichocki, “Stable low-rank tensor decomposition for compression of convolutional neural network,” European Conference on Computer Vision, pp. 522–539, 2020

  55. [55]

    Image-based process monitoring using low-rank tensor decomposition,

    H. Yan, K. Paynabar, and J. Shi, “Image-based process monitoring using low-rank tensor decomposition,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 1, pp. 216–227, 2014

  56. [56]

    Tensor decompositions and applications,

    T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009

  57. [57]

    Tensor-based dynamical systems: Theory and Applications,

    C. Chen, Tensor-based dynamical systems: Theory and Applications. Springer Nature, 2024

  58. [58]

    Block tensor unfoldings,

    S. Ragnarsson and C. F. Van Loan, “Block tensor unfoldings,” SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 1, pp. 149–169, 2012

  59. [59]

    Tensor-based homogeneous polynomial dynamical system analysis from data,

    X. Mao, A. Dong, Z. He, Y. Mei, S. Mei, and C. Chen, “Tensor-based homogeneous polynomial dynamical system analysis from data,” arXiv preprint arXiv:2503.17774, 2025

  60. [60]

    Tensor Toolbox for MATLAB, version 3.6,

    B. W. Bader and T. G. Kolda, “Tensor Toolbox for MATLAB, version 3.6,” www.tensortoolbox.org, 2023