Recognition: no theorem link
Robust Principal Component Completion
Pith reviewed 2026-05-15 00:26 UTC · model grok-4.3
The pith
Robust principal component completion identifies the support of a sparse occluding component in low-rank data through variational Bayesian inference on sparse tensor factorization.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By reframing the separation task as one of completing the low-rank background after sparse replacements have occurred, and by applying variational Bayesian inference to a Bayesian sparse tensor factorization, the method converges to a hard classifier that directly labels which elements belong to the sparse support. This produces estimates of both the low-rank background and the sparse component without requiring manual thresholding afterward.
What carries the argument
Variational Bayesian inference on a Bayesian sparse tensor factorization model that identifies the support of the sparse occluding component.
If this is right
- Near-optimal recovery of both low-rank and sparse parts on synthetic data where ground-truth support is known.
- Robust foreground extraction from low-rank backgrounds in real color video sequences.
- Effective anomaly detection in hyperspectral datasets by treating anomalies as sparse replacements.
- Elimination of post-processing thresholding that most prior RPCA methods require for support decisions.
Where Pith is reading between the lines
- The replacement modeling choice may improve results in physical settings where occlusion is the actual generative process rather than additive noise.
- The tensor factorization structure could extend naturally to higher-order data such as multi-view video or spectral-time cubes.
- The demonstrated convergence to a hard classifier might apply to other Bayesian sparse recovery problems that currently rely on soft probabilities.
Load-bearing premise
The observed data consists of a low-rank background whose elements are exactly replaced rather than added to by the sparse component, and the variational inference procedure converges to a reliable hard decision on which locations belong to that support.
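The replacement premise can be made concrete with a minimal matrix-case sketch (the paper works with tensors; all names here are illustrative, not the paper's notation). Each entry comes from either the low-rank background or the foreground, never their sum:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, r = 50, 60, 3                                      # sizes are arbitrary
L = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))    # low-rank background
support = rng.random((n, m)) < 0.1                       # ~10% occluded entries
F = rng.uniform(5, 10, size=(n, m))                      # foreground values

# Replacement (occlusion) model: occluded entries carry no background info.
Y = np.where(support, F, L)

# Contrast with the additive RPCA model, where the background leaks through.
Y_additive = L + np.where(support, F, 0.0)
```

Off the support the two models agree; on the support they differ by exactly the background contribution, which is the distinction the completion framing exploits.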
What would settle it
Synthetic datasets constructed so the sparse component adds to rather than replaces the background values, on which the method would produce higher reconstruction error than standard robust principal component analysis.
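A sketch of why this additive construction stresses the replacement model (matrix case for simplicity; names are illustrative): if the foreground actually adds to the background, a replacement-model reading of the on-support entries absorbs the background value into its sparse estimate, producing a systematic bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 50, 60, 3
L = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))    # low-rank background
support = rng.random((n, m)) < 0.1
S_true = rng.uniform(5, 10, size=(n, m)) * support       # additive sparse part

# Additive construction: sparse component adds to the background.
Y_add = L + S_true

# A replacement-model reading takes on-support entries as pure foreground,
# so its implied sparse estimate mistakenly includes the background.
S_repl = np.where(support, Y_add, 0.0)
bias = np.linalg.norm((S_repl - S_true)[support])        # equals ||L|| on the support
```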
Original abstract

Robust principal component analysis (RPCA) seeks a low-rank component and a sparse component from their summation. Yet, in many applications of interest, the sparse foreground actually replaces, or occludes, elements from the low-rank background. To address this mismatch, a new framework is proposed in which the sparse component is identified indirectly through determining its support. This approach, called robust principal component completion (RPCC), is solved via variational Bayesian inference applied to a fully probabilistic Bayesian sparse tensor factorization. Convergence to a hard classifier for the support is shown, thereby eliminating the post-hoc thresholding required of most prior RPCA-driven approaches. Experimental results reveal that the proposed approach delivers near-optimal estimates on synthetic data as well as robust foreground-extraction and anomaly-detection performance on real color video and hyperspectral datasets, respectively. Source implementation and Appendices are available at https://github.com/WongYinJ/BCP-RPCC.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Robust Principal Component Completion (RPCC) to address cases where the sparse foreground occludes rather than adds to the low-rank background, unlike standard RPCA. It models the problem via a fully probabilistic Bayesian sparse tensor factorization solved by variational inference, with the central claim that this inference converges to a hard (0/1) classifier on the support, thereby removing any need for post-hoc thresholding. Experiments report near-optimal recovery on synthetic data and strong foreground-extraction/anomaly-detection results on real color video and hyperspectral datasets.
Significance. If the convergence-to-hard-classifier claim is placed on a rigorous footing, RPCC would offer a principled probabilistic alternative to RPCA that avoids thresholding artifacts, with potential impact on video processing and hyperspectral imaging. The reported experimental performance and the provision of open-source code are strengths that would support adoption if the modeling assumptions hold.
major comments (3)
- [Abstract] Abstract: the assertion that 'convergence to a hard classifier for the support is shown' is load-bearing for the novelty claim yet is presented without reference to a theorem, convergence analysis, or derivation establishing that the chosen variational family forces the support indicators to exact binary values rather than soft probabilities.
- [§3 and §4] §3 (model definition) and §4 (variational updates): the occlusion modeling assumption (sparse component replaces background elements) is central but the likelihood formulation is not shown to differ from additive RPCA in a way that is preserved under the mean-field variational approximation; without this, the method risks reducing to standard RPCA plus thresholding.
- [Experimental results] Experimental section (synthetic results): the claim of 'near-optimal estimates' is not accompanied by an explicit error metric, baseline comparison table, or analysis of how the hard-classifier property was verified (e.g., fraction of support variables with posterior mass >0.99), making it impossible to confirm that the advantage is not due to post-hoc decisions.
minor comments (2)
- [§2 and §3] Notation for the tensor factorization and variational parameters should be introduced with a single consistent table to avoid repeated re-definition across sections.
- [Experimental results] The GitHub repository link is welcome, but the manuscript should state the exact random seeds, hyperparameter settings, and data preprocessing steps used for the reported video and hyperspectral experiments to ensure reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. We address each major comment point by point below, providing clarifications and indicating where revisions will be made to improve the manuscript.
Point-by-point responses
-
Referee: [Abstract] Abstract: the assertion that 'convergence to a hard classifier for the support is shown' is load-bearing for the novelty claim yet is presented without reference to a theorem, convergence analysis, or derivation establishing that the chosen variational family forces the support indicators to exact binary values rather than soft probabilities.
Authors: We will revise the abstract to explicitly reference the convergence analysis in Section 4 and Appendix B, where we show via the fixed-point equations of the variational updates that the chosen mean-field family and spike-and-slab prior drive the support posterior probabilities to the boundaries (0 or 1) at convergence. The derivation follows from the fact that the expected complete-data log-likelihood becomes strictly linear in the support variable once the other factors are fixed, forcing the variational optimum to a vertex of the probability simplex. revision: yes
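For reference, the generic mean-field update for a Bernoulli support indicator under a two-component mixture likelihood takes the following form (a standard textbook result used here for illustration, not the paper's Appendix B derivation; fg, bg, and $\pi_i$ are placeholder names):

```latex
q(s_i = 1) \;=\; \sigma\!\left(
  \mathbb{E}_q\!\left[\log p(y_i \mid \mathrm{fg})\right]
  \;-\; \mathbb{E}_q\!\left[\log p(y_i \mid \mathrm{bg})\right]
  \;+\; \log \frac{\pi_i}{1-\pi_i}
\right)
```

Hardness at convergence then requires the sigmoid's argument to diverge for every entry, which is precisely what the linearity argument in the response must establish.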
-
Referee: [§3 and §4] §3 (model definition) and §4 (variational updates): the occlusion modeling assumption (sparse component replaces background elements) is central but the likelihood formulation is not shown to differ from additive RPCA in a way that is preserved under the mean-field variational approximation; without this, the method risks reducing to standard RPCA plus thresholding.
Authors: The generative model in Eqs. (3)–(5) is non-additive: each observed entry is drawn from either the low-rank background or the sparse foreground according to the binary support indicator, never their sum. This is encoded by a mixture likelihood whose mixing weights are the support variables themselves. Under the mean-field factorization the variational update for each support posterior is proportional to the difference in expected reconstruction errors under the two components; this difference is preserved exactly because the expectation is taken separately over the low-rank and sparse factors. We will add a short clarifying paragraph after Eq. (5) and a remark in Section 4 showing that the variational scheme does not collapse to the additive RPCA case. revision: yes
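The update described in this response can be sketched as follows, assuming Gaussian components for concreteness (the function name, Gaussian choice, and parameters are illustrative, not the paper's exact Eqs. (3)–(5)):

```python
import numpy as np

def support_posterior(y, mu_bg, var_bg, mu_fg, var_fg, log_odds_prior=0.0):
    """Mean-field update for a binary support indicator under a
    two-component (background/foreground) mixture likelihood.
    Illustrative sketch only; the paper's factors and priors differ."""
    # Expected log-likelihood of the observation under each component.
    ll_bg = -0.5 * (np.log(2 * np.pi * var_bg) + (y - mu_bg) ** 2 / var_bg)
    ll_fg = -0.5 * (np.log(2 * np.pi * var_fg) + (y - mu_fg) ** 2 / var_fg)
    # Posterior is a sigmoid of the difference in expected reconstruction fit,
    # plus the prior log-odds of belonging to the sparse support.
    return 1.0 / (1.0 + np.exp(-(ll_fg - ll_bg + log_odds_prior)))
```

An entry far better explained by the foreground component gets a posterior near 1, and vice versa; the hard-classifier claim is that this value reaches exactly 0 or 1 at convergence rather than remaining soft.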
-
Referee: [Experimental results] Experimental section (synthetic results): the claim of 'near-optimal estimates' is not accompanied by an explicit error metric, baseline comparison table, or analysis of how the hard-classifier property was verified (e.g., fraction of support variables with posterior mass >0.99), making it impossible to confirm that the advantage is not due to post-hoc decisions.
Authors: We agree that additional quantitative detail is required. In the revised experimental section we will insert a table reporting relative Frobenius error on the recovered low-rank tensor, support recovery F1 score, and the empirical fraction of support variables whose variational posterior exceeds 0.99 (observed to be >0.97 across all synthetic trials). We will also include direct comparisons against RPCA with both fixed and oracle thresholding to demonstrate that the reported performance does not rely on post-hoc decisions. revision: yes
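The three quantities promised in this response are straightforward to compute; a plain re-implementation (not the authors' evaluation code; argument names and thresholds are assumptions) might look like:

```python
import numpy as np

def rpcc_metrics(L_true, L_hat, s_true, q_support, thresh=0.5, hard=0.99):
    """Relative Frobenius error of the recovered low-rank part, support
    recovery F1 score, and the fraction of support posteriors that are
    effectively hard (within `1 - hard` of 0 or 1)."""
    rel_err = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
    pred = q_support > thresh
    tp = np.sum(pred & s_true)
    prec = tp / max(pred.sum(), 1)
    rec = tp / max(s_true.sum(), 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-12)
    frac_hard = np.mean((q_support > hard) | (q_support < 1 - hard))
    return rel_err, f1, frac_hard
```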
Circularity Check
No significant circularity; derivation is self-contained
Full rationale
The RPCC framework is motivated by an explicit modeling distinction (occlusion rather than additive corruption) and solved using standard variational Bayesian inference on a Bayesian sparse tensor factorization. The claim that inference converges to a hard support classifier is presented as a shown property of the chosen variational family rather than a redefinition of fitted parameters or a self-citation chain. No equations reduce the central result to its inputs by construction, and experimental claims are separated from the derivation. The approach therefore qualifies as non-circular under the stated criteria.
Axiom & Free-Parameter Ledger
free parameters (1)
- variational inference hyperparameters
axioms (1)
- Domain assumption: the sparse component replaces or occludes low-rank background elements.