Pith · machine review for the scientific record

arxiv: 2604.05890 · v1 · submitted 2026-04-07 · 💻 cs.IT · cs.LG · math.IT

Recognition: no theorem link

A Tensor-Train Framework for Bayesian Inference in High-Dimensional Systems: Applications to MIMO Detection and Channel Decoding

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:19 UTC · model grok-4.3

classification 💻 cs.IT · cs.LG · math.IT
keywords tensor train · Bayesian inference · MIMO detection · channel decoding · low-rank approximation · APP marginals · additive noise models · log-posterior

The pith

The joint log-APP mass function admits an exact low-rank tensor-train representation that enables tractable Bayesian inference in high-dimensional systems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

In communication systems the joint posterior probability over many discrete variables has a support that grows exponentially with the number of variables, rendering exact Bayesian inference intractable. The paper establishes that the joint log-APP mass function nevertheless possesses an exact low-rank representation in the tensor-train (TT) format, so that storage and arithmetic remain polynomial in the number of variables. A practical algorithm approximates the exponential of this log-posterior by running the TT-cross procedure from a truncated Taylor-series starting point, thereby recovering accurate symbol-wise marginals. The same construction is carried through explicitly for MIMO detection under additive white Gaussian noise and for soft-decision decoding of binary linear block codes, producing near-optimal error rates at modest TT ranks over a wide SNR range.
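To make the structural claim concrete, here is a sketch in standard notation for the linear AWGN model named in the abstract (y = Hx + n with n ~ N(0, σ²I) and symbols x_i drawn from a finite alphabet; the notation is assumed from the abstract, not quoted from the paper's own derivation):

    \log p(\mathbf{x} \mid \mathbf{y})
      = -\frac{1}{2\sigma^2}\,\bigl\lVert \mathbf{y} - \mathbf{H}\mathbf{x} \bigr\rVert^2 + \mathrm{const}
      = \frac{1}{\sigma^2} \sum_{i} (\mathbf{H}^{\top}\mathbf{y})_i\, x_i
        \;-\; \frac{1}{2\sigma^2} \sum_{i,j} G_{ij}\, x_i x_j + \mathrm{const},
      \qquad G = \mathbf{H}^{\top}\mathbf{H}.

Across any cut {1,…,k} | {k+1,…,N} the only coupling comes from the off-diagonal block of G, whose matrix rank is at most rank(H); a standard separation-rank argument (again not quoted from the paper) then bounds the TT ranks of the log-APP tensor by rank(H) + 2, independent of N. The exponential required for APP marginals does not inherit this exact structure, which is why the TT-cross approximation step carries the practical weight.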

Core claim

The joint log-APP mass function admits an exact low-rank representation in the TT format, enabling compact storage and efficient computations. To recover symbol-wise APP marginals, the exponential of the log-posterior is approximated by a TT-cross algorithm initialized with a truncated Taylor series. Explicit low-rank TT constructions are derived for the linear observation model under AWGN, applied to MIMO detection, and for soft-decision decoding of binary linear block codes over the binary-input AWGN channel, both yielding near-optimal error-rate performance with only modest TT ranks.

What carries the argument

The exact low-rank tensor-train (TT) representation of the joint log-APP mass function, which permits compact storage and efficient marginal recovery via the TT-cross algorithm.
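This structural claim is cheap to sanity-check at toy scale. A minimal sketch (illustrative sizes and model, not the paper's code or experiments): enumerate the full log-APP tensor of a small linear-AWGN system and read the TT ranks off the singular values of its sequential unfoldings.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    N, q, sigma = 8, 2, 1.0                     # 8 BPSK symbols, alphabet {-1, +1}
    alphabet = np.array([-1.0, 1.0])
    H = rng.standard_normal((N, N))
    x_true = alphabet[rng.integers(q, size=N)]
    y = H @ x_true + sigma * rng.standard_normal(N)

    # Full joint log-APP tensor (up to an additive constant), 2^8 = 256 entries.
    log_app = np.empty((q,) * N)
    for idx in product(range(q), repeat=N):
        x = alphabet[list(idx)]
        log_app[idx] = -np.sum((y - H @ x) ** 2) / (2 * sigma**2)

    def tt_ranks(t, tol=1e-10):
        """TT ranks of t, read off as the ranks of its sequential unfoldings."""
        ranks = []
        for k in range(1, t.ndim):
            s = np.linalg.svd(t.reshape(q**k, -1), compute_uv=False)
            ranks.append(int(np.sum(s > tol * s[0])))
        return ranks

    print("log-APP TT ranks:     ", tt_ranks(log_app))
    print("exp(log-APP) TT ranks:", tt_ranks(np.exp(log_app - log_app.max())))

The quadratic log-APP keeps its middle unfolding ranks well below the maximal values (2, 4, 8, 16, 8, 4, 2 for this shape), while the exponentiated tensor is generically full rank; that gap is exactly what the TT-cross step is asked to bridge approximately.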

If this is right

  • The representation keeps both memory and arithmetic polynomial in the number of variables instead of exponential (a back-of-envelope comparison follows this list).
  • Near-optimal error rates are obtained for MIMO detection under AWGN with only modest TT ranks.
  • The identical low-rank construction applies to soft-decision decoding of binary linear block codes over the BI-AWGN channel.
  • The framework covers general discrete-input additive-noise models beyond the two canonical cases shown.
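
The first bullet is easy to quantify. A back-of-envelope storage comparison with illustrative sizes (not taken from the paper's experiments):

    N, q, r = 64, 2, 8      # variables, alphabet size, uniform TT rank (illustrative)
    dense = q ** N          # entries in the full joint tensor: ~1.8e19
    tt = N * q * r * r      # entries in N cores of shape (r, q, r): 8192
    print(f"dense: {dense:.3e} entries, TT: {tt} entries")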

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same TT structure could be reused for other high-dimensional discrete inference tasks that share an additive-noise observation model.
  • Replacing the Taylor initialization with a more refined starting guess might enlarge the SNR interval where the marginals remain accurate.
  • Tensor-network methods of this type offer a systematic route to tractable inference whenever the joint distribution factors into low-order interactions.

Load-bearing premise

The TT-cross algorithm initialized with a truncated Taylor series produces sufficiently accurate approximations to the exponential of the log-posterior for recovering accurate symbol-wise marginals across the claimed SNR range.
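
This premise can be probed without any TT machinery. A minimal full-tensor sketch (illustrative sizes; the paper's procedure refines this starting point with TT-cross, which is not reproduced here) shows how a truncated Taylor polynomial of exp() degrades as the SNR grows and the shifted log-posterior spreads over a wider range:

    import numpy as np
    from itertools import product
    from math import factorial

    rng = np.random.default_rng(1)
    N, q = 6, 2
    alphabet = np.array([-1.0, 1.0])
    H = rng.standard_normal((N, N))
    x_true = alphabet[rng.integers(q, size=N)]

    for snr_db in (0.0, 10.0, 20.0):
        sigma = 10 ** (-snr_db / 20)
        y = H @ x_true + sigma * rng.standard_normal(N)
        f = np.empty((q,) * N)
        for idx in product(range(q), repeat=N):
            x = alphabet[list(idx)]
            f[idx] = -np.sum((y - H @ x) ** 2) / (2 * sigma**2)
        f -= f.max()                    # shift so the target exp(f) is well scaled
        target = np.exp(f)
        taylor = sum(f**k / factorial(k) for k in range(6))   # order-5 Taylor of exp
        err = np.linalg.norm(taylor - target) / np.linalg.norm(target)
        print(f"SNR {snr_db:4.1f} dB: min shifted log-APP {f.min():10.1f}, "
              f"Taylor rel. error {err:.2e}")

As the posterior peaks, entries of the shifted log-APP become large and negative and the fixed-order Taylor start deteriorates sharply; whether TT-cross reliably recovers from such a start at high SNR is precisely the part of the premise the referee flags as unverified.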

What would settle it

For a small-dimensional instance whose exact symbol-wise marginals can be computed by enumeration, compare those values to the marginals returned by the TT procedure at several SNR points; large systematic deviation would show the approximation step fails to deliver the claimed accuracy.
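
The enumeration side of that test is straightforward. A sketch (the tt_marginals call is a hypothetical placeholder for the paper's procedure, which is not reproduced here):

    import numpy as np
    from itertools import product

    def exact_marginals(H, y, alphabet, sigma):
        """Exact symbol-wise APP marginals by brute-force enumeration.
        Exponential in N, hence feasible only for small instances,
        which is exactly what makes it a valid check."""
        N, q = H.shape[1], len(alphabet)
        logp = np.empty((q,) * N)
        for idx in product(range(q), repeat=N):
            x = alphabet[list(idx)]
            logp[idx] = -np.sum((y - H @ x) ** 2) / (2 * sigma**2)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        # Marginal of symbol i: sum the joint over every other axis.
        return [p.sum(axis=tuple(j for j in range(N) if j != i))
                for i in range(N)]

    # Hypothetical comparison at several SNR points (tt_marginals not shown):
    # for m_ex, m_tt in zip(exact_marginals(H, y, alphabet, sigma),
    #                       tt_marginals(H, y, alphabet, sigma)):
    #     print(np.max(np.abs(m_ex - m_tt)))   # large systematic gaps falsify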

Figures

Figures reproduced from arXiv: 2604.05890 by Dominik Sulz, Laurent Schmalen, Luca Schmid, Shrinivas Chimmalgi.

Figure 1: Graphical representation of the TT decomposition. Each edge can be …

Figure 2: SER over SNR for various MIMO detectors across different …

Figure 3: SER over SNR for different ranks r_max of the truncated Taylor-series initialization of the TTDet-sample algorithm for a MIMO system with Ñ = 8 and 16-QAM.

Figure 5: Histogram of the maximum TT rank r_max after TT-cross approximation of the joint APP mass function for the TTDet-sweep algorithm applied to the BCH(63, 30) code at different Eb/N0 values.

Figure 7: Graphical illustration of the cross decomposition in (12).
original abstract

Bayesian inference in high-dimensional discrete-input additive noise models is a fundamental challenge in communication systems, as the support of the required joint a posteriori probability (APP) mass function grows exponentially with the number of unknown variables. In this work, we propose a tensor-train (TT) framework for tractable, near-optimal Bayesian inference in discrete-input additive noise models. The central insight is that the joint log-APP mass function admits an exact low-rank representation in the TT format, enabling compact storage and efficient computations. To recover symbol-wise APP marginals, we develop a practical inference procedure that approximates the exponential of the log-posterior using a TT-cross algorithm initialized with a truncated Taylor-series. To demonstrate the generality of the approach, we derive explicit low-rank TT constructions for two canonical communication problems: the linear observation model under additive white Gaussian noise (AWGN), applied to multiple-input multiple-output (MIMO) detection, and soft-decision decoding of binary linear block error correcting codes over the binary-input AWGN channel. Numerical results show near-optimal error-rate performance across a wide range of signal-to-noise ratios while requiring only modest TT ranks. These results highlight the potential of tensor-network methods for efficient Bayesian inference in communication systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that the joint log-APP mass function in high-dimensional discrete-input additive noise models admits an exact low-rank tensor-train (TT) representation. This structural property enables compact storage and efficient operations. To recover symbol-wise marginals, the authors propose approximating the normalized exponential of the log-APP via the TT-cross algorithm initialized from a truncated Taylor series. The framework is instantiated for MIMO detection under AWGN and soft decoding of binary linear block codes, with numerical experiments reporting near-optimal error rates using modest TT ranks.

Significance. If the TT approximation step proves reliable, the work offers a scalable tensor-network route to near-optimal Bayesian inference in communication problems whose exact solution is exponential in dimension. The exact low-rank TT form for the log-APP itself is a clean structural insight that may generalize beyond the two canonical models treated. The reported numerical performance is encouraging, but the absence of error bounds on the nonlinear approximation and limited verification at high SNR reduce the immediate strength of the contribution.

major comments (2)
  1. The section describing the practical inference procedure (TT-cross approximation to exp(log-APP)): no error bounds, convergence guarantees, or sensitivity analysis are supplied for the nonlinear step. This is load-bearing because, at high SNR, the posterior concentrates sharply and even modest rank truncation can distort the symbol-wise marginals that determine the reported error rates.
  2. Numerical results section: the claim of 'near-optimal error-rate performance across a wide range of SNRs' is asserted without error bars, ablation studies on TT rank, or explicit high-SNR comparisons. This leaves the practical accuracy of the marginal recovery unverified and directly engages the concern that TT-cross may fail when the mass is peaked.
minor comments (1)
  1. Abstract: the statement that results require 'only modest TT ranks' is not accompanied by any indication of the actual ranks employed or their scaling with system size.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for the detailed and constructive feedback on our manuscript. The points raised about the lack of theoretical analysis for the approximation step and the need for more comprehensive numerical validation are important. We will revise the paper accordingly to include additional experiments and discussions. Our point-by-point responses are as follows.

point-by-point responses
  1. Referee: The section describing the practical inference procedure (TT-cross approximation to exp(log-APP)): no error bounds, convergence guarantees, or sensitivity analysis are supplied for the nonlinear step. This is load-bearing because, at high SNR, the posterior concentrates sharply and even modest rank truncation can distort the symbol-wise marginals that determine the reported error rates.

    Authors: We agree that the manuscript currently lacks error bounds, convergence guarantees, or a dedicated sensitivity analysis for the TT-cross approximation of the normalized exponential of the log-APP. Providing rigorous theoretical bounds for this nonlinear approximation is challenging and remains an open problem, as the TT-cross algorithm involves iterative sampling and the exponential function can amplify small errors in the log-domain. In the revised manuscript, we will add a sensitivity analysis section that examines the impact of TT rank and Taylor truncation order on the accuracy of the recovered marginals, including at high SNR regimes. We will also discuss the empirical reliability observed in our experiments, where the approximation maintains near-optimal performance even as the posterior becomes peaked. revision: partial

  2. Referee: Numerical results section: the claim of 'near-optimal error-rate performance across a wide range of SNRs' is asserted without error bars, ablation studies on TT rank, or explicit high-SNR comparisons. This leaves the practical accuracy of the marginal recovery unverified and directly engages the concern that TT-cross may fail when the mass is peaked.

    Authors: We acknowledge that the numerical results section would benefit from error bars, ablation studies on TT rank, and more explicit high-SNR comparisons to better verify the accuracy of the marginal recovery. In the revision, we will incorporate these: (i) error bars computed from multiple independent Monte Carlo simulations, (ii) ablation plots showing bit-error-rate versus TT rank for different SNR values, and (iii) focused high-SNR experiments (e.g., SNR > 20 dB) with comparisons to exact or near-exact methods where feasible. These additions will directly address the potential issues with peaked posteriors and strengthen the claim of near-optimal performance. revision: yes

standing simulated objections (unresolved)
  • Theoretical error bounds and convergence guarantees for the TT-cross approximation of the normalized exponential of the log-APP mass function.

Circularity Check

0 steps flagged

No circularity: exact TT structure for log-APP derived from quadratic likelihood

full rationale

The paper derives the exact low-rank TT representation of the joint log-APP directly from the quadratic structure of the AWGN log-likelihood (sum over receive antennas of squared distances), which factors into a tensor-train form by construction of the model without reference to fitted parameters, self-citations, or the later approximation step. The TT-cross procedure with Taylor initialization is presented as a practical numerical method to handle the exponential and normalization for marginals, not as a 'prediction' equivalent to its inputs. No load-bearing self-citations, uniqueness theorems from the authors, or smuggled ansatzes appear in the derivation chain. The approach remains self-contained and independent of the target error-rate results.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Only abstract available; the low-rank TT property of the log-APP is treated as a domain insight rather than derived from first principles or external benchmarks.

axioms (1)
  • domain assumption The joint log-APP mass function admits an exact low-rank representation in the TT format
    Stated as the central insight enabling the framework.

pith-pipeline@v0.9.0 · 5533 in / 1085 out tokens · 52158 ms · 2026-05-10T18:19:15.522423+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages

  1. [1] J. Proakis and M. Salehi, Digital Communications, 5th ed. McGraw Hill, Nov. 2007.

  2. [2] C. M. Bishop, Pattern Recognition and Machine Learning. New York: Springer, 2006.

  3. [3] F. R. Kschischang and H.-A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.

  4. [4] A. Worthen and W. Stark, "Unified design of iterative receivers using factor graphs," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 843–849, Feb. 2001.

  5. [5] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.

  6. [6] T. P. Minka, "Expectation propagation for approximate Bayesian inference," in Proc. UAI, Seattle, WA, USA, 2001, pp. 362–369.

  7. [7] J. Céspedes, P. M. Olmos, M. Sánchez-Fernández, and F. Perez-Cruz, "Expectation propagation detection for high-order high-dimensional MIMO systems," IEEE Trans. Commun., vol. 62, no. 8, pp. 2840–2849, Aug. 2014.

  8. [8] A. Doucet and X. Wang, "Monte Carlo methods for signal processing: A review in the statistical signal processing context," IEEE Signal Process. Mag., vol. 22, no. 6, pp. 152–170, 2005.

  9. [9] B. Farhang-Boroujeny, H. Zhu, and Z. Shi, "Markov chain Monte Carlo algorithms for CDMA and MIMO communication systems," IEEE Trans. Signal Process., vol. 54, no. 5, pp. 1896–1909, 2006.

  10. [10] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," in Proc. Int. Conf. Learning Representations (ICLR), Banff, Canada, 2014.

  11. [11] V. Lauinger, F. Buchali, and L. Schmalen, "Blind equalization and channel estimation in coherent optical communications using variational autoencoders," IEEE J. Sel. Areas Commun., vol. 40, no. 9, pp. 2529–2539, Sep. 2022.

  12. [12] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," Proc. Advances in Neural Information Processing Systems (NeurIPS), vol. 33, pp. 6840–6851, 2020.

  13. [13] B. Fesl, M. Baur, F. Strasser, M. Joham, and W. Utschick, "Diffusion-based generative prior for low-complexity MIMO channel estimation," IEEE Wireless Commun. Lett., vol. 13, no. 12, pp. 3493–3497, 2024.

  14. [14] L. Grasedyck, D. Kressner, and C. Tobler, "A literature survey of low-rank tensor approximation techniques," GAMM-Mitteilungen, vol. 36, no. 1, pp. 53–78, 2013.

  15. [15] W. Hackbusch, Tensor Spaces and Numerical Tensor Calculus, ser. Springer Series in Computational Mathematics, vol. 56, 2nd ed. Springer, Cham, 2019.

  16. [16] E. Stoudenmire and D. J. Schwab, "Supervised learning with tensor networks," Proc. Advances in Neural Information Processing Systems (NeurIPS), vol. 29, 2016.

  17. [17] J. Haegeman, C. Lubich, I. Oseledets, B. Vandereycken, and F. Verstraete, "Unifying time evolution and optimization with matrix product states," Physical Review B, vol. 94, no. 16, p. 165116, 2016.

  18. [18] S. Holtz, T. Rohwedder, and R. Schneider, "The alternating linear scheme for tensor optimization in the tensor train format," SIAM Journal on Scientific Computing, vol. 34, no. 2, pp. A683–A713, 2012.

  19. [19] K. Kormann, "A semi-Lagrangian Vlasov solver in tensor train format," SIAM Journal on Scientific Computing, vol. 37, no. 4, pp. B613–B632, 2015.

  20. [20] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.

  21. [21] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, "Tensor decompositions for signal processing applications: From two-way to multiway component analysis," IEEE Signal Process. Mag., vol. 32, no. 2, pp. 145–163, 2015.

  22. [22] A. L. F. de Almeida, G. Favier, J. P. C. L. da Costa, and J. C. M. Mota, "Overview of tensor decompositions with applications to communications," in Signals and Images: Advances and Results in Speech, Estimation, Compression, Recognition, Filtering, and Processing. Boca Raton, FL, USA: CRC Press, 2016, pp. 325–356.

  23. [23] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, "Tensor decomposition for signal processing and machine learning," IEEE Trans. Signal Process., vol. 65, no. 13, pp. 3551–3582, 2017.

  24. [24] N. Tokcan et al., "Tensor decompositions for signal processing: Theory, advances, and applications," Signal Processing, vol. 234, p. 110191, 2025.

  25. [25] D. C. Araújo, A. L. F. de Almeida, J. P. C. L. Da Costa, and R. T. de Sousa, "Tensor-based channel estimation for massive MIMO-OFDM systems," IEEE Access, vol. 7, pp. 42133–42147, 2019.

  26. [26] R. Wang, C. Pan, H. Ren, H. Wu, and J. Wang, "Tensor train decomposition-based channel estimation for MIMO-AFDM systems with fractional delay and Doppler," preprint arXiv:2603.09293, 2026.

  27. [27] H. Chen, F. Ahmad, S. Vorobyov, and F. Porikli, "Tensor decompositions in wireless communications and MIMO radar," IEEE J. Sel. Topics Signal Process., vol. 15, no. 3, pp. 438–453, 2021.

  28. [28] I. V. Oseledets, "Tensor-train decomposition," SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011.

  29. [29] S. Ahmadi-Asl, C. F. Caiafa, A. Cichocki, A. H. Phan, T. Tanaka, I. Oseledets, and J. Wang, "Cross tensor approximation methods for compression and dimensionality reduction," IEEE Access, vol. 9, pp. 150809–150838, 2021.

  30. [30] F. L. Hitchcock, "The expression of a tensor or a polyadic as a sum of products," Journal of Mathematics and Physics, vol. 6, no. 1-4, pp. 164–189, 1927.

  31. [31] L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.

  32. [32] D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, "Matrix product state representations," Quantum Information & Computation, vol. 7, no. 5, pp. 401–430, 2007.

  33. [33] L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM J. Matrix Anal. Appl., vol. 21, pp. 1253–1278, 2000.

  34. [34] I. Oseledets and E. Tyrtyshnikov, "TT-cross approximation for multidimensional arrays," Linear Algebra and its Applications, vol. 432, no. 1, pp. 70–88, 2010.

  35. [35] B. Ghahremani and H. Babaee, "A DEIM Tucker tensor cross algorithm and its application to dynamical low-rank approximation," Comput. Methods Appl. Mech. Eng., vol. 423, p. 116879, 2024.

  36. [36] H. Wang and M. Thoss, "Multilayer formulation of the multiconfiguration time-dependent Hartree theory," J. Chem. Phys., vol. 119, no. 3, pp. 1289–1299, 2003.

  37. [37] W. Hackbusch and S. Kühn, "A new scheme for the tensor representation," J. Fourier Anal. Appl., vol. 15, no. 5, pp. 706–722, 2009.

  38. [38] A. Falcó, W. Hackbusch, and A. Nouy, "Geometric structures in tensor representations," preprint arXiv:1505.03027, 2015.

  39. [39] I. Oseledets, S. Dolgov, V. Kazeev, D. Savostyanov, O. Lebedeva, P. Zhlobich, T. Mach, and L. Song, TT-Toolbox in MATLAB, http://oseledets.github.io/software, 2016.

  40. [40] K. Sozykin, A. Chertkov, R. Schutski, A.-H. Phan, A. S. Cichocki, and I. Oseledets, "TTOpt: A maximum volume quantized tensor train-based optimization and its application to reinforcement learning," Proc. Advances in Neural Information Processing Systems (NeurIPS), vol. 35, pp. 26052–26065, 2022.

  41. [41] S. Yang and L. Hanzo, "Fifty years of MIMO detection: The road to large-scale MIMOs," IEEE Commun. Surveys Tuts., vol. 17, no. 4, pp. 1941–1988, 2015.

  42. [42] S. Verdú, "Computational complexity of optimum multiuser detection," Algorithmica, vol. 4, no. 1, pp. 303–312, Jun. 1989.

  43. [43] B. W. Bader, T. G. Kolda et al., MATLAB Tensor Toolbox, version 3.2.1, https://www.tensortoolbox.org, Apr. 2021.

  44. [44] L. Schmid, D. Sulz, S. Chimmalgi, and L. Schmalen, source code, https://github.com/kit-cel/tt-bayesian-inference, 2026 (to be published upon acceptance of the paper).

  45. [45] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, no. 3, pp. 379–423, 1948.

  46. [46] G. Casella and R. L. Berger, Statistical Inference, 2nd ed. Pacific Grove, CA: Duxbury, 2002.

  47. [47] Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, 2010.

  48. [48] G. Durisi and A. Lancho, "Transmitting short packets over wireless channels - an information-theoretic perspective," https://gdurisi.github.io/fbl-notes/, Nov. 2020.

  49. [49] M. Helmling, S. Scholl, F. Gensheimer, T. Dietz, K. Kraft, O. Griebel, S. Ruzika, and N. Wehn, "Database of channel codes and ML simulation results," www.rptu.de/channel-codes, 2025.

  50. [50] S. Goreinov and E. Tyrtyshnikov, "The maximal-volume concept in approximation by low-rank matrices," Contemporary Mathematics, vol. 268, pp. 47–51, 2001.