Entanglement is Half the Story: Post-Selection vs. Partial Traces
Pith reviewed 2026-05-08 18:25 UTC · model grok-4.3
Recognition: 3 theorem links · Lean Theorem
The pith
Post-selection is the key property that interpolates hybrid tensor networks between classical and quantum limits.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Running inference of classical tensor networks on quantum hardware yields a hybrid model: a unified framework whose classical and quantum edge cases are recovered by varying post-selection. Post-selection is identified as the property that controls how strongly quantum constraints are enforced, with the amount of post-selection setting the degree of interpolation between the two regimes. A new hyperparameter governs this transition and, together with the bond dimension, enables trainable allocation of post-selection to improve quantum machine learning performance.
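To make the two limits concrete, here is a minimal numerical toy (an illustration of the concept, not the paper's construction): for an amplitude-damping channel with Kraus operators K₀, K₁, keeping both outcomes reproduces the partial-trace output Σᵢ Kᵢ ρ Kᵢ†, post-selecting outcome 0 and renormalizing gives K₀ ρ K₀†/tr(K₀ ρ K₀†), and a weight h mixing the two stands in, hypothetically, for the interpolation hyperparameter.

```python
import numpy as np

# Toy interpolation between the partial-trace (classical) and post-selected
# (quantum) limits of a single amplitude-damping channel. The mixing weight
# `h` is a hypothetical stand-in for the paper's hyperparameter.
gamma = 0.4
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)                      # pure |+><+| input state

traced = K0 @ rho @ K0.T + K1 @ rho @ K1.T      # both outcomes kept: tr_B
branch = K0 @ rho @ K0.T
post_selected = branch / np.trace(branch)       # outcome 0 kept, renormalized

for h in (0.0, 0.5, 1.0):
    out = (1 - h) * traced + h * post_selected
    print(f"h={h:.1f}  purity={np.trace(out @ out):.4f}")
```

The purity printed at h = 1 exceeds the h = 0 value, reflecting that post-selection keeps the state on a single Kraus branch instead of mixing branches.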
What carries the argument
The hybrid tensor network architecture, in which post-selection on quantum hardware enforces adjustable levels of quantum constraints during inference.
Load-bearing premise
Post-selection can be applied controllably and practically on quantum hardware to enforce quantum constraints without prohibitive overhead.
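The overhead in question is easy to make concrete. If each post-selected ancilla is accepted independently with probability p, then keeping k ancillas succeeds with probability p^k, so the expected number of circuit repetitions per accepted sample grows as (1/p)^k. A back-of-envelope sketch under that independence assumption (illustrative numbers, not taken from the paper):

```python
# Expected circuit repetitions per accepted sample when post-selecting k
# ancillas, assuming independent per-ancilla acceptance probability p.
for p in (0.9, 0.5):
    for k in (1, 4, 16):
        print(f"p={p}, k={k:>2}: ~{(1 / p) ** k:,.0f} shots per accepted sample")
```

At p = 0.5 the cost is already ~65,536 shots per accepted sample for k = 16, which is why the premise restricts attention to controllable, limited amounts of post-selection.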
What would settle it
Implementing the hybrid model on quantum hardware and finding that varying the post-selection hyperparameter produces no measurable gain in trainability or accuracy, or incurs prohibitive overhead, would falsify the claim.
Original abstract
While tensor networks have their traditional application in simulating quantum systems, in the recent decade they have gathered interest as machine learning models. We combine the experience from both fields and derive how quantum constraints placed on a tensor network manifest a change in capabilities. To this end, we employ a method of inference of classical tensor networks on a quantum computer to define a hybrid architecture. This hybrid tensor network is a practical unified framework for it's classical and quantum tensor network edge cases. We identify post-selection as the important property on which this interpolation hinges. The amount of post-selection corresponds to the level to which quantum constraints are enforced on the tensor network. On this basis, we propose a new hyperparameter which controls the transition between the hybrid and the quantum tensor network. In the comparison of classical and quantum tensor networks it complements the bond dimension. Quantum machine learning is improved by using the hyperparameter to allocate the practically limited post-selection to the quantum model in a trainable manner.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that quantum constraints on tensor networks can be controlled via post-selection in a hybrid classical-quantum tensor network architecture obtained by performing classical tensor network inference on quantum hardware. Post-selection is identified as the mechanism that interpolates between the classical and fully quantum limits, and the authors introduce a new hyperparameter quantifying the amount of post-selection; this hyperparameter is said to complement bond dimension and to enable trainable allocation of limited post-selection resources, thereby improving quantum machine learning performance.
Significance. A controllable post-selection hyperparameter that smoothly enforces quantum constraints while remaining trainable would constitute a useful conceptual bridge between classical and quantum tensor-network models for machine learning. The proposal is novel in framing post-selection explicitly as a tunable resource complementary to bond dimension, but its significance hinges on whether the claimed interpolation can be realized without exponential sampling overhead.
Major comments (2)
- [Abstract] The central assertion that 'the amount of post-selection corresponds to the level to which quantum constraints are enforced' is presented without any derivation, circuit diagram, or explicit mapping showing how the hybrid inference procedure produces a tunable post-selection rate.
- [Abstract] No bound, scaling argument, or even schematic is supplied for the post-selection success probability as a function of the proposed hyperparameter or system size; without such control the claimed trainable allocation and performance improvement cannot be assessed.
Minor comments (1)
- [Abstract] 'it's' should be 'its' in the sentence describing the hybrid tensor network as 'a practical unified framework for it's classical and quantum tensor network edge cases.'
Simulated Author's Rebuttal
We thank the referee for their detailed review and constructive comments on our manuscript. We address each major comment below and commit to revisions that strengthen the presentation of the hybrid tensor network framework.
Point-by-point responses
- Referee: [Abstract] The central assertion that 'the amount of post-selection corresponds to the level to which quantum constraints are enforced' is presented without any derivation, circuit diagram, or explicit mapping showing how the hybrid inference procedure produces a tunable post-selection rate.
Authors: The full manuscript derives the hybrid architecture explicitly by describing how classical tensor network inference is performed on quantum hardware, with post-selection arising as the mechanism that enforces quantum constraints during the inference step. The proposed hyperparameter modulates the strength of these constraints, thereby controlling the post-selection rate and interpolating between the classical and quantum limits. We agree that the abstract would benefit from greater clarity on this point. In the revised version we will add a concise reference to the inference procedure and include a simple schematic diagram illustrating the mapping. revision: yes
- Referee: [Abstract] No bound, scaling argument, or even schematic is supplied for the post-selection success probability as a function of the proposed hyperparameter or system size; without such control the claimed trainable allocation and performance improvement cannot be assessed.
Authors: The referee correctly identifies that the current manuscript does not supply an explicit bound or scaling relation for the post-selection success probability. The hyperparameter is introduced by construction to allocate post-selection resources in a trainable manner, and the interpolation is demonstrated both conceptually and through numerical experiments on machine-learning tasks. Nevertheless, we acknowledge that a schematic or preliminary scaling discussion would help readers evaluate the practical overhead. We will add such a schematic together with a brief analysis of the expected dependence on the hyperparameter and system size in the revised manuscript. revision: yes
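One generic mechanism for the "tunable post-selection rate" invoked in the first response (a textbook illustration, not the circuit from the manuscript): prepare an ancilla with a rotation R_y(θ) and post-select on |0⟩. The acceptance probability cos²(θ/2) then varies continuously with θ, which is exactly the kind of knob a trainable hyperparameter could drive.

```python
import numpy as np

# Acceptance probability when post-selecting |0> on an ancilla prepared by
# R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>.
for theta in np.linspace(0.0, np.pi, 5):
    print(f"theta={theta:.2f} rad  p_accept={np.cos(theta / 2) ** 2:.3f}")
```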
Circularity Check
No circularity: identification of post-selection is interpretive, not self-referential or fitted-by-construction.
Full rationale
The provided abstract and claims identify post-selection as the hinge for interpolation between classical and quantum tensor networks, then propose a hyperparameter controlling its amount to allocate enforcement of quantum constraints. This step does not reduce any derived quantity to its own inputs by construction, nor does it rename a fitted parameter as a prediction, smuggle an ansatz via self-citation, or invoke a uniqueness theorem from the authors' prior work. No equations are shown equating the hyperparameter to post-selection success probability or enforcement level tautologically. The framework remains an external interpretive mapping rather than a closed loop, and is therefore self-contained against benchmarks of tensor-network expressivity and hybrid inference.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- Cost.FunctionalEquation.washburn_uniqueness_aczel (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: Λ[ρ] = Σᵢ Kᵢ ρ Kᵢ† with Σᵢ Kᵢ† Kᵢ = I ... Stinespring dilation Λ[ρ_A] = tr_B(U_AB (ρ_A ⊗ |0⟩⟨0|_B) U_AB†).
- Foundation.AlphaCoordinateFixation.J_uniquely_calibrated_via_higher_derivative (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: L_w = (1/N) Σᵢ −tr(τᵢ log Λ[σᵢ]) − (1−w)(1/N) Σᵢ −log tr(Λ[σᵢ]); monotonicity via the operator-monotone logarithm.
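Both quoted passages admit a quick numerical sanity check. The sketch below is generic numpy, not code from the paper: the two-Kraus channel, the completion of the isometry to a unitary, and the toy states are all illustrative choices. It first verifies the Kraus/Stinespring identity from the first passage, then evaluates the interpolating loss L_w from the second passage on a post-selected (trace-decreasing) Kraus branch.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(7)
d = 2  # system dimension; one two-level ancilla

def random_density_matrix(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

# Isometry V = sum_i |i>_B (x) K_i: its row blocks are Kraus operators,
# so sum_i K_i† K_i = V†V = I holds by construction.
V, _ = np.linalg.qr(rng.normal(size=(2 * d, d)) + 1j * rng.normal(size=(2 * d, d)))
K = [V[:d, :], V[d:, :]]

# Complete V to a unitary U_AB whose last d columns span the complement.
comp = rng.normal(size=(2 * d, d)) + 1j * rng.normal(size=(2 * d, d))
comp -= V @ (V.conj().T @ comp)
Q, _ = np.linalg.qr(comp)
U = np.hstack([V, Q])

# First passage: tr_B(U_AB (rho_A ⊗ |0><0|_B) U_AB†) equals sum_i K_i rho K_i†.
rho = random_density_matrix(d)
ket0 = np.zeros((2, 2)); ket0[0, 0] = 1.0
joint = U @ np.kron(ket0, rho) @ U.conj().T          # ancilla is the outer factor
traced = joint.reshape(2, d, 2, d).trace(axis1=0, axis2=2)
print(np.allclose(traced, sum(Ki @ rho @ Ki.conj().T for Ki in K)))   # True

# Second passage: L_w with Lambda the post-selected branch K_0 sigma K_0†,
# which is trace-decreasing, so the -log tr(Lambda[sigma_i]) term is non-trivial.
def L_w(taus, sigmas, kraus, w):
    N = len(taus)
    lam = lambda s: sum(Ki @ s @ Ki.conj().T for Ki in kraus)
    cross = sum(-np.trace(t @ logm(lam(s))).real for t, s in zip(taus, sigmas))
    norm = sum(-np.log(np.trace(lam(s)).real) for s in sigmas)
    return cross / N - (1 - w) * norm / N

taus = [random_density_matrix(d) for _ in range(3)]
sigmas = [random_density_matrix(d) for _ in range(3)]
for w in (0.0, 0.5, 1.0):
    print(f"w={w:.1f}  L_w={L_w(taus, sigmas, [K[0]], w):.4f}")
```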
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
[1] Grohs, P., Kutyniok, G.: Mathematical Aspects of Deep Learning. Cambridge University Press. https://doi.org/10.1017/9781009025096
[2] Schuld, M., Petruccione, F.: Machine Learning with Quantum Computers. Springer. https://doi.org/10.1007/978-3-030-83098-4
[3] Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press. https://doi.org/10.1017/cbo9780511976667
[4] White, S.R.: Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett. 69(19), 2863–2866. https://doi.org/10.1103/physrevlett.69.2863
[5] Ran, S.-J., Tirrito, E., Peng, C., Chen, X., Tagliacozzo, L., Su, G., Lewenstein, M.: Tensor Network Contractions: Methods and Applications to Quantum Many-Body Systems. Springer. https://doi.org/10.1007/978-3-030-34489-4
[6] Bridgeman, J.C., Chubb, C.T.: Hand-waving and interpretive dance: an introductory course on tensor networks. J. Phys. A: Math. Theor. 50(22), 223001. https://doi.org/10.1088/1751-8121/aa6dc3
[7] Orus, R.: A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Ann. Phys. 349, 117–158. https://doi.org/10.1016/j.aop.2014.06.013 arXiv:1306.2164
[8] Schollwoeck, U.: The density-matrix renormalization group in the age of matrix product states. https://doi.org/10.48550/ARXIV.1008.3477
[9] Pan, F., Chen, K., Zhang, P.: Solving the sampling problem of the Sycamore quantum circuits. https://doi.org/10.48550/ARXIV.2111.03011
[11] Berezutskii, A., Liu, M., Acharya, A., Ellerbrock, R., Gray, J., Haghshenas, R., He, Z., Khan, A., Kuzmin, V., Lyakh, D., Lykov, D., Mandrà, S., Mansell, C., Melnikov, A., Melnikov, A., Mironov, V., Morozov, D., Neukart, F., Nocera, A., Perlin, M.A., Perelshtein, M., Steinberg, M., Shaydulin, R., Villalonga, B., Pflitsch, M., Pistoia, M., Vinokur, V., A...
[12] Bengtsson, I., Życzkowski, K.: Geometry of Quantum States: An Introduction to Quantum Entanglement. Cambridge University Press. https://doi.org/10.1017/9781139207010
[13] Wilde, M.M.: Quantum Information Theory. Cambridge University Press. https://doi.org/10.1017/9781316809976
[14] Rieser, H.-M., Köster, F., Raulf, A.P.: Tensor networks for quantum machine learning. Proc. R. Soc. A 479(2275). https://doi.org/10.1098/rspa.2023.0218
[15] Zaletel, M.P., Pollmann, F.: Isometric tensor network states in two dimensions. Phys. Rev. Lett. 124(3), 037201. https://doi.org/10.1103/physrevlett.124.037201
[16] Huggins, W., Patil, P., Mitchell, B., Whaley, K.B., Stoudenmire, E.M.: Towards quantum machine learning with tensor networks. Quantum Sci. Technol. 4(2), 024001. https://doi.org/10.1088/2058-9565/aaea94
[17] Stoudenmire, E.M.: Learning relevant features of data with multi-scale tensor networks. Quantum Sci. Technol. 3(3), 034003. https://doi.org/10.1088/2058-9565/aaba1a
[18] Dilip, R., Liu, Y.-J., Smith, A., Pollmann, F.: Data compression for quantum machine learning. https://doi.org/10.48550/ARXIV.2204.11170
[19] Hickmann, M.L., Alves, P., Quero, D., Schwenker, F., Rieser, H.-M.: Hybrid quantum tensor networks for aeroelastic applications. Quantum Mach. Intell. 7(2). https://doi.org/10.1007/s42484-025-00327-8
[20] Wall, M.L., Titum, P., Quiroz, G., Foss-Feig, M., Hazzard, K.R.A.: Tensor-network discriminator architecture for classification of quantum data on quantum computers. Phys. Rev. A 105(6), 062439. https://doi.org/10.1103/physreva.105.062439
[21] Stoudenmire, E.M., Schwab, D.J.: Supervised learning with quantum-inspired tensor networks. Advances in Neural Information Processing Systems 29, 4799. https://doi.org/10.48550/ARXIV.1605.05775
[22] Novikov, A., Trofimov, M., Oseledets, I.: Exponential machines (2016). https://doi.org/10.48550/ARXIV.1605.03795 arXiv:1605.03795
[23] Harada, K., Okubo, T., Kawashima, N.: Tensor tree learns hidden relational structures in data to construct generative models. Mach. Learn.: Sci. Technol. 6(2), 025002. https://doi.org/10.1088/2632-2153/adc2c7
[24] Pomarico, D., Cilli, R., Monaco, A., Bellantuono, L., La Rocca, M., Maggipinto, T., Magnifico, G., Ortega, M.O., Pantaleo, E., Tangaro, S., Stramaglia, S., Bellotti, R., Amoroso, N.: Transfer entropy and O-information to detect grokking in tensor network multi-class classification problems. arXiv. https://doi.org/10.48550/ARXIV.2507.23346
[25] Pomarico, D., Monaco, A., Magnifico, G., Lacalamita, A., Pantaleo, E., Bellantuono, L., Tangaro, S., Maggipinto, T., La Rocca, M., Picardi, E., Amoroso, N., Pesole, G., Stramaglia, S., Bellotti, R.: Grokking as an entanglement transition in tensor network machine learning. arXiv. https://doi.org/10.48550/ARXIV.2503.10483
[26] Tomut, A., Jahromi, S.S., Sarkar, A., Kurt, U., Singh, S., Ishtiaq, F., Muñoz, C., Bajaj, P.S., Elborady, A., Bimbo, G., Alizadeh, M., Montero, D., Martin-Ramiro, P., Ibrahim, M., Alaoui, O.T., Malcolm, J., Mugel, S., Orus, R.: CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks. arXiv. https://doi.org/10.485...
[27] Wang, M., Pan, Y., Xu, Z., Li, G., Yang, X., Mandic, D., Cichocki, A.: Tensor Networks Meet Neural Networks: A Survey and Future Perspectives. arXiv. https://doi.org/10.48550/ARXIV.2302.09019
[28] Yuan, X., Sun, J., Liu, J., Zhao, Q., Zhou, Y.: Quantum simulation with hybrid tensor networks. Phys. Rev. Lett. 127(4), 040501. https://doi.org/10.1103/physrevlett.127.040501 arXiv:2007.00958
[29]
[30] Huang, J., He, W., Zhang, Y., Wu, Y., Wu, B., Yuan, X.: Tensor-network-assisted variational quantum algorithm. Phys. Rev. A 108(5), 052407. https://doi.org/10.1103/physreva.108.052407 arXiv:2212.10421
[31] Xiu, Y., Ye, F., Chen, Z., Liu, Y.: Hybrid tensor networks for fully supervised and semisupervised hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 16, 7882–7895. https://doi.org/10.1109/jstars.2023.3308723
[32] Schuhmacher, J., Ballarin, M., Baiardi, A., Magnifico, G., Tacchino, F., Montangero, S., Tavernelli, I.: Hybrid tree tensor networks for quantum simulation. PRX Quantum 6(1), 010320. https://doi.org/10.1103/prxquantum.6.010320 arXiv:2404.05784
[33] Harada, H., Suzuki, Y., Yang, B., Tokunaga, Y., Endo, S.: Density matrix representation of hybrid tensor networks for noisy quantum devices. Quantum 9, 1823. https://doi.org/10.22331/q-2025-08-07-1823 arXiv:2309.15761v4
[34] Yao, J.: Hybrid tensor networks: The integration of quantum and classical machine learning. In: 2024 2nd International Conference on Computer, Vision and Intelligent Technology (ICCVIT), pp. 1–5. IEEE. https://doi.org/10.1109/iccvit63928.2024.10872520
[35] Cao, Y., Guerreschi, G.G., Aspuru-Guzik, A.: Quantum neuron: an elementary building block for machine learning on quantum computers (2017). https://doi.org/10.48550/ARXIV.1711.11240 arXiv:1711.11240
[36] Boumal, N.: An Introduction to Optimization on Smooth Manifolds, 1st edn. Cambridge University Press. https://doi.org/10.1017/9781009166164
[37] Siegl, P., Reese, G.S., Hashizume, T., Hülst, N.-L., Jaksch, D.: Tensor-Programmable Quantum Circuits for Solving Differential Equations. arXiv. https://doi.org/10.48550/ARXIV.2502.04425
[38] Termanova, A., Melnikov, A., Mamenchikov, E., Belokonev, N., Dolgov, S., Berezutskii, A., Ellerbrock, R., Mansell, C., Perelshtein, M.: Tensor Quantum Programming. arXiv. https://doi.org/10.48550/ARXIV.2403.13486
[39] Suzuki, Y., Tiang, B.H., Son, J., Ng, N.H.Y., Holmes, Z., Gluza, M.: Double-bracket algorithm for quantum signal processing without post-selection. Quantum 9, 1954. https://doi.org/10.22331/q-2025-12-23-1954
[40] Zylberman, J., Nzongani, U., Simonetto, A., Debbasch, F.: Efficient quantum circuits for non-unitary and unitary diagonal operators with space-time-accuracy trade-offs. 6(2), 1–43. https://doi.org/10.1145/3718348
[41] Hirose, A.: Complex-Valued Neural Networks. Springer. https://doi.org/10.1007/978-3-642-27632-3
[42] Lee, C., Hasegawa, H., Gao, S.: Complex-valued neural networks: A comprehensive survey. IEEE/CAA J. Autom. Sinica 9(8), 1406–1426. https://doi.org/10.1109/jas.2022.105743
[43] Projective characterization of higher-order quantum transformations (2022)
[44] Barratt, F., Dborin, J., Wright, L.: Improvements to gradient descent methods for quantum tensor network machine learning. https://doi.org/10.48550/ARXIV.2203.03366
[45] Shangnan, Z., Wang, Y.: Quantum cross entropy and maximum likelihood principle (2021). https://doi.org/10.48550/ARXIV.2102.11887 arXiv:2102.11887
[46] Geng, C., Hu, H.-Y., Zou, Y.: Differentiable programming of isometric tensor networks. Mach. Learn.: Sci. Technol. 3(1), 015020. https://doi.org/10.1088/2632-2153/ac48a2 arXiv:2110.03898
[47] Jäger, G., Plenio, M.B., Rieser, H.-M.: Quantum tensor network learning with DMRG. In: ESANN 2025 Proceedings, pp. 537–542. Ciaco - i6doc.com. https://doi.org/10.14428/esann/2025.es2025-157
[48] Chansangiam, P.: A survey on operator monotonicity, operator convexity, and operator means. 2015, 1–8. https://doi.org/10.1155/2015/649839
[49] Hiai, F.: Matrix analysis: Matrix monotone functions, matrix means, and majorization. Interdiscip. Inform. Sci. 16(2), 139–246. https://doi.org/10.4036/iis.2010.139
[50] Furuta, T.: Operator monotone functions, a > b > 0 and log a > log b. J. Math. Inequal. 7(1), 93–96. https://doi.org/10.7153/jmi-07-08
[51] Le, I.N.M., Sun, S., Mendl, C.B.: Riemannian quantum circuit optimization based on matrix product operators. Quantum 9, 1833. https://doi.org/10.22331/q-2025-08-27-1833
[52] Fisher, R.A.: The use of multiple measurements in taxonomic problems. Annals of Eugenics 7(2), 179–188. https://doi.org/10.1111/j.1469-1809.1936.tb02137.x
[53] Anderson, E.: The species problem in Iris. Ann. Missouri Bot. Gard. 23(3), 457. https://doi.org/10.2307/2394164
[54] McClean, J.R., Boixo, S., Smelyanskiy, V.N., Babbush, R., Neven, H.: Barren plateaus in quantum neural network training landscapes. Nature Communications 9(1) (2018). https://doi.org/10.1038/s41467-018-07090-4 arXiv:1803.11173
[55] Gray, J.: quimb: A python package for quantum information and many-body calculations. J. Open Source Softw. 3(29), 819. https://doi.org/10.21105/joss.00819
[56]

A The Hyperparameter and the MSE loss

With the partial normalization N_h dependent on the hyperparameter h (either t or w), the MSE loss is defined as

L_MSE,h = (1/N) Σᵢ₌₁ᴺ (1/2)(τᵢ − N_h ∘ Λ[σᵢ])².  (20)

Again, for h = 0 the normalization i...

... and statements about operator monotonicity [48]. Specifically, using the triangle inequality of distinguishability measures [12].
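A minimal way to evaluate Eq. (20) numerically, with two assumptions made explicit because the extracted fragment does not fix them: the partial normalization N_h is passed in as a user-supplied callable, and the matrix square is read as a Frobenius norm.

```python
import numpy as np

def L_mse_h(taus, sigmas, kraus, normalize_h):
    """Eq. (20): L_MSE,h = (1/N) sum_i (1/2) ||tau_i - N_h(Lambda[sigma_i])||^2.
    `normalize_h` stands in for the partial normalization N_h, which the
    fragment above leaves undefined; Frobenius norm is an assumption."""
    total = 0.0
    for tau, sigma in zip(taus, sigmas):
        out = normalize_h(sum(K @ sigma @ K.conj().T for K in kraus))
        total += 0.5 * np.linalg.norm(tau - out) ** 2
    return total / len(taus)

# Example choice of N_h: full renormalization by the output trace.
renormalize = lambda m: m / np.trace(m).real
```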