Recognition: no theorem link
Non-variational supervised quantum kernel methods: a review
Pith reviewed 2026-05-10 18:14 UTC · model grok-4.3
The pith
Non-variational quantum kernel methods achieve stable training by fixing quantum feature maps and performing model selection through classical convex optimization.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Non-variational supervised quantum kernel methods employ fixed quantum feature maps to encode data, followed by classical convex optimization for model selection, thereby ensuring stable training without gradient-based issues. The review analyzes their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, estimation techniques on hardware, generalization bounds, and conditions for quantum advantage. It further examines challenges including exponential concentration of kernel values, dequantization via tensor networks, and spectral properties of kernel operators, and synthesizes evidence from comparative studies and hardware experiments on the regimes where genuine quantum advantage may be possible.
What carries the argument
Fixed quantum feature embedding separated from classical convex training, which isolates quantum data encoding from model fitting to guarantee stable optimization.
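This separation can be made concrete in a few lines. The following is a minimal numpy sketch, not taken from the reviewed paper: the angle-encoding product-state map and the toy regression target are illustrative assumptions standing in for a genuine quantum circuit. The point is structural: the embedding is fixed, and training is a single closed-form convex solve (kernel ridge regression), with no gradients on quantum parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x):
    """Fixed angle-encoding map (illustrative): |psi(x)> = tensor_j (cos x_j |0> + sin x_j |1>).
    A toy product-state stand-in for a quantum embedding; no trainable parameters."""
    state = np.array([1.0 + 0j])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj), np.sin(xj)], dtype=complex))
    return state

def fidelity_kernel(X, Y):
    """Gram matrix K[i, j] = |<psi(x_i)|psi(y_j)>|^2."""
    Px = np.array([feature_map(x) for x in X])
    Py = np.array([feature_map(y) for y in Y])
    return np.abs(Px.conj() @ Py.T) ** 2

X = rng.uniform(0, np.pi, size=(40, 3))
y = np.cos(2 * X).prod(axis=1)  # toy target, chosen to lie in the kernel's RKHS

# Classical convex step: kernel ridge regression has a closed-form solution,
# so there is no gradient descent over quantum parameters at all.
K = fidelity_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)

# Prediction reuses the *same fixed* embedding on new points.
X_test = rng.uniform(0, np.pi, size=(10, 3))
y_pred = fidelity_kernel(X_test, X) @ alpha
```

Swapping in a hardware-estimated Gram matrix leaves the classical training step unchanged, which is the stability argument in miniature.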
If this is right
- Stable optimization follows directly once the quantum embedding is fixed and training reduces to convex problems.
- Quantum advantage requires structured problem classes that satisfy necessary separation conditions from classical kernels.
- Generalization bounds derived from kernel integral operators provide a concrete way to test for advantage.
- Exponential concentration and dequantization must be overcome for any claimed separation to survive in practice.
- Hardware studies can validate whether fidelity or projected kernels retain useful spectral properties.
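The spectral test in the third bullet can be probed empirically: the eigenvalues of the normalized Gram matrix K/m approximate the spectrum of the kernel integral operator under the data distribution, and fast eigenvalue decay is the regime in which kernel generalization bounds are favorable. A small numpy sketch, using a classical RBF kernel purely as a stand-in for any fixed kernel (the kernel choice and Gaussian data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, Y, gamma=0.5):
    """Classical RBF kernel, standing in for any fixed (quantum) kernel."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

m = 200
X = rng.normal(size=(m, 2))

# Eigenvalues of K/m approximate the integral operator's spectrum
# under the sampling distribution (a Nystrom-type approximation).
K = rbf_kernel(X, X)
evals = np.linalg.eigvalsh(K / m)[::-1]  # descending order

# Fast decay => small effective dimension => tighter generalization bounds;
# a nearly flat spectrum (as under exponential concentration) is a warning sign.
eff_dim = (evals.sum() ** 2) / (evals ** 2).sum()  # participation ratio
```

Running the same diagnostic on a hardware-estimated quantum Gram matrix is one concrete way to check whether fidelity or projected kernels retain useful spectral structure.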
Where Pith is reading between the lines
- The separation strategy may allow direct transfer of classical kernel regularization techniques to quantum settings without modification.
- Structured problems identified here could be used to design targeted benchmarks that isolate quantum embedding benefits.
- If concentration is mitigated in one class, the same fixed-map approach might extend to unsupervised or generative quantum tasks.
- Comparative studies in the review suggest that advantage claims should be tested against specific classical baselines rather than generic ones.
Load-bearing premise
Practical estimation of quantum kernels on near-term hardware can be carried out before exponential concentration or classical dequantization erase any potential separation from classical models.
What would settle it
Demonstration on quantum hardware that kernel matrices for a candidate advantageous problem class exhibit full exponential concentration or that a tensor-network classical method matches the quantum model's accuracy and generalization.
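The concentration half of this test can be illustrated directly. Below is a numpy sketch under the standard idealization that an unstructured embedding behaves like a Haar-random state; this sampling model is an assumption for illustration, not the reviewed paper's construction. As qubit count grows, off-diagonal fidelity values collapse toward 1/2^n with variance ~1/4^n:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_state(dim):
    """Haar-random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# For Haar-like embeddings on n qubits, fidelity kernel values |<psi|phi>|^2
# have mean 1/2^n and shrinking spread: exponential concentration, so
# polynomially many shots cannot resolve them from zero.
stats = {}
for n_qubits in (2, 6, 10):
    dim = 2 ** n_qubits
    vals = np.array([abs(np.vdot(haar_state(dim), haar_state(dim))) ** 2
                     for _ in range(300)])
    stats[n_qubits] = (vals.mean(), vals.std())
```

A structured problem class escapes this trap only if its feature map keeps kernel values well away from this Haar-random baseline.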
Original abstract
Quantum kernel methods (QKMs) have emerged as a prominent framework for supervised quantum machine learning. Unlike variational quantum algorithms, which rely on gradient-based optimisation and may suffer from issues such as barren plateaus, non-variational QKMs employ fixed quantum feature maps, with model selection performed classically via convex optimisation and cross-validation. This separation of quantum feature embedding from classical training ensures stable optimisation while leveraging quantum circuits to encode data in high-dimensional Hilbert spaces. In this review, we provide a thorough analysis of non-variational supervised QKMs, covering their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, and methods for their estimation in practice. We examine frameworks for assessing quantum advantage, including generalisation bounds and necessary conditions for separation from classical models, and analyse key challenges such as exponential concentration, dequantisation via tensor-network methods, and the spectral properties of kernel integral operators. We further discuss structured problem classes that may enable advantage, and synthesise insights from comparative and hardware studies. Overall, this review aims to clarify the regimes in which QKMs may offer genuine advantages, and to delineate the conceptual, methodological, and technical obstacles that must be overcome for practical quantum-enhanced learning.
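The "methods for their estimation in practice" mentioned in the abstract can be sketched in an idealized form. Assuming a noiseless compute-uncompute (overlap) test, where each shot returns the all-zeros outcome with probability equal to the kernel entry, estimation reduces to a binomial proportion; device noise and readout error are deliberately ignored in this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_kernel_entry(k_true, shots):
    """Idealized shot-based estimate of one kernel entry: a compute-uncompute
    test yields the all-zeros outcome with probability k_true per shot, so the
    estimator is a binomial proportion (hardware noise ignored)."""
    return rng.binomial(shots, k_true) / shots

k_true, shots = 0.3, 1000
estimates = np.array([estimate_kernel_entry(k_true, shots) for _ in range(500)])

# Standard error ~ sqrt(k(1-k)/shots). If concentration drives k_true toward
# 2^-n, exponentially many shots are needed to distinguish it from zero.
std_err = np.sqrt(k_true * (1 - k_true) / shots)
```

This shot-noise floor is what links the estimation and concentration discussions: the review's obstacles are statements about this estimator, not only about the ideal kernel.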
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This review synthesizes non-variational supervised quantum kernel methods (QKMs), contrasting them with variational approaches by emphasizing fixed quantum feature maps combined with classical convex optimization and cross-validation. It covers foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, practical estimation on hardware, generalization bounds and necessary conditions for quantum advantage, challenges including exponential concentration, tensor-network dequantization, and spectral properties of kernel operators, as well as structured problem classes that may permit advantage and insights from comparative/hardware studies.
Significance. If the synthesis is accurate, the review is significant for organizing the literature on non-variational QKMs, explicitly crediting the separation of fixed quantum embedding from classical convex training as the source of stable optimization (a direct consequence of standard kernel theory), and delineating open challenges and necessary conditions for advantage rather than claiming resolutions. It provides a useful reference point for the field by framing exponential concentration and dequantization as open obstacles under active analysis rather than as resolved issues.
major comments (2)
- [Challenges and advantage assessment sections] § on exponential concentration and dequantization: the discussion of regimes where advantage may persist assumes that practical kernel estimation can overcome concentration effects in the claimed structured classes, but no quantitative bound or explicit condition (e.g., on circuit depth or data distribution) is derived to delineate when this holds versus when tensor-network dequantization succeeds; this is load-bearing for the central claim that advantage remains possible.
- [Frameworks for assessing quantum advantage] Generalization bounds section: the review cites external results on kernel generalization but does not verify or reproduce the dependence on the quantum feature map's properties (e.g., the RKHS norm or eigenvalue decay of the integral operator) for the specific fidelity/projected kernels discussed; without this, the claimed separation from classical models remains at the level of necessary conditions rather than demonstrated sufficiency.
minor comments (3)
- [Abstract and Introduction] The abstract and introduction use 'non-variational' and 'fixed quantum feature maps' interchangeably; a brief clarifying sentence on whether all non-variational methods are strictly fixed (no trainable parameters at all) would improve precision.
- [Comparative and hardware studies] Comparative and hardware studies section: several cited numerical results on kernel estimation are summarized without reporting the circuit depths, number of shots, or device noise models used; adding these details would strengthen the synthesis of practical feasibility.
- [Constructions of fidelity and projected quantum kernels] Notation for projected quantum kernels is introduced without an explicit equation linking the projection operator to the classical kernel matrix; a short derivation or reference to the defining equation would aid readability.
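Regarding the last minor comment: one common defining equation from the literature (e.g., Huang et al., "Power of data in quantum machine learning") takes k_P(x, x') = exp(-gamma * sum_k ||rho_k(x) - rho_k(x')||_F^2), where rho_k is the single-qubit reduced density matrix of the embedded state. A minimal numpy sketch of that definition (the two-qubit example states are illustrative, and this is one definition among several in the literature):

```python
import numpy as np

def reduced_density_matrix(state, qubit, n_qubits):
    """Single-qubit reduced density matrix rho_k = Tr_{j != k} |psi><psi|."""
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, qubit, 0).reshape(2, -1)
    return psi @ psi.conj().T

def projected_kernel(state_x, state_y, n_qubits, gamma=1.0):
    """One common projected-kernel definition (Huang et al. 2021):
    k_P(x, x') = exp(-gamma * sum_k ||rho_k(x) - rho_k(x')||_F^2)."""
    dist = 0.0
    for k in range(n_qubits):
        diff = (reduced_density_matrix(state_x, k, n_qubits)
                - reduced_density_matrix(state_y, k, n_qubits))
        dist += np.linalg.norm(diff) ** 2  # Frobenius norm squared
    return np.exp(-gamma * dist)

# example: |00> vs |01> (big-endian amplitude ordering)
s00 = np.array([1, 0, 0, 0], dtype=complex)
s01 = np.array([0, 1, 0, 0], dtype=complex)
k_same = projected_kernel(s00, s00, n_qubits=2)
k_diff = projected_kernel(s00, s01, n_qubits=2)
```

The projection onto local density matrices is what makes this kernel classically post-processable from local measurements, which is the property the review's estimation and concentration sections turn on.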
Simulated Author's Rebuttal
We thank the referee for the constructive comments and the recommendation for minor revision. We address each major comment point by point below, with proposed revisions to improve clarity and precision while remaining faithful to the review's scope as a synthesis of the literature.
Point-by-point responses
- Referee: [Challenges and advantage assessment sections] § on exponential concentration and dequantization: the discussion of regimes where advantage may persist assumes that practical kernel estimation can overcome concentration effects in the claimed structured classes, but no quantitative bound or explicit condition (e.g., on circuit depth or data distribution) is derived to delineate when this holds versus when tensor-network dequantization succeeds; this is load-bearing for the central claim that advantage remains possible.
  Authors: We agree that the review does not derive new quantitative bounds, as its purpose is to synthesize existing results rather than present original theoretical derivations. The sections on challenges and advantage assessment summarize the literature on exponential concentration for random and structured quantum circuits, tensor-network dequantization methods, and conjectured regimes (e.g., low-entanglement or geometrically structured data) where advantage may persist according to cited analyses. To address the concern, we will revise the relevant paragraphs to explicitly state that no general quantitative condition (such as explicit bounds on circuit depth or data distribution) has been established to separate regimes where kernel estimation overcomes concentration from those where dequantization succeeds. The revision will frame the discussion as outlining necessary conditions from the literature and highlight this delineation as an open challenge, thereby avoiding any overstatement of sufficiency for practical advantage.
  Revision: partial
- Referee: [Frameworks for assessing quantum advantage] Generalization bounds section: the review cites external results on kernel generalization but does not verify or reproduce the dependence on the quantum feature map's properties (e.g., the RKHS norm or eigenvalue decay of the integral operator) for the specific fidelity/projected kernels discussed; without this, the claimed separation from classical models remains at the level of necessary conditions rather than demonstrated sufficiency.
  Authors: The generalization bounds section cites foundational results from classical kernel theory and their quantum extensions in the referenced works, which analyze the dependence of generalization on properties such as the RKHS norm for fidelity kernels and eigenvalue decay of the integral operator for projected kernels. As a review, we summarize these results and their implications for quantum kernels without reproducing full proofs or performing new verifications. In the revision, we will add a brief summary paragraph outlining how these properties apply to the fidelity and projected kernels as reported in the cited literature, and we will explicitly note that any separation from classical models is discussed at the level of necessary conditions identified therein. This will make the section more self-contained while accurately reflecting the current state of the field.
  Revision: partial
Circularity Check
No significant circularity
Rationale
This is a review paper that synthesizes foundations from classical kernel theory, constructions of fidelity and projected quantum kernels, and analyses of challenges such as exponential concentration and dequantisation. All load-bearing claims are supported by external citations to prior literature on kernel methods and quantum information rather than by internal derivations, fitted parameters, or self-citations that reduce to the paper's own inputs by construction. The separation of fixed quantum feature maps from classical convex optimisation follows directly from standard results in convex optimisation and kernel theory, with no self-definitional loops or renamed predictions present in the manuscript.
Axiom & Free-Parameter Ledger
axioms (1)
- standard math: Foundations of classical kernel methods and quantum feature maps from prior literature
Forward citations
Cited by 1 Pith paper
- Wavelet Variance Equipartition as a Threshold for World-Model Quality and Quantum Kernel TN-Simulability
Wavelet scaling α = 1/2 separates classically simulable area-law from volume-law phases for quantum kernels in world-model latents, with empirical VideoMAE latents and a Θ(d^{-2}) variance bound implying simulation ha...