Pith · machine review for the scientific record

arxiv: 2604.07896 · v1 · submitted 2026-04-09 · 🪐 quant-ph · cs.LG

Recognition: no theorem link

Non-variational supervised quantum kernel methods: a review

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:14 UTC · model grok-4.3

classification 🪐 quant-ph cs.LG
keywords quantum kernel methods · supervised quantum machine learning · non-variational quantum algorithms · quantum advantage · exponential concentration · fidelity quantum kernels · generalization bounds · tensor network dequantization

The pith

Non-variational quantum kernel methods achieve stable training by fixing quantum feature maps and performing model selection through classical convex optimization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reviews how non-variational supervised quantum kernel methods use fixed quantum circuits to embed data into high-dimensional Hilbert spaces, then train models classically via convex optimization and cross-validation. This separation avoids the optimization instabilities of variational quantum algorithms while still leveraging quantum circuits for feature representation. A reader would care because the review maps out, through generalization bounds and necessary conditions, when such methods might separate from classical performance, while cataloging concrete barriers such as kernel concentration and classical simulability. It focuses on fidelity and projected kernel constructions, their practical estimation, and the structured problem classes where advantage remains plausible.
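The two-stage pipeline described above can be made concrete with a toy sketch. The feature map below (one RY rotation per qubit) and the use of kernel ridge regression as the convex classical step are illustrative assumptions, not constructions taken from the paper; the point is only the structural split between a fixed quantum embedding and closed-form classical training.

```python
import numpy as np

def feature_state(x):
    """Fixed (non-variational) feature map: one RY(x_j) rotation per
    qubit applied to |0...0>, built as a tensor product of 2-vectors."""
    psi = np.array([1.0])
    for xj in x:
        psi = np.kron(psi, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return psi

def fidelity_kernel(X, Y):
    """K[i, j] = |<psi(x_i)|psi(y_j)>|^2, the fidelity quantum kernel."""
    return np.array([[abs(feature_state(x) @ feature_state(y)) ** 2
                      for y in Y] for x in X])

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(20, 3))     # 20 samples, 3 qubits
y = np.sin(X.sum(axis=1))                   # toy regression target

# Classical, convex, closed-form training on the fixed quantum kernel:
K = fidelity_kernel(X, X)
lam = 1e-3                                  # ridge regularizer
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = rng.uniform(0, np.pi, size=(5, 3))
preds = fidelity_kernel(X_test, X) @ alpha
```

Because the embedding is fixed, nothing in the training step is quantum: once `K` is estimated, model selection is an ordinary regularized kernel problem with a unique optimum.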

Core claim

Non-variational supervised quantum kernel methods employ fixed quantum feature maps to encode data, followed by classical convex optimization for model selection, thereby ensuring stable training without gradient-based issues. The review analyzes their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, estimation techniques on hardware, generalization bounds, and conditions for quantum advantage. It further examines challenges including exponential concentration of kernel values, dequantization via tensor networks, and spectral properties of kernel operators, and synthesizes evidence from comparative studies and hardware experiments on the regimes where genuine advantage may be possible.

What carries the argument

Fixed quantum feature embedding separated from classical convex training, which isolates quantum data encoding from model fitting to guarantee stable optimization.

If this is right

  • Stable optimization follows directly once the quantum embedding is fixed and training reduces to convex problems.
  • Quantum advantage requires structured problem classes that satisfy necessary separation conditions from classical kernels.
  • Generalization bounds derived from kernel integral operators provide a concrete way to test for advantage.
  • Exponential concentration and dequantization must be overcome for any claimed separation to survive in practice.
  • Hardware studies can validate whether fidelity or projected kernels retain useful spectral properties.
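The concentration obstacle in the list above has a simple worst-case illustration: for Haar-random states on n qubits, the expected fidelity kernel value is 1/2^n, so off-diagonal kernel entries shrink exponentially with qubit count. The sketch below samples Haar-random states via normalized complex Gaussians (a standard construction, used here as a stand-in for an unstructured feature map, not a model from the paper):

```python
import numpy as np

def haar_state(n_qubits, rng):
    """Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.normal(size=2 ** n_qubits) + 1j * rng.normal(size=2 ** n_qubits)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
mean_fidelity = {}
for n in (2, 4, 6, 8):
    fids = [abs(np.vdot(haar_state(n, rng), haar_state(n, rng))) ** 2
            for _ in range(2000)]
    mean_fidelity[n] = np.mean(fids)   # theory: 1 / 2^n for Haar states
    print(n, mean_fidelity[n])
```

Structured feature maps are precisely an attempt to escape this Haar-like regime, which is why the review's structured problem classes matter for any advantage claim.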

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The separation strategy may allow direct transfer of classical kernel regularization techniques to quantum settings without modification.
  • Structured problems identified here could be used to design targeted benchmarks that isolate quantum embedding benefits.
  • If concentration is mitigated in one class, the same fixed-map approach might extend to unsupervised or generative quantum tasks.
  • Comparative studies in the review suggest that advantage claims should be tested against specific classical baselines rather than generic ones.

Load-bearing premise

Practical estimation of quantum kernels on near-term hardware can be carried out before exponential concentration or classical dequantization erase any potential separation from classical models.
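This premise can be quantified with a back-of-envelope shot budget. Modeling kernel estimation as a Bernoulli measurement (as in swap-test or overlap-circuit estimators — an assumption about the estimator, not a result from the paper), the standard error after M shots is sqrt(k(1-k)/M), so resolving a concentrated value k to fixed relative error costs roughly 1/k shots:

```python
import numpy as np

def shots_for_relative_error(k, rel_err):
    """Shots M so that a Bernoulli estimate of kernel value k reaches
    relative standard error rel_err:
    sqrt(k(1-k)/M) / k <= rel_err  =>  M >= (1-k) / (k * rel_err**2)."""
    return int(np.ceil((1 - k) / (k * rel_err ** 2)))

for n in (4, 8, 12, 16):
    k = 2.0 ** -n   # concentration scale of an unstructured feature map
    print(n, shots_for_relative_error(k, 0.1))
```

If kernel values concentrate at 2^-n, the shot count grows like 2^n, which is the sense in which concentration can erase any separation before it is measurable.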

What would settle it

Demonstration on quantum hardware that kernel matrices for a candidate advantageous problem class exhibit full exponential concentration or that a tensor-network classical method matches the quantum model's accuracy and generalization.

Figures

Figures reproduced from arXiv: 2604.07896 by Chon-Fai Kam, Jingbo Wang, John Tanner.

Figure 1 [PITH_FULL_IMAGE:figures/full_fig_p004_1.png]
Figure 3 [PITH_FULL_IMAGE:figures/full_fig_p008_3.png]
Figure 2 [PITH_FULL_IMAGE:figures/full_fig_p008_2.png]
Figure 4 [PITH_FULL_IMAGE:figures/full_fig_p008_4.png]
Figure 5 [PITH_FULL_IMAGE:figures/full_fig_p009_5.png]
Figure 6 [PITH_FULL_IMAGE:figures/full_fig_p010_6.png]
Figure 7 [PITH_FULL_IMAGE:figures/full_fig_p021_7.png]
Figure 9 [PITH_FULL_IMAGE:figures/full_fig_p022_9.png]
Figure 10 [PITH_FULL_IMAGE:figures/full_fig_p022_10.png]
Figure 11 [PITH_FULL_IMAGE:figures/full_fig_p027_11.png]
Original abstract

Quantum kernel methods (QKMs) have emerged as a prominent framework for supervised quantum machine learning. Unlike variational quantum algorithms, which rely on gradient-based optimisation and may suffer from issues such as barren plateaus, non-variational QKMs employ fixed quantum feature maps, with model selection performed classically via convex optimisation and cross-validation. This separation of quantum feature embedding from classical training ensures stable optimisation while leveraging quantum circuits to encode data in high-dimensional Hilbert spaces. In this review, we provide a thorough analysis of non-variational supervised QKMs, covering their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, and methods for their estimation in practice. We examine frameworks for assessing quantum advantage, including generalisation bounds and necessary conditions for separation from classical models, and analyse key challenges such as exponential concentration, dequantisation via tensor-network methods, and the spectral properties of kernel integral operators. We further discuss structured problem classes that may enable advantage, and synthesise insights from comparative and hardware studies. Overall, this review aims to clarify the regimes in which QKMs may offer genuine advantages, and to delineate the conceptual, methodological, and technical obstacles that must be overcome for practical quantum-enhanced learning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. This review synthesizes non-variational supervised quantum kernel methods (QKMs), contrasting them with variational approaches by emphasizing fixed quantum feature maps combined with classical convex optimization and cross-validation. It covers foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, practical estimation on hardware, generalization bounds and necessary conditions for quantum advantage, challenges including exponential concentration, tensor-network dequantization, and spectral properties of kernel operators, as well as structured problem classes that may permit advantage and insights from comparative/hardware studies.

Significance. If the synthesis is accurate, the review is significant for organizing the literature on non-variational QKMs, explicitly crediting the separation of fixed quantum embedding from classical convex training as the source of stable optimization (a direct consequence of standard kernel theory), and delineating open challenges and necessary conditions for advantage rather than claiming resolutions. It provides a useful reference point for the field by framing exponential concentration and dequantization as analysis of obstacles rather than resolved issues.

major comments (2)
  1. [Challenges and advantage assessment sections] § on exponential concentration and dequantization: the discussion of regimes where advantage may persist assumes that practical kernel estimation can overcome concentration effects in the claimed structured classes, but no quantitative bound or explicit condition (e.g., on circuit depth or data distribution) is derived to delineate when this holds versus when tensor-network dequantization succeeds; this is load-bearing for the central claim that advantage remains possible.
  2. [Frameworks for assessing quantum advantage] Generalization bounds section: the review cites external results on kernel generalization but does not verify or reproduce the dependence on the quantum feature map's properties (e.g., the RKHS norm or eigenvalue decay of the integral operator) for the specific fidelity/projected kernels discussed; without this, the claimed separation from classical models remains at the level of necessary conditions rather than demonstrated sufficiency.
minor comments (3)
  1. [Abstract and Introduction] The abstract and introduction use 'non-variational' and 'fixed quantum feature maps' interchangeably; a brief clarifying sentence on whether all non-variational methods are strictly fixed (no trainable parameters at all) would improve precision.
  2. [Comparative and hardware studies] Comparative and hardware studies section: several cited numerical results on kernel estimation are summarized without reporting the circuit depths, number of shots, or device noise models used; adding these details would strengthen the synthesis of practical feasibility.
  3. [Constructions of fidelity and projected quantum kernels] Notation for projected quantum kernels is introduced without an explicit equation linking the projection operator to the classical kernel matrix; a short derivation or reference to the defining equation would aid readability.
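On the third minor point, stating the two constructions explicitly would indeed aid readability. The forms below follow common literature definitions rather than equations quoted from this paper; in particular the projected kernel is shown in its single-qubit reduced-density-matrix form with a bandwidth hyperparameter γ, and individual papers vary the projection and norm:

```latex
% Fidelity quantum kernel for a fixed feature map x -> |\psi(x)\rangle:
k_F(x, x') = \left|\langle \psi(x) \mid \psi(x') \rangle\right|^2
           = \mathrm{Tr}\!\left[\rho(x)\,\rho(x')\right],
% the second equality holding for pure states
% \rho(x) = |\psi(x)\rangle\langle\psi(x)|.

% Projected quantum kernel built from reduced states
% \rho_k(x) = \mathrm{Tr}_{j \neq k}\!\left[\rho(x)\right] (trace out all qubits but k):
k_P(x, x') = \exp\!\Big(-\gamma \sum_k \big\| \rho_k(x) - \rho_k(x') \big\|_F^2 \Big).
```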

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for minor revision. We address each major comment point by point below, with proposed revisions to improve clarity and precision while remaining faithful to the review's scope as a synthesis of the literature.

point-by-point responses
  1. Referee: [Challenges and advantage assessment sections] § on exponential concentration and dequantization: the discussion of regimes where advantage may persist assumes that practical kernel estimation can overcome concentration effects in the claimed structured classes, but no quantitative bound or explicit condition (e.g., on circuit depth or data distribution) is derived to delineate when this holds versus when tensor-network dequantization succeeds; this is load-bearing for the central claim that advantage remains possible.

    Authors: We agree that the review does not derive new quantitative bounds, as its purpose is to synthesize existing results rather than present original theoretical derivations. The sections on challenges and advantage assessment summarize the literature on exponential concentration for random and structured quantum circuits, tensor-network dequantization methods, and conjectured regimes (e.g., low-entanglement or geometrically structured data) where advantage may persist according to cited analyses. To address the concern, we will revise the relevant paragraphs to explicitly state that no general quantitative condition (such as explicit bounds on circuit depth or data distribution) has been established to separate regimes where kernel estimation overcomes concentration from those where dequantization succeeds. The revision will frame the discussion as outlining necessary conditions from the literature and highlight this delineation as an open challenge, thereby avoiding any overstatement of sufficiency for practical advantage. revision: partial

  2. Referee: [Frameworks for assessing quantum advantage] Generalization bounds section: the review cites external results on kernel generalization but does not verify or reproduce the dependence on the quantum feature map's properties (e.g., the RKHS norm or eigenvalue decay of the integral operator) for the specific fidelity/projected kernels discussed; without this, the claimed separation from classical models remains at the level of necessary conditions rather than demonstrated sufficiency.

    Authors: The generalization bounds section cites foundational results from classical kernel theory and their quantum extensions in the referenced works, which analyze the dependence of generalization on properties such as the RKHS norm for fidelity kernels and eigenvalue decay of the integral operator for projected kernels. As a review, we summarize these results and their implications for quantum kernels without reproducing full proofs or performing new verifications. In the revision, we will add a brief summary paragraph outlining how these properties apply to the fidelity and projected kernels as reported in the cited literature, and we will explicitly note that any separation from classical models is discussed at the level of necessary conditions identified therein. This will make the section more self-contained while accurately reflecting the current state of the field. revision: partial
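The spectral question in this exchange can at least be probed numerically: the eigenvalues of the normalized Gram matrix approximate the spectrum of the kernel integral operator, and their decay rate is what the cited generalization bounds depend on. A minimal sketch, assuming an illustrative single-qubit-rotation product feature map (not a map from the paper):

```python
import numpy as np

def feature_state(x):
    """Illustrative fixed feature map: one RY(x_j) rotation per qubit."""
    psi = np.array([1.0])
    for xj in x:
        psi = np.kron(psi, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return psi

rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(200, 4))                 # 200 samples, 4 qubits
states = np.array([feature_state(x) for x in X])         # shape (200, 16)
K = np.abs(states @ states.T) ** 2                       # fidelity Gram matrix

# Eigenvalues of K/n estimate the integral operator's spectrum;
# fast decay implies a small effective dimension of the induced RKHS.
eigs = np.sort(np.linalg.eigvalsh(K / len(X)))[::-1]
print(eigs[:8])
```

A revision along the lines the authors propose could report such decay profiles for the actual fidelity and projected kernels surveyed, making the "necessary conditions" framing concrete.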

Circularity Check

0 steps flagged

No significant circularity

Full rationale

This is a review paper that synthesizes foundations from classical kernel theory, constructions of fidelity and projected quantum kernels, and analyses of challenges such as exponential concentration and dequantisation. All load-bearing claims are supported by external citations to prior literature on kernel methods and quantum information rather than by internal derivations, fitted parameters, or self-citations that reduce to the paper's own inputs by construction. The separation of fixed quantum feature maps from classical convex optimisation follows directly from standard results in convex optimisation and kernel theory, with no self-definitional loops or renamed predictions present in the manuscript.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The review relies on standard quantum mechanics, classical kernel theory, and prior results in quantum machine learning without introducing new free parameters, axioms beyond domain standards, or invented entities.

axioms (1)
  • standard math Foundations of classical kernel methods and quantum feature maps from prior literature
    The paper builds directly on established theory in kernel methods and quantum computing as described in the abstract.

pith-pipeline@v0.9.0 · 5514 in / 1184 out tokens · 126195 ms · 2026-05-10T18:14:38.205545+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Wavelet Variance Equipartition as a Threshold for World-Model Quality and Quantum Kernel TN-Simulability

    quant-ph 2026-05 unverdicted novelty 5.0

    Wavelet scaling α = 1/2 separates classically simulable area-law from volume-law phases for quantum kernels in world-model latents, with empirical VideoMAE latents and a Θ(d^{-2}) variance bound implying simulation ha...

Reference graph

Works this paper leans on

125 extracted references · 4 canonical work pages · cited by 1 Pith paper

  1. [1]

    narrower

    Other dequantisation approaches. Beyond tensor-network methods, a number of classi- cal techniques have been developed that may be used to dequantise QKMs. One prominent family of ap- proaches is based on random Fourier features (RFF) and related sampling techniques [110]. These methods ex- ploit the spectral structure of shift-invariant or approxi- matel...

  2. [2]

    Biamonte, P

    J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd. Quantum machine learning. Nature 549, 195–202 (2017)

  3. [3]

    Guest column: A survey of quantum learning theory.ACM Sigact News, 48(2):41–67, 2017

    Srinivasan Arunachalam and Ronald De Wolf. Guest column: A survey of quantum learning theory.ACM Sigact News, 48(2):41–67, 2017

  4. [4]

    Quantum random access memory.Physical Re- view Letters, 100(16), April 2008

    Vittorio Giovannetti, Seth Lloyd, and Lorenzo Mac- cone. Quantum random access memory.Physical Re- view Letters, 100(16), April 2008

  5. [5]

    Harrow, Avinatan Hassidim, and Seth Lloyd

    Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 103:150502, Oct 2009

  6. [6]

    Quan- tum algorithm for data fitting.Phys

    Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quan- tum algorithm for data fitting.Phys. Rev. Lett., 109:050505, Aug 2012

  7. [7]

    Quantum algorithms for supervised and unsupervised machine learning, 2013

    Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning, 2013. 34

  8. [8]

    Quantum principal component analysis.Nature Physics, 10(9):631–633, 2014

    Seth Lloyd, Masoud Mohseni, and Patrick Reben- trost. Quantum principal component analysis.Nature Physics, 10(9):631–633, 2014

  9. [9]

    Quantum support vector machine for big data classifi- cation.Physical review letters, 113(13):130503, 2014

    Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classifi- cation.Physical review letters, 113(13):130503, 2014

  10. [10]

    Quantum algorithms for topological and geometric analysis of data.Nature Communications, 7(1):10138, 2016

    Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric analysis of data.Nature Communications, 7(1):10138, 2016

  11. [11]

    Quantum discriminant analysis for dimensionality reduction and classifica- tion.New Journal of Physics, 18(7):073011, jul 2016

    Iris Cong and Luming Duan. Quantum discriminant analysis for dimensionality reduction and classifica- tion.New Journal of Physics, 18(7):073011, jul 2016

  12. [12]

    Quantum singular-value decomposi- tion of nonsparse low-rank matrices.Phys

    Patrick Rebentrost, Adrian Steffens, Iman Marvian, and Seth Lloyd. Quantum singular-value decomposi- tion of nonsparse low-rank matrices.Phys. Rev. A, 97:012327, Jan 2018

  13. [13]

    Fitzsimons, and Joseph F

    Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum-assisted gaussian process regres- sion.Physical Review A, 99(5), May 2019

  14. [14]

    Parameterized quantum circuits as machine learning models.Quantum Science and Tech- nology, 4(4):043001, November 2019

    Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models.Quantum Science and Tech- nology, 4(4):043001, November 2019

  15. [15]

    Cerezo, Andrew Arrasmith, Ryan Babbush, Si- mon C

    M. Cerezo, Andrew Arrasmith, Ryan Babbush, Si- mon C. Benjamin, Suguru Endo, Keisuke Fujii, Jar- rod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational quantum al- gorithms.Nature Reviews Physics, 3(9):625–644, Au- gust 2021

  16. [16]

    Love, Al´ an Aspuru-Guzik, and Jeremy L

    Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Al´ an Aspuru-Guzik, and Jeremy L. O’Brien. A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5(1):4213, 2014

  17. [17]

    A quantum approximate optimization algorithm, 2014

    Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm, 2014

  18. [18]

    Osborne, Robert Salzmann, Daniel Scheier- mann, and Ramona Wolf

    Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, To- bias J. Osborne, Robert Salzmann, Daniel Scheier- mann, and Ramona Wolf. Training deep quantum neural networks.Nature Communications, 11(1):808, 2020

  19. [19]

    I. Cong, S. Choi, and M. D. Lukin. Quantum con- volutional neural networks. Nat. Phys. 15, 1273–1278 (2019)

  20. [20]

    Quantum autoencoders for efficient compression of quantum data.Quantum Science and Technology, 2(4):045001, August 2017

    Jonathan Romero, Jonathan P Olson, and Alan Aspuru-Guzik. Quantum autoencoders for efficient compression of quantum data.Quantum Science and Technology, 2(4):045001, August 2017

  21. [21]

    Preskill

    J. Preskill. Quantum computing in the nisq era and beyond. DOI: Quantum 2, 79 (2018)

  22. [22]

    McClean, Sergio Boixo, Vadim N

    Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyan- skiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training land- scapes.Nature Communications, 9(1):4812, 2018

  23. [23]

    Coles, Lukasz Cincio, Jarrod R

    Mart´ ın Larocca, Supanut Thanasilp, Samson Wang, Kunal Sharma, Jacob Biamonte, Patrick J. Coles, Lukasz Cincio, Jarrod R. McClean, Zo¨ e Holmes, and M. Cerezo. Barren plateaus in variational quantum computing.Nature Reviews Physics, 7(4):174–189, 2025

  24. [24]

    Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, and Patrick J

    Samson Wang, Enrico Fontana, M. Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, and Patrick J. Coles. Noise-induced barren plateaus in variational quantum algorithms.Nature Communications, 12(1), November 2021

  25. [25]

    Entanglement-induced barren plateaus.PRX quantum, 2(4):040316, 2021

    Carlos Ortiz Marrero, M´ aria Kieferov´ a, and Nathan Wiebe. Entanglement-induced barren plateaus.PRX quantum, 2(4):040316, 2021

  26. [26]

    Cerezo, and Patrick J

    Zo¨ e Holmes, Kunal Sharma, M. Cerezo, and Patrick J. Coles. Connecting ansatz expressibility to gradient magnitudes and barren plateaus.PRX Quantum, 3(1), January 2022

  27. [27]

    Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J

    M. Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J. Coles. Cost function dependent bar- ren plateaus in shallow parametrized quantum circuits. Nature Communications, 12(1), March 2021

  28. [28]

    Cerezo, Samson Wang, Tyler Volkoff, Andrew T

    Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger, and Patrick J. Coles. Absence of barren plateaus in quantum convolutional neural networks.Phys. Rev. X, 11:041011, Oct 2021

  29. [29]

    Rudolph, Zo¨ e Holmes, Lukasz Cincio, and M

    Pablo Bermejo, Paolo Braccia, Manuel S. Rudolph, Zo¨ e Holmes, Lukasz Cincio, and M. Cerezo. Quantum convolutional neural networks are (effectively) classi- cally simulable, 2024

  30. [30]

    Cerezo, Martin Larocca, Diego Garc´ ıa-Mart´ ın, N

    M. Cerezo, Martin Larocca, Diego Garc´ ıa-Mart´ ın, N. L. Diaz, Paolo Braccia, Enrico Fontana, Manuel S. Rudolph, Pablo Bermejo, Aroosa Ijaz, Supanut Thanasilp, Eric R. Anschuetz, and Zo¨ e Holmes. Does provable absence of barren plateaus imply classical simulability?Nature Communications, 16(1):7907, 2025

  31. [31]

    Schuld and N

    M. Schuld and N. Killoran. Quantum machine learning in feature hilbert spaces. Phys. Rev. Lett. 122, 040504 (2019)

  32. [32]

    Supervised learn- ing with quantum-enhanced feature spaces.Nature, 567(7747):209–212, 2019

    Vojtˇ ech Havl´ ıˇ cek, Antonio D C´ orcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta. Supervised learn- ing with quantum-enhanced feature spaces.Nature, 567(7747):209–212, 2019

  33. [33]

    Supervised quantum machine learning models are kernel methods, 2021

    Maria Schuld. Supervised quantum machine learning models are kernel methods, 2021

  34. [34]

    Thanasilp, S

    S. Thanasilp, S. Wang, Cerezo M., and Z. Holmes. Ex- ponential concentration in quantum kernel methods. Nat Commun 15, 5200 (2024)

  35. [35]

    The complexity of quantum support vector machines.Quantum, 8:1225, January 2024

    Gian Gentinetta, Arne Thomsen, David Sutter, and 35 Stefan Woerner. The complexity of quantum support vector machines.Quantum, 8:1225, January 2024

  36. [36]

    In search of quantum advantage: Estimating the number of shots in quantum kernel methods, 2024

    Artur Miroszewski, Marco Fellous Asiani, Jakub Miel- czarek, Bertrand Le Saux, and Jakub Nalepa. In search of quantum advantage: Estimating the number of shots in quantum kernel methods, 2024

  37. [37]

    Towards understanding the power of quantum kernels in the nisq era.Quantum, 5:531, August 2021

    Xinbiao Wang, Yuxuan Du, Yong Luo, and Dacheng Tao. Towards understanding the power of quantum kernels in the nisq era.Quantum, 5:531, August 2021

  38. [38]

    Noisy quantum kernel machines.Phys

    Valentin Heyraud, Zejian Li, Zakari Denis, Alexandre Le Boit´ e, and Cristiano Ciuti. Noisy quantum kernel machines.Phys. Rev. A, 106:052421, Nov 2022

  39. [39]

    Jerbi, L

    S. Jerbi, L. J. Fiderer, H. P. Nautrup, J. M. K¨ ubler, H. J. Briegel, and V. Dunjko. Quantum machine learn- ing beyond kernel methods. Nat Commun 14, 517 (2023)

  40. [40]

    Huang, M

    H.-Y. Huang, M. Broughton, M. Mohseni, R. Babbush, S. Boixo, H. Neven, and J. R. McClean. Power of data in quantum machine learning. Nat. Commun. 12, 2631 (2021)

  41. [41]

    J. M. K¨ ubler, S. Buchholz, and B. Sch¨ olkopf. The inductive bias of quantum kernels. 2021. arXiv:2106.03747

  42. [42]

    Ruslan Shaydulin and Stefan M. Wild. Importance of kernel bandwidth in quantum machine learning.Phys. Rev. A, 106:042407, Oct 2022

  43. [43]

    Wild, and Ruslan Shaydulin

    Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, and Ruslan Shaydulin. Bandwidth enables generalization in quantum kernel models, 2023

  44. [44]

    Lucas Slattery, Ruslan Shaydulin, Shouvanik Chakrabarti, Marco Pistoia, Sami Khairy, and Stefan M. Wild. Numerical evidence against advan- tage with quantum fidelity kernels on classical data. Physical Review A, 107(6), June 2023

  45. [45]

    Dequantizing quantum machine learning mod- els using tensor networks.Physical Review Research, 6(2):023218, 2024

    Seongwook Shin, Yong Siah Teo, and Hyunseok Jeong. Dequantizing quantum machine learning mod- els using tensor networks.Physical Review Research, 6(2):023218, 2024

  46. [46]

    On dequantization of su- pervised quantum machine learning via random fourier features.arXiv preprint arXiv:2505.15902, 2025

    Mehrad Sahebi, Alice Barthe, Yudai Suzuki, Zo¨ e Holmes, and Michele Grossi. On dequantization of su- pervised quantum machine learning via random fourier features.arXiv preprint arXiv:2505.15902, 2025

  47. [47]

    Y. Liu, S. Arunachalam, and K. Temme. A rigorous and robust quantum speed-up in supervised machine learning. Nat. Phys. 17, 1013–1017 (2021)

  48. [48]

    Muser, E

    T. Muser, E. Zapusek, V. Belis, and F. Reiter. Prov- able advantages of kernel-based quantum learners and quantum preprocessing based on grover’s algorithm. Phys. Rev. A, 110:032434, Sep 2024

  49. [49]

    Y. Wu, B. Wu, J. Wang, and X. Yuan. Quantum phase recognition via quantum kernel methods. Quantum 7, 981 (2023)

  50. [50]

    Glick, Tanvi P

    Jennifer R. Glick, Tanvi P. Gujarati, Antonio D. C´ orcoles, Youngseok Kim, Abhinav Kandala, Jay M. Gambetta, and Kristan Temme. Covariant quantum kernels for data with group structure.Nature Physics, 20(3):479–483, January 2024

  51. [51]

    Quantum kernel for image classifi- cation of real world manufacturing defects, 2022

    Daniel Beaulieu, Dylan Miracle, Anh Pham, and William Scherr. Quantum kernel for image classifi- cation of real world manufacturing defects, 2022

  52. [52]

    Sabir, Adel A

    Mahmoud Ragab, Ehab Bahauden Ashary, Maha Farouk S. Sabir, Adel A. Bahaddad, and Romany F. Mansour. Mathematical modelling of quantum kernel method for biomedical data analysis.Computers, Ma- terials and Continua, 71(3):5441–5457, 2022

  53. [53]

    Quantum kernels for real-world predictions based on electronic health records.IEEE Transactions on Quantum Engineering, 3:1–11, 2022

    Zoran Krunic, Frederik Flother, George Seegan, Nate Earnest-Noble, and Shehab Omar. Quantum kernels for real-world predictions based on electronic health records.IEEE Transactions on Quantum Engineering, 3:1–11, 2022

  54. [54]

    Non-hemolytic peptide classification using a quantum support vector machine.Quantum Informa- tion Processing, 23(11):379, 2024

    Shengxin Zhuang, John Tanner, Yusen Wu, Du Huynh, Wei Liu, Xavier Cadet, Nicolas Fontaine, Philippe Charton, Cedric Damour, Frederic Cadet, and Jingbo Wang. Non-hemolytic peptide classification using a quantum support vector machine.Quantum Informa- tion Processing, 23(11):379, 2024

  55. [55]

    Artur Miroszewski, Jakub Mielczarek, Grzegorz Czelusta, Filip Szczepanek, Bartosz Grabowski, Bertrand Le Saux, and Jakub Nalepa. Detecting clouds in multispectral satellite images using quantum-kernel support vector machines.IEEE journal of selected top- ics in applied earth observations and remote sensing, 16:7601–7613, 2023

  56. [56]

    Wijata, Artur Miroszewski, Bertrand Le Saux, Nicolas Long´ ep´ e, Bogdan Ruszczak, and Jakub Nalepa

    Agata M. Wijata, Artur Miroszewski, Bertrand Le Saux, Nicolas Long´ ep´ e, Bogdan Ruszczak, and Jakub Nalepa. Detection of bare soil in hyperspectral im- ages using quantum-kernel support vector machines. In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, pages 817–822, 2024

  57. [57]

    Shungo Miyabe, Brian Quanz, Noriaki Shimada, Ab- hijit Mitra, Takahiro Yamamoto, Vladimir Rastunkov, Dimitris Alevras, Mekena Metcalf, Daniel J. M. King, Mohammad Mamouei, Matthew D. Jackson, Martin Brown, Philip Intallura, and Jae-Eun Park. Quantum multiple kernel learning in financial classification tasks, 2023

  58. [58]

    Quantum kernel meth- ods under scrutiny: a benchmarking study.Quantum Machine Intelligence, 7(1), apr 2025

    Jan Schnabel and Marco Roth. Quantum kernel meth- ods under scrutiny: a benchmarking study.Quantum Machine Intelligence, 7(1), apr 2025

  59. [59]

    Benchmarking quantum ma- chine learning kernel training for classification tasks

    Diego Alvarez-Estevez. Benchmarking quantum ma- chine learning kernel training for classification tasks. IEEE Transactions on Quantum Engineering, 6:1–15, 2025

  60. [60]

    Comparative in- vestigation of quantum and classical kernel functions applied in support vector machine algorithms.Quan- 36 tum Information Processing, 24(4):109, 2025

    Ghada Abdulsalam and Irfan Ahmad. Comparative in- vestigation of quantum and classical kernel functions applied in support vector machine algorithms.Quan- 36 tum Information Processing, 24(4):109, 2025

  61. [61]

    A hyperparameter study for quantum kernel methods.Quantum Machine Intelligence, 6(2):44, 2024

    Sebastian Egginger, Alona Sakhnenko, and Jeanette Miriam Lorenz. A hyperparameter study for quantum kernel methods.Quantum Machine Intelligence, 6(2):44, 2024

  62. [62]

    Per- due

    Evan Peters, Jo˜ ao Caldeira, Alan Ho, Stefan Le- ichenauer, Masoud Mohseni, Hartmut Neven, Pana- giotis Spentzouris, Doug Strain, and Gabriel N. Per- due. Machine learning of high dimensional data on a noisy quantum processor.npj Quantum Information, 7(1):161, 2021

[63] Sau Lan Wu, Shaojun Sun, Wen Guan, Chen Zhou, Jay Chan, Chi Lung Cheng, Tuan Pham, Yan Qian, Alex Zeng Wang, Rui Zhang, Miron Livny, Jennifer Glick, Panagiotis Kl. Barkoutsos, Stefan Woerner, Ivano Tavernelli, Federico Carminati, Alberto Di Meglio, Andy C. Y. Li, Joseph Lykken, Panagiotis Spentzouris, Samuel Yen-Chi Chen, Shinjae Yoo, and Tzu-Chieh ...

[64] Vikas Agnihotri, Jasleen Kaur, and Sarvagya Kaushik. Practical evaluation of quantum kernel methods for radar micro-doppler classification on noisy intermediate-scale quantum (NISQ) hardware, 2026.

[65] Teppei Suzuki, Takashi Hasebe, and Tsubasa Miyazaki. Quantum support vector machines for classification and regression on a trapped-ion quantum computer. Quantum Machine Intelligence, 6(1):31, 2024.

[66] Keitaro Anai, Shion Ikehara, Yoshichika Yano, Daichi Okuno, and Shuntaro Takeda. Continuous-variable quantum kernel method on a programmable photonic quantum processor. Phys. Rev. A, 110:022404, Aug 2024.

[67] Zhenghao Yin, Iris Agresti, Giovanni de Felice, Douglas Brown, Alexis Toumi, Ciro Pentangelo, Simone Piacentini, Andrea Crespi, Francesco Ceccarelli, Roberto Osellame, Bob Coecke, and Philip Walther. Experimental quantum-enhanced kernel-based machine learning on a photonic processor. Nature Photonics, 19(9):1020–1027, 2025.

[68] Riccardo Mengoni and Alessandra Di Pierro. Kernel methods in quantum machine learning. Quantum Machine Intelligence, 1(3):65–71, 2019.

[69] Joseph Bowles, Shahnawaz Ahmed, and Maria Schuld. Better than classical? The subtle art of benchmarking quantum machine learning models, 2024.

[70] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.

[71] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.

[72] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, Cambridge, MA, USA, 2018.

[73] Harold W. Kuhn and Albert W. Tucker. Nonlinear Programming, pages 247–258. Springer Basel, Basel, 2014.

[74] William Karush. Minima of Functions of Several Variables with Inequalities as Side Conditions, pages 217–

[75] Springer Basel, Basel, 2014.

[76] John Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft, April 1998.

[77] François Le Gall. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 296–303, New York, NY, USA, 2014. Association for Computing Machinery.

[78] Xiaojian Zhou, Jieyao Yu, Junfan Tan, and Ting Jiang. Quantum kernel estimation-based quantum support vector regression. Quantum Information Processing, 23(1):29, 2024.

[79] Seyed Shakib Vedaie, Moslem Noori, Jaspreet S. Oberoi, Barry C. Sanders, and Ehsan Zahedinejad. Quantum multiple kernel learning, 2020.

[80] M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.

Showing first 80 references.