Pith · machine review for the scientific record

arxiv: 2604.08827 · v1 · submitted 2026-04-09 · 🪐 quant-ph

Recognition: unknown

Quantum Patches: Enhancing Robustness of Quantum Machine Learning Models


Pith reviewed 2026-05-10 16:44 UTC · model grok-4.3

classification 🪐 quant-ph
keywords quantum machine learning · adversarial attacks · random quantum circuits · pseudo-noise · model robustness · CIFAR-10 · CINIC-10

The pith

Random quantum circuits generate pseudo-noise that trains quantum machine learning models to resist adversarial attacks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that feeding data from random quantum circuits into the training of quantum machine learning models produces robustness gains similar to those from explicit adversarial training. This matters for practical use of QML on image tasks because such models remain vulnerable to small input changes that flip their outputs. Experiments report clear drops in successful attack rates on two standard datasets when the quantum-generated noise is added during training. If the effect generalizes, quantum circuits could supply a built-in source of defensive noise without requiring separate adversarial example generation.

Core claim

The paper demonstrates that data generated by random quantum circuits yields robustness comparable to training with explicit adversarial examples on high-feature datasets, reducing the successful attack rate on CIFAR-10 from 89.8 percent to 68.45 percent and on CINIC-10 from 94.23 percent to 78.68 percent.

What carries the argument

Random quantum circuits used to produce pseudo-noise that serves as adversarial training data for quantum machine learning models.
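The mechanism in that one line can be sketched with a toy simulation. The page does not specify the paper's circuit ansatz or injection protocol, so the following NumPy stand-in makes two labeled assumptions: Haar-random unitaries stand in for layered two-qubit gates, and the circuit's measured outcome distribution is what gets converted into per-pixel perturbations.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Sample a Haar-random unitary via QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the column-phase ambiguity of QR so the distribution is exactly Haar.
    d = np.diag(r)
    q *= d / np.abs(d)
    return q

def rqc_pseudo_noise(n_qubits, n_layers, rng):
    """Run a random circuit on |0...0>, read out the Born-rule outcome
    probabilities, and centre them so they act as additive noise."""
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0
    for _ in range(n_layers):
        state = haar_unitary(dim, rng) @ state
    probs = np.abs(state) ** 2        # outcome distribution, sums to 1
    return probs - probs.mean()       # zero-mean pseudo-noise values

rng = np.random.default_rng(0)
noise = rqc_pseudo_noise(n_qubits=4, n_layers=3, rng=rng)  # 16 values, one per pixel of a 4x4 patch
patch = rng.random((4, 4))                                 # hypothetical image patch in [0, 1]
eps = 0.1                                                  # assumed perturbation budget
augmented = np.clip(patch + eps * noise.reshape(4, 4) / np.abs(noise).max(), 0.0, 1.0)
```

Training would then proceed on `augmented` patches exactly as adversarial training proceeds on attacked inputs; the circuit, not an attack algorithm, supplies the perturbation.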

If this is right

  • Quantum machine learning models achieve lower vulnerability to adversarial attacks on high-feature image datasets without classical adversarial example generation.
  • The protective effect scales to at least the size of CIFAR-10 and CINIC-10.
  • Unique quantum features can be applied directly during training to improve model quality against real-world perturbations.
  • The method offers an alternative route to robustness that may complement existing classical defense techniques.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same circuit-generated noise could be tested as a defense in purely classical machine learning pipelines.
  • Larger or more diverse datasets would reveal whether the observed attack-rate reductions hold beyond the two reported cases.
  • Decoherence effects in the circuits might be tuned as an additional controllable source of defensive variation.

Load-bearing premise

Noise produced by random quantum circuits must resemble the structure of real adversarial perturbations closely enough that training on it transfers protection to actual attacks.

What would settle it

Train identical quantum machine learning models on the same datasets once with random quantum circuit data and once with standard adversarial examples, then measure whether both reach comparable attack success rates on the same held-out attack set.
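That head-to-head test hinges on scoring both models with the same metric on the same held-out attack set. A minimal sketch of the standard metric, counting only examples the model classified correctly before the attack (all names and toy predictions below are illustrative, not from the paper):

```python
import numpy as np

def attack_success_rate(clean_pred, adv_pred, labels):
    """Fraction of examples the model got right on clean inputs
    but wrong after the adversarial perturbation."""
    clean_pred, adv_pred, labels = map(np.asarray, (clean_pred, adv_pred, labels))
    correct = clean_pred == labels          # only attackable examples count
    flipped = correct & (adv_pred != labels)
    return flipped.sum() / correct.sum()

# Toy run: one held-out attack set scored for two training regimes.
labels     = np.array([0, 1, 2, 1, 0, 2])
clean_pred = np.array([0, 1, 2, 1, 0, 1])   # 5 of 6 correct on clean data
adv_rqc    = np.array([0, 2, 2, 0, 0, 1])   # RQC-augmented model under attack
adv_std    = np.array([1, 2, 0, 0, 2, 1])   # adversarially trained model under attack

print(attack_success_rate(clean_pred, adv_rqc, labels))  # 2 of 5 flipped -> 0.4
print(attack_success_rate(clean_pred, adv_std, labels))  # 5 of 5 flipped -> 1.0
```

Comparable numbers from the two regimes on the same attack set, with error bars over repeated runs, would settle the equivalence claim.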

Figures

Figures reproduced from arXiv: 2604.08827 by Ban Q. Tran, Chuong K. Luong, Duong M. Chu, Susan Mengel, Viet Q. Nguyen.

Figures 1–9 (full images available at the arXiv source).
Original abstract

Machine learning models and their applications, such as autonomous driving systems, are becoming increasingly common and are essential components of human daily life. However, due to their sensitivity to perturbed noise, these models are easily susceptible to adversarial attacks. Not only are classical machine learning models affected, but quantum machine learning (QML) models have also been proven to be vulnerable to adversarial attacks, which degrade their performance. To defend against these types of attacks, several classical methods have been proposed. Among these, a prominent approach uses various types of pseudo-noise during training to enhance the model's robustness against real-world attacks. One of the recently emerging solutions is to leverage the unique properties of quantum circuits to create quantum-based pseudo-noise similar to real perturbed noise to counter adversarial attacks. This paper proposes a solution that utilizes random quantum circuits (RQCs) as adversarial data to help QML models overcome these adversarial attacks. The results reported in this paper show that the data generated by RQC actually provides a similar effect to models trained with adversarial data on high-feature datasets. This quantum-based pseudo-noise resulted in a significant reduction in the attack rate in the CIFAR-10 data set, from 89. 8% to 68.45%. For the CINIC-10 dataset, the successful attack rate decreased from 94.23% to 78.68%. This research opens up avenues for applying unique quantum properties, such as superposition, entanglement, and even decoherence, to enhance the quality of machine learning models.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes using pseudo-noise generated by random quantum circuits (RQCs) during training of quantum machine learning (QML) models to improve robustness against adversarial attacks. It reports that this approach yields reductions in successful attack rates comparable to adversarial training, specifically from 89.8% to 68.45% on CIFAR-10 and from 94.23% to 78.68% on CINIC-10, and attributes the gains to quantum properties including superposition, entanglement, and decoherence.

Significance. If the empirical claims are substantiated with appropriate controls and full experimental details, the work could provide a concrete demonstration of quantum circuits as a source of structured pseudo-noise for QML robustness, potentially distinguishing quantum-generated augmentation from classical methods on high-dimensional image datasets. The reported effect sizes are large enough to be practically relevant if reproducible.

major comments (2)
  1. [Abstract] Abstract (results paragraph): The reported attack-rate reductions (89.8% → 68.45% on CIFAR-10; 94.23% → 78.68% on CINIC-10) are presented without any description of the QML model architecture, the precise manner in which RQC outputs are injected as training data, the adversarial attack algorithm and strength, the number of trials, or error bars. These omissions render the numerical claims unverifiable and prevent assessment of whether the effect is statistically meaningful.
  2. [Abstract] Abstract (results paragraph): The manuscript attributes the observed robustness gains to the 'unique properties of quantum circuits' (superposition, entanglement, decoherence) yet contains no classical control experiment that generates pseudo-noise from a classical random-number generator with matched mean, variance, and distribution. Without this ablation, it is impossible to determine whether the reported reductions arise from quantum-specific features or from generic data-augmentation effects.
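The moment-matched control that major comment 2 asks for is cheap to construct. A sketch, assuming the matching is done empirically on sampled noise (this matches mean and variance only; matching the full distribution, as the referee's wording strictly requires, would need e.g. quantile mapping; the beta-distributed stand-in for RQC output statistics is hypothetical):

```python
import numpy as np

def matched_classical_noise(rqc_noise, rng):
    """Classical control: Gaussian noise rescaled to the empirical mean and
    standard deviation of the quantum-generated noise, so any robustness gap
    can be attributed to distribution shape rather than scale."""
    g = rng.standard_normal(rqc_noise.shape)
    g = (g - g.mean()) / g.std()                   # standardise to mean 0, std 1
    return g * rqc_noise.std() + rqc_noise.mean()  # rescale to the RQC moments

rng = np.random.default_rng(7)
rqc_noise = rng.beta(0.3, 0.3, size=1024) - 0.5    # stand-in for RQC output statistics
ctrl = matched_classical_noise(rqc_noise, rng)
print(np.isclose(ctrl.mean(), rqc_noise.mean()), np.isclose(ctrl.std(), rqc_noise.std()))
# prints: True True
```

Training a third model on `ctrl` alongside the RQC-augmented and adversarially trained models would isolate any quantum-specific contribution.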
minor comments (2)
  1. [Abstract] Abstract: Typographical spacing error in '89. 8%' should be corrected to '89.8%'.
  2. [Abstract] Abstract: The final sentence claims the work 'opens up avenues for applying unique quantum properties'; this phrasing is vague and should be replaced by a concrete statement of what new capability has been demonstrated.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address the two major comments point by point below and will revise the manuscript accordingly.

Point-by-point responses
  1. Referee: [Abstract] Abstract (results paragraph): The reported attack-rate reductions (89.8% → 68.45% on CIFAR-10; 94.23% → 78.68% on CINIC-10) are presented without any description of the QML model architecture, the precise manner in which RQC outputs are injected as training data, the adversarial attack algorithm and strength, the number of trials, or error bars. These omissions render the numerical claims unverifiable and prevent assessment of whether the effect is statistically meaningful.

    Authors: We agree that the abstract would benefit from additional context to make the numerical claims more immediately verifiable. In the revised manuscript we will expand the results paragraph of the abstract with a concise summary of the QML architecture, the injection protocol for RQC outputs, the adversarial attack algorithm and perturbation strength, the number of independent trials, and the inclusion of error bars or standard deviations. Full experimental protocols remain in the Methods and Results sections. revision: yes

  2. Referee: [Abstract] Abstract (results paragraph): The manuscript attributes the observed robustness gains to the 'unique properties of quantum circuits' (superposition, entanglement, decoherence) yet contains no classical control experiment that generates pseudo-noise from a classical random-number generator with matched mean, variance, and distribution. Without this ablation, it is impossible to determine whether the reported reductions arise from quantum-specific features or from generic data-augmentation effects.

    Authors: This is a valid observation; the current manuscript does not contain a classical pseudo-noise control with matched statistics. We will add this ablation study to the revised version, generating classical random noise with identical mean, variance, and distribution to the RQC outputs and comparing robustness gains against both the RQC-augmented and standard adversarial-training baselines. The abstract, results, and discussion sections will be updated to present and interpret the new control data. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical results with no derivation chain

full rationale

The paper reports experimental outcomes from training QML models with RQC-generated pseudo-noise on CIFAR-10 and CINIC-10, claiming observed reductions in adversarial attack rates. No equations, first-principles derivations, fitted parameters renamed as predictions, or self-citation load-bearing steps appear in the abstract or described claims. All assertions are presented as direct empirical observations from training/testing runs rather than constructed equivalences or self-referential definitions. This is self-contained experimental work with no load-bearing derivation chain to inspect for circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract contains no explicit free parameters, axioms, or invented entities; the approach rests on standard concepts of quantum circuits and empirical adversarial training.

pith-pipeline@v0.9.0 · 5596 in / 1165 out tokens · 61196 ms · 2026-05-10T16:44:09.648675+00:00 · methodology


Reference graph

Works this paper leans on

50 extracted references · 15 canonical work pages · 5 internal anchors

  1. [1]

    Intriguing properties of neural networks

    C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013

  2. [2]

    Deep learning,

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015

  3. [3]

    Explaining and Harnessing Adversarial Examples

    I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014

  4. [4]

    A rigorous and robust quantum speed-up in supervised machine learning,

Y. Liu, S. Arunachalam, and K. Temme, “A rigorous and robust quantum speed-up in supervised machine learning,” Nature Physics, vol. 17, no. 9, pp. 1013–1017, 2021

  5. [5]

    Quantum adversarial machine learning,

    S. Lu, L.-M. Duan, and D.-L. Deng, “Quantum adversarial machine learning,” Physical Review Research, vol. 2, no. 3, p. 033212, 2020

  6. [6]

    A comparative analysis of adversarial robustness for quantum and classical machine learning models,

    M. Wendlinger, K. Tscharke, and P. Debus, “A comparative analysis of adversarial robustness for quantum and classical machine learning models,” in 2024 IEEE International Conference on Quantum Computing and Engineering (QCE), vol. 1, pp. 1447–1457, IEEE, 2024

  7. [7]

    Quantum computing,

    A. Steane, “Quantum computing,” Reports on Progress in Physics, vol. 61, no. 2, p. 117, 1998

  8. [8]

    Characterizing quantum supremacy in near-term devices,

S. Boixo, S. V. Isakov, V. N. Smelyanskiy, R. Babbush, N. Ding, Z. Jiang, M. J. Bremner, J. M. Martinis, and H. Neven, “Characterizing quantum supremacy in near-term devices,” Nature Physics, vol. 14, no. 6, pp. 595–600, 2018

  9. [9]

    Towards Deep Learning Models Resistant to Adversarial Attacks

    A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017

  10. [10]

    Adversarial attacks in modulation recognition with convolutional neural networks,

Y. Lin, H. Zhao, X. Ma, Y. Tu, and M. Wang, “Adversarial attacks in modulation recognition with convolutional neural networks,” IEEE Transactions on Reliability, vol. 70, no. 1, pp. 389–401, 2020

  11. [11]

    Adversarial examples in the physical world,

    A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Artificial intelligence safety and security, pp. 99–112, Chapman and Hall/CRC, 2018

  12. [12]

    Deepfool: a simple and accurate method to fool deep neural networks,

    S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574– 2582, 2016

  13. [13]

    Boosting adversarial attacks with momentum,

    Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9185–9193, 2018

  14. [14]

    Adversarial Machine Learning at Scale,

    A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016

  15. [15]

    A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

    D. Hendrycks and K. Gimpel, “A baseline for detecting misclassified and out-of-distribution examples in neural networks,” arXiv preprint arXiv:1610.02136, 2016

  16. [16]

    Countering Adversarial Images using Input Transformations

C. Guo, M. Rana, M. Cisse, and L. Van Der Maaten, “Countering adversarial images using input transformations,” arXiv preprint arXiv:1711.00117, 2017

  17. [17]

    Practical black-box attacks against machine learning,

    N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506–519, 2017

  18. [18]

    The Space of Transferable Adversarial Examples

F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “The space of transferable adversarial examples,” arXiv preprint arXiv:1704.03453, 2017

  19. [20]

    Ensemble adversarial training: Attacks and defenses,

    F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” arXiv preprint arXiv:1705.07204, 2017

  20. [21]

    Defense-gan: Protecting classifiers against adversarial attacks using generative models

    P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-gan: Protecting classifiers against adversarial attacks using generative models,” arXiv preprint arXiv:1805.06605, 2018

  21. [22]

    Distillation as a defense to adversarial perturbations against deep neural networks,

    N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in 2016 IEEE symposium on security and privacy (SP), pp. 582–597, IEEE, 2016

  22. [23]

    Grad-cam: Visual explanations from deep networks via gradient-based localization,

    R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, pp. 618–626, 2017

  23. [24]

    Certified adversarial robustness via randomized smoothing,

    J. Cohen, E. Rosenfeld, and Z. Kolter, “Certified adversarial robustness via randomized smoothing,” in international conference on machine learning, pp. 1310–1320, PMLR, 2019

  24. [25]

    Quantum noise protects quantum classifiers against adversaries,

Y. Du, M.-H. Hsieh, T. Liu, D. Tao, and N. Liu, “Quantum noise protects quantum classifiers against adversaries,” Physical Review Research, vol. 3, no. 2, p. 023153, 2021

  25. [26]

    Optimal provable robustness of quantum classification via quantum hypothesis testing,

M. Weber, N. Liu, B. Li, C. Zhang, and Z. Zhao, “Optimal provable robustness of quantum classification via quantum hypothesis testing,” npj Quantum Information, vol. 7, no. 1, p. 76, 2021

  26. [27]

    Differential privacy in quantum computation,

    L. Zhou and M. Ying, “Differential privacy in quantum computation,” in 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 249–262, IEEE, 2017

  27. [28]

    Detection theory and quantum mechanics,

    C. W. Helstrom, “Detection theory and quantum mechanics,” Information and Control, vol. 10, no. 3, pp. 254–291, 1967

  28. [29]

    Statistical decision theory for quantum systems,

    A. S. Holevo, “Statistical decision theory for quantum systems,” Journal of multivariate analysis, vol. 3, no. 4, pp. 337–394, 1973

  29. [30]

    Fast is better than free: Revisiting adversarial training

    E. Wong, L. Rice, and J. Z. Kolter, “Fast is better than free: Revisiting adversarial training,” arXiv preprint arXiv:2001.03994, 2020

  30. [31]

    Recent advances in adversarial training for adversarial robustness,

    T. Bai, J. Luo, J. Zhao, B. Wen, and Q. Wang, “Recent advances in adversarial training for adversarial robustness,” arXiv preprint arXiv:2102.01356, 2021

  31. [32]

    Experimental quantum adversarial learning with programmable superconducting qubits,

    W. Ren, W. Li, S. Xu, K. Wang, W. Jiang, F. Jin, X. Zhu, J. Chen, Z. Song, P. Zhang, et al., “Experimental quantum adversarial learning with programmable superconducting qubits,” Nature Computational Science, vol. 2, no. 11, pp. 711–717, 2022

  32. [33]

    Transfer of adversarial robustness between perturbation types,

    D. Kang, Y. Sun, T. Brown, D. Hendrycks, and J. Steinhardt, “Transfer of adversarial robustness between perturbation types,” arXiv preprint arXiv:1905.01034, 2019

  33. [34]

    Random quantum circuits are approximate 2-designs,

    A. W. Harrow and R. A. Low, “Random quantum circuits are approximate 2-designs,” Communications in Mathematical Physics, vol. 291, no. 1, pp. 257–302, 2009

  34. [35]

    Quantum computation and quantum information

    M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information. Cambridge University Press, 2010

  35. [36]

    Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,

    P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM review, vol. 41, no. 2, pp. 303–332, 1999

  36. [37]

    A fast quantum mechanical algorithm for database search,

    L. K. Grover, “A fast quantum mechanical algorithm for database search,” in Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pp. 212–219, 1996

  37. [38]

    Circuit-centric quantum classifiers,

    M. Schuld, A. Bocharov, K. M. Svore, and N. Wiebe, “Circuit-centric quantum classifiers,” Physical Review A, vol. 101, no. 3, p. 032308, 2020

  38. [39]

    Quantum convolutional neural networks,

    I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nature Physics, vol. 15, no. 12, pp. 1273–1278, 2019

  39. [40]

    Quantum recurrent neural networks for sequential learning,

Y. Li, Z. Wang, R. Han, S. Shi, J. Li, R. Shang, H. Zheng, G. Zhong, and Y. Gu, “Quantum recurrent neural networks for sequential learning,” Neural Networks, vol. 166, pp. 148–161, 2023

  40. [41]

    Quantum vision transformers,

E. A. Cherrat, I. Kerenidis, N. Mathur, J. Landman, M. Strahm, and Y. Y. Li, “Quantum vision transformers,” Quantum, vol. 8, p. 1265, 2024

  41. [42]

    Generalization in quantum machine learning from few training data,

M. C. Caro, H.-Y. Huang, M. Cerezo, K. Sharma, A. Sornborger, L. Cincio, and P. J. Coles, “Generalization in quantum machine learning from few training data,” Nature Communications, vol. 13, no. 1, p. 4919, 2022

  42. [43]

    Quanvolutional neural networks: powering image recognition with quantum circuits,

    M. Henderson, S. Shakya, S. Pradhan, and T. Cook, “Quanvolutional neural networks: powering image recognition with quantum circuits,” Quantum Machine Intelligence, vol. 2, no. 1, p. 2, 2020

  43. [44]

    Training robust and generalizable quantum models,

J. Berberich, D. Fink, D. Pranjić, C. Tutschku, and C. Holm, “Training robust and generalizable quantum models,” Physical Review Research, vol. 6, no. 4, p. 043326, 2024

  44. [45]

    A simple formula for the average gate fidelity of a quantum dynamical operation,

    M. A. Nielsen, “A simple formula for the average gate fidelity of a quantum dynamical operation,” Physics Letters A, vol. 303, no. 4, pp. 249–252, 2002

  45. [46]

    Gradient-based learning applied to document recognition,

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998

  46. [47]

    Learning multiple layers of features from tiny images,

    A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” 2009

  47. [48]

    CINIC-10 is not ImageNet or CIFAR-10,

    L. N. Darlow, E. J. Crowley, A. Antoniou, and A. J. Storkey, “CINIC-10 is not ImageNet or CIFAR-10,” 2018

  48. [49]

    PennyLane: Automatic differentiation of hybrid quantum-classical computations

V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, S. Ahmed, V. Ajith, M. S. Alam, G. Alonso-Linaje, B. AkashNarayanan, A. Asadi, et al., “PennyLane: Automatic differentiation of hybrid quantum-classical computations,” arXiv preprint arXiv:1811.04968, 2018

  49. [50]

    JAX: composable transformations of Python+NumPy programs,

    J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, et al., “JAX: composable transformations of Python+NumPy programs,” arXiv preprint arXiv:1812.01564, 2018

  50. [51]

    Optax: Gradient processing and optimization library in JAX

DeepMind, “Optax: Gradient processing and optimization library in JAX.” GitHub, 2020. Accessed: October 3, 2025