pith. machine review for the scientific record.

arxiv: 2605.07866 · v1 · submitted 2026-05-08 · 🪐 quant-ph

Recognition: 2 theorem links

· Lean Theorem

Hybrid Quantum-Classical Logistic Regression for Calibrated Classification of Pulsar Candidates

Authors on Pith · no claims yet

Pith reviewed 2026-05-11 03:18 UTC · model grok-4.3

classification 🪐 quant-ph
keywords hybrid quantum-classical · logistic regression · pulsar candidates · probability calibration · quantum feature encoding · HTRU-2 dataset · Murphy decomposition

The pith

Angle-encoded hybrid quantum logistic regression matches classical baselines in pulsar candidate discrimination while achieving the lowest calibration error.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper evaluates hybrid quantum-classical logistic regression for ranking pulsar candidates on the imbalanced HTRU-2 dataset. It tests three quantum feature encodings against classical baselines and a quantum support vector machine reference under a paired-seed protocol. The angle-encoded variant stays close to the best classical models in rare-event discrimination and low false-positive recovery at shallow depths. It also produces the lowest calibration error, with Murphy decomposition confirming stable resolution and low reliability error across circuit depths and training sizes. Data re-uploading and amplitude encodings lag behind, particularly at greater depths or across dataset scales.

Core claim

The angle-encoded hybrid quantum-classical logistic regression model maintains discrimination and low-false-positive-rate recovery comparable to top classical baselines while delivering the lowest calibration error at the benchmark configuration on the HTRU-2 pulsar candidate dataset, with probability estimates that preserve both calibration and separation between candidate groups as shown by Murphy decomposition.
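
For reference, the Murphy decomposition invoked in this claim is the standard partition of the Brier score into reliability, resolution, and uncertainty terms. A textbook sketch follows; the binning and notation are generic, not necessarily the paper's exact formulation:

$$
\mathrm{BS} \;=\; \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k \bigl(\bar{p}_k - \bar{o}_k\bigr)^2}_{\text{reliability}}
\;-\; \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k \bigl(\bar{o}_k - \bar{o}\bigr)^2}_{\text{resolution}}
\;+\; \underbrace{\bar{o}\,(1 - \bar{o})}_{\text{uncertainty}}
$$

Here the N test predictions are grouped into K probability bins of size n_k, \bar{p}_k is the mean predicted probability in bin k, \bar{o}_k is the observed pulsar fraction in bin k, and \bar{o} is the overall base rate. "Lowest calibration error with preserved separation" corresponds to a small reliability term together with a large, depth-stable resolution term.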

What carries the argument

Angle encoding: input features are mapped to qubit rotation angles in a variational quantum circuit that plays the role of the logistic regression model, with circuit parameters optimized via analytic gradients in a hybrid quantum-classical loop.
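
A minimal sketch of this mechanism in PennyLane follows; the qubit count, entangling layout, and logistic link on a single-qubit expectation value are illustrative assumptions, not the authors' exact circuit. The "analytic gradients" are the parameter-shift rule, ∂⟨E⟩/∂θ = ½[⟨E⟩(θ+π/2) − ⟨E⟩(θ−π/2)] for rotation gates, which PennyLane applies automatically to a circuit like this.

```python
# Illustrative sketch only (assumed architecture, not the authors' code):
# angle-encoded hybrid quantum-classical logistic regression.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8   # one qubit per HTRU-2 feature (assumption)
depth = 2      # shallow variational depth
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Angle encoding: each rescaled feature theta_i in [0, pi] becomes an RY rotation.
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Shallow variational layers with nearest-neighbour entanglement (assumed ansatz).
    for layer in range(depth):
        for i in range(n_qubits):
            qml.RY(weights[layer, i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

def predict_proba(weights, bias, x):
    # Logistic link on the measured expectation value gives a probability estimate.
    return 1.0 / (1.0 + np.exp(-(circuit(weights, x) + bias)))

def loss(weights, bias, X, y):
    # Binary cross-entropy; class weighting for the ~9% pulsar fraction would go here.
    total = 0.0
    for xi, yi in zip(X, y):
        p = predict_proba(weights, bias, xi)
        total = total - (yi * np.log(p) + (1 - yi) * np.log(1 - p))
    return total / len(X)

# Parameter-shift (analytic) gradients with respect to circuit weights and bias.
weights = np.array(np.random.uniform(0, np.pi, (depth, n_qubits)), requires_grad=True)
bias = np.array(0.0, requires_grad=True)
grad_fn = qml.grad(loss, argnum=[0, 1])
```

A gradient-descent loop (for example qml.AdamOptimizer, or the Adam reference the paper cites) would then update weights and bias from grad_fn(weights, bias, X_train, y_train).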

If this is right

  • The angle-encoded model maintains low reliability error and high stable resolution across varying circuit depths and training-set sizes.
  • Data re-uploading remains competitive only at small depths and loses discrimination and resolution at larger depths in the multi-qubit implementation.
  • Amplitude encoding performs weaker than the other encodings across all tested dataset sizes.
  • Shallow-depth circuits suffice for the observed performance balance without requiring deeper quantum resources.
  • Classical simulation runtime poses a practical limit even when the quantum model matches classical accuracy.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The approach could extend to other imbalanced rare-event classification tasks in astronomy where well-calibrated probabilities improve follow-up prioritization.
  • Improved quantum hardware might alleviate the simulation runtime bottleneck and allow direct testing on larger candidate sets.
  • The observed stability across training sizes hints at robustness that could be tested on streaming astronomical data pipelines.
  • Direct comparison against other variational quantum models on the same pulsar data would clarify whether logistic regression is the optimal quantum architecture here.

Load-bearing premise

The chosen quantum feature encodings must faithfully represent the input features without losing information critical for accurate classification and calibration.
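
One concrete way to probe this premise (and it is what the referee's first major comment below asks for) is to compare class-conditional separation of each feature before and after encoding. A minimal sketch, assuming a min-max rescaling to [0, π] and using cos θ, the single-qubit ⟨Z⟩ after an RY(θ) rotation, as the effective encoded coordinate; the binning and KL estimator are illustrative choices:

```python
# Sketch of an encoding-fidelity check (illustrative; the min-max mapping to
# [0, pi] and the histogram-based KL estimate are assumptions, not the paper's recipe).
import numpy as np
from scipy.stats import entropy

def angle_encode(x, x_min, x_max):
    # Linear rescaling of one feature column to rotation angles in [0, pi].
    return np.pi * (x - x_min) / (x_max - x_min)

def class_conditional_kl(feature, labels, bins=30):
    # KL divergence between pulsar and non-pulsar histograms of one variable.
    edges = np.histogram_bin_edges(feature, bins=bins)
    p, _ = np.histogram(feature[labels == 1], bins=edges, density=True)
    q, _ = np.histogram(feature[labels == 0], bins=edges, density=True)
    eps = 1e-12
    return entropy(p + eps, q + eps)

def encoding_check(X, y):
    # Compare class separation of each raw feature with that of the effective
    # encoded coordinate cos(theta), i.e. <Z> after RY(theta) applied to |0>.
    for j in range(X.shape[1]):
        raw = X[:, j]
        enc = np.cos(angle_encode(raw, raw.min(), raw.max()))
        print(f"feature {j}: KL(raw) = {class_conditional_kl(raw, y):.3f}, "
              f"KL(encoded) = {class_conditional_kl(enc, y):.3f}")
```

A sharp drop in the encoded-coordinate KL for a feature would indicate the kind of saturation the referee worries about; comparable values would support the premise.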

What would settle it

Running the angle-encoded model on a fresh pulsar candidate dataset and finding its calibration error higher than that of classical logistic regression would disprove the claim of superior calibration.

Figures

Figures reproduced from arXiv: 2605.07866 by Chanelle Matadah Manfouo, Donovan Slabbert, Francesco Petruccione, Prince Koree Osei.

Figure 1. QLR pipeline for pulsar candidate classification and circuit architecture. view at source ↗
Figure 2. Depth dependence of QLR discrimination performance. view at source ↗
Figure 3. ECE as a function of circuit depth for all QLR variants. view at source ↗
Figure 4. Training-time scaling for the QLR variants; the left panel shows training time as a function of training-set size. view at source ↗
Figure 5. Reliability diagrams. view at source ↗
Figure 6. Murphy reliability term as a function of circuit depth for QLR-angle and QLR-DR. view at source ↗
Figure 7. Murphy resolution term as a function of circuit depth for QLR-angle and QLR-DR. view at source ↗
read the original abstract

Reliable pulsar candidate ranking requires probability estimates that are not only discriminative but also well calibrated. We evaluate hybrid quantum-calssical logistic regression on the imbalanced HTRU-2 dataset using three quantum feature encodings: angle encoding, amplitude encoding, and data re-uploading. The models are trained using analytic gradients and compared with classical baselines and a quantum support vector machine reference model under a paired-seed protocol. Evaluation combines rare-event discrimination, low-false-positive-rate recovery, probability calibration, and runtime analysis. Angle encoding gives the strongest performance among the quantum logistic regression variants. At shallow depth, the angle-encoded model remains close to the best classical baselines in discrimination and low-false-positive-rate recovery, while also giving the lowest calibration error at the benchmark configuration. Murphy decomposition shows that the angle-encoded model maintains low reliability error and high, stable resolution across circuit depths and training-set sizes. This means that its probability estimates preserve both calibration and meaningful separation between candidate groups. Data re-uploading is competitive at small depth but loses discrimination and resolution at larger depth in the present multi-qubit implementation, while amplitude encoding remains weaker across dataset sizes. Shallow angle-encoded quantum logistic regression therefore gives the best balance among the tested quantum logistic models, although simulation runtime remains a practical limitation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper evaluates hybrid quantum-classical logistic regression models with three quantum feature encodings (angle, amplitude, and data re-uploading) on the imbalanced HTRU-2 pulsar candidate dataset. Models are trained via analytic (parameter-shift) gradients and compared to classical logistic regression baselines and a QSVM reference under a paired-seed protocol. Evaluation metrics include rare-event discrimination, low-FPR recovery, probability calibration via Murphy decomposition, and runtime. The central claim is that the shallow-depth angle-encoded variant matches the best classical baselines in discrimination and low-FPR recovery while achieving the lowest calibration error, with Murphy decomposition confirming low reliability error and stable high resolution across depths and training sizes.

Significance. If the results hold under the encoding assumptions, the work provides concrete empirical evidence that hybrid quantum models can deliver competitive discrimination with superior calibration for imbalanced rare-event tasks in astronomy. The explicit use of Murphy decomposition to separate reliability and resolution is a methodological strength, and the paired-seed protocol with public dataset supports reproducibility. This could inform encoding choices in near-term quantum ML for applications requiring trustworthy probabilities.

major comments (3)
  1. [Feature encoding / Methods] Feature encoding section: The angle-encoding mapping (normalization of each of the 8 HTRU-2 features to rotations θ_i = π·(x_i - min)/(max-min)) is central to the headline claim that the quantum model improves calibration without sacrificing discrimination. No analysis is provided showing that this mapping preserves class-conditional separations and variances (e.g., via pre/post-encoding histograms, KL divergence, or overlap metrics between pulsar and noise distributions). If the mapping saturates angles for the most discriminative features, the reported calibration gain via Murphy decomposition could be an artifact of preprocessing rather than the variational circuit or analytic gradients.
  2. [Results / Evaluation metrics] Results on calibration and Murphy decomposition: The claim that the angle-encoded model gives the lowest calibration error and maintains 'low reliability error and high, stable resolution' requires quantitative tables or figures reporting the decomposed reliability and resolution components for all models, depths, and training-set sizes. Without these values and associated uncertainties, it is not possible to verify that the improvement is statistically meaningful or robust to the imbalanced dataset.
  3. [Experimental setup / Results] Training and evaluation protocol: The paired-seed protocol is used for comparisons, but the manuscript does not specify the number of seeds, whether paired statistical tests (e.g., Wilcoxon or t-tests on AUC, calibration error) were performed, or how class imbalance was handled during training (e.g., loss weighting). These details are load-bearing for the comparative claims against classical baselines.
minor comments (3)
  1. [Abstract] Abstract contains a typo: 'quantum-calssical' should be 'quantum-classical'.
  2. [Methods] Notation for the three encodings and the logistic regression output should be introduced with explicit equations (e.g., the circuit ansatz and the parameter-shift rule implementation) to improve clarity for readers unfamiliar with the specific hybrid model.
  3. [Results] Runtime analysis figures should report variance or error bars across the paired seeds rather than single-point estimates, given the stochastic nature of training.
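
Both major comment 3 (paired statistical tests) and minor comment 3 (seed-level variability) reduce to keeping one metric value per shared seed and comparing the paired arrays. A minimal sketch, assuming per-seed AUC or calibration-error arrays have already been collected; the choice of tests is an assumption, not the paper's protocol:

```python
# Illustrative paired-seed comparison (variable names and the specific tests
# are assumptions; the paper's protocol may differ).
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

def paired_seed_report(metric_quantum, metric_classical):
    """Each argument holds one metric value per shared random seed."""
    q = np.asarray(metric_quantum)
    c = np.asarray(metric_classical)
    diff = q - c
    t_stat, t_p = ttest_rel(q, c)     # paired t-test across seeds
    w_stat, w_p = wilcoxon(q, c)      # non-parametric paired alternative
    print(f"mean diff = {diff.mean():+.4f} ± {diff.std(ddof=1):.4f} (seed std)")
    print(f"paired t-test p = {t_p:.3g}, Wilcoxon p = {w_p:.3g}")

# Example call with ten paired seeds (placeholder names, not results):
# paired_seed_report(auc_qlr_angle, auc_classical_lr)
```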

Simulated Author's Rebuttal

3 responses · 0 unresolved

We are grateful to the referee for the careful reading and constructive comments on our manuscript. We address each major comment point by point below, proposing targeted revisions that strengthen the presentation without altering the core results.

read point-by-point responses
  1. Referee: Feature encoding section: The angle-encoding mapping (normalization of each of the 8 HTRU-2 features to rotations θ_i = π·(x_i - min)/(max-min)) is central to the headline claim that the quantum model improves calibration without sacrificing discrimination. No analysis is provided showing that this mapping preserves class-conditional separations and variances (e.g., via pre/post-encoding histograms, KL divergence, or overlap metrics between pulsar and noise distributions). If the mapping saturates angles for the most discriminative features, the reported calibration gain via Murphy decomposition could be an artifact of preprocessing rather than the variational circuit or analytic gradients.

    Authors: We thank the referee for this methodological observation. The angle encoding employs a standard linear rescaling to [0, π], but we agree that explicit checks are needed to rule out preprocessing artifacts. In the revised manuscript we will insert a dedicated paragraph in the feature-encoding subsection that includes (i) overlaid histograms of the original and encoded values for the two most discriminative features and (ii) KL-divergence values between the pulsar and non-pulsar class-conditional distributions computed both before and after encoding. These additions will demonstrate that class separations are preserved and that the observed calibration improvement is attributable to the variational model. revision: yes

  2. Referee: Results on calibration and Murphy decomposition: The claim that the angle-encoded model gives the lowest calibration error and maintains 'low reliability error and high, stable resolution' requires quantitative tables or figures reporting the decomposed reliability and resolution components for all models, depths, and training-set sizes. Without these values and associated uncertainties, it is not possible to verify that the improvement is statistically meaningful or robust to the imbalanced dataset.

    Authors: We acknowledge that the current text summarizes the Murphy-decomposition outcomes without tabulating every component. The revised manuscript will contain a new table (placed after the calibration-error figure) that lists, for every model, depth, and training-set size, the reliability error, resolution, and total calibration error together with the standard deviations obtained from the paired-seed runs. This table will make the statistical robustness of the low-reliability, high-resolution behavior directly verifiable. revision: yes

  3. Referee: Training and evaluation protocol: The paired-seed protocol is used for comparisons, but the manuscript does not specify the number of seeds, whether paired statistical tests (e.g., Wilcoxon or t-tests on AUC, calibration error) were performed, or how class imbalance was handled during training (e.g., loss weighting). These details are load-bearing for the comparative claims against classical baselines.

    Authors: We appreciate the referee highlighting these missing experimental specifications. The revised methods section will state that ten paired seeds were employed throughout, that paired t-tests were performed on AUC and calibration error (with p-values now reported), and that class-imbalance was addressed by weighting the logistic loss inversely to the observed class frequencies for both quantum and classical models. These clarifications will be added without changing any numerical results. revision: yes

Circularity Check

0 steps flagged

No significant circularity; evaluation is empirically grounded

full rationale

The paper trains hybrid quantum logistic regression variants (angle, amplitude, and data re-uploading encodings) on the public HTRU-2 dataset using analytic parameter-shift gradients, then reports discrimination, low-FPR recovery, and Murphy-decomposed calibration metrics against independent classical logistic regression baselines and a QSVM reference. No derivation step reduces by construction to its own fitted parameters, no prediction is a renamed fit, and no load-bearing claim rests on a self-citation chain or imported uniqueness theorem; all quantitative claims follow directly from standard train/test splits and external benchmarks.
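
For concreteness, the calibration quantities this evaluation leans on (ECE and the Murphy reliability and resolution terms) can all be computed from the same binned held-out predictions. A minimal sketch with equal-width probability bins; the bin count and binning scheme are assumptions:

```python
# Illustrative binned calibration metrics (equal-width bins assumed).
import numpy as np

def calibration_terms(probs, labels, n_bins=10):
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    n = len(probs)
    base_rate = labels.mean()
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    ece = rel = res = 0.0
    for b in range(n_bins):
        mask = bins == b
        n_b = mask.sum()
        if n_b == 0:
            continue
        conf = probs[mask].mean()      # mean predicted probability in bin
        freq = labels[mask].mean()     # observed pulsar fraction in bin
        ece += (n_b / n) * abs(conf - freq)          # expected calibration error
        rel += (n_b / n) * (conf - freq) ** 2        # Murphy reliability term
        res += (n_b / n) * (freq - base_rate) ** 2   # Murphy resolution term
    return {"ECE": ece, "reliability": rel, "resolution": res,
            "uncertainty": base_rate * (1 - base_rate)}
```

With this binned form, the Brier score is approximately reliability − resolution + uncertainty, up to the within-bin variance discarded by the binning.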

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The central claim depends on the validity of quantum simulation for small systems, the appropriateness of logistic regression for this binary classification, and the representativeness of the HTRU-2 dataset. No new physical entities are introduced. Free parameters are the trainable weights and any hyperparameters for the quantum circuits.

free parameters (2)
  • encoding parameters
    Parameters for angle, amplitude, and re-uploading encodings are set based on data and likely optimized.
  • logistic regression weights
    The model parameters are fitted to the training data using gradients.
axioms (2)
  • standard math: The quantum circuits implement the specified encodings correctly and measurements yield the expected probabilities.
    Assumed in the hybrid model implementation.
  • domain assumption: The HTRU-2 dataset provides reliable labels for supervised training.
    Standard assumption for the classification task.

pith-pipeline@v0.9.0 · 5538 in / 1647 out tokens · 65511 ms · 2026-05-11T03:18:30.200358+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
