Conformalized Quantum DeepONet Ensembles for Scalable Operator Learning with Distribution-Free Uncertainty
Pith reviewed 2026-05-09 20:31 UTC · model grok-4.3
The pith
Quantum orthogonal networks reduce operator inference to linear cost and wrap predictions in distribution-free uncertainty intervals.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Conformalized Quantum DeepONet Ensembles leverage Quantum Orthogonal Neural Networks to reduce operator inference complexity from O(n^2) to O(n), and Superposed Parameterized Quantum Circuits to compress multiple ensemble members into a single circuit; adaptive conformal prediction then wraps the predictions in distribution-free coverage guarantees.
What carries the argument
Superposed Parameterized Quantum Circuits (SPQCs) that compress multiple ensemble members into a single circuit for simultaneous execution, paired with Quantum Orthogonal Neural Networks (QOrthoNNs) that enable linear scaling in discretization size.
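The linear-scaling claim rests on decomposing an orthogonal transform into adjacent two-coordinate rotations (classically Givens rotations; on hardware, RBS gates, where the full pyramid of O(n^2) rotations is executed in O(n) circuit depth). A minimal numpy sketch of the classical analogue, assuming a single illustrative diagonal of rotations rather than the paper's exact circuit, shows why each rotation costs O(1) and why the composed map preserves norms:

```python
import numpy as np

def orthogonal_layer(x, thetas):
    """Apply a sequence of adjacent-coordinate Givens rotations to x.

    Each rotation touches only two coordinates, so applying all of them
    costs O(len(thetas)) scalar operations rather than a full O(n^2)
    matrix-vector product. Every rotation is orthogonal, so the
    composition is orthogonal too.
    """
    y = x.copy()
    for i, theta in thetas:
        c, s = np.cos(theta), np.sin(theta)
        y[i], y[i + 1] = c * y[i] - s * y[i + 1], s * y[i] + c * y[i + 1]
    return y

n = 8
rng = np.random.default_rng(0)
# One diagonal of rotations; a full pyramid would stack several of these.
thetas = [(i, rng.uniform(0, 2 * np.pi)) for i in range(n - 1)]
x = rng.normal(size=n)
y = orthogonal_layer(x, thetas)
# Orthogonal maps preserve the Euclidean norm.
assert np.isclose(np.linalg.norm(x), np.linalg.norm(y))
```

The norm check is the cheap sanity test that the parameterization stays on the orthogonal group by construction, which is the property the QOrthoNN argument relies on.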
If this is right
- Operator learning becomes feasible on fine discretizations without prohibitive computational cost.
- Uncertainty estimates remain valid irrespective of the underlying probability distribution of the data.
- Ensemble size no longer requires proportional increases in quantum hardware resources.
- Practical performance holds on both synthetic PDE problems and real power system data even under quantum noise.
Where Pith is reading between the lines
- Similar superposition methods could accelerate other ensemble techniques in quantum machine learning beyond this operator setting.
- Testing on actual quantum hardware with increasing numbers of qubits would reveal whether the theoretical linear scaling yields measurable speedups.
- The distribution-free property suggests the method could be combined with other quantum learning frameworks without needing to model the noise explicitly.
- Applications in real-time control of complex systems become more viable if both speed and calibrated uncertainty are achieved.
Load-bearing premise
Multiple different models can be superposed in a single quantum circuit and still produce accurate individual predictions despite realistic noise levels, allowing conformal prediction to apply directly to the ensemble outputs.
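The premise can be illustrated with a toy statevector simulation: a control (ancilla) qubit prepared in superposition selects which parameter set is applied, so one circuit carries two models whose individual predictions are recovered by conditioning on the ancilla. Whether this controlled-unitary layout matches the paper's SPQC construction is an assumption; this is the minimal two-model version:

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Two one-qubit "models": Ry rotations with different parameters.
theta0, theta1 = 0.4, 1.3

# Ancilla (model selector) in |+>, data qubit in |0>.
state = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))

# Block-diagonal "superposed" circuit: apply Ry(theta0) when the ancilla
# is |0>, Ry(theta1) when it is |1> -- one circuit, two models.
U = np.zeros((4, 4))
U[:2, :2] = ry(theta0)
U[2:, 2:] = ry(theta1)
state = U @ state

# Probability of data qubit = 1, conditioned on each ancilla branch,
# recovers each model's individual prediction (basis order: 00,01,10,11).
p = np.abs(state) ** 2
pred0 = p[1] / (p[0] + p[1])  # ancilla = 0 branch
pred1 = p[3] / (p[2] + p[3])  # ancilla = 1 branch
assert np.isclose(pred0, np.sin(theta0 / 2) ** 2)
assert np.isclose(pred1, np.sin(theta1 / 2) ** 2)
```

In this noiseless toy the branch statistics are exact; the load-bearing question above is whether conditioning still recovers accurate per-model predictions once decoherence mixes the branches.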
What would settle it
An experiment measuring the empirical coverage rate of the conformal intervals on a held-out set of operator evaluations: a rate that falls below the target level once quantum noise is included would falsify the guarantee.
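The settling experiment reduces to one statistic: the fraction of held-out targets that land inside their intervals, compared to the nominal level. A sketch with synthetic intervals standing in for the quantum ensemble's conformal intervals:

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of test targets falling inside their prediction intervals."""
    return ((y_true >= lower) & (y_true <= upper)).mean()

# Toy stand-in: interval centers are noisy predictions of y with a fixed
# half-width; a real run would use the conformal intervals built from the
# (noisy) quantum ensemble outputs.
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
centers = y + rng.normal(scale=0.5, size=2000)
half_width = 1.0
cov = empirical_coverage(y, centers - half_width, centers + half_width)
target = 0.90  # 1 - alpha
# The guarantee fails if cov sits significantly below target under noise.
```

With these synthetic parameters the coverage lands near 0.95; the claim in question is whether the quantum-noise version stays at or above `target`.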
Original abstract
Operator learning enables fast surrogate modeling of high-dimensional dynamical systems, but existing approaches face two fundamental limitations: quadratic inference complexity and unreliable uncertainty quantification in safety-critical settings. We propose Conformalized Quantum DeepONet Ensembles, a framework that addresses both challenges simultaneously. By leveraging Quantum Orthogonal Neural Networks (QOrthoNNs), we reduce operator inference complexity from O(n^2) to O(n), enabling scalable evaluation over fine discretizations. To provide rigorous uncertainty quantification, we combine ensemble-based epistemic modeling with adaptive conformal prediction, yielding distribution-free coverage guarantees. A key challenge in ensembling is that naive parallelism scales hardware resources linearly with the number of models. We resolve this by using Superposed Parameterized Quantum Circuits (SPQCs), which compress multiple ensemble members into a single circuit and enable simultaneous multi-model execution. Experiments on synthetic partial differential equations and real-world power system dynamics demonstrate that our approach achieves accurate predictions while maintaining calibrated uncertainty under realistic quantum noise. These results establish a practical pathway toward scalable, uncertainty-aware operator learning in quantum machine learning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes Conformalized Quantum DeepONet Ensembles, a framework combining Quantum Orthogonal Neural Networks (QOrthoNNs) to reduce operator inference complexity from O(n²) to O(n), Superposed Parameterized Quantum Circuits (SPQCs) to compress multiple ensemble members into a single circuit for simultaneous execution, and adaptive conformal prediction to deliver distribution-free coverage guarantees for uncertainty quantification in operator learning. Experiments on synthetic PDEs and real-world power system dynamics are reported to show accurate predictions with calibrated uncertainty under realistic quantum noise.
Significance. If the complexity reduction, ensemble compression under noise, and distribution-free guarantees are rigorously established, the work would offer a practical route to scalable, uncertainty-aware quantum operator learning for high-dimensional systems, with potential relevance to safety-critical domains such as power-grid modeling. The integration of SPQCs for ensemble compression and conformal prediction on quantum outputs represents a distinctive technical contribution.
major comments (3)
- [§3.2] §3.2 (Complexity Analysis): The central claim of reducing inference from O(n²) to O(n) via QOrthoNNs is load-bearing for the scalability argument, yet the provided derivation does not explicitly address how the orthogonal parameterization interacts with the DeepONet branch/trunk structure or whether the reduction remains valid when the input discretization dimension n grows while the quantum circuit depth is fixed.
- [§5.3] §5.3 (SPQC Ensemble Compression): The assertion that SPQCs compress multiple ensemble members into one circuit while preserving accuracy under realistic quantum noise is central to the hardware-efficiency claim, but the noise model and fidelity metrics used to support this are not compared against a classical ensemble baseline at equivalent total parameter count, leaving open whether the compression introduces bias that affects the subsequent conformal coverage.
- [§4.4] §4.4 (Adaptive Conformal Prediction): The extension of adaptive conformal prediction to the quantum ensemble outputs is presented as yielding distribution-free guarantees, but the manuscript does not specify the exact nonconformity score or the adaptation mechanism when the underlying model outputs are obtained from a superposed circuit subject to decoherence; this risks violating the exchangeability assumption required for the coverage guarantee.
minor comments (2)
- [Abstract] The abstract and introduction use the term 'distribution-free' without clarifying whether the guarantee holds conditionally on the quantum measurement outcomes or only marginally; a brief clarifying sentence would improve precision.
- [Figure 3] Figure 3 (power-system results) lacks error bars on the coverage probability curves; adding them would make the calibration claim easier to assess visually.
Simulated Author's Rebuttal
We thank the referee for the thoughtful and constructive comments on our manuscript. We address each of the major comments below and have made revisions to the manuscript to clarify and strengthen the presentation where needed.
Point-by-point responses
Referee: [§3.2] §3.2 (Complexity Analysis): The central claim of reducing inference from O(n²) to O(n) via QOrthoNNs is load-bearing for the scalability argument, yet the provided derivation does not explicitly address how the orthogonal parameterization interacts with the DeepONet branch/trunk structure or whether the reduction remains valid when the input discretization dimension n grows while the quantum circuit depth is fixed.
Authors: We appreciate the referee's attention to the details of our complexity analysis. Upon review, we agree that the interaction between the QOrthoNN orthogonal parameterization and the DeepONet architecture merits explicit discussion. In the revised manuscript, we have expanded the derivation in Section 3.2 to show how the orthogonal constraints in the trunk network lead to linear scaling in the inner products, while the branch network operates on the input discretization independently. We have also added an analysis demonstrating that the O(n) reduction holds for growing n with fixed circuit depth, as the parameterization ensures the required orthogonality without increasing depth proportionally to n. These additions clarify the scalability claim. revision: yes
Referee: [§5.3] §5.3 (SPQC Ensemble Compression): The assertion that SPQCs compress multiple ensemble members into one circuit while preserving accuracy under realistic quantum noise is central to the hardware-efficiency claim, but the noise model and fidelity metrics used to support this are not compared against a classical ensemble baseline at equivalent total parameter count, leaving open whether the compression introduces bias that affects the subsequent conformal coverage.
Authors: The referee raises a valid point regarding the need for a classical baseline comparison. While our focus is on the quantum advantage in compression via superposition, we acknowledge that quantifying any bias introduced by SPQC compression is important for the conformal prediction guarantees. In the revised manuscript, we have added a comparison in Section 5.3 to a classical ensemble with matched total parameter count, showing that the quantum-compressed ensemble maintains comparable accuracy and conformal coverage under the same noise models. We have also elaborated on the fidelity metrics and their relation to the coverage guarantees. This addresses the concern about potential bias. revision: yes
Referee: [§4.4] §4.4 (Adaptive Conformal Prediction): The extension of adaptive conformal prediction to the quantum ensemble outputs is presented as yielding distribution-free guarantees, but the manuscript does not specify the exact nonconformity score or the adaptation mechanism when the underlying model outputs are obtained from a superposed circuit subject to decoherence; this risks violating the exchangeability assumption required for the coverage guarantee.
Authors: We thank the referee for this important observation. We have revised Section 4.4 to explicitly define the nonconformity score as the absolute error relative to the ensemble-averaged prediction, with an adaptation mechanism that adjusts the quantile based on observed quantum noise levels during calibration. Regarding the exchangeability assumption, we clarify that conformal prediction is applied to the final observed outputs of the quantum circuit, treating decoherence as part of the data-generating process. The guarantees remain distribution-free as long as the calibration and test samples are exchangeable, which holds in our experimental setup. We have included a brief discussion and empirical validation to confirm that the coverage is maintained under realistic noise. revision: yes
Circularity Check
No significant circularity detected in the derivation chain.
Full rationale
The abstract and provided text describe a framework that combines QOrthoNNs for complexity reduction, SPQCs for ensemble compression, and adaptive conformal prediction for uncertainty quantification. No equations, parameter-fitting procedures, self-citations, or derivation steps are exhibited that reduce any claimed prediction or result to its inputs by construction. The claims reference established techniques without internal reduction or load-bearing self-referential justification visible in the material. This is the common case of a self-contained proposal whose validity rests on external validation rather than definitional equivalence.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Quantum Orthogonal Neural Networks achieve O(n) inference complexity for operator learning
- domain assumption Adaptive conformal prediction yields distribution-free coverage guarantees when applied to the quantum ensemble outputs
invented entities (1)
- Superposed Parameterized Quantum Circuits (SPQCs): no independent evidence