pith. machine review for the scientific record.

arxiv: 2604.23289 · v2 · submitted 2026-04-25 · 💻 cs.CV · cs.AI · cs.LG · cs.MM

Recognition: unknown

MetaErr: Towards Predicting Error Patterns in Deep Neural Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 08:42 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · cs.LG · cs.MM
keywords MetaErr · error prediction · deep neural networks · failure forecasting · computer vision · semi-supervised learning · model-agnostic · pseudo-labeling

The pith

A meta-model predicts whether a deep neural network succeeds or fails on individual samples using only task performance observations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes MetaErr, a framework to predict when deep learning models will make errors on specific inputs. It trains a separate model that learns from the base network's performance on the overall task to forecast success or failure per sample. This matters for applications where unexpected failures in computer vision or multimedia systems can have serious consequences: forecasting failures allows proactive measures to be taken before they occur. The meta-model requires no information about the base model's design, weights, or activations. Experiments show it beats baselines and aids semi-supervised learning on standard vision datasets.

Core claim

MetaErr trains a meta-model to predict per-sample success or failure of a base deep neural network by observing only the base model's performance on the given learning task. The meta-model is agnostic to the base model's architecture and training parameters. This enables error prediction in smart multimedia applications and improves pseudo-labeling in semi-supervised learning, with superior performance on three benchmark computer vision datasets.
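
The recipe, as the abstract describes it, can be sketched in a few lines. This is a minimal illustration under stated assumptions: the base model here is a fixed linear rule and the meta-model a 1-nearest-neighbour correctness predictor — illustrative stand-ins, not the paper's architectures. The key structural point it demonstrates is that the meta-model's only supervision is the base model's observed per-sample success/failure, never its weights or activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 2-D points, label = sign of x0, with noise near the boundary.
X = rng.normal(size=(600, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=600) > 0).astype(int)

# 1. "Base model": a fixed black-box rule (x0 > 0); internals never inspected.
def base_predict(X):
    return (X[:, 0] > 0).astype(int)

# 2. Observe per-sample correctness on held-out data; these 0/1 outcomes
#    are the only supervision the meta-model receives.
X_meta, y_meta = X[:400], y[:400]
correct = (base_predict(X_meta) == y_meta).astype(int)

# 3. "Meta-model": 1-nearest-neighbour over raw inputs, predicting whether
#    the base model will be correct on a query sample.
def meta_predict(X_query):
    d = ((X_query[:, None, :] - X_meta[None, :, :]) ** 2).sum(-1)
    return correct[d.argmin(axis=1)]

# 4. Evaluate error prediction on fresh samples.
X_test, y_test = X[400:], y[400:]
true_outcome = (base_predict(X_test) == y_test).astype(int)
acc = (meta_predict(X_test) == true_outcome).mean()
print(f"meta-model error-prediction accuracy: {acc:.2f}")
```

Because base-model errors cluster near the decision boundary in this toy setup, even a nearest-neighbour meta-predictor beats chance — which is the kind of input-space structure the framework's premise relies on.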

What carries the argument

The meta-model that maps observations of the base model's aggregate performance to predictions of per-sample correctness or error.

If this is right

  • Error prediction can be integrated into deployed systems to flag risky predictions without internal access.
  • Improved pseudo-label selection in semi-supervised learning leads to better model accuracy.
  • The agnostic nature allows the same meta-model to work with various base architectures.
  • Potential for real-time failure anticipation in multimedia computing applications.
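
The pseudo-labeling use case in particular reduces to a filtering step. A hedged sketch, assuming the meta-model exposes a probability that the base model is correct (the function names and the 0.5 threshold are our assumptions, not the paper's interface):

```python
import numpy as np

def select_pseudo_labels(unlabeled_X, base_predict, meta_predict_proba, threshold=0.5):
    """Keep only pseudo-labels on samples where the meta-model trusts the base model."""
    pseudo = base_predict(unlabeled_X)
    p_correct = meta_predict_proba(unlabeled_X)  # estimated P(base model is correct)
    keep = np.where(p_correct >= threshold)[0]
    return keep, pseudo[keep]

# Toy stand-ins for the two models.
X_u = np.arange(10, dtype=float).reshape(-1, 1)
base_predict = lambda X: (X[:, 0] > 4).astype(int)
meta_predict_proba = lambda X: np.where(X[:, 0] % 2 == 0, 0.9, 0.3)

idx, labels = select_pseudo_labels(X_u, base_predict, meta_predict_proba)
print(idx, labels)  # keeps only even-valued samples, where p_correct = 0.9
```

The retained (index, label) pairs would then be folded into the labeled set for the next semi-supervised training round.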

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Such predictors might combine with uncertainty quantification techniques for more robust safety mechanisms.
  • Testing on non-vision domains could reveal if the approach generalizes beyond computer vision tasks.
  • If the meta-model can be trained with minimal data, it could enable on-the-fly adaptation for new tasks.

Load-bearing premise

The base model's task-level performance statistics contain enough information to predict its behavior on individual unseen samples without any internal knowledge.

What would settle it

A test in which the base model is swapped for one with different training dynamics or data distribution: if the meta-model's error-prediction accuracy drops to chance level, the architecture-agnostic claim fails; if it holds up, the load-bearing premise survives.
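
Note that "chance level" here should mean the majority-class baseline, not 50%: if a base model is correct on 75% of samples, a meta-model that always predicts "success" already scores 0.75. A small worked example of the comparison (the outcome vectors are invented for illustration):

```python
import numpy as np

def chance_level(correct_labels):
    """Accuracy of always predicting the majority outcome."""
    p = np.mean(correct_labels)
    return max(p, 1 - p)

correct = np.array([1, 1, 0, 1, 0, 1, 1, 1])    # observed base-model outcomes
meta_preds = np.array([1, 1, 0, 1, 1, 1, 1, 1])  # meta-model forecasts

meta_acc = np.mean(meta_preds == correct)
print(meta_acc, chance_level(correct))  # 0.875 vs 0.75
```

Only the gap between the two numbers, measured across base models with different training dynamics, would settle the claim.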

Figures

Figures reproduced from arXiv: 2604.23289 by Shayok Chakraborty, Varun Totakura.

Figure 1
Figure 1: Outline of the proposed MetaErr framework. view at source ↗
Figure 2
Figure 2: Visual illustration of the performance of … view at source ↗
Figure 3
Figure 3: Performance of MetaErr on regression tasks. Best viewed in color. view at source ↗
Figure 4
Figure 4: Performance of MetaErr on semantic image segmentation tasks. Best viewed in color. view at source ↗
read the original abstract

Due to the unprecedented success of deep learning, it has become an integral component in several multimedia computing applications in today's world. Unfortunately, deep learning systems are not perfect and can fail, sometimes abruptly, without prior warning or explanation. While reducing the error rate of deep neural networks has been the primary focus of the multimedia community, the problem of predicting when a deep learning system is going to fail has received significantly less research attention. In this paper, we propose a simple yet effective framework, MetaErr, to address this under-explored problem in deep learning research. We train a meta-model whose goal is to predict whether a base deep neural network will succeed or fail in predicting a particular data sample, by observing the base model's performance on a given learning task. The meta-model is completely agnostic of the architecture and training parameters of the base model. Such an error prediction system can be immensely useful in a variety of smart multimedia applications. Our empirical studies corroborate the promise and potential of our framework against competing baselines. We further demonstrate the usefulness of our framework to improve the performance of pseudo-labeling-based semi-supervised learning, and show that MetaErr outperforms several strong baselines on three benchmark computer vision datasets.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes MetaErr, a meta-model framework to predict per-sample success or failure of a base deep neural network. The meta-model is trained solely by observing the base model's aggregate performance on a learning task and is completely agnostic to the base model's architecture, parameters, and internal representations. The authors claim that empirical studies show MetaErr outperforming several strong baselines on three benchmark computer vision datasets and that it can improve pseudo-labeling performance in semi-supervised learning.

Significance. If the central claim holds, the work would offer a practical way to anticipate DNN failures in multimedia applications using only high-level task metrics, potentially aiding reliability and semi-supervised pipelines without requiring model internals. The empirical demonstration on multiple datasets and the pseudo-labeling use case would strengthen its applied value, though the absence of verifiable experimental details limits assessment of its actual contribution.

major comments (2)
  1. [Abstract] Abstract: The claim that a meta-model can predict per-sample success/failure 'by observing the base models performance on a given learning task' while remaining 'completely agnostic of the architecture and training parameters' is internally inconsistent. Aggregate task-level metrics contain no per-sample distinguishing information, so reliable per-sample prediction is information-theoretically impossible under the stated constraints without either feeding individual samples or supplying per-sample base-model outputs; either case violates the 'only aggregate' and 'agnostic' conditions that are presented as the framework's key separation from baselines.
  2. [Abstract] Abstract (empirical studies paragraph): The manuscript asserts that 'MetaErr outperforms several strong baselines on three benchmark computer vision datasets' and improves pseudo-labeling, yet provides no description of the experimental setup, meta-model input construction, baseline implementations, evaluation metrics, or statistical significance testing. Without these details the claimed superiority cannot be verified and the load-bearing empirical support for the framework remains unsubstantiated.
minor comments (2)
  1. [Abstract] The abstract and introduction would benefit from an explicit statement of the precise input features supplied to the meta-model during training and inference.
  2. [Abstract] Notation for 'base model performance' is used without definition; a short clarifying sentence would improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments on our manuscript. We address each major comment point by point below and outline the revisions we will make.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The claim that a meta-model can predict per-sample success/failure 'by observing the base models performance on a given learning task' while remaining 'completely agnostic of the architecture and training parameters' is internally inconsistent. Aggregate task-level metrics contain no per-sample distinguishing information, so reliable per-sample prediction is information-theoretically impossible under the stated constraints without either feeding individual samples or supplying per-sample base-model outputs; either case violates the 'only aggregate' and 'agnostic' conditions that are presented as the framework's key separation from baselines.

    Authors: We agree that the abstract phrasing is ambiguous and can be read as implying the use of only aggregate task-level metrics, which would indeed make per-sample prediction impossible. In the MetaErr framework, the meta-model receives supervision from the base model's observed per-sample correctness (success/failure) on samples drawn from the learning task; this provides the training labels. The meta-model itself remains agnostic to the base model's architecture, parameters, and internal representations, and does not require per-sample outputs such as logits or features from the base model at inference time. Instead, it operates on the input sample to forecast whether the base model would err. We will revise the abstract to explicitly describe the meta-model inputs, the role of performance observations as supervision rather than aggregate inputs, and the agnostic property to eliminate this inconsistency. revision: yes

  2. Referee: [Abstract] Abstract (empirical studies paragraph): The manuscript asserts that 'MetaErr outperforms several strong baselines on three benchmark computer vision datasets' and improves pseudo-labeling, yet provides no description of the experimental setup, meta-model input construction, baseline implementations, evaluation metrics, or statistical significance testing. Without these details the claimed superiority cannot be verified and the load-bearing empirical support for the framework remains unsubstantiated.

    Authors: The abstract is intentionally concise and therefore omits implementation specifics. The full manuscript details the experimental setup, meta-model input construction, baseline implementations, evaluation metrics (error-prediction accuracy), and statistical significance testing in Section 4 (Experiments) and the associated tables/figures. To improve verifiability and address the referee's concern, we will expand the experimental section with additional explicit descriptions of meta-model input construction, baseline re-implementation details, and further statistical analysis in the revised version. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical framework with no derivations or self-referential reductions

full rationale

The paper presents MetaErr as a data-driven empirical framework: a meta-model is trained to predict per-sample base-model success/failure from task-level performance observations, with all claims validated through experiments on three benchmark datasets. No equations, derivations, first-principles results, or mathematical predictions appear anywhere in the text. There are no fitted parameters renamed as predictions, no self-citations invoked as load-bearing uniqueness theorems, no ansatzes smuggled in, and no self-definitional loops. The central claim reduces to an experimental demonstration rather than any reduction to its own inputs by construction; external benchmark evaluation supplies the necessary independence.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claim rests on the domain assumption that base-model performance observations contain sufficient signal to train a meta-predictor of per-sample errors. No free parameters or invented entities are described in the abstract.

axioms (1)
  • domain assumption A meta-model can be trained to predict base DNN success/failure on individual samples using only aggregate performance observations of the base model.
    This is the core premise enabling the architecture-agnostic design stated in the abstract.

pith-pipeline@v0.9.0 · 5512 in / 1313 out tokens · 49233 ms · 2026-05-08T08:42:53.122180+00:00 · methodology

discussion (0)

Reference graph

Works this paper leans on

69 extracted references · 5 canonical work pages · 3 internal anchors

  1. [1]

    D. Yoo and I. Kweon, ``Learning loss for active learning,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019

  2. [2]

    Z. Zhao, P. Zheng, S. Xu, and X. Wu, ``Object detection with deep learning: A review,'' IEEE Transactions on Neural Networks and Learning Systems (TNNLS), vol. 30, no. 11, pp. 3212--3232, 2019

  3. [3]

    L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, ``Encoder-decoder with atrous separable convolution for semantic image segmentation,'' in European Conference on Computer Vision (ECCV), 2018

  4. [4]

    End to End Learning for Self-Driving Cars

    M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba, ``End to end learning for self-driving cars,'' in arXiv:1604.07316, 2016

  5. [5]

    S. Panchanathan, S. Chakraborty, T. McDaniel, R. Tadayon, B. Fakhri, N. O'Connor, M. Marsden, S. Little, K. McGuinness, and D. Monaghan, ``Enriching the fan experience in a smart stadium using internet of things technologies,'' International Journal of Semantic Computing (IJSC), vol. 11, no. 2, pp. 137--170, 2017

  6. [6]

    D. Tobon, M. Hossain, G. Muhammad, J. Bilbao, and A. E. Saddik, ``Deep learning in multimedia healthcare applications: A review,'' Multimedia Systems, vol. 28, no. 4, pp. 1465--1479, 2022

  7. [7]

    X. Jiang, M. Osl, J. Kim, and L. Ohno-Machado, ``Calibrating predictive model estimates to support personalized medicine,'' Journal of the American Medical Informatics Association (AMIA), vol. 19, no. 2, pp. 263--274, 2011

  8. [8]

    A. Kendall and Y. Gal, ``What uncertainties do we need in bayesian deep learning for computer vision?'' in Neural Information Processing Systems (NeurIPS), 2017

  9. [9]

    ``National highway traffic safety administration,'' in Technical Report, PE 16-007, 2017

  10. [10]

    L. Navarro-Serment, A. Suppe, D. Munoz, D. Bagnell, and M. Hebert, ``An architecture for online semantic labeling on ugvs,'' in SPIE Unmanned Systems Technology XV, 2013

  11. [11]

    K. Ghosal, ``Applications in image aesthetics using deep learning: Attribute prediction, image captioning and score regression,'' in PhD Thesis. Trinity College Dublin. School of Computer Science and Statistics, 2021

  12. [12]

    B. Lakshminarayanan, A. Pritzel, and C. Blundell, ``Simple and scalable predictive uncertainty estimation using deep ensembles,'' in Neural Information Processing Systems (NeurIPS), 2017

  13. [13]

    Y. Gal and Z. Ghahramani, ``Dropout as a bayesian approximation: Representing model uncertainty in deep learning,'' in International Conference on Machine Learning (ICML), 2016

  14. [14]

    K. Brach, B. Sick, and O. Durr, ``Single shot mc dropout approximation,'' in Workshop on Uncertainty and Robustness in Deep Learning at the International Conference on Machine Learning (ICML), 2020

  15. [15]

    P. Oberdiek, M. Rottmann, and H. Gottschalk, ``Classification uncertainty of deep neural networks based on gradient information,'' in IAPR Workshop on Artificial Neural Networks in Pattern Recognition, 2018

  16. [16]

    Z. Senousy, M. Abdelsamea, M. Mohamed, and M. Gaber, ``3e-net: Entropy-based elastic ensemble of deep convolutional neural networks for grading of invasive breast carcinoma histopathological microscopic images,'' Entropy, vol. 23, no. 5, 2021

  17. [17]

    Y. Geifman and R. El-Yaniv, ``Selective classification for deep neural networks,'' in Neural Information Processing Systems (NeurIPS), 2017

  18. [18]

    M. Teye, H. Azizpour, and K. Smith, ``Bayesian uncertainty estimation for batch normalized deep networks,'' in International Conference on Machine Learning (ICML), 2018

  19. [19]

    C. Louizos and M. Welling, ``Structured and efficient variational deep learning with matrix gaussian posteriors,'' in International Conference on Machine Learning (ICML), 2016

  20. [20]

    Y. Gal, R. Islam, and Z. Ghahramani, ``Deep bayesian active learning with image data,'' in International Conference on Machine Learning (ICML), 2017

  21. [21]

    J. Mukhoti, A. Kirsch, J. van Amersfoort, P. H. Torr, and Y. Gal, ``Deep deterministic uncertainty: A new simple baseline,'' in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2023, pp. 24384--24394

  22. [22]

    W. Liu, X. Wang, J. Owens, and Y. Li, ``Energy-based out-of-distribution detection,'' Advances in Neural Information Processing Systems, 2020

  23. [23]

    T. Chen, J. Navratil, V. Iyengar, and K. Shanmugam, ``Confidence scoring using whitebox meta-models with linear classifier probes,'' in International Conference on Artificial Intelligence and Statistics (AISTATS), 2019

  24. [24]

    Y. Geifman and R. El-Yaniv, ``SelectiveNet: A deep neural network with an integrated reject option,'' in International Conference on Machine Learning (ICML), 2019

  25. [25]

    C. Cortes, G. DeSalvo, and M. Mohri, ``Learning with rejection,'' in International Conference on Algorithmic Learning Theory, 2016

  26. [26]

    B. Elder, M. Arnold, A. Murthi, and J. Navratil, ``Learning prediction intervals for model performance,'' in AAAI Conference on Artificial Intelligence, 2021

  27. [27]

    S. Schelter, T. Rukat, and F. Biessmann, ``Learning to validate the predictions of black box classifiers on unseen data,'' in ACM SIGMOD International Conference on Management of Data, 2020

  28. [28]

    P. Donmez, G. Lebanon, and K. Balasubramanian, ``Estimating classification and regression errors without labels,'' Journal of Machine Learning Research (JMLR), vol. 11, no. 4, 2010

  29. [29]

    E. Platanios, H. Poon, T. Mitchell, and E. Horvitz, ``Estimating accuracy from unlabeled data: A probabilistic logic approach,'' in Neural Information Processing Systems (NeurIPS), 2017

  30. [30]

    W. Deng, S. Gould, and L. Zheng, ``What does rotation prediction tell us about classifier accuracy under varying testing environments?'' in International Conference on Machine Learning (ICML), 2021

  31. [31]

    W. Deng and L. Zheng, ``Are labels always necessary for classifier accuracy evaluation?'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

  32. [32]

    T. Wu, M. T. Ribeiro, J. Heer, and D. Weld, ``Errudite: Scalable, reproducible, and testable error analysis,'' in Association for Computational Linguistics (ACL), 2019

  33. [33]

    B. Nushi, E. Kamar, and E. Horvitz, ``Towards accountable ai: Hybrid human-machine analyses for characterizing system failure,'' in AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2018

  34. [34]

    S. Singla, B. Nushi, S. Shah, E. Kamar, and E. Horvitz, ``Understanding failures of deep networks via robust feature extraction,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

  35. [35]

    A. Krizhevsky, ``Learning multiple layers of features from tiny images,'' in Technical Report, University of Toronto, 2009

  36. [36]

    Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng, ``Reading digits in natural images with unsupervised feature learning,'' in Neural Information Processing Systems (NeurIPS) Workshop, 2011

  37. [37]

    P. Zhang, J. Wang, A. Farhadi, M. Hebert, and D. Parikh, ``Predicting failures of vision systems,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014

  38. [38]

    S. Thulasidasan, G. Chennupati, J. Bilmes, T. Bhattacharya, and S. Michalak, ``On mixup training: Improved calibration and predictive uncertainty for deep neural networks,'' in Neural Information Processing Systems (NeurIPS), 2019

  39. [39]

    C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, ``On calibration of modern neural networks,'' in International Conference on Machine Learning (ICML), 2017

  40. [40]

    V. Verma, A. Lamb, J. Kannala, Y. Bengio, and D. Lopez-Paz, ``Interpolation consistency training for semi-supervised learning,'' in International Joint Conference on Artificial Intelligence (IJCAI), 2019

  41. [41]

    T. Miyato, S. Maeda, M. Koyama, and S. Ishii, ``Virtual adversarial training: A regularization method for supervised and semi-supervised learning,'' IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 41, no. 8, pp. 1979--1993, 2018

  42. [42]

    A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, ``Label propagation for deep semi-supervised learning,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019

  43. [43]

    E. Arazo, D. Ortego, P. Albert, N. E. O'Connor, and K. McGuinness, ``Pseudo-labeling and confirmation bias in deep semi-supervised learning,'' in International Joint Conference on Neural Networks (IJCNN), 2020

  44. [44]

    P. Cascante-Bonilla, F. Tan, Y. Qi, and V. Ordonez, ``Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning,'' in AAAI Conference on Artificial Intelligence (AAAI), 2021

  45. [45]

    A. Tarvainen and H. Valpola, ``Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,'' in Neural Information Processing Systems (NeurIPS), 2017

  46. [46]

    H. Xiao, K. Rasul, and R. Vollgraf, ``Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,'' in arXiv:1708.07747, 2017

  47. [47]

    S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou, ``AgeDB: the first manually collected, in-the-wild age database,'' in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017

  48. [48]

    Z. Zhang, Y. Song, and H. Qi, ``Age progression / regression by conditional adversarial autoencoder,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017

  49. [49]

    B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik, ``Semantic contours from inverse detectors,'' in IEEE International Conference on Computer Vision (ICCV), 2011

  50. [50]

    M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, ``The cityscapes dataset for semantic urban scene understanding,'' in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016

  51. [51]

    M. Buda, A. Saha, and M. Mazurowski, ``Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm,'' Computers in Biology and Medicine, vol. 109, pp. 218--225, 2019

  52. [52]

    G. Eason, B. Noble, and I. N. Sneddon, ``On certain integrals of Lipschitz-Hankel type involving products of Bessel functions,'' Phil. Trans. Roy. Soc. London, vol. A247, pp. 529--551, April 1955

  53. [53]

    J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68--73

  54. [54]

    I. S. Jacobs and C. P. Bean, ``Fine particles, thin films and exchange anisotropy,'' in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271--350

  55. [55]

    K. Elissa, ``Title of paper if known,'' unpublished

  56. [56]

    R. Nicole, ``Title of paper with only first word capitalized,'' J. Name Stand. Abbrev., in press

  57. [57]

    Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, ``Electron spectroscopy studies on magneto-optical media and plastic substrate interface,'' IEEE Transl. J. Magn. Japan, vol. 2, pp. 740--741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982]

  58. [58]

    M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989

  59. [59]

    D. P. Kingma and M. Welling, ``Auto-encoding variational Bayes,'' 2013, arXiv:1312.6114. [Online]. Available: https://arxiv.org/abs/1312.6114

  60. [60]

    S. Liu, ``Wi-Fi Energy Detection Testbed (12MTC),'' 2023, GitHub repository. [Online]. Available: https://github.com/liustone99/Wi-Fi-Energy-Detection-Testbed-12MTC

  61. [61]

    ``Treatment episode data set: discharges (TEDS-D): concatenated, 2006 to 2009.'' U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Office of Applied Studies, August, 2013, DOI:10.3886/ICPSR30122.v2

  62. [62]

    K. Eves and J. Valasek, ``Adaptive control for singularly perturbed systems examples,'' Code Ocean, Aug. 2023. [Online]. Available: https://codeocean.com/capsule/4989235/tree

  63. [63]

    S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou, ``AgeDB: the first manually collected, in-the-wild age database,'' in IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2017

  64. [64]

    Z. Zhang, Y. Song, and H. Qi, ``Age progression / regression by conditional adversarial autoencoder,'' in IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017

  65. [65]

    Y. Gal and Z. Ghahramani, ``Dropout as a bayesian approximation: Representing model uncertainty in deep learning,'' in Int. Conf. Machine Learning (ICML), 2016

  66. [66]

    B. Hariharan, P. Arbelaez, L. Bourdev, S. Maji, and J. Malik, ``Semantic contours from inverse detectors,'' in IEEE Int. Conf. Computer Vision (ICCV), 2011

  67. [67]

    M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, ``The cityscapes dataset for semantic urban scene understanding,'' in IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016

  68. [68]

    Y. Gal, R. Islam, and Z. Ghahramani, ``Deep bayesian active learning with image data,'' in Int. Conf. Machine Learning (ICML), 2017
