pith. machine review for the scientific record.

arxiv: 2605.10984 · v1 · submitted 2026-05-09 · 💻 cs.CV


Principle-Guided Supervision for Interpretable Uncertainty in Medical Image Segmentation


Pith reviewed 2026-05-13 07:22 UTC · model grok-4.3

classification 💻 cs.CV
keywords medical image segmentation · uncertainty quantification · interpretable uncertainty · evidential learning · principle-guided supervision · ACDC dataset · ISIC dataset

The pith

Enforcing three perception-aligned principles produces uncertainty maps in medical segmentation that align with sources of human-perceived ambiguity.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper aims to move beyond scalar uncertainty scores in medical image segmentation by making the spatial uncertainty distribution itself interpretable. It defines three explicit principles requiring that uncertainty increase where image contrast is low, where corruption is severe, and where anatomical geometry is complex. The proposed framework, PriUS (principle-guided uncertainty supervision), implements these principles as additional supervision objectives inside an evidential-learning model, so that the uncertainty output is trained to respect them. On cardiac MRI, skin-lesion, and whole-heart datasets, the resulting uncertainty estimates show stronger consistency with the ambiguity-inducing image attributes than those of prior methods, while segmentation accuracy remains competitive.

Core claim

By adding explicit supervision objectives derived from image contrast, corruption severity, and geometric complexity to an evidential deep-learning model, the spatial uncertainty estimates become consistent with the sources of ambiguity that a human observer would recognize.

What carries the argument

The PriUS framework, which augments evidential learning with three principle-specific supervision losses that directly penalize deviations between predicted uncertainty and measured contrast, corruption, and geometric complexity.
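As a rough illustration of what such a supervision term could look like, here is a minimal pure-Python sketch. The names and the mean-squared formulation are invented for illustration; the paper's actual PriUS losses are not reproduced on this page.

```python
# Hypothetical sketch: a principle-consistency penalty that pushes predicted
# per-pixel uncertainty toward a normalized ambiguity attribute (e.g. inverted
# local contrast, corruption severity, or geometric complexity).

def normalize(values):
    """Rescale an attribute map to [0, 1] so it is comparable to uncertainty."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo + 1e-8) for v in values]

def principle_penalty(uncertainty, attribute):
    """Mean squared deviation between per-pixel uncertainty and a normalized
    ambiguity-inducing attribute."""
    target = normalize(attribute)
    return sum((u - t) ** 2 for u, t in zip(uncertainty, target)) / len(target)

# A flat uncertainty map is penalized when the attribute says ambiguity varies:
u_flat = [0.5, 0.5, 0.5, 0.5]
low_contrast = [0.1, 0.9, 0.2, 0.8]  # stand-in attribute values
print(principle_penalty(u_flat, low_contrast) > 0.0)  # → True
```

In a real training loop a term like this would be added, with a weighting coefficient, to the evidential segmentation loss; the sketch only conveys the shape of the supervision signal.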

If this is right

  • Clinicians can read uncertainty maps directly against visible image properties such as contrast and shape complexity.
  • Quantitative consistency metrics allow objective comparison of uncertainty interpretability across methods.
  • Segmentation accuracy is preserved while uncertainty becomes more usable for downstream decision support.
  • The same supervision approach can be applied to other high-stakes segmentation tasks that require spatially meaningful uncertainty.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The three principles could be extended with additional factors such as motion artifacts or patient-specific priors without changing the overall supervision structure.
  • If the consistency metrics generalize, they could serve as an auxiliary training signal in domains outside medical imaging.
  • The framework offers a route to audit whether a model’s uncertainty reflects genuine ambiguity rather than dataset artifacts.

Load-bearing premise

That the three chosen principles are the primary drivers of human-perceived ambiguity and that enforcing them via added losses will produce genuinely interpretable uncertainty without introducing new biases or harming calibration.

What would settle it

If uncertainty maps on new test images fail to increase systematically in low-contrast regions or in regions with higher measured corruption or greater geometric complexity, the interpretability claim is falsified.
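One way to operationalize this check is a rank correlation between mean uncertainty and the attribute being varied. A minimal pure-Python sketch, assuming Spearman's rho as the consistency statistic (the paper's exact metrics are not reproduced on this page):

```python
# Spearman rank correlation between an ambiguity attribute (here, corruption
# severity) and mean predicted uncertainty. A rho near zero or negative would
# count against the interpretability claim.

def ranks(values):
    """Ranks (1-based), averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

severity = [0, 1, 2, 3, 4, 5]  # e.g. increasing noise levels (illustrative)
mean_uncertainty = [0.10, 0.14, 0.19, 0.25, 0.31, 0.40]
print(spearman(severity, mean_uncertainty))  # → 1.0 (perfectly monotone)
```

The same statistic applies per-pixel within an image (uncertainty against gradient magnitude or boundary distance), which is how the paper's figures appear to frame the relationship.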

Figures

Figures reproduced from arXiv: 2605.10984 by An Sui, Fuping Wu, Gunter Schumann, Xiahai Zhuang, Yuzhu Li.

Figure 1. Illustration of the proposed Principle-guided Uncertainty Supervision framework (PriUS) and its three principle-specific supervision designs.
Figure 2. Contrast–uncertainty relationship at anatomical boundaries on the ACDC dataset.
Figure 3. Uncertainty response to progressive boundary-contrast degradation induced by Gaussian blur on the ACDC dataset.
Figure 4. Uncertainty response to increasing corruption severity induced by Gaussian noise on the ACDC dataset.
Figure 5. Boundary distance–uncertainty relationship on the ACDC dataset.
Figure 6. Intra-image comparison of uncertainty responses to boundary contrast.
Figure 7. Inter-image uncertainty responses to global image corruption.
Figure 8. Parameter study of the boundary threshold.
Original abstract

Uncertainty quantification complements model predictions by characterizing their reliability, which is essential for high-stakes decision making such as medical image segmentation. However, most existing methods reduce uncertainty to a scalar confidence estimate, leaving its spatial distribution semantically underconstrained. In this work, we focus on uncertainty interpretability, namely, whether estimated uncertainty behaves in a human-understandable manner with respect to sources of ambiguity. We identify three perception-aligned principles requiring the spatial distribution of uncertainty to reflect: (1) image contrast between structures, (2) severity of image corruption, and (3) geometric complexity in anatomical structures. Accordingly, we develop a principle-guided uncertainty supervision framework (PriUS) based on evidential learning, in which the corresponding supervision objectives are explicitly enforced during training. We further introduce quantitative metrics to measure the consistency between predicted uncertainty and image attributes that induce ambiguity. Experiments on ACDC, ISIC, and WHS datasets showed that, compared with state-of-the-art methods, PriUS produced more consistent uncertainty estimates while maintaining competitive segmentation performance.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes PriUS, a principle-guided uncertainty supervision framework for medical image segmentation based on evidential learning. It identifies three perception-aligned principles for uncertainty interpretability—image contrast between structures, severity of image corruption, and geometric complexity in anatomical structures—and explicitly enforces corresponding supervision objectives during training. New quantitative metrics are introduced to measure consistency between predicted uncertainty and these image attributes. Experiments on ACDC, ISIC, and WHS datasets claim that PriUS yields more consistent uncertainty estimates than state-of-the-art methods while maintaining competitive segmentation performance.

Significance. If the results hold without circularity, the work could meaningfully advance interpretable uncertainty quantification in medical imaging by aligning uncertainty maps with human-perceived sources of ambiguity, which is valuable for high-stakes clinical applications. The explicit addition of principle-based objectives to evidential learning provides a structured way to constrain spatial uncertainty distributions, and the introduction of dedicated consistency metrics is a constructive step toward quantifiable interpretability. However, the significance depends on demonstrating that improvements reflect genuine gains in human-understandable behavior rather than optimization artifacts.

major comments (2)
  1. The quantitative metrics for consistency (introduced after the supervision objectives) directly score alignment with the identical three principles—image contrast, corruption severity, and geometric complexity—that are explicitly added as supervision terms in the evidential loss. This creates a risk of circular validation: higher metric scores confirm that the added objectives were optimized but do not independently establish broader interpretability or reduced bias. An independent test (e.g., correlation with expert-rated ambiguity or performance on held-out corruption types) is needed to support the central claim of improved human-understandable uncertainty.
  2. The experimental section reports improved consistency and competitive accuracy on ACDC, ISIC, and WHS but provides no error bars on the new consistency metrics, no ablation studies isolating the contribution of each supervision objective, and no statistical significance tests comparing against baselines. These omissions make it difficult to determine whether the reported gains are robust or driven by specific hyperparameter choices in the principle-guided terms.
minor comments (1)
  1. The abstract would benefit from a one-sentence description of the evidential learning backbone to orient readers who may not be familiar with Dirichlet-based uncertainty modeling.
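For readers unfamiliar with that backbone: in the standard formulation of evidential deep learning (Sensoy et al., NeurIPS 2018), per-pixel class evidence parameterizes a Dirichlet distribution, and the uncertainty mass shrinks as total evidence grows. A minimal sketch of that usual computation (the paper's specific variant may differ):

```python
# Standard Dirichlet-based uncertainty in evidential deep learning:
# non-negative evidence e_k yields Dirichlet parameters alpha_k = e_k + 1;
# belief masses are b_k = e_k / S and uncertainty mass is u = K / S,
# where S = sum(alpha) is the Dirichlet strength.

def evidential_uncertainty(evidence):
    """Map per-class evidence to (belief masses, uncertainty mass)."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)                       # Dirichlet strength
    beliefs = [e / s for e in evidence]
    u = k / s                            # beliefs and u sum to 1 by construction
    return beliefs, u

# No evidence for any class -> maximal uncertainty of 1.0
print(evidential_uncertainty([0.0, 0.0, 0.0])[1])   # → 1.0
# Strong evidence for one class -> low uncertainty
print(evidential_uncertainty([27.0, 0.0, 0.0])[1])  # → 0.1
```

Applied per pixel, this uncertainty mass is the quantity that PriUS's supervision objectives constrain.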

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and insightful comments. We address each major comment below and outline the revisions we will make to strengthen the manuscript.

Point-by-point responses
  1. Referee: The quantitative metrics for consistency (introduced after the supervision objectives) directly score alignment with the identical three principles—image contrast, corruption severity, and geometric complexity—that are explicitly added as supervision terms in the evidential loss. This creates a risk of circular validation: higher metric scores confirm that the added objectives were optimized but do not independently establish broader interpretability or reduced bias. An independent test (e.g., correlation with expert-rated ambiguity or performance on held-out corruption types) is needed to support the central claim of improved human-understandable uncertainty.

    Authors: We acknowledge the concern about potential circularity, as the metrics are explicitly tied to the same principles used in the supervision objectives. This design choice is deliberate to directly validate that the evidential learning framework enforces the intended perception-aligned behavior. However, to provide stronger evidence of broader interpretability, we will add independent evaluations in the revised manuscript: (1) correlation analysis between predicted uncertainty and expert-rated ambiguity on a held-out subset of images from each dataset, and (2) performance assessment on additional corruption types not used during training. These additions will help demonstrate that the improvements reflect genuine gains in human-understandable uncertainty rather than optimization artifacts alone. revision: yes

  2. Referee: The experimental section reports improved consistency and competitive accuracy on ACDC, ISIC, and WHS but provides no error bars on the new consistency metrics, no ablation studies isolating the contribution of each supervision objective, and no statistical significance tests comparing against baselines. These omissions make it difficult to determine whether the reported gains are robust or driven by specific hyperparameter choices in the principle-guided terms.

    Authors: We agree that the current experimental presentation lacks sufficient statistical detail. In the revised manuscript, we will include error bars (standard deviations across at least five random seeds) for all consistency and segmentation metrics, perform ablation studies that isolate the contribution of each individual supervision objective (contrast, corruption, and complexity), and report statistical significance tests (e.g., paired t-tests with p-values) comparing PriUS against the baselines. These changes will clarify the robustness of the reported gains. revision: yes
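The promised significance testing can be sketched with the Python standard library. The per-seed numbers below are invented for illustration, not taken from the paper:

```python
# Paired t statistic over matched per-seed scores (illustrative values).
# The resulting t would be compared against the t distribution with
# n - 1 degrees of freedom to obtain the p-value the authors promise.
import math
import statistics

def paired_t(a, b):
    """Paired t statistic for two matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)         # sample standard deviation of the diffs
    return mean / (sd / math.sqrt(n))

prius    = [0.81, 0.79, 0.83, 0.80, 0.82]  # hypothetical per-seed consistency
baseline = [0.74, 0.75, 0.73, 0.76, 0.72]
print(round(paired_t(prius, baseline), 2))  # → 5.22
```

With five seeds (4 degrees of freedom), a t statistic of this size would be significant at conventional thresholds; the point of the sketch is only the shape of the analysis, not the numbers.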

Circularity Check

0 steps flagged

No significant circularity detected

Full rationale

The paper defines three perception-aligned principles for uncertainty interpretability, incorporates explicit supervision objectives into an evidential learning framework to enforce them, and introduces separate quantitative metrics to assess consistency with the same image attributes. No equations, derivations, or claims in the provided abstract reduce any result to its inputs by construction, nor do they rely on self-citation chains or imported uniqueness theorems. The experimental comparison to state-of-the-art methods on ACDC, ISIC, and WHS datasets supplies independent empirical content, and the metrics function as post-training evaluations rather than tautological re-statements of the training losses. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that the three listed principles capture the main sources of ambiguity and that evidential learning can be extended with supervision objectives without harming calibration. No free parameters or invented entities are mentioned in the abstract.

axioms (1)
  • domain assumption Evidential learning can be extended with explicit principle-guided supervision objectives that enforce alignment between uncertainty and image attributes
    Invoked to justify the PriUS training framework and the claim of improved interpretability.

pith-pipeline@v0.9.0 · 5489 in / 1304 out tokens · 60416 ms · 2026-05-13T07:22:07.200602+00:00 · methodology

