pith. machine review for the scientific record.

arxiv: 2605.10885 · v1 · submitted 2026-05-11 · 💻 cs.CV

Recognition: no theorem link

Geometry-aware Prototype Learning for Cross-domain Few-shot Medical Image Segmentation

Authors on Pith · no claims yet

Pith reviewed 2026-05-12 04:21 UTC · model grok-4.3

classification 💻 cs.CV
keywords cross-domain few-shot segmentation · medical image segmentation · prototype learning · geometric priors · ordinal shape branch · domain adaptation · few-shot learning

The pith

Geometric offsets from an ordinal shape branch separate anatomical structure from domain-specific appearance for better prototype matching in cross-domain few-shot medical segmentation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that standard prototypical networks for few-shot medical segmentation entangle fixed anatomy with variable scan appearance, causing poor generalization when modalities, sequences, or contexts change. It introduces an auxiliary Ordinal Shape Branch that learns embeddings with enforced monotonic variation across organ interior layers, then uses them to enrich each local appearance prototype with an explicit geometric offset. Because the branch trains on ordinary segmentation masks alone, the resulting prototypes carry a domain-stable structural reference that improves matching under domain shift. If the claim holds, a few annotated examples suffice to segment new organs under entirely new imaging conditions, addressing the data scarcity typical of clinical deployment.
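The prototypical-matching baseline this builds on can be sketched as follows. This is a minimal illustration with toy arrays, not the paper's implementation; the function names are ours, and real pipelines operate on deep feature maps from a backbone.

```python
import numpy as np

def masked_average_prototype(feat, mask):
    """Masked average pooling: collapse support features inside the
    organ mask into a single appearance prototype of shape (C,)."""
    # feat: (C, H, W) support feature map; mask: (H, W) binary
    w = mask.astype(feat.dtype)
    return (feat * w).sum(axis=(1, 2)) / (w.sum() + 1e-8)

def cosine_similarity_map(feat, proto):
    """Per-pixel cosine similarity between query features and a prototype."""
    f = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-8)
    p = proto / (np.linalg.norm(proto) + 1e-8)
    return np.tensordot(p, f, axes=([0], [0]))  # (H, W) similarity map
```

The entanglement the paper targets is visible here: the prototype is a pure appearance average, so any appearance shift between domains moves the prototype even when anatomy is unchanged.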

Core claim

GeoProto augments each local appearance prototype with a learned geometric offset that encodes its ordinal position inside the organ's interior topology; the offset is produced by an auxiliary Ordinal Shape Branch trained under an ordinally consistent objective that requires no labels beyond standard segmentation masks, yielding state-of-the-art results on seven datasets across cross-modality, cross-sequence and cross-context settings.

What carries the argument

Geometry-Aware Prototype Enrichment (GAPE), which augments appearance prototypes with geometric offset encodings of ordinal position within organ interior topology derived from the Ordinal Shape Branch.
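One plausible reading of the enrichment step, sketched under assumptions: we fuse the appearance prototype and the geometric offset by concatenation, which is our guess, since the paper may use a different fusion. `gape_enrich` and `match` are illustrative names.

```python
import numpy as np

def gape_enrich(app_proto, geo_offset):
    """Append a geometric offset embedding (an ordinal-position encoding
    from the OSB) to a local appearance prototype. Concatenation is an
    assumption about the fusion, not the paper's stated operator."""
    return np.concatenate([app_proto, geo_offset])

def match(query_desc, proto):
    """Cosine similarity between an enriched query descriptor and an
    enriched prototype."""
    q = query_desc / (np.linalg.norm(query_desc) + 1e-8)
    p = proto / (np.linalg.norm(proto) + 1e-8)
    return float(q @ p)
```

The point of the construction: two prototypes with identical appearance but different ordinal positions become separable after enrichment, which is exactly the structural cue plain appearance prototypes lack.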

If this is right

  • Prototypes gain invariance to appearance shifts while retaining structural cues for reliable matching.
  • No extra annotations are required beyond ordinary segmentation masks.
  • The same framework improves performance uniformly across cross-modality, cross-sequence and cross-context tasks.
  • The method scales to seven distinct medical datasets without task-specific redesign.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The ordinal-shape prior could be transferred to other few-shot segmentation domains where rigid structure persists but texture varies, such as remote sensing or industrial inspection.
  • The Ordinal Shape Branch itself might serve as a lightweight unsupervised shape descriptor for downstream tasks like registration or anomaly detection.
  • Replacing the current 2-D ordinal loss with a 3-D or spatio-temporal consistency term could further tighten the geometric embeddings.

Load-bearing premise

Human anatomy possesses a consistent interior geometric structure that can be captured as ordinal positions and remains transferable across imaging domains and patients.
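The premise can be made concrete with a distance-transform bin map of the kind Figure 3 calls "EDT bin distributions": each foreground pixel gets an ordinal boundary-to-center label computed from the mask alone. This is a sketch of the flavour of that label; the paper's appendix describes its own bin-map construction, which may differ in the binning details.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ordinal_bin_map(mask, n_bins=5):
    """Assign each foreground pixel an ordinal boundary-to-center bin
    (1 = outermost stratum, n_bins = innermost; 0 = background).
    Illustrative construction; the paper's exact scheme may differ."""
    fg = mask.astype(bool)
    if not fg.any():
        return np.zeros(mask.shape, dtype=int)
    dist = distance_transform_edt(mask)          # 0 outside the organ
    d = dist / dist.max()                        # normalise to [0, 1]
    bins = np.minimum((d * n_bins).astype(int), n_bins - 1) + 1
    return np.where(fg, bins, 0)
```

Because the label depends only on mask geometry, it is unchanged by any appearance shift between imaging domains, which is the transferability the premise asserts.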

What would settle it

Experiments on a new cross-domain few-shot benchmark in which the geometric offset module produces no accuracy gain or in which the learned embeddings fail to vary monotonically from organ boundary to center.

Figures

Figures reproduced from arXiv: 2605.10885 by Feifan Song, Haofeng Zhang, Yuntian Bo.

Figure 1. (caption not recovered)
Figure 2. Overview of GeoProto. The shared ResNet backbone encodes support and query images into feature maps; the Ordinal Shape Branch predicts distance-to-boundary bin distributions from support features, which the Geometry-Aware Prototype Enrichment module uses to augment local prototypes with ordinal geometric offsets before cosine similarity matching against query features.
Figure 3. (Left) EDT bin distributions across representative organs and datasets. (Right) Bhattacharyya coefficient (BC) between support and query EDT distributions over 500 intra-domain episodes per organ.
Figure 4. OSB convergence visualisation.
Figure 5. Qualitative results. From left to right: support image, support mask, GT bin map, predicted …
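The Bhattacharyya coefficient Figure 3 uses to compare support and query EDT bin distributions is a standard overlap measure between discrete distributions; a minimal sketch (our function name, not the paper's code):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete distributions: 1.0 for identical
    distributions, 0.0 for disjoint support. Inputs are unnormalised
    histograms (e.g. EDT bin counts) and are normalised internally."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sqrt(p * q).sum())
```

A high BC between support and query bin histograms is what would justify treating the ordinal geometry as a stable reference across episodes.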
read the original abstract

Cross-domain few-shot medical image segmentation (CD-FSMIS) requires a model to generalise simultaneously to novel anatomical categories and unseen imaging domains from only a handful of annotated examples. Existing prototypical approaches inevitably entangle anatomical structure with domain-specific appearance variations, and thus lack a stable reference for reliable matching under domain shift. We observe that the geometric structure of human anatomy constitutes a reliable, domain-transferable prior that has been overlooked. Building on this insight, we propose GeoProto, a geometry-aware CD-FSMIS framework that enriches prototypical matching with explicit structural priors. The core component, Geometry-Aware Prototype Enrichment (GAPE), augments each local appearance prototype with a learned geometric offset encoding its ordinal position within the organ's interior topology. This offset is derived from an auxiliary Ordinal Shape Branch (OSB) trained under an ordinally consistent objective that enforces monotonic variation of geometric embeddings across interior strata, requiring no annotation beyond standard segmentation masks. Extensive experiments across seven datasets spanning three evaluation settings (cross-modality, cross-sequence, and cross-context) demonstrate that GeoProto achieves state-of-the-art performance.
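The "ordinally consistent objective" enforcing monotonic variation across interior strata could take many forms; one common choice for monotonicity constraints is a hinge penalty on adjacent-stratum gaps. The sketch below is a guess at the flavour of such a loss, not the paper's actual formulation.

```python
import numpy as np

def ordinal_consistency_loss(stratum_scores, margin=0.1):
    """Hinge penalty whenever a deeper stratum fails to score at least
    `margin` higher than the stratum outside it. Illustrative only:
    the paper's objective may differ in form and in what it compares.

    stratum_scores: (K,) mean scalar embedding projection per
    boundary-to-center stratum, ordered from boundary (index 0) inward.
    """
    s = np.asarray(stratum_scores, dtype=float)
    gaps = s[1:] - s[:-1]                  # should all exceed the margin
    return float(np.maximum(0.0, margin - gaps).sum())
```

Under this reading, the loss is zero exactly when embeddings vary monotonically from boundary to center, which is also the property the "What would settle it" test above probes.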

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes GeoProto, a geometry-aware framework for cross-domain few-shot medical image segmentation (CD-FSMIS). It introduces Geometry-Aware Prototype Enrichment (GAPE) to augment local appearance prototypes with learned geometric offsets encoding ordinal position within organ interior topology. These offsets are produced by an auxiliary Ordinal Shape Branch (OSB) trained with an ordinally consistent objective on standard segmentation masks only (no extra annotations). The authors evaluate across seven datasets in three settings (cross-modality, cross-sequence, cross-context) and claim state-of-the-art performance.

Significance. If the central claim holds, the work offers a concrete way to inject domain-transferable anatomical structure into prototypical few-shot segmentation, which is a persistent challenge in medical imaging. The multi-setting evaluation spanning seven datasets is a positive aspect that strengthens potential impact. The OSB design, which requires no labels beyond masks, is an interesting technical choice that could be reusable if shown to generalize.

major comments (2)
  1. [Abstract] Abstract: the claim of 'state-of-the-art performance' and 'extensive experiments' is asserted without any quantitative numbers, baseline comparisons, ablation results, or statistical tests. This absence prevents assessment of whether the reported gains are meaningful or whether they survive standard controls for few-shot variance.
  2. [Methods (Ordinal Shape Branch)] Ordinal Shape Branch (OSB) description: the ordinally consistent objective is applied only to the handful of support-set masks for each novel category. For unseen anatomical structures this risks learning instance-specific monotonic embeddings rather than a stable, cross-domain interior-stratum prior; if so, the GAPE enrichment step adds little beyond a conventional appearance prototype and the SOTA claim would not hold.

minor comments (2)
  1. [Abstract] Abstract: specify the shot regimes (1-shot / 5-shot) used in the reported experiments.
  2. [Methods] Notation: the distinction between 'geometric offset' and the output of the OSB should be clarified with an equation or diagram to avoid ambiguity in how GAPE is computed.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment in detail below, providing clarifications and indicating where revisions will be made to strengthen the paper.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claim of 'state-of-the-art performance' and 'extensive experiments' is asserted without any quantitative numbers, baseline comparisons, ablation results, or statistical tests. This absence prevents assessment of whether the reported gains are meaningful or whether they survive standard controls for few-shot variance.

    Authors: We agree that the abstract would benefit from including quantitative highlights to better substantiate the claims and allow immediate assessment of the results. In the revised manuscript, we will update the abstract to report key metrics such as the average Dice score improvements over the strongest baselines across the three evaluation settings and seven datasets. The main text already contains full tables with baseline comparisons, ablations, and statistical significance tests; we will ensure these are more explicitly referenced in the abstract revision. revision: yes

  2. Referee: [Methods (Ordinal Shape Branch)] Ordinal Shape Branch (OSB) description: the ordinally consistent objective is applied only to the handful of support-set masks for each novel category. For unseen anatomical structures this risks learning instance-specific monotonic embeddings rather than a stable, cross-domain interior-stratum prior; if so, the GAPE enrichment step adds little beyond a conventional appearance prototype and the SOTA claim would not hold.

    Authors: We thank the referee for raising this important point about potential instance-specificity. The ordinally consistent objective is formulated to enforce a general monotonic property on geometric embeddings according to ordinal position within the organ interior (e.g., boundary-to-center strata), which derives from the topological structure of anatomy rather than instance-specific appearance or texture. This prior is domain-transferable by design, as confirmed by our cross-modality, cross-sequence, and cross-context experiments where GAPE consistently improves over appearance-only prototypes. To address the concern explicitly, we will revise the Methods section with additional explanation of the objective's generalization mechanism and include a targeted ablation demonstrating the contribution of geometric offsets on held-out novel categories. revision: partial

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper defines new trainable components (GAPE enrichment and OSB with ordinally consistent objective) and reports empirical SOTA results across seven datasets in three cross-domain settings. No equations reduce a claimed prediction to a fitted input by construction, no self-citations are load-bearing for the central premise, and no uniqueness theorems or ansatzes are imported from prior author work. The derivation is self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 2 invented entities

The central claim rests on the domain assumption that geometric topology is stable across imaging domains and can be learned from segmentation masks alone; two new architectural entities are introduced without external validation.

axioms (1)
  • domain assumption Geometric structure of human anatomy is a reliable domain-transferable prior
    Explicitly stated as the key observation enabling the approach.
invented entities (2)
  • Geometry-Aware Prototype Enrichment (GAPE) no independent evidence
    purpose: Augments each local appearance prototype with a learned geometric offset encoding ordinal position within organ interior topology
    Core new component of the framework
  • Ordinal Shape Branch (OSB) no independent evidence
    purpose: Produces geometric embeddings trained under an ordinally consistent objective
    Auxiliary network providing the geometric offsets

pith-pipeline@v0.9.0 · 5498 in / 1306 out tokens · 34810 ms · 2026-05-12T04:21:37.611634+00:00 · methodology

discussion (0)

