Recognition: 2 theorem links · Lean Theorem
SAIL: Structure-Aware Interpretable Learning for Anatomy-Aligned Post-hoc Explanations in OCT
Pith reviewed 2026-05-08 18:11 UTC · model grok-4.3
The pith
Integrating retinal anatomical priors into model representations produces sharper and more anatomy-aligned post-hoc explanations for OCT images.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The SAIL framework integrates retinal anatomical priors at the representation level and couples them with semantic features via a fusion design. Without modifying standard post-hoc explainability methods, this representation yields sharper and more anatomically aligned attribution maps. Comprehensive experiments on diverse OCT datasets demonstrate that the structure-aware method consistently enhances interpretability and produces clinically meaningful explanations, while ablations establish that both priors and fusion are required for best quality.
What carries the argument
The SAIL fusion design that encodes retinal anatomical priors at the representation level and couples them with semantic features to shape post-hoc attributions.
If this is right
- Attribution maps better respect retinal layer boundaries and delineate fine-grained lesions.
- Noise is suppressed while clinically relevant structures are highlighted.
- Standard post-hoc methods produce usable explanations once the underlying representations are structure-aware.
- Interpretability gains appear consistently across multiple OCT datasets.
- Both anatomical priors and semantic features plus proper fusion are necessary for the observed quality.
Where Pith is reading between the lines
- The same prior-integration approach could be tested on other layered medical images such as ultrasound or MRI.
- Improved anatomical alignment in explanations may increase clinician trust and ease regulatory review for OCT-based AI tools.
- If the fusion step generalizes, similar representations might support more reliable lesion localization beyond explanation quality.
Load-bearing premise
Retinal anatomical priors can be accurately captured and fused at the representation level to respect boundaries and suppress noise across varied OCT datasets without adding new biases.
What would settle it
The claim would be disproved by a direct comparison on expert-annotated OCT test images in which SAIL's attribution maps show no measurable improvement in boundary alignment or noise reduction over standard methods.
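Settling this needs a quantitative alignment score, not just visual inspection. One simple candidate (an illustrative metric, not one taken from the paper) is the fraction of absolute attribution mass that falls inside expert-annotated anatomical regions:

```python
import numpy as np

def attribution_mass_in_region(attribution, region_mask):
    """Fraction of total absolute attribution lying inside an
    expert-annotated region mask (1 = inside, 0 = outside).

    Returns a value in [0, 1]; higher means better anatomical alignment.
    Illustrative only: the paper's own evaluation metrics may differ.
    """
    a = np.abs(np.asarray(attribution, dtype=float))
    total = a.sum()
    if total == 0.0:
        return 0.0  # an all-zero map has no mass to localize
    inside = a[np.asarray(region_mask, dtype=bool)].sum()
    return float(inside / total)
```

Comparing this score between SAIL-based and baseline attribution maps on the same annotated test set would turn "no measurable improvement" into an operational, falsifiable statement.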
Original abstract
Optical coherence tomography (OCT), a commonly used retinal imaging modality, plays a central role in retinal disease diagnosis by providing high-resolution visualization of retinal layers. While deep learning (DL) has achieved expert-level accuracy in OCT-based retinal disease detection, its "black box" nature poses challenges for clinical adoption, where explainability is essential for clinical trust and regulatory approval. Existing post-hoc explainable AI (XAI) methods often struggle to delineate fine-grained lesion structures, respect anatomical boundaries, or suppress noise, limiting the trustworthiness of their explanations. To bridge these gaps, we propose a Structure-Aware Interpretable Learning (SAIL) framework that integrates retinal anatomical priors at the representation level and couples them with semantic features via a fusion design. Without modifying standard post-hoc explainability methods, this representation yields sharper and more anatomically aligned attribution maps. Comprehensive experiments on diverse OCT datasets demonstrate that our structure-aware method consistently enhances interpretability, producing clinically meaningful and anatomy-aware explanations. Ablation studies further show that strong interpretability requires both structural priors and semantic features, and that properly fusing the two is critical to achieve the best explanation quality. Together, these results highlight structure-aware representations as a key step toward reliable explainability in OCT.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes the Structure-Aware Interpretable Learning (SAIL) framework for OCT imaging. It integrates retinal anatomical priors at the representation level and fuses them with semantic features via a dedicated design. This representation is claimed to yield sharper, more anatomically aligned attribution maps when used with unmodified standard post-hoc XAI methods. Comprehensive experiments on diverse OCT datasets are said to demonstrate consistent enhancement in interpretability, with ablation studies confirming that both structural priors and semantic features, along with proper fusion, are required for best explanation quality.
Significance. If the central claims hold under rigorous validation, the work would be moderately significant for medical XAI. It offers a concrete way to inject domain-specific anatomical structure into learned representations to improve post-hoc explanations in a clinically relevant imaging modality, without requiring changes to existing explainers. This could support greater clinical trust, though the absence of quantitative metrics and robustness tests in the current presentation limits immediate impact.
major comments (2)
- [Ablation studies and experiments on diverse OCT datasets] The central claim that the fused structure-aware representation produces attribution maps that respect anatomical boundaries and suppress noise without introducing new biases rests on the assumption that retinal anatomical priors can be accurately captured. However, no experiments quantify sensitivity to errors in prior extraction (e.g., from an upstream segmentation network) or to domain shift across clinical OCT datasets. This is load-bearing because any misalignment in the priors could force explanations onto incorrect layer boundaries or mask lesions.
- [Abstract] The abstract states that experiments demonstrate 'consistent enhancement' and that ablations confirm the necessity of both priors and fusion, yet no quantitative metrics, statistical tests, dataset sizes, or implementation details are provided to support these assertions. Without such evidence, it is impossible to assess whether the observed improvements are meaningful or reproducible.
minor comments (2)
- Clarify the exact mathematical formulation of the fusion design between anatomical priors and semantic features, ideally with equations showing how the combined representation is constructed.
- Provide more detail on how the retinal anatomical priors are extracted and encoded at the representation level, including any preprocessing or network components used.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback on our manuscript. The comments highlight important aspects of robustness and clarity that we will address in the revision to strengthen the presentation of our results.
Point-by-point responses
-
Referee: [Ablation studies and experiments on diverse OCT datasets] The central claim that the fused structure-aware representation produces attribution maps that respect anatomical boundaries and suppress noise without introducing new biases rests on the assumption that retinal anatomical priors can be accurately captured. However, no experiments quantify sensitivity to errors in prior extraction (e.g., from an upstream segmentation network) or to domain shift across clinical OCT datasets. This is load-bearing because any misalignment in the priors could force explanations onto incorrect layer boundaries or mask lesions.
Authors: We agree that quantifying sensitivity to inaccuracies in the extracted anatomical priors is a valuable addition for validating the framework's reliability. In the revised manuscript, we will include new experiments that introduce controlled perturbations (e.g., boundary shifts and label noise) to the priors derived from the upstream segmentation network and report the resulting impact on attribution map quality using our existing metrics. Regarding domain shift, our current evaluation already spans multiple diverse OCT datasets acquired under different clinical conditions and scanners; we will augment this with an explicit cross-dataset transfer analysis to measure consistency in explanation alignment and faithfulness scores. revision: yes
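The proposed stress test can be made concrete. The sketch below (a hypothetical illustration; the function name and perturbation choices are ours, not the authors') applies controlled boundary shifts and label noise to a per-pixel retinal-layer label map of the kind an upstream segmentation network would produce:

```python
import numpy as np

def perturb_prior_mask(mask, max_shift=3, flip_rate=0.05, seed=0):
    """Apply controlled perturbations to an H x W retinal-layer label map:

    - boundary shift: each column is rolled vertically by a random offset
      in [-max_shift, max_shift], mimicking layer-boundary errors;
    - label noise: a fraction flip_rate of pixels is reassigned a random
      label drawn from the labels present in the original mask.

    Hypothetical sketch of the kind of stress test described above.
    """
    rng = np.random.default_rng(seed)
    out = mask.copy()
    h, w = out.shape
    shifts = rng.integers(-max_shift, max_shift + 1, size=w)
    for col, s in enumerate(shifts):
        out[:, col] = np.roll(out[:, col], s)
    flip = rng.random(out.shape) < flip_rate
    labels = np.unique(mask)
    out[flip] = rng.choice(labels, size=int(flip.sum()))
    return out
```

Sweeping `max_shift` and `flip_rate` and re-measuring attribution quality at each level would quantify how gracefully the framework degrades as the priors become less accurate.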
-
Referee: [Abstract] The abstract states that experiments demonstrate 'consistent enhancement' and that ablations confirm the necessity of both priors and fusion, yet no quantitative metrics, statistical tests, dataset sizes, or implementation details are provided to support these assertions. Without such evidence, it is impossible to assess whether the observed improvements are meaningful or reproducible.
Authors: We concur that the abstract would benefit from greater specificity to allow readers to immediately gauge the scale and significance of the results. In the revision, we will update the abstract to include concrete quantitative outcomes (e.g., average percentage gains in layer-boundary alignment and explanation faithfulness), dataset sizes and diversity, and references to the statistical tests used. Full implementation details, hyperparameters, and training protocols are already provided in the Methods section and supplementary material; we will add an explicit pointer in the abstract to these resources to support reproducibility. revision: yes
Circularity Check
No significant circularity; framework proposal validated by experiments
full rationale
The paper introduces the SAIL framework as a method to integrate retinal anatomical priors with semantic features at the representation level, then applies unmodified standard post-hoc XAI techniques to produce attribution maps. The central claims rest on empirical results from experiments and ablation studies across OCT datasets, showing improved alignment and sharpness. No equations, derivations, or predictions are presented that reduce the claimed benefits to a fitted parameter, self-defined quantity, or self-citation chain. The structure is a standard proposal-plus-validation pattern with no load-bearing self-referential steps.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Retinal anatomical priors can be represented and integrated at the feature level to improve explanation quality
invented entities (1)
- Structure-aware representation (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Cost (Jcost = ½(x + x⁻¹) − 1), theorem washburn_uniqueness_aczel · tagged unclear
  unclear: the relation between the paper passage and the cited Recognition theorem is ambiguous.
  Paper passage: F_f = α F̂_enc + (1 − α) F̂_dec, with α = σ(w) ∈ (0, 1), where w is a learnable scalar.
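The quoted fusion rule is a scalar-gated convex combination of encoder and decoder features. A minimal sketch, assuming F̂_enc and F̂_dec are same-shaped feature arrays (the gate parameter w and the branch names come from the quoted passage; everything else is illustrative):

```python
import numpy as np

def gated_fusion(f_enc, f_dec, w):
    """F_f = alpha * F_enc + (1 - alpha) * F_dec, with alpha = sigmoid(w).

    Because alpha = sigma(w) lies strictly in (0, 1), the fused feature is
    always a convex combination of the two branches; w is a single
    learnable scalar, so the model can shift weight between branches
    during training without ever discarding either one.
    """
    alpha = 1.0 / (1.0 + np.exp(-w))  # sigma(w) in (0, 1)
    return alpha * np.asarray(f_enc) + (1.0 - alpha) * np.asarray(f_dec)
```

At w = 0 the gate weighs both branches equally (alpha = 0.5); gradient descent then moves w toward whichever branch better serves the task.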
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1]
- [2] Michael D Abràmoff, Mona K Garvin, and Milan Sonka. 2010. Retinal imaging and image analysis. IEEE Reviews in Biomedical Engineering 3 (2010), 169–208.
- [3] Michael D Abràmoff, Philip T Lavin, Michele Birch, Nilay Shah, and James C Folk. 2018. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Medicine 1, 1 (2018), 39.
- [5] Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. 2023. From attribution maps to human-understandable explanations through concept relevance propagation. Nature Machine Intelligence 5, 9 (2023), 1006–1019.
- [6] Renuka Agrawal, Tawishi Gupta, Shaurya Gupta, Sakshi Chauhan, Prisha Patel, and Safa Hamdare. 2025. Fostering trust and interpretability: integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency. Diagnostic Pathology 20, 1 (2025), 105.
- [7] Seong Joon Ahn. 2025. Retinal Thickness Analysis Using Optical Coherence Tomography: Diagnostic and Monitoring Applications in Retinal Diseases. Diagnostics 15, 7 (2025), 833.
- [8] Tasnim Sakib Apon, Mohammad Mahmudul Hasan, Abrar Islam, and Md Golam Rabiul Alam. 2021. Demystifying deep learning models for retinal OCT disease classification using explainable AI. In 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE). IEEE, 1–6.
- [9] Leila Arras, Ahmed Osman, and Wojciech Samek. 2022. CLEVR-XAI: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion 81 (2022), 14–40.
- [10] Murat Seçkin Ayhan, Jonas Neubauer, Mehmet Murat Uzel, Faik Gelisken, and Philipp Berens. 2024. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Scientific Reports 14, 1 (2024), 8484.
- [11] Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. 2016. Layer-wise relevance propagation for neural networks with local renormalization layers. In International Conference on Artificial Neural Networks. Springer, 63–71.
- [12] P Brusini. 2018. OCT Glaucoma Staging System: a new method for retinal nerve fiber layer damage classification using spectral-domain OCT. Eye 32, 1 (2018), 113–119.
- [13] Matthew J Burton, Jacqueline Ramke, Ana Patricia Marques, Rupert RA Bourne, Nathan Congdon, Iain Jones, Brandon AM Ah Tong, Simon Arunga, Damodar Bachani, Covadonga Bascaran, et al. 2021. The Lancet Global Health Commission on global eye health: vision beyond 2020. The Lancet Global Health 9, 4 (2021), e489–e551.
- [14] Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 839–847.
- [15] Hila Chefer, Shir Gur, and Lior Wolf. 2021. Transformer Interpretability Beyond Attention Visualization. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 782–791. doi:10.1109/CVPR46437.2021.00084.
- [16] Hila Chefer, Shir Gur, and Lior Wolf. 2021. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 782–791.
- [17] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. 2021. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306 (2021).
- [18] Zhihan Cheng, Yue Wu, Yule Li, Lingfeng Cai, and Baha Ihnaini. 2025. A Comprehensive Review of Explainable Artificial Intelligence (XAI) in Computer Vision. Sensors 25, 13 (2025), 4166.
- [19] Stephanie J Chiu, Michael J Allingham, Priyatham S Mettu, Scott W Cousins, Joseph A Izatt, and Sina Farsiu. 2015. Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema. Biomedical Optics Express 6, 4 (2015), 1172–1194.
- [20] Jeffrey De Fauw, Joseph R Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, et al. 2018. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine 24, 9 (2018), 1342–1350.
- [21] Francisco Javier Dongil-Moreno, M Ortiz, Ana Pueyo, L Boquete, Eva María Sánchez-Morla, Daniel Jimeno-Huete, JM Miguel, R Barea, Elisa Vilades, and Elena García-Martín. 2024. Diagnosis of multiple sclerosis using optical coherence tomography supported by explainable artificial intelligence. Eye 38, 8 (2024), 1502–1508.
- [22]
- [23] Mohamed Elsharkawy, Ahmed Sharafeldeen, Fahmi Khalifa, Ahmed Soliman, Ahmed Elnakib, Mohammed Ghazal, Ashraf Sewelam, Aristomenis Thanos, Harpal S Sandhu, and Ayman El-Baz. 2024. A clinically explainable AI-based grading system for age-related macular degeneration using optical coherence tomography. IEEE Journal of Biomedical and Health Informatics 28, 4 (2...
- [24] Leyuan Fang, David Cunefare, Chong Wang, Robyn H Guymer, Shutao Li, and Sina Farsiu. 2017. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomedical Optics Express 8, 5 (2017), 2732–2744.
- [25] Ruth C Fong and Andrea Vedaldi. 2017. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision. 3429–3437.
- [27] Fei Gao, Hyunsoo Yoon, Teresa Wu, and Xianghua Chu. 2020. A feature transfer enabled multi-task deep learning model on medical imaging. Expert Systems with Applications 143 (2020), 112957.
- [29] Mona Kathryn Garvin, Michael David Abramoff, Xiaodong Wu, Stephen R Russell, Trudy L Burns, and Milan Sonka. 2009. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Transactions on Medical Imaging 28, 9 (2009), 1436–1447.
- [30] Steven J Gedde, Kateki Vinod, Martha M Wright, Kelly W Muir, John T Lind, Philip P Chen, Tianjing Li, and Steven L Mansberger. 2021. Primary Open-Angle Glaucoma Preferred Practice Pattern®. Ophthalmology 128, 1 (2021), P71–P150.
- [31] Paul Grace, Bruce JW Evans, David F Edgar, Praveen J Patel, Dhanes Thomas, Gerald Mahon, Alison Blake, and David Bennett. 2021. Investigation of the efficacy of an online tool for improving the diagnosis of macular lesions imaged by optical coherence tomography. Journal of Optometry 14, 2 (2021), 206–214.
- [32] Zaiwang Gu, Jun Cheng, Huazhu Fu, Kang Zhou, Huaying Hao, Yitian Zhao, Tianyang Zhang, Shenghua Gao, and Jiang Liu. 2019. CE-Net: Context encoder network for 2D medical image segmentation. IEEE Transactions on Medical Imaging 38, 10 (2019), 2281–2292.
- [33] Md Mahmudul Hasan, Jack Phu, Henrietta Wang, Arcot Sowmya, Michael Kalloniatis, and Erik Meijering. 2025. OCT-based diagnosis of glaucoma and glaucoma stages using explainable machine learning. Scientific Reports 15, 1 (2025), 3592.
- [34] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
- [35] Xiang He, Yiming Wang, Fabio Poiesi, Weiye Song, Quanqing Xu, Zixuan Feng, and Yi Wan. 2023. Exploiting multi-granularity visual features for retinal layer segmentation in human eyes. Frontiers in Bioengineering and Biotechnology 11 (2023), 1191803.
- [36] Monica Hernandez, Ubaldo Ramon-Julvez, Elisa Vilades, Beatriz Cordon, Elvira Mayordomo, and Elena Garcia-Martin. 2023. Explainable artificial intelligence toward usable and trustworthy computer-aided diagnosis of multiple sclerosis from Optical Coherence Tomography. PLoS One 18, 8 (2023), e0289495.
- [37] Sarah Hooper, Mayee Chen, Khaled Saab, Kush Bhatia, Curtis Langlotz, and Christopher Ré. 2023. A case for reframing automated medical image classification as segmentation. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36. Curran Associates, Inc., 55415–55441. https://p...
- [38] Pavel Iakubovskii. 2019. Segmentation Models Pytorch. https://github.com/qubvel/segmentation_models.pytorch.
- [39] Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186 (2019).
- [40] Kyoung In Jung, Hee Kyung Ryu, Si Eun Oh, Hee Jong Shin, and Chan Kee Park. 2024. Thicker Inner Nuclear Layer as a Predictor of Glaucoma Progression and the Impact of Intraocular Pressure Fluctuation. Journal of Clinical Medicine 13, 8 (2024), 2312.
- [42] Daniel S Kermany, Michael Goldbaum, Wenjia Cai, Carolina CS Valentim, Huiying Liang, Sally L Baxter, Alex McKeown, Ge Yang, Xiaokang Wu, Fangbing Yan, et al. 2018. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172, 5 (2018), 1122–1131.
- [43] Yong Chan Kim, Ho Sik Hwang, Hae-Young Lopilly Park, and Chan Kee Park. 2018. Transverse separation of the outer retinal layer at the peripapillary in glaucomatous myopes. Scientific Reports 8, 1 (2018), 12446.
- [45] Mikhail Kulyabin, Aleksei Zhdanov, Anastasia Nikiforova, Andrey Stepichev, Anna Kuznetsova, Mikhail Ronkin, Vasilii Borisov, Alexander Bogachev, Sergey Korotkich, Paul A Constable, et al. 2024. OCTDL: Optical coherence tomography dataset for image-based deep learning methods. Scientific Data 11, 1 (2024), 365.
- [46] Devesh Kumawat and Pradeep Venkatesh. 2026. Diabetic macular oedema—need for a unified consensus classification based on clinical and imaging features. Eye Open 2, 1 (2026), 2.
- [47] Cong Li, Yongyan Fu, Shunming Liu, Honghua Yu, Xiaohong Yang, Meixia Zhang, and Lei Liu. 2023. The global incidence and disability of eye injury: an analysis from the Global Burden of Disease Study 2019. EClinicalMedicine 62 (2023).
- [48] Kang Li, Xiaodong Wu, Danny Z Chen, and Milan Sonka. 2006. Optimal surface segmentation in volumetric images: a graph-theoretic approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 1 (2006), 119–134.
- [49] Jennifer I Lim, Stephen J Kim, Steven T Bailey, Jaclyn L Kovach, G Atma Vemulakonda, Gui-shuang Ying, Christina J Flaxel, et al. 2025. Diabetic Retinopathy Preferred Practice Pattern®. Ophthalmology 132, 4 (2025), P75–P162.
- [50] Xiaoxuan Liu, Livia Faes, Aditya U Kale, Siegfried K Wagner, Dun Jack Fu, Alice Bruynseels, Thushika Mahendiran, Gabriella Moraes, Mohith Shamdas, Christoph Kern, et al. 2019. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The lancet digital heal...
- [51] Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3431–3440.
- [52] Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017).
- [53] Bin Lv, Shuang Li, Yang Liu, Wei Wang, Hongyang Li, Xiaoyue Zhang, Yanhui Sha, Xiufen Yang, Yang Yang, Yue Wang, et al. 2022. Development and validation of an explainable artificial intelligence framework for macular disease diagnosis based on optical coherence tomography images. Retina 42, 3 (2022), 456–464.
- [54] Ashish Markan, Aniruddha Agarwal, Atul Arora, Krinjeela Bazgain, Vipin Rana, and Vishali Gupta. 2020. Novel imaging biomarkers in diabetic retinopathy and diabetic macular edema. Therapeutic Advances in Ophthalmology 12 (2020), 2515841420950513.
- [55] Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg S Corrado, Ara Darzi, et al. 2020. International evaluation of an AI system for breast cancer screening. Nature 577, 7788 (2020), 89–94.
- [56] Tomoaki Murakami and Nagahisa Yoshimura. 2013. Structural changes in individual retinal layers in diabetic macular edema. Journal of Diabetes Research 2013, 1 (2013), 920713.
- [57] Bhadra U Pandya, Michael Grinton, Efrem D Mandelcorn, and Tina Felfeli. 2024. Retinal optical coherence tomography imaging biomarkers: a review of the literature. Retina 44, 3 (2024), 369–380.
- [58] Vitali Petsiuk, Abir Das, and Kate Saenko. 2018. RISE: Randomized Input Sampling for Explanation of Black-box Models. In Proceedings of the British Machine Vision Conference (BMVC).
- [59]
- [60] Md Tanzim Reza, Farzad Ahmed, Shihab Sharar, and Annajiat Alim Rasel. 2021. Interpretable retinal disease classification from OCT images using deep neural network and explainable AI. In 2021 International Conference on Electronics, Communications and Information Technology (ICECIT). IEEE, 1–4.
- [61] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016. 1135–1144.
- [62] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 234–241.
- [63] Abhijit Guha Roy, Sailesh Conjeti, Sri Phani Krishna Karri, Debdoot Sheet, Amin Katouzian, Christian Wachinger, and Nassir Navab. 2017. ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomedical Optics Express 8, 8 (2017), 3627–3642.
- [64] Srinivas R Sadda, Robyn Guymer, Frank G Holz, Steffen Schmitz-Valckenberg, Christine A Curcio, Alan C Bird, Barbara A Blodi, Ferdinando Bottoni, Usha Chakravarthy, Emily Y Chew, et al. 2018. Consensus definition for atrophy associated with age-related macular degeneration on OCT: classification of atrophy report 3. Ophthalmology 125, 4 (2018), 537–548.
- [65] Adriel Saporta, Xiaotong Gui, Ashwin Agrawal, Anuj Pareek, Steven QH Truong, Chanh DT Nguyen, Van-Doan Ngo, Jayne Seekins, Francis G Blankenberg, Andrew Y Ng, et al. 2022. Benchmarking saliency methods for chest X-ray interpretation. Nature Machine Intelligence 4, 10 (2022), 867–878.
- [66] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision. 618–626.
- [67] Hanfeng Shi, Jiaqi Wei, Richu Jin, Jiaxin Peng, Xingyue Wang, Yan Hu, Xiaoqing Zhang, and Jiang Liu. 2024. Retinal structure guidance-and-adaption network for early Parkinson's disease recognition based on OCT images. Computerized Medical Imaging and Graphics 118 (2024), 102463.
- [68] Mingxing Tan and Quoc Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning. PMLR, 6105–6114.
- [69] Yubo Tan, Wen-Da Shen, Ming-Yuan Wu, Gui-Na Liu, Shi-Xuan Zhao, Yang Chen, Kai-Fu Yang, and Yong-Jie Li. 2023. Retinal layer segmentation in OCT images with boundary regression and feature polarization. IEEE Transactions on Medical Imaging 43, 2 (2023), 686–700.
- [70] G Atma Vemulakonda, Steven T Bailey, Stephen J Kim, Jaclyn L Kovach, Jennifer I Lim, Gui-shuang Ying, Christina J Flaxel, et al. 2025. Age-Related Macular Degeneration Preferred Practice Pattern®. Ophthalmology 132, 4 (2025), P1–P74.
- [71] Sheng Wang, Shuxian Feng, Zhina Wang, Zhenning Ji, Jiajia Liu, Wei Chen, Binzhe Fu, Rong Liu, Wenliang Chen, Yining Dai, et al. 2025. Structural-prior guided and feature-enhanced transformer with masked image modeling pretraining for retinal layers and fluid segmentation in macular edema OCT images. Biomedical Optics Express 16, 12 (2025), 5096–5117.
- [72] Chi Wen, Mang Ye, He Li, Ting Chen, and Xuan Xiao. 2024. Concept-based lesion aware transformer for interpretable retinal disease diagnosis. IEEE Transactions on Medical Imaging (2024).
- [73] Charles P Wilkinson, Frederick L Ferris III, Ronald E Klein, Paul P Lee, Carl David Agardh, Matthew Davis, Diana Dills, Anselm Kampik, Rangasamy Pararajasegaram, Juan T Verdaguer, et al. 2003. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110, 9 (2003), 1677–1682.
- [74] Carolyn Yu Tung Wong, Fares Antaki, Peter Woodward-Court, Ariel Yuhan Ong, and Pearse A Keane. 2024. The role of saliency maps in enhancing ophthalmologists' trust in artificial intelligence models. Asia-Pacific Journal of Ophthalmology 13, 4 (2024), 100087.
- [75] Bichen Wu, Chenfeng Xu, Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Zhicheng Yan, Masayoshi Tomizuka, Joseph Gonzalez, Kurt Keutzer, and Peter Vajda. 2020. Visual transformers: Token-based image representation and processing for computer vision. arXiv preprint arXiv:2006.03677 (2020).
- [76] Miyo Yoshida, Tomoaki Murakami, Kenji Ishihara, Yuki Mori, and Akitaka Tsujikawa. 2025. Explainable Artificial Intelligence-Assisted Exploration of Clinically Significant Diabetic Retinal Neurodegeneration on OCT Images. Ophthalmology Science (2025), 100804.
- [77] Huihong Zhang, Bing Yang, Sanqian Li, Xiaoqing Zhang, Xiaoling Li, Tianhang Liu, Risa Higashita, and Jiang Liu. 2025. Retinal OCT image segmentation with deep learning: A review of advances, datasets, and evaluation metrics. Computerized Medical Imaging and Graphics (2025), 102539.
- [78] Shiyan Zhang, Jianping Ren, Ruiting Chai, Shuang Yuan, and Yinzhu Hao. 2024. Global burden of low vision and blindness due to age-related macular degeneration from 1990 to 2021 and projections for 2050. BMC Public Health 24, 1 (2024), 3510.
- [79] Yan Zhao, Xiuying Wang, Tongtong Che, Guoqing Bao, and Shuyu Li. 2023. Multi-task deep learning for medical image computing and analysis: A review. Computers in Biology and Medicine 153 (2023), 106496.
- [80] Jian Zhong, Li Lin, Chaoran Miao, Kenneth KY Wong, and Xiaoying Tang. 2025. UniOCTSeg: Towards Universal OCT Retinal Layer Segmentation via Hierarchical Prompting and Progressive Consistency Learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 629–639.
- [81] Chuandi Zhou, Shu Li, Luyao Ye, Chong Chen, Shu Liu, Hongxia Yang, Peng Zhuang, Zengye Liu, Hongwen Jiang, Jing Han, et al. 2023. Visual impairment and blindness caused by retinal diseases: A nationwide register-based study. Journal of Global Health 13 (2023), 04126.
- [82] Yukun Zhou, Mark A Chia, Siegfried K Wagner, Murat S Ayhan, Dominic J Williamson, Robbert R Struyven, Timing Liu, Moucheng Xu, Mateo G Lozano, Peter Woodward-Court, et al. 2023. A foundation model for generalizable disease detection from retinal images. Nature 622, 7981 (2023), 156–163.