CoRE: Concept-Reasoning Expansion for Continual Brain Lesion Segmentation
Pith reviewed 2026-05-07 16:56 UTC · model grok-4.3
The pith
CoRE aligns image tokens with a hierarchical concept library to simulate clinical reasoning and direct efficient continual learning for brain lesion segmentation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Through the alignment of image tokens with a hierarchical concept library, CoRE simulates clinical reasoning to guide both interpretable expert routing and demand-based model growth. This collaborative process ensures model evolution is grounded in clinical priors, preventing redundant parameter expansion while maximizing knowledge reuse.
What carries the argument
Alignment of image tokens with a hierarchical concept library, which simulates clinical reasoning to control expert routing and demand-based model growth.
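The abstract does not spell out the routing rule, but the mechanism it describes can be sketched as nearest-concept matching with a growth threshold: pool the image tokens, compare them against concept embeddings, route to the expert tied to the best-matching concept, and grow a new expert when no concept matches well enough. Everything below — names, dimensions, and the threshold value — is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concept library: each row is the embedding of a clinical
# concept (e.g. a lesion type or modality). Sizes are illustrative.
concept_library = rng.normal(size=(4, 16))       # 4 concepts, dim 16
experts = {i: f"expert_{i}" for i in range(4)}   # one expert per concept

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def route(image_tokens, threshold=0.5):
    """Match pooled image tokens against the concept library.

    Returns the chosen expert id, or None to signal that no existing
    concept matches well enough — i.e. demand-based growth.
    """
    query = normalize(image_tokens.mean(axis=0))  # pool tokens into one query
    sims = normalize(concept_library) @ query     # cosine similarity per concept
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None                               # grow a new expert
    return best

tokens = rng.normal(size=(32, 16))                # 32 image tokens
choice = route(tokens)
print("route to:", experts[choice] if choice is not None else "new expert")
```

Under this sketch, reuse falls out of the matching step (a familiar lesion pattern hits an existing concept and its expert), while the threshold is what makes growth demand-driven rather than fixed per task.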
If this is right
- Model growth occurs only on demand from new tasks rather than through fixed capacity increases or full retraining.
- Prior knowledge is reused across tasks via the shared concept library instead of being overwritten.
- Expert routing becomes interpretable because decisions trace back to clinical concept matches.
- The framework creates a high-knowledge starting point that speeds adaptation to future tasks.
- Few-shot transfer improves because new tasks can leverage existing aligned concepts.
Where Pith is reading between the lines
- The same token-to-concept alignment could reduce annotation needs in other medical imaging streams that evolve over time.
- If new lesion types appear outside the initial library, the hierarchy would require explicit extension to maintain routing quality.
- Combining CoRE routing with privacy techniques such as federated updates might further suit regulated clinical environments.
Load-bearing premise
Aligning image tokens to the hierarchical concept library reliably simulates clinical reasoning and produces effective routing and growth without introducing errors or biases that reduce segmentation accuracy.
What would settle it
Segmentation accuracy on earlier tasks drops after adding later tasks, or the model still expands parameters redundantly when the concept-alignment step is removed.
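The first failure condition — earlier-task accuracy dropping as later tasks are added — is usually quantified with the standard continual-learning forgetting metric: the drop from each task's best accuracy during sequential training to its accuracy after the final task. The numbers below are made up for illustration; they are not results from the paper.

```python
# acc[i][j]: accuracy on task j after training through task i (illustrative).
acc = [
    [0.82, None, None],
    [0.80, 0.78, None],
    [0.74, 0.77, 0.81],
]

def average_forgetting(acc):
    """Mean drop from each earlier task's best accuracy to its final accuracy."""
    final = acc[-1]
    drops = []
    for j in range(len(acc) - 1):  # the last task has no later stage, so skip it
        best = max(acc[i][j] for i in range(j, len(acc) - 1))
        drops.append(best - final[j])
    return sum(drops) / len(drops)

print(round(average_forgetting(acc), 3))  # → 0.045
```

A near-zero value on the 12-task sequence would support the no-forgetting half of the claim; a large value would settle it in the negative.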
Original abstract
Accurate brain lesion segmentation in MRI is vital for effective clinical diagnosis and treatment planning. Due to high annotation costs and strict data privacy regulations, universal models require employing Continual Learning (CL) to adapt to evolving clinical tasks without losing previously acquired knowledge. However, existing CL paradigms often suffer from capacity limits or redundant parameter growth, and even advanced dynamic methods rely mostly on image-perception strategies that struggle to handle the substantial pathological and multimodal heterogeneity inherent in brain imaging. To address this issue, we propose Concept-Reasoning Expansion (CoRE) framework, which establishes a joint decision-making mechanism by integrating visual features with structured concepts. Through the alignment of image tokens with a hierarchical concept library, CoRE simulates clinical reasoning to guide both interpretable expert routing and demand-based model growth. This collaborative process ensures model evolution is grounded in clinical priors, preventing redundant parameter expansion while maximizing knowledge reuse. Extensive evaluations across 12 sequential brain lesion MRI tasks demonstrate that CoRE achieves state-of-the-art performance and provides a high knowledge starting point for efficient future adaptation. Its superior few-shot transferability and clinical interpretability further validate its effectiveness in managing non-stationary clinical data streams. Our code will be released soon.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes the Concept-Reasoning Expansion (CoRE) framework for continual learning in brain lesion segmentation from MRI. It integrates visual features with structured concepts by aligning image tokens to a hierarchical concept library, which is claimed to simulate clinical reasoning. This alignment is used to guide interpretable expert routing and demand-based model growth, addressing capacity limits, redundant expansion, and pathological/multimodal heterogeneity in non-stationary clinical data. The paper asserts state-of-the-art performance across 12 sequential brain lesion MRI tasks, along with strong few-shot transferability and clinical interpretability.
Significance. If the token-to-concept alignment reliably produces clinically grounded routing and growth decisions without systematic misalignment or bias, the framework could meaningfully advance continual learning methods in medical imaging by grounding model evolution in structured priors rather than purely visual strategies, potentially improving both efficiency and interpretability.
Major comments (2)
- [Abstract] The central claim that alignment of image tokens with the hierarchical concept library 'simulates clinical reasoning' to enable effective expert routing and demand-based growth is presented without any quantitative measure of alignment fidelity, error rates across task boundaries, or comparison to visual-only baselines. This is load-bearing for the SOTA and interpretability assertions, as even modest mismatches on heterogeneous lesion subtypes could propagate into incorrect routing or unnecessary parameter addition.
- [Abstract] The manuscript states SOTA results on 12 sequential tasks and 'extensive evaluations' but supplies no metrics, ablation studies, baseline comparisons, or error analysis in the provided description. Without these, the extent of improvement over existing CL paradigms and the absence of new biases cannot be assessed.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight opportunities to strengthen the abstract's support for our core claims. We address each point below and will incorporate revisions to improve clarity and transparency.
Point-by-point responses
- Referee: [Abstract] The central claim that alignment of image tokens with the hierarchical concept library 'simulates clinical reasoning' to enable effective expert routing and demand-based growth is presented without any quantitative measure of alignment fidelity, error rates across task boundaries, or comparison to visual-only baselines. This is load-bearing for the SOTA and interpretability assertions, as even modest mismatches on heterogeneous lesion subtypes could propagate into incorrect routing or unnecessary parameter addition.
  Authors: We agree that the abstract would be strengthened by explicit reference to quantitative support for the alignment process. The full manuscript details the alignment mechanism in Section 3 and provides quantitative evaluations of alignment fidelity, routing error rates across task boundaries, and comparisons against visual-only baselines in Sections 4.3 and 5.2. These analyses demonstrate reliable alignment that underpins the routing and growth decisions. We will revise the abstract to include a concise reference to these quantitative results and their role in validating the clinical-reasoning simulation. Revision: yes.
- Referee: [Abstract] The manuscript states SOTA results on 12 sequential tasks and 'extensive evaluations' but supplies no metrics, ablation studies, baseline comparisons, or error analysis in the provided description. Without these, the extent of improvement over existing CL paradigms and the absence of new biases cannot be assessed.
  Authors: The abstract serves as a high-level overview, while the complete experimental results — including specific performance metrics on the 12 tasks, ablation studies, baseline comparisons, and error analyses — are presented in Sections 4 and 5 with accompanying tables and figures. These establish the SOTA improvements and confirm the absence of introduced biases. We will update the abstract to briefly highlight key quantitative outcomes and direct readers to the detailed evaluations in the main text. Revision: yes.
Circularity Check
No circularity: the method's claims are descriptive, with no derivations that reduce claimed results to their inputs.
Full rationale
The provided abstract and description outline the CoRE framework as integrating visual features with structured concepts via token-to-library alignment to guide routing and growth. No equations, parameter-fitting steps, self-citations, or uniqueness theorems are quoted or referenced that would reduce any claimed result (e.g., 'simulates clinical reasoning' or 'SOTA performance') to its inputs by construction. The central narrative is a high-level methodological proposal evaluated on 12 tasks, with no load-bearing step that renames a fit as a prediction or imports uniqueness from prior author work. The derivation chain is therefore self-contained as an engineering description rather than a tautological reduction.
Reference graph
Works this paper leans on
- [1] Adewole, M., Rudie, J.D., Gbdamosi, A., Toyobo, O., Raymond, C., Zhang, D., Omidiji, O., Akinola, R., Suwaid, M.A., Emegoakor, A., et al.: The brain tumor segmentation (BraTS) challenge 2023: Glioma segmentation in sub-Saharan Africa patient population (BraTS-Africa). arXiv preprint (2023)
- [2] Aljundi, R., Chakravarty, P., Tuytelaars, T.: Expert gate: Lifelong learning with a network of experts. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3366–3375 (2017)
- [3] Avants, B.B., Tustison, N.J., Song, G., Cook, P.A., Klein, A., Gee, J.C.: A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54(3), 2033–2044 (2011)
- [4] Baid, U., Ghodasara, S., Mohan, S., Bilello, M., Calabrese, E., Colak, E., Farahani, K., Kalpathy-Cramer, J., Kitamura, F.C., Pati, S., et al.: The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint arXiv:2107.02314 (2021)
- [5] Bankman, I.: Handbook of Medical Image Processing and Analysis. Elsevier (2008)
- [6] Basaran, B.D., Zhang, W., Qiao, M., Kainz, B., Matthews, P.M., Bai, W.: LesionMix: A lesion-level data augmentation method for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 73–83. Springer (2023)
- [7] Bayasi, N., Fayyad, J., Bissoto, A., Hamarneh, G., Garbi, R.: BiasPruner: Debiased continual learning for medical image classification. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 90–101. Springer (2024)
- [8]
- [9] Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: Proceedings of the International Conference on Computer Vision (ICCV) (2021)
- [10] Chen, Q., Zhu, L., He, H., Zhang, X., Zeng, S., Ren, Q., Lu, Y.: Low-rank mixture-of-experts for continual medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 382–392. Springer (2024)
- [11] Commowick, O., Istace, A., Kain, M., Laurent, B., Leray, F., Simon, M., Pop, S.C., Girard, P., Ameli, R., Ferré, J.C., et al.: Objective evaluation of multiple sclerosis lesion segmentation using a data management and processing infrastructure. Scientific Reports 8(1), 13650 (2018)
- [12] Czolbe, S., Dalca, A.V.: Neuralizer: General neuroimage analysis without re-training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6217–6230 (2023)
- [13] Dai, D., Deng, C., Zhao, C., Xu, R.X., Gao, H., Chen, D., Li, J., Zeng, W., Yu, X., Wu, Y., Xie, Z., Li, Y.K., Huang, P., Luo, F., Ruan, C., Sui, Z., Liang, W.: DeepSeekMoE: Towards ultimate expert specialization in mixture-of-experts language models. CoRR abs/2401.06066 (2024), https://arxiv.org/abs/2401.06066
- [14] Diana-Albelda, C., Alcover-Couso, R., García-Martín, Á., Bescos, J., Escudero-Viñolo, M.: GBT-SAM: A parameter-efficient depth-aware model for generalizable brain tumour segmentation on mp-MRI (2025), https://arxiv.org/abs/2503.04325
- [15] Gabr, R.E., Coronado, I., Robinson, M., Sujit, S.J., Datta, S., Sun, X., Allen, W.J., Lublin, F.D., Wolinsky, J.S., Narayana, P.A.: Brain and lesion segmentation in multiple sclerosis using fully convolutional neural networks: A large-scale study. Multiple Sclerosis Journal 26(10), 1217–1226 (2020)
- [16] Gao, Y.: Training like a medical resident: Context-prior learning toward universal medical image segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11194–11204 (2024)
- [17] Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H.R., Xu, D.: Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. In: International MICCAI Brainlesion Workshop. pp. 272–284. Springer (2021)
- [18] Hernandez Petzsche, M.R., De La Rosa, E., Hanning, U., Wiest, R., Valenzuela, W., Reyes, M., Meyer, M., Liew, S.L., Kofler, F., Ezhov, I., et al.: ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 9(1), 762 (2022)
- [19] Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al.: LoRA: Low-rank adaptation of large language models. In: International Conference on Learning Representations (ICLR) (2022)
- [20] Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)
- [21] Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: European Conference on Computer Vision. pp. 709–727. Springer (2022)
- [22] Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., Glocker, B.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 36, 61–78 (2017)
- [23] Kaustaban, V., Ba, Q., Bhattacharya, I., Sobh, N., Mukherjee, S., Martin, J., Miri, M.S., Guetter, C., Chaturvedi, A.: Characterizing continual learning scenarios for tumor classification in histopathology images. In: International Workshop on Medical Optical Imaging and Virtual Microscopy Image Analysis. pp. 177–187. Springer (2022)
- [24] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114(13), 3521–3526 (2017)
- [25] Kuijf, H.J., Biesbroek, J.M., De Bresser, J., Heinen, R., Andermatt, S., Bento, M., Berseth, M., Belyaev, M., Cardoso, M.J., Casamitjana, A., et al.: Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge. IEEE Transactions on Medical Imaging 38(11), 2556–2568 (2019)
- [26] Kumari, P., Bozorgpour, A., Reisenbüchler, D., Jost, E., Crysandt, M., Matek, C., Merhof, D.: Domain-incremental white blood cell classification with privacy-aware continual learning. Scientific Reports 15(1), 25468 (2025)
- [27] Li, H., Tan, Z., Li, X., Huang, W.: ATLAS: Adapter-based multi-modal continual learning with a two-stage learning strategy (2024), https://arxiv.org/abs/2410.10923
- [28] Li, W., Yuille, A., Zhou, Z.: How well do supervised models transfer to 3D image segmentation? In: The Twelfth International Conference on Learning Representations (2024)
- [29] Liew, S.L., Anglin, J.M., Banks, N.W., Sondag, M., Ito, K.L., Kim, H., Chan, J., Ito, J., Jung, C., Khoshab, N., et al.: A large, open source dataset of stroke anatomical brain images and manual lesion segmentations. Scientific Data 5(1), 180011 (2018)
- [30] Liu, H., Tam, D., Muqeeth, M., Mohta, J., Huang, T., Bansal, M., Raffel, C.A.: Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems 35, 1950–1965 (2022)
- [31] Liu, J., Zhang, Y., Chen, J.N., Xiao, J., Lu, Y., Landman, B.A., Yuan, Y., Yuille, A., Tang, Y., Zhou, Z.: CLIP-driven universal model for organ segmentation and tumor detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 21152–21164 (2023)
- [32] Liu, J., Zhang, Y., Wang, K., Yavuz, M.C., Chen, X., Yuan, Y., Li, H., Yang, Y., Yuille, A., Tang, Y., et al.: Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Medical Image Analysis 97, 103226 (2024)
- [33] Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nature Communications 15, 654 (2024)
- [34] Ma, J., Li, F., Wang, B.: U-Mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv preprint arXiv:2401.04722 (2024)
- [35]
- [36] Shah, A.H., Snelling, B., Bregy, A., Patel, P.R., Tememe, D., Bhatia, R., Sklar, E., Komotar, R.J.: Discriminating radiation necrosis from tumor progression in gliomas: a systematic review. What is the best imaging modality? Journal of Neuro-Oncology 112(2), 141–152 (2013)
- [37] Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., Dean, J.: Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538 (2017)
- [38] Smith, J.S., Karlinsky, L., Gutta, V., Cascante-Bonilla, P., Kim, D., Arbelle, A., Panda, R., Feris, R., Kira, Z.: CODA-Prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 11909–11919 (June 2023)
- [39] Szulewski, A., Braund, H., Egan, R., Gegenfurtner, A., Hall, A.K., Howes, D., Dagnone, D., van Merrienboer, J.J.: Starting to think like an expert: an analysis of resident cognitive processes during simulation-based resuscitation examinations. Annals of Emergency Medicine 74(5), 647–659 (2019)
- [40] Thandiackal, K., Piccinelli, L., Gupta, R., Pati, P., Goksel, O.: Multi-scale feature alignment for continual learning of unlabeled domains. IEEE Transactions on Medical Imaging 43(7), 2599–2609 (2024)
- [41] Wang, H., Lu, H., Yao, L., Gong, D.: Self-expansion of pre-trained models with mixture of adapters for continual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10087–10098 (June 2025)
- [42] Wang, L., Zhang, X., Su, H., Zhu, J.: A comprehensive survey of continual learning: Theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence 46(8), 5362–5383 (2024). https://doi.org/10.1109/TPAMI.2024.3367329
- [43] Wang, Z., Zhang, Z., Ebrahimi, S., Sun, R., Zhang, H., Lee, C.Y., Ren, X., Su, G., Perot, V., Dy, J., et al.: DualPrompt: Complementary prompting for rehearsal-free continual learning. In: European Conference on Computer Vision. pp. 631–648. Springer (2022)
- [44] Wang, Z., Zhang, Z., Lee, C.Y., Zhang, H., Sun, R., Ren, X., Su, G., Perot, V., Dy, J., Pfister, T.: Learning to prompt for continual learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 139–149 (June 2022)
- [45] Wood, D.A., Kafiabadi, S., Al Busaidi, A., Guilhem, E., Montvila, A., Lynch, J., Townend, M., Agarwal, S., Mazumder, A., Barker, G.J., et al.: Deep learning models for triaging hospital head MRI examinations. Medical Image Analysis 78, 102391 (2022)
- [46] Wu, O., Christensen, S., Hjort, N., Dijkhuizen, R.M., Kucinski, T., Fiehler, J., Thomalla, G., Röther, J., Østergaard, L.: Characterizing physiological heterogeneity of infarction risk in acute human ischaemic stroke using MRI. Brain 129(9), 2384–2393 (2006)
- [47] Xu, L., Xie, H., Qin, S.J., Tao, X., Wang, F.L.: Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment. IEEE Transactions on Pattern Analysis and Machine Intelligence (2026)
- [48] Yu, J., Huang, Z., Zhuge, Y., Zhang, L., Hu, P., Wang, D., Lu, H., He, Y.: MoE-Adapters++: Toward more efficient continual learning of vision-language models via dynamic mixture-of-experts adapters. IEEE Transactions on Pattern Analysis and Machine Intelligence 47(12), 11912–11928 (2025). https://doi.org/10.1109/TPAMI.2025.3597942
- [49] Yu, J., Zhuge, Y., Zhang, L., Hu, P., Wang, D., Lu, H., He, Y.: Boosting continual learning of vision-language models via mixture-of-experts adapters. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 23219–23230 (June 2024)
- [50] Zhang, S., Metaxas, D.: On the challenges and perspectives of foundation models for medical image analysis. Medical Image Analysis 91, 102996 (2024)
- [51] Zhang, S., Xu, Y., Usuyama, N., Xu, H., Bagga, J., Tinn, R., Preston, S., Rao, R., Wei, M., Valluri, N., et al.: BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915 (2023)
- [52] Zhang, X., Ou, N., Basaran, B.D., Visentin, M., Qiao, M., Gu, R., Ouyang, C., Liu, Y., Matthews, P.M., Ye, C., et al.: A foundation model for brain lesion segmentation with mixture of modality experts. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 379–389. Springer (2024)
- [53] Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. International Journal of Computer Vision 130(9), 2337–2348 (2022)