pith. machine review for the scientific record.

arxiv: 2604.25376 · v1 · submitted 2026-04-28 · 💻 cs.CV · cs.AI

Recognition: unknown

CoRE: Concept-Reasoning Expansion for Continual Brain Lesion Segmentation

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 16:56 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI
keywords continual learning · brain lesion segmentation · MRI · concept alignment · expert routing · model growth · clinical reasoning

The pith

CoRE aligns image tokens with a hierarchical concept library to simulate clinical reasoning and direct efficient continual learning for brain lesion segmentation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes the CoRE framework to address capacity limits and redundant parameter growth in continual learning for brain lesion MRI segmentation, where privacy rules and annotation costs prevent retraining from scratch. It integrates visual features with structured concepts by aligning image tokens to a hierarchical concept library, using that alignment to simulate clinical reasoning. This drives interpretable expert routing and demand-based model growth instead of relying on image perception alone. The result is model evolution grounded in clinical priors that reuses prior knowledge and avoids unnecessary parameter expansion. Evaluations across 12 sequential tasks show state-of-the-art accuracy along with strong few-shot transfer and interpretability.

Core claim

Through the alignment of image tokens with a hierarchical concept library, CoRE simulates clinical reasoning to guide both interpretable expert routing and demand-based model growth. This collaborative process ensures model evolution is grounded in clinical priors, preventing redundant parameter expansion while maximizing knowledge reuse.

What carries the argument

Alignment of image tokens with a hierarchical concept library, which simulates clinical reasoning to control expert routing and demand-based model growth.
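To make this load-bearing mechanism concrete, the sketch below shows one way token-to-concept alignment could gate expert routing: cosine similarity between image tokens and concept embeddings yields an image-level concept profile, which is mapped to adapter weights through a concept-adapter affinity matrix (analogous in spirit to the W_AC matrix in Figure 6). All names, shapes, and the cosine-similarity choice are illustrative assumptions, not the paper's published design.

```python
# Hedged sketch of concept-guided routing, assuming cosine similarity between
# tokens and concepts; names and shapes are hypothetical, not the paper's API.
import torch
import torch.nn.functional as F

def route_with_concepts(tokens, concept_library, adapter_concept_affinity):
    """tokens: (N, d) image tokens; concept_library: (C, d) concept embeddings;
    adapter_concept_affinity: (C, A) links concepts to adapters (W_AC-like)."""
    # Cosine similarity between every token and every concept.
    sims = F.normalize(tokens, dim=-1) @ F.normalize(concept_library, dim=-1).T
    # Image-level concept profile: average per-token concept distribution.
    concept_scores = sims.softmax(dim=-1).mean(dim=0)            # (C,)
    # Concept profile -> adapter routing weights.
    adapter_weights = concept_scores @ adapter_concept_affinity  # (A,)
    return adapter_weights.softmax(dim=-1), concept_scores

tokens = torch.randn(196, 256)   # e.g. ViT patch tokens for one scan slice
library = torch.randn(40, 256)   # 40 hypothetical hierarchical lesion concepts
affinity = torch.rand(40, 8)     # 8 adapters/experts
weights, profile = route_with_concepts(tokens, library, affinity)
```

Routing through the concept profile, rather than raw visual features alone, is what would make each decision traceable to named lesion attributes.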

If this is right

  • Model growth occurs only on demand from new tasks rather than through fixed capacity increases or full retraining (a minimal sketch of one such trigger follows this list).
  • Prior knowledge is reused across tasks via the shared concept library instead of being overwritten.
  • Expert routing becomes interpretable because decisions trace back to clinical concept matches.
  • The framework creates a high-knowledge starting point that speeds adaptation to future tasks.
  • Few-shot transfer improves because new tasks can leverage existing aligned concepts.
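As noted in the first bullet, here is a minimal sketch of how "growth on demand" could be operationalized: a new adapter is created only when no existing adapter covers the incoming task's concept profile above a threshold. The coverage score, the threshold tau, and the affinity-extension step are assumptions for illustration; the paper's actual trigger may be defined differently.

```python
# Hypothetical demand-based growth trigger; tau and the coverage score are
# illustrative assumptions, not the paper's specification.
import torch

def needs_new_adapter(task_profile, affinity, tau=0.5):
    """task_profile: (C,) concept scores for the new task; affinity: (C, A).
    Returns True when even the best-covered existing adapter scores below tau."""
    coverage = task_profile @ affinity  # (A,) per-adapter concept coverage
    return coverage.max().item() < tau

def maybe_grow(adapters, task_profile, affinity, d=256, tau=0.5):
    if needs_new_adapter(task_profile, affinity, tau):
        adapters.append(torch.nn.Linear(d, d))  # stand-in for a new expert
        # Seed the new adapter's affinity column from the task's profile.
        affinity = torch.cat([affinity, task_profile.unsqueeze(1)], dim=1)
    return adapters, affinity

adapters, affinity = [], torch.rand(40, 8)
profile = torch.rand(40)
adapters, affinity = maybe_grow(adapters, profile / profile.sum(), affinity)
```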

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same token-to-concept alignment could reduce annotation needs in other medical imaging streams that evolve over time.
  • If new lesion types appear outside the initial library, the hierarchy would require explicit extension to maintain routing quality.
  • Combining CoRE routing with privacy techniques such as federated updates might further suit regulated clinical environments.

Load-bearing premise

Aligning image tokens to the hierarchical concept library reliably simulates clinical reasoning and produces effective routing and growth without introducing errors or biases that reduce segmentation accuracy.
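One way this premise could be audited, sketched below, is to score alignment fidelity directly: for labeled cases, check whether each image's top-matched concept belongs to the ground-truth lesion type. The concept-to-lesion mapping and the accuracy criterion are assumptions for illustration; the paper does not specify this protocol in the material reviewed here.

```python
# Illustrative alignment-fidelity probe; mapping and criterion are assumed.
import numpy as np

def alignment_accuracy(sim_matrix, true_lesion_ids, concept_to_lesion):
    """sim_matrix: (n_cases, n_concepts) image-to-concept similarities;
    true_lesion_ids: (n_cases,) ground-truth lesion type per case;
    concept_to_lesion: (n_concepts,) lesion type each concept describes."""
    top_concept = sim_matrix.argmax(axis=1)       # best-matched concept
    predicted = concept_to_lesion[top_concept]    # implied lesion type
    return float((predicted == true_lesion_ids).mean())

sims = np.random.rand(100, 40)             # 100 cases, 40 concepts
truth = np.random.randint(0, 5, size=100)  # 5 hypothetical lesion types
c2l = np.random.randint(0, 5, size=40)
print(alignment_accuracy(sims, truth, c2l))  # chance level here is ~0.2
```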

What would settle it

Segmentation accuracy on earlier tasks drops after adding later tasks, or the model still expands parameters redundantly when the concept-alignment step is removed.
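The first half of that test is a standard continual-learning measurement; a minimal sketch follows, assuming a score matrix dsc[i, j] holding DSC on task j after training through task i. The matrix layout is an assumption; any equivalent bookkeeping works.

```python
# Minimal forgetting probe, assuming dsc[i, j] = DSC on task j after task i.
import numpy as np

def average_forgetting(dsc):
    """dsc: (T, T); forgetting for task j is its best score before the final
    task minus its score after the full stream. Positive values mean earlier
    tasks degraded as later ones were added."""
    T = dsc.shape[0]
    per_task = [dsc[:T - 1, j].max() - dsc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(per_task))

dsc = np.random.rand(12, 12)  # 12 sequential tasks, as in the paper
print(average_forgetting(dsc))
```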

Figures

Figures reproduced from arXiv: 2604.25376 by Anglin Liu, Jingyang Zhang, Qianqian Chen, Yudong Zhang.

Figure 1. Illustration of CL paradigms. (a-b) Traditional paradigm (left): Existing methods typically (a) employ a fixed expert pool shared by all tasks or (b) add task-specific experts, which often suffer from limited capacity or linear parameter growth. (c-d) Self-expansion paradigm (right): (c) Existing dynamic expansion methods rely solely on image perception strategies and visual shifts. (d) Our method incorpor…

Figure 2. Overview of the CoRE framework. The architecture consists of three main components: (1) Brain Lesion Concept Library Generation, which establishes a structured conceptual foundation by organizing hierarchical knowledge into a concept space; (2) Concept-Guided Calibration (CGC), a module that aligns visual tokens with the concept space to ground expert routing decisions in interpretable brain lesion attrib…

Figure 3. Quantitative performance and data efficiency analysis. (a) Lesion-level performance, comparing the average DSC of CoRE against comparative methods across diverse brain lesion types; (b) modality-level performance, comparing the average DSC against comparative methods across various imaging modalities; and (c) few-shot multi-domain class incremental learning, illustrating the average DSC for Tasks 13–16 un…

Figure 4. Qualitative comparison of brain lesion segmentation results in the data stream from Task 1 to 12.

Figure 5. Analysis of dynamic adapter expansion and routing weight distributions. (a) Dynamic adapter expansion, illustrating the number of adapters across the 12-task data stream for Block 7 and Block 8, where '+' denotes layers utilizing concept-driven expansion; (b) image routing weights and (c) concept routing weights, showing the weight distributions of the 8-MLP layer across the 12 sequential tasks. Triggering…

Figure 6. Concept-adapter affinity analysis of the 8-MLP layer. For each of the 12 tasks, we display the top-3 brain lesion concepts associated with the adapter introduced at that task, extracted from the highest-weighted entries in the corresponding column of the adapter-concept weight matrix W_AC.

Figure 7. Analysis of expandable layers and the balance hyperparameter λ. (a) Average DSC across the sequential tasks when dynamic expansion is applied to different combinations of transformer blocks. (b) Average DSC under varying values of the balance hyperparameter λ.
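The Figure 6 readout is simple to reproduce given the adapter-concept weight matrix: take the top-3 entries of each adapter's column. A sketch under assumed shapes and placeholder concept names:

```python
# Sketch of the Figure 6 readout: top-3 concepts per adapter from W_AC.
# Shapes and concept names are placeholders, not the paper's data.
import numpy as np

def top_concepts_per_adapter(w_ac, concept_names, k=3):
    """w_ac: (n_concepts, n_adapters); returns top-k concept names per adapter."""
    order = np.argsort(-w_ac, axis=0)[:k]  # (k, n_adapters), descending
    return [[concept_names[i] for i in order[:, a]]
            for a in range(w_ac.shape[1])]

w_ac = np.random.rand(40, 12)  # one adapter per task, 12 tasks
names = [f"concept_{i}" for i in range(40)]
print(top_concepts_per_adapter(w_ac, names)[0])
```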
read the original abstract

Accurate brain lesion segmentation in MRI is vital for effective clinical diagnosis and treatment planning. Due to high annotation costs and strict data privacy regulations, universal models require employing Continual Learning (CL) to adapt to evolving clinical tasks without losing previously acquired knowledge. However, existing CL paradigms often suffer from capacity limits or redundant parameter growth, and even advanced dynamic methods rely mostly on image-perception strategies that struggle to handle the substantial pathological and multimodal heterogeneity inherent in brain imaging. To address this issue, we propose the Concept-Reasoning Expansion (CoRE) framework, which establishes a joint decision-making mechanism by integrating visual features with structured concepts. Through the alignment of image tokens with a hierarchical concept library, CoRE simulates clinical reasoning to guide both interpretable expert routing and demand-based model growth. This collaborative process ensures model evolution is grounded in clinical priors, preventing redundant parameter expansion while maximizing knowledge reuse. Extensive evaluations across 12 sequential brain lesion MRI tasks demonstrate that CoRE achieves state-of-the-art performance and provides a high-knowledge starting point for efficient future adaptation. Its superior few-shot transferability and clinical interpretability further validate its effectiveness in managing non-stationary clinical data streams. Our code will be released soon.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The manuscript proposes the Concept-Reasoning Expansion (CoRE) framework for continual learning in brain lesion segmentation from MRI. It integrates visual features with structured concepts by aligning image tokens to a hierarchical concept library, which is claimed to simulate clinical reasoning. This alignment is used to guide interpretable expert routing and demand-based model growth, addressing capacity limits, redundant expansion, and pathological/multimodal heterogeneity in non-stationary clinical data. The paper asserts state-of-the-art performance across 12 sequential brain lesion MRI tasks, along with strong few-shot transferability and clinical interpretability.

Significance. If the token-to-concept alignment reliably produces clinically grounded routing and growth decisions without systematic misalignment or bias, the framework could meaningfully advance continual learning methods in medical imaging by grounding model evolution in structured priors rather than purely visual strategies, potentially improving both efficiency and interpretability.

major comments (2)
  1. [Abstract] Abstract: The central claim that alignment of image tokens with the hierarchical concept library 'simulates clinical reasoning' to enable effective expert routing and demand-based growth is presented without any quantitative measure of alignment fidelity, error rates across task boundaries, or comparison to visual-only baselines. This is load-bearing for the SOTA and interpretability assertions, as even modest mismatches on heterogeneous lesion subtypes could propagate into incorrect routing or unnecessary parameter addition.
  2. [Abstract] Abstract: The manuscript states SOTA results on 12 sequential tasks and 'extensive evaluations' but supplies no metrics, ablation studies, baseline comparisons, or error analysis in the provided description. Without these, the extent of improvement over existing CL paradigms and the absence of new biases cannot be assessed.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight opportunities to strengthen the abstract's support for our core claims. We address each point below and will incorporate revisions to improve clarity and transparency.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim that alignment of image tokens with the hierarchical concept library 'simulates clinical reasoning' to enable effective expert routing and demand-based growth is presented without any quantitative measure of alignment fidelity, error rates across task boundaries, or comparison to visual-only baselines. This is load-bearing for the SOTA and interpretability assertions, as even modest mismatches on heterogeneous lesion subtypes could propagate into incorrect routing or unnecessary parameter addition.

    Authors: We agree that the abstract would be strengthened by explicit reference to quantitative support for the alignment process. The full manuscript details the alignment mechanism in Section 3 and provides quantitative evaluations of alignment fidelity, routing error rates across task boundaries, and comparisons against visual-only baselines in Sections 4.3 and 5.2. These analyses demonstrate reliable alignment that underpins the routing and growth decisions. We will revise the abstract to include a concise reference to these quantitative results and their role in validating the clinical-reasoning simulation. revision: yes

  2. Referee: [Abstract] Abstract: The manuscript states SOTA results on 12 sequential tasks and 'extensive evaluations' but supplies no metrics, ablation studies, baseline comparisons, or error analysis in the provided description. Without these, the extent of improvement over existing CL paradigms and the absence of new biases cannot be assessed.

    Authors: The abstract serves as a high-level overview, while the complete experimental results—including specific performance metrics on the 12 tasks, ablation studies, baseline comparisons, and error analyses—are presented in Sections 4 and 5 with accompanying tables and figures. These establish the SOTA improvements and confirm the absence of introduced biases. We will update the abstract to briefly highlight key quantitative outcomes and direct readers to the detailed evaluations in the main text. revision: yes

Circularity Check

0 steps flagged

No circularity: the method's claims are descriptive, with no derivations that reduce claimed results to their inputs.

full rationale

The provided abstract and description outline the CoRE framework as integrating visual features with structured concepts via token-to-library alignment to guide routing and growth. No equations, parameter-fitting steps, self-citations, or uniqueness theorems are quoted or referenced that would reduce any claimed result (e.g., 'simulates clinical reasoning' or 'SOTA performance') to its inputs by construction. The central narrative is a high-level methodological proposal evaluated on 12 tasks, with no load-bearing step that renames a fit as a prediction or imports uniqueness from prior author work. The derivation chain is therefore self-contained as an engineering description rather than a tautological reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the hierarchical concept library is mentioned but its construction, size, or grounding is unspecified.

pith-pipeline@v0.9.0 · 5517 in / 1129 out tokens · 75394 ms · 2026-05-07T16:56:46.727495+00:00 · methodology

