pith. machine review for the scientific record.

arxiv: 2601.22581 · v2 · submitted 2026-01-30 · 💻 cs.CV · cs.AI · cs.LG

Recognition: 2 theorem links

· Lean Theorem

Cross-Domain Few-Shot Learning for Hyperspectral Image Classification Based on Mixup Foundation Model

Authors on Pith no claims yet

Pith reviewed 2026-05-16 09:56 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · cs.LG
keywords cross-domain few-shot learning · hyperspectral image classification · foundation model · mixup domain adaptation · remote sensing · domain adaptation

The pith

A remote sensing foundation model adapted with coalescent projection and mixup domain adaptation outperforms prior methods by up to 14 percent in cross-domain few-shot hyperspectral image classification.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes MIFOMO to handle cross-domain few-shot learning for hyperspectral images. It starts from a pre-trained remote sensing foundation model that already carries generalizable features across many remote sensing tasks. Coalescent projection lets the model adapt quickly to a new target domain while the backbone stays frozen, mixup domain adaptation reduces large domain gaps between source and target data, and label smoothing limits the damage from noisy pseudo-labels. This setup avoids both unrealistic noise-based sample enlargement and the overfitting that comes from updating many parameters in few-shot regimes. Experiments across several cross-domain settings show consistent gains of up to 14 percent over earlier approaches.

Core claim

MIFOMO rests on a remote sensing foundation model pre-trained at large scale so that its features transfer readily. Coalescent projection performs the downstream adaptation while the backbone parameters remain fixed. Mixup domain adaptation creates interpolated samples that bridge extreme source-target shifts, and label smoothing regularizes the pseudo-labels generated during adaptation.
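The interpolation step can be sketched generically. The paper's exact MDM formulation is not reproduced here; the function name, the Beta(α, α) sampling of the mixing coefficient, and the parameter values below are illustrative assumptions, following the standard mixup recipe:

```python
import numpy as np

def mixup_domains(x_source, x_target, alpha=0.2, rng=None):
    """Generic mixup across two domains (a sketch, not the paper's MDM).

    Draws lambda ~ Beta(alpha, alpha) and returns convex combinations of
    source and target samples, populating an intermediate domain between
    the two distributions.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = float(rng.beta(alpha, alpha))       # mixing coefficient in [0, 1]
    x_mixed = lam * x_source + (1.0 - lam) * x_target
    return x_mixed, lam

# Toy usage: two batches of 4 "pixels" with 8 spectral bands each.
rng = np.random.default_rng(0)
xs = rng.normal(size=(4, 8))
xt = rng.normal(size=(4, 8))
xm, lam = mixup_domains(xs, xt, alpha=0.2, rng=rng)
assert xm.shape == xs.shape and 0.0 <= lam <= 1.0
```

With a small alpha, the Beta distribution concentrates near 0 and 1, so most mixed samples stay close to one domain; larger alpha pushes samples toward the midpoint of the two distributions.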

What carries the argument

Coalescent projection (CP), which projects target samples into the frozen foundation-model feature space, paired with mixup domain adaptation (MDM), which interpolates across the source and target distributions.
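CP itself is defined only in the paper; as a rough stand-in for the frozen-backbone-plus-learnable-projection pattern it relies on, here is a minimal sketch. The class name, shapes, and the single linear projection are hypothetical, chosen only to show where the trainable parameters sit:

```python
import numpy as np

class FrozenBackboneAdapter:
    """Sketch of the frozen-backbone adaptation pattern (not the paper's CP):
    the backbone weights are fixed and only a small projection is trained."""

    def __init__(self, backbone_w, proj_dim, rng):
        self.backbone_w = backbone_w                 # frozen, never updated
        d = backbone_w.shape[1]
        # The only learnable parameters: a small projection head.
        self.proj_w = rng.normal(scale=0.02, size=(d, proj_dim))

    def features(self, x):
        h = np.maximum(x @ self.backbone_w, 0.0)     # frozen feature extractor
        return h @ self.proj_w                       # learnable projection

    def trainable_params(self):
        return self.proj_w.size                      # backbone excluded
```

The point of the pattern is visible in the parameter count: for a backbone of width d and a projection of width p, only d × p values receive gradients, which is what limits overfitting in the few-shot regime.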

If this is right

  • Fewer parameters are updated during adaptation, which reduces overfitting when only a handful of target samples are available.
  • No external noise injection is required to enlarge the training set, removing an unrealistic simplification used in earlier work.
  • The same frozen-backbone strategy can be reused for other remote-sensing tasks that also face domain shifts.
  • Label smoothing mitigates the effect of imperfect pseudo-labels that arise when the domain gap remains large after mixup.
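The label smoothing mentioned in the last point is the standard regularizer; a minimal sketch of smoothed targets and the matching cross-entropy (the value eps = 0.1 is an assumed default, not taken from the paper):

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """One-hot targets with label smoothing: the true class keeps
    1 - eps + eps/K probability, the rest share eps/K each."""
    onehot = np.eye(num_classes)[y]
    return onehot * (1.0 - eps) + eps / num_classes

def smoothed_cross_entropy(logits, y, eps=0.1):
    """Cross-entropy against smoothed targets, via a stable log-softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    targets = smooth_labels(y, logits.shape[1], eps)
    return float(-(targets * log_probs).sum(axis=1).mean())
```

Because the target never puts full mass on a single class, a wrong pseudo-label costs bounded loss, which is the mechanism behind the robustness claim.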

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same frozen-backbone plus mixup pattern could be tested on cross-domain few-shot problems in other imaging modalities such as multispectral or SAR data.
  • Because the backbone stays fixed, the method may run efficiently on resource-limited platforms where full fine-tuning is impractical.
  • If mixup is extended to more than two domains at once, the approach might scale to multi-source adaptation settings.

Load-bearing premise

The remote sensing foundation model already contains features general enough to support quick adaptation to new hyperspectral domains without any updates to the backbone.

What would settle it

Replace the pre-trained remote sensing backbone with random weights and rerun the same cross-domain few-shot experiments; if MIFOMO no longer exceeds prior methods, the value of the frozen foundation model is refuted.

Figures

Figures reproduced from arXiv: 2601.22581 by Ary Shiddiqi, Mahardhika Pratama, Mukesh Prasad, Naeem Paeedeh, Wisnu Jatmiko, Zehong Cao.

Figure 1. The calculation of the soft prompts on top and Coalescent Projection (CP) at the bottom, in the attention module. [PITH_FULL_IMAGE:figures/full_fig_p007_1.png]
Figure 2. The architecture of our model. The blue blocks contain learnable parameters. [PITH_FULL_IMAGE:figures/full_fig_p007_2.png]
Figure 3. The flowchart of a single iteration of source domain training. The red arrows indicate the backpropagated gradients. [PITH_FULL_IMAGE:figures/full_fig_p009_3.png]
Figure 4. The flowchart of a single iteration of intermediate domain training. The red arrows indicate the backpropagated gradients. [PITH_FULL_IMAGE:figures/full_fig_p009_4.png]
Figure 6. Classification maps for the Salinas dataset. [PITH_FULL_IMAGE:figures/full_fig_p010_6.png]
Figure 7. Classification maps for the Pavia University dataset. [PITH_FULL_IMAGE:figures/full_fig_p010_7.png]
Figure 8. Classification maps for the Houston dataset. [PITH_FULL_IMAGE:figures/full_fig_p012_8.png]
Figure 9. t-SNE plot for the Indian Pines, Pavia University, … [PITH_FULL_IMAGE:figures/full_fig_p013_9.png]
read the original abstract

Although cross-domain few-shot learning (CDFSL) for hyper-spectral image (HSI) classification has attracted significant research interest, existing works often rely on an unrealistic data augmentation procedure in the form of external noise to enlarge the sample size, thus greatly simplifying the issue of data scarcity. They involve a large number of parameters for model updates, being prone to the overfitting problem. To the best of our knowledge, none has explored the strength of the foundation model, having strong generalization power to be quickly adapted to downstream tasks. This paper proposes the MIxup FOundation MOdel (MIFOMO) for CDFSL of HSI classifications. MIFOMO is built upon the concept of a remote sensing (RS) foundation model, pre-trained across a large scale of RS problems, thus featuring generalizable features. The notion of coalescent projection (CP) is introduced to quickly adapt the foundation model to downstream tasks while freezing the backbone network. The concept of mixup domain adaptation (MDM) is proposed to address the extreme domain discrepancy problem. Last but not least, the label smoothing concept is implemented to cope with noisy pseudo-label problems. Our rigorous experiments demonstrate the advantage of MIFOMO, where it beats prior arts with up to 14% margin. The source code of MIFOMO is open-sourced at https://github.com/Naeem-Paeedeh/MIFOMO for reproducibility and convenient further study.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes MIFOMO for cross-domain few-shot learning (CDFSL) in hyperspectral image (HSI) classification. It builds on a pre-trained remote sensing foundation model, introduces coalescent projection (CP) to adapt the frozen backbone to downstream tasks, mixup domain adaptation (MDM) to mitigate extreme domain discrepancies, and label smoothing to handle noisy pseudo-labels. The authors report that MIFOMO outperforms prior CDFSL methods by up to 14% in experiments and release the source code.

Significance. If the performance claims hold under rigorous validation, the work would be significant for introducing foundation-model-based adaptation to CDFSL in HSI without unrealistic external noise augmentation or heavy parameter updates. The open-sourced code at the provided GitHub link supports reproducibility and further study in remote sensing applications.

major comments (3)
  1. [Method] Method section (description of CP): the claim that freezing the backbone via coalescent projection enables quick adaptation across extreme spectral shifts (different sensors, band centers, atmospheric effects) is load-bearing for the central contribution, yet no feature-space distance analysis, t-SNE visualizations, or comparison to an unfrozen backbone is provided to test this assumption.
  2. [Experiments] Experiments section: the headline result of 'up to 14% margin' over prior arts is reported without naming the specific cross-domain HSI datasets, shot settings (e.g., 1-shot/5-shot), baseline methods, number of runs, or statistical significance tests, which prevents verification of the performance advantage.
  3. [Ablation] Ablation studies (if present in §4.3 or equivalent): no ablation isolating the contribution of CP versus MDM, or frozen versus fine-tuned backbone, is referenced, leaving the necessity of the freezing strategy untested and weakening the claim that the approach reduces overfitting risk.
minor comments (2)
  1. [Abstract] Abstract: the phrase 'rigorous experiments' is used without any concrete dataset or setting details; adding one sentence summarizing the evaluation protocol would improve clarity for readers.
  2. [Method] Notation: the definitions of 'coalescent projection' and 'mixup domain adaptation' are introduced without an accompanying equation or algorithmic pseudocode in the early method description, making the technical novelty harder to follow on first reading.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments, which help improve the clarity and rigor of our work. We provide point-by-point responses below and will revise the manuscript to address the concerns raised.

read point-by-point responses
  1. Referee: [Method] Method section (description of CP): the claim that freezing the backbone via coalescent projection enables quick adaptation across extreme spectral shifts (different sensors, band centers, atmospheric effects) is load-bearing for the central contribution, yet no feature-space distance analysis, t-SNE visualizations, or comparison to an unfrozen backbone is provided to test this assumption.

    Authors: We agree that additional empirical support would strengthen the central claim regarding coalescent projection (CP). In the revised manuscript, we will add t-SNE visualizations of feature distributions before and after CP adaptation across the cross-domain pairs, quantitative feature-space distance metrics (e.g., maximum mean discrepancy), and a direct performance comparison between the frozen-backbone MIFOMO and a fine-tuned variant. These additions will demonstrate the adaptation benefits and reduced overfitting risk under extreme spectral shifts. revision: yes

  2. Referee: [Experiments] Experiments section: the headline result of 'up to 14% margin' over prior arts is reported without naming the specific cross-domain HSI datasets, shot settings (e.g., 1-shot/5-shot), baseline methods, number of runs, or statistical significance tests, which prevents verification of the performance advantage.

    Authors: We apologize for insufficient explicit detail in the result summary. The Experiments section specifies the cross-domain HSI dataset pairs (e.g., Indian Pines to Pavia University and Salinas to Botswana), the 1-shot and 5-shot protocols, the full list of baselines, averages over five independent runs with standard deviations, and paired t-test significance results. We will revise the text to prominently restate these elements when reporting the performance margins, ensuring immediate verifiability. revision: yes

  3. Referee: [Ablation] Ablation studies (if present in §4.3 or equivalent): no ablation isolating the contribution of CP versus MDM, or frozen versus fine-tuned backbone, is referenced, leaving the necessity of the freezing strategy untested and weakening the claim that the approach reduces overfitting risk.

    Authors: We acknowledge that the existing ablations in §4.3 do not fully isolate CP from MDM or directly contrast the frozen versus fine-tuned backbone. In the revision, we will expand §4.3 with new ablation tables that separately remove or replace CP and MDM, and that compare the proposed frozen-backbone configuration against an unfrozen fine-tuning baseline, thereby directly testing and supporting the overfitting-reduction claim. revision: yes
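The rebuttal proposes maximum mean discrepancy as a feature-space distance metric. As one plausible instantiation (the kernel choice and gamma value are assumptions, not the authors' protocol), a biased RBF-kernel MMD² estimator:

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel:
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)] (biased estimator)."""
    def k(a, b):
        sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())
```

Computed on source and target features before and after adaptation, a drop in MMD² would quantify the domain-gap reduction the rebuttal promises to report.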

Circularity Check

0 steps flagged

No circularity in MIFOMO derivation chain

full rationale

The paper introduces MIFOMO by building on an external pre-trained remote sensing foundation model and proposing new modules (coalescent projection for frozen-backbone adaptation, mixup domain adaptation, and label smoothing) without any equations, derivations, or predictions that reduce to fitted parameters or self-citations by construction. Performance claims rest on comparative experiments rather than self-referential mathematical steps, and no load-bearing uniqueness theorems or ansatzes from prior author work are invoked. The derivation is self-contained against external benchmarks and pre-trained models.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 2 invented entities

The central claim rests on the assumption that a pre-trained remote sensing foundation model supplies generalizable features that can be adapted without retraining the backbone; two new modules (CP and MDM) are introduced without independent validation outside the paper.

axioms (1)
  • domain assumption A remote sensing foundation model pre-trained across a large scale of RS problems provides generalizable features.
    Invoked to justify freezing the backbone while using CP for quick adaptation.
invented entities (2)
  • Coalescent projection (CP) no independent evidence
    purpose: Quickly adapt the foundation model to downstream tasks while freezing the backbone network.
    New adaptation mechanism introduced in the paper.
  • Mixup domain adaptation (MDM) no independent evidence
    purpose: Address the extreme domain discrepancy problem.
    New domain-adaptation technique based on mixup.

pith-pipeline@v0.9.0 · 5591 in / 1305 out tokens · 39644 ms · 2026-05-16T09:56:16.098942+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

73 extracted references · 73 canonical work pages · 1 internal anchor

  1. [1]

    Hyperspectral image data analysis,

    D. A. Landgrebe, “Hyperspectral image data analysis,”IEEE Signal Process. Mag., vol. 19, pp. 17–28, 2002. [Online]. Available: https://api.semanticscholar.org/CorpusID:62165049

  2. [2]

    Supervised classification of remotely sensed imagery using a modified $k$-nn technique,

    L. Samaniego, A. Bárdossy, and K. Schulz, “Supervised classification of remotely sensed imagery using a modified $k$-nn technique,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, pp. 2112–2125, 2008. [Online]. Available: https://api.semanticscholar.org/ CorpusID:16372737

  3. [3]

    Classification of hyperspectral remote sensing images with support vector machines,

    F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,”IEEE Transactions on Geoscience and Remote Sensing, vol. 42, pp. 1778–1790, 2004. [Online]. Available: https://api.semanticscholar.org/CorpusID:6906514

  4. [4]

    Deep learning-based classification of hyperspectral data,

    Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, “Deep learning-based classification of hyperspectral data,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, pp. 2094–2107, 2014. [Online]. Available: https: //api.semanticscholar.org/CorpusID:44935336

  5. [5]

    Going deeper with contextual cnn for hyperspectral image classification,

    H. Lee and H. Kwon, “Going deeper with contextual cnn for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 26, pp. 4843–4855, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:5856281

  6. [6]

    Superpixel guided deformable convolu- tion network for hyperspectral image classification,

    C. Zhao, W. Zhu, and S. Feng, “Superpixel guided deformable convolu- tion network for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 31, pp. 3838–3851, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:249045829

  7. [7]

    Diverse region-based cnn for hyperspectral image classification,

    M. Zhang, W. Li, and Q. Du, “Diverse region-based cnn for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 27, pp. 2623–2634, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:3839934

  8. [8]

    Spectralformer: Rethinking hyperspectral image classification with transformers,

    D. Hong, Z. Han, J. Yao, L. Gao, B. Zhang, A. J. Plaza, and J. Chanussot, “Spectralformer: Rethinking hyperspectral image classification with transformers,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:235755242

  9. [9]

    Dcn-t: Dual context network with transformer for hyperspectral image classification,

    D. Wang, J. Zhang, B. Du, L. Zhang, and D. Tao, “Dcn-t: Dual context network with transformer for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 32, pp. 2536–2551,

  10. [10]

    Available: https://api.semanticscholar.org/CorpusID: 258236407 JOURNAL OF LATEX CLASS FILES, VOL

    [Online]. Available: https://api.semanticscholar.org/CorpusID: 258236407 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 14

  11. [11]

    Cnn-enhanced graph convolutional network with pixel- and superpixel-level feature fusion for hyperspectral image classification,

    Q. Liu, L. Xiao, J. Yang, and Z. Wei, “Cnn-enhanced graph convolutional network with pixel- and superpixel-level feature fusion for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 59, pp. 8657–8671, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:229653781

  12. [12]

    Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification,

    Y. Dong, Q. Liu, B. Du, and L. Zhang, “Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 31, pp. 1559–1572, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:246286897

  13. [13]

    Few-shot class incremental learning via robust transformer approach,

    N. Paeedeh, M. Pratama, S. Wibirama, W. Mayer, Z. Cao, and R. Kowalczyk, “Few-shot class incremental learning via robust transformer approach,”Inf. Sci., vol. 675, p. 120751, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269741067

  14. [14]

    Pseudolabel-based unreliable sample learning for semi-supervised hyperspectral image classification,

    H. Yao, R. Chen, W. Chen, H. Sun, W. Xie, and X. Lu, “Pseudolabel-based unreliable sample learning for semi-supervised hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:264086122

  15. [15]

    Semi-supervised neural architecture search for hyperspectral imagery classification method with dynamic feature clustering,

    W. Wei, S. Zhao, S. Xu, L. Zhang, and Y. Zhang, “Semi-supervised neural architecture search for hyperspectral imagery classification method with dynamic feature clustering,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–14, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:258903260

  16. [16]

    Cross- domain few-shot hyperspectral image classification with cross-modal alignment and supervised contrastive learning,

    Z. Li, C. Zhang, Y. Wang, W. Li, Q. Du, Z. Fang, and Y. Chen, “Cross- domain few-shot hyperspectral image classification with cross-modal alignment and supervised contrastive learning,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–19, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:270157684

  17. [17]

    Self-supervised spectral–spatial graph prototypical network for few-shot hyperspectral image classification,

    S. Ma, L. Tong, J. Zhou, J. Yu, and C. Xiao, “Self-supervised spectral–spatial graph prototypical network for few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–15, 2023. [Online]. Available: https: //api.semanticscholar.org/CorpusID:260009959

  18. [18]

    Prototypical networks for few-shot learning,

    J. Snell, K. Swersky, and R. S. Zemel, “Prototypical networks for few-shot learning,” inNeural Information Processing Systems, 2017. [Online]. Available: https://api.semanticscholar.org/CorpusID:309759

  19. [19]

    Learning to compare: Relation network for few-shot learning,

    F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,”2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1199–1208, 2017. [Online]. Available: https://api.semanticscholar.org/CorpusID:4412459

  20. [20]

    Domain adaptation with preservation of manifold geometry for hyperspectral image classification,

    H. L. Yang and M. M. Crawford, “Domain adaptation with preservation of manifold geometry for hyperspectral image classification,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, pp. 543–555, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:10567335

  21. [21]

    Cross-dataset hyperspectral image classification based on adversarial domain adaptation,

    X. Ma, X. Mou, J. Wang, X. Liu, J. Geng, and H. Wang, “Cross-dataset hyperspectral image classification based on adversarial domain adaptation,”IEEE Transactions on Geoscience and Remote Sensing, vol. 59, pp. 4179–4190, 2021. [Online]. Available: https: //api.semanticscholar.org/CorpusID:226688043

  22. [22]

    Two-branch attention adversarial domain adaptation network for hyperspectral image classification,

    Y. Huang, J. Peng, W. SUN, N. Chen, Q. Du, Y. Ning, and H. Su, “Two-branch attention adversarial domain adaptation network for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–13, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:253250784

  23. [23]

    Cross-scene wetland mapping on hyperspectral remote sensing images using adversarial domain adaptation network,

    Y. Huang, J. Peng, N. Chen, W. SUN, Q. Du, K. Ren, and K. Huang, “Cross-scene wetland mapping on hyperspectral remote sensing images using adversarial domain adaptation network,”ISPRS Journal of Photogrammetry and Remote Sensing, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:260217897

  24. [24]

    Graph information aggregation cross-domain few-shot learning for hyperspectral image classification,

    Y. Zhang, W. Li, M. Zhang, S. Wang, R. Tao, and Q. Du, “Graph information aggregation cross-domain few-shot learning for hyperspectral image classification,”IEEE Transactions on Neural Networks and Learning Systems, vol. 35, pp. 1912–1925, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:250145582

  25. [25]

    Cross-domain few-shot learning via adaptive transformer networks,

    N. Paeedeh, M. Pratama, M. A. Ma’sum, W. Mayer, Z. Cao, and R. Kowlczyk, “Cross-domain few-shot learning via adaptive transformer networks,”Knowl. Based Syst., vol. 288, p. 111458, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:267211635

  26. [26]

    Scformer: Spectral coordinate transformer for cross-domain few-shot hyperspectral image classification,

    J. Li, Z. Zhang, R. Song, Y. Li, and Q. Du, “Scformer: Spectral coordinate transformer for cross-domain few-shot hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 33, pp. 840–855, 2024. [Online]. Available: https://api.semanticscholar.org/ CorpusID:266997240

  27. [27]

    Cross-domain meta- learning under dual-adjustment mode for few-shot hyperspectral image classification,

    L. Hu, W. He, L. Zhang, and H. Zhang, “Cross-domain meta- learning under dual-adjustment mode for few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023. [Online]. Available: https: //api.semanticscholar.org/CorpusID:263295977

  28. [28]

    Semantic guided prototype learning for cross-domain few-shot hyperspectral image classification,

    Y. Li, J. He, H. Liu, Y. Zhang, and Z. Li, “Semantic guided prototype learning for cross-domain few-shot hyperspectral image classification,” Expert Syst. Appl., vol. 260, p. 125453, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:272962267

  29. [29]

    Cross-domain few-shot learning for hyperspectral image classification based on global-to- local enhanced channel attention,

    Y. Dang, H. Li, B. Liu, and X. Zhang, “Cross-domain few-shot learning for hyperspectral image classification based on global-to- local enhanced channel attention,”IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1–5, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:275537834

  30. [30]

    Few-shot learning with prototype rectification for cross-domain hyperspectral image classification,

    A. Qin, C. Yuan, Q. Li, X. Luo, F. Yang, T. Song, and C. Gao, “Few-shot learning with prototype rectification for cross-domain hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:270532262

  31. [31]

    Cross-domain few-shot learning based on feature disentanglement for hyperspectral image classification,

    B. Qin, S. Feng, C. Zhao, W. Li, R. Tao, and W. Xiang, “Cross-domain few-shot learning based on feature disentanglement for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269012329

  32. [32]

    Cross- domain few-shot learning based on decoupled knowledge distillation for hyperspectral image classification,

    S. Feng, H. Zhang, B. Xi, C. Zhao, Y. Li, and J. Chanussot, “Cross- domain few-shot learning based on decoupled knowledge distillation for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:273251258

  33. [33]

    Convolutional transformer-based few-shot learning for cross-domain hyperspectral image classification,

    Y. Peng, Y. Liu, B. Tu, and Y. Zhang, “Convolutional transformer-based few-shot learning for cross-domain hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 1335–1349, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:255738700

  34. [34]

    Multi-branch feature transformation cross- domain few-shot learning for hyperspectral image classification,

    M. X. Shi and J. Ren, “Multi-branch feature transformation cross- domain few-shot learning for hyperspectral image classification,” Pattern Recognit., vol. 160, p. 111197, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:274168009

  35. [35]

    Spatial–spectral local domain adaption for cross domain few shot hyperspectral images classification,

    B. Wang, Y. Xu, Z. Wu, T. Zhan, and Z. Wei, “Spatial–spectral local domain adaption for cross domain few shot hyperspectral images classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022. [Online]. Available: https: //api.semanticscholar.org/CorpusID:252487046

  36. [36]

    Cross-domain self-taught network for few-shot hyperspectral image classification,

    M. Zhang, H. Liu, M. Gong, H. Li, Y. Wu, and X. Jiang, “Cross-domain self-taught network for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1– 19, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID: 258126500

  37. [37]

    Adaptive domain-adversarial few-shot learning for cross-domain hyperspectral image classification,

    Z. Ye, J. Wang, H. Liu, Y. Zhang, and W. Li, “Adaptive domain-adversarial few-shot learning for cross-domain hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–17, 2023. [Online]. Available: https: //api.semanticscholar.org/CorpusID:265347217

  38. [38]

    Spatial- spectral–semantic cross-domain few-shot learning for hyperspectral image classification,

    M. Cao, X. Zhang, J. Cheng, G. Zhao, W. Li, and X. Dong, “Spatial- spectral–semantic cross-domain few-shot learning for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https: //api.semanticscholar.org/CorpusID:271565009

  39. [39]

    Dual-branch domain adaptation few-shot learning for hyperspectral image classification,

    Z. Wang, S. Zhao, G. Zhao, and X. Song, “Dual-branch domain adaptation few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1– 16, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID: 267144982

  40. [40]

    Multilevel prototype alignment for cross- domain few-shot hyperspectral image classification,

    H. Liu, J. He, Y. Li, and Y. Bi, “Multilevel prototype alignment for cross- domain few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–15, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:274895813

  41. [41]

    Multi-gaussian prototype metric with dual optimization for cross-domain few-shot hyperspectral image classification,

    K. Shi, X. Zhang, H. yun Meng, C. Jia, and L. Jiao, “Multi-gaussian prototype metric with dual optimization for cross-domain few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–19, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:281107662

  42. [42]

    Fromintra-distinctiveness to inter-invariance: A cycle-resemblance few-shot transformation network for cross-domain hyperspectral image classification,

    Q.Zhu,H.Li,W.Deng,Q.Guan,andJ.Luo,“Fromintra-distinctiveness to inter-invariance: A cycle-resemblance few-shot transformation network for cross-domain hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1– JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 15 16, 2025. [Online]. Available: https://ap...

  43. [43]

    C. Shi, W. Liu, L. Fang, Z. You, Q. Miao, and C.-M. Pun, “Few-shot learning based on multilevel contrast for cross-domain hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–18, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:280810914

  44. [44]

    Z. Ye, J. Wang, T. Sun, J. Zhang, and W. Li, “Cross-domain few-shot learning based on graph convolution contrast for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:266953151

  45. [45]

    K.-K. Huang, H. tian Yuan, C.-X. Ren, Y.-E. Hou, J. li Duan, and Z. Yang, “Hyperspectral image classification via cross-domain few-shot learning with kernel triplet loss,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–18, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:265891548

  46. [46]

    S. Zhang, Z. Chen, D. Wang, and Z. J. Wang, “Cross-domain few-shot contrastive learning for hyperspectral images classification,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:254435164

  47. [47]

    Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. S. Lempitsky, “Domain-adversarial training of neural networks,” Journal of Machine Learning Research, vol. 17, no. 59, pp. 1–35, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:2871880


  49. [49]

    B. Chen, G. Zhang, T. Chen, M. Wang, J. Liu, Y. Wang, R. Zhang, S. Li, and B. Hu, “SpectralDINO: Dual mixture-of-subspaces low-rank adaptation for cross-domain hyperspectral image few-shot classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–14, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:281822400

  50. [50]

    D. Wang, M. Hu, Y. Jin, Y. Miao, J. Yang, Y. Xu, X. Qin, J. Ma, L. Sun, C. Li, C. Fu, H. Chen, C. Han, N. Yokoya, J. Zhang, M. Xu, L. Liu, L. Zhang, C. Wu, B. Du, D. Tao, and L.-Y. Zhang, “HyperSIGMA: Hyperspectral intelligence comprehension foundation model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, pp. 6427–6444, 2024. [On...

  51. [51]

    N. Paeedeh, M. Pratama, W. Mayer, J. Cao, and R. Kowalczyk, “Cross-domain few-shot learning with coalescent projections and latent space reservation,” ArXiv, vol. abs/2507.15243, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:280269504

  52. [52]

    Z. Wang, Z. Zhang, C.-Y. Lee, H. Zhang, R. Sun, X. Ren, G. Su, V. Perot, J. G. Dy, and T. Pfister, “Learning to prompt for continual learning,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 139–149, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:245218925

  53. [53]

    M. T. Furqon, M. Pratama, L. Liu, H. Habibullah, and K. Doğançay, “Mixup domain adaptations for dynamic remaining useful life predictions,” Knowledge-Based Systems, vol. 295, p. 111783, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269005807

  54. [54]

    Z. Mai, G. Hu, D. Chen, F. Shen, and H. T. Shen, “Metamixup: Learning adaptive interpolation policy of mixup with metalearning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, pp. 3050–3064, 2019.

  55. [55]

    A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” ArXiv, vol. abs/2010.11929, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:225039882

  56. [56]

    S. Basu, D. Massiceti, S. X. Hu, and S. Feizi, “Strong baselines for parameter efficient few-shot fine-tuning,” in AAAI Conference on Artificial Intelligence, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:257921197

  57. [57]

    D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, “Learning with local and global consistency,” in Neural Information Processing Systems, 2003. [Online]. Available: https://api.semanticscholar.org/CorpusID:508435

  58. [58]

    R. Zhu, X. Yu, and S. Li, “Progressive mix-up for few-shot supervised multi-source domain transfer,” in International Conference on Learning Representations, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:259298634

  59. [59]

    N. Yokoya and A. Iwasaki, “Airborne hyperspectral data over Chikusei,” Space Appl. Lab., Univ. Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05-27, vol. 5, no. 5, p. 5, 2016.

  60. [60]

    Z. Li, M. Liu, Y. Chen, Y. Xu, W. Li, and Q. Du, “Deep cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–18, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:233935479

  61. [61]

    L. Yu, X. Zhang, and K. Wang, “Few-shot learning framework based on classifier and domain adaptive alignment for hyperspectral classification,” Neurocomputing, vol. 647, p. 130726, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:279194098

  62. [62]

    Y. Cheng, W. Zhang, H. Wang, and X. Wang, “Causal meta-transfer learning for cross-domain few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–14, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:261331535

  63. [63]

    H. Wang, X. Wang, and Y. Cheng, “Graph meta transfer network for heterogeneous few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–12, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:255598678

  64. [64]

    B. Xi, J. Li, Y. Li, R. Song, D. Hong, and J. Chanussot, “Few-shot learning with class-covariance metric for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 31, pp. 5079–5092, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:251068999

  65. [65]

    Y. Wang, M. Liu, Y. Yang, Z. Li, Q. Du, Y. Chen, F. Li, and H. Yang, “Heterogeneous few-shot learning for hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:243299899

  66. [66]

    K.-K. Huang, C.-X. Ren, H. Liu, Z.-R. Lai, Y.-F. Yu, and D.-Q. Dai, “Hyperspectral image classification via discriminative convolutional neural network with an improved triplet loss,” Pattern Recognition, vol. 112, p. 107744, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:228834289

  67. [67]

    Z. Zhong, J. Li, Z. Luo, and M. A. Chapman, “Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, pp. 847–858, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:3212989

  68. [68]

    J. Zheng, Y. Feng, C. Bai, and J. Zhang, “Hyperspectral image classification using mixed convolutions and covariance pooling,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 522–534, 2020.

  69. [69]

    X. Zhang, S. Shang, X. Tang, J. Feng, and L. Jiao, “Spectral partitioning residual network with spatial attention mechanism for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2021.

  70. [70]

    Q. Liu, J. Peng, N. Chen, W. Sun, Y. Ning, and Q. Du, “Category-specific prototype self-refinement contrastive learning for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023.

  71. [71]

    C. Ding, Z. Deng, Y. Xu, M. Zheng, L. Zhang, Y. Cao, W. Wei, and Y. Zhang, “GLGAT-CFSL: Global-local graph attention network-based cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–19, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:270539347

  72. [72]

    R. Qin, M. Lv, C. Wang, Y. Wu, and H. Du, “Dual-alignment cross-domain few-shot learning with lightweight domain-specific attention for hyperspectral image classification,” Neurocomputing, vol. 647, p. 130722, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:279210585

  73. [73]

    K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. B. Girshick, “Masked autoencoders are scalable vision learners,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15979–15988, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:243985980