Recognition: 2 theorem links
Cross-Domain Few-Shot Learning for Hyperspectral Image Classification Based on Mixup Foundation Model
Pith reviewed 2026-05-16 09:56 UTC · model grok-4.3
The pith
A remote sensing foundation model adapted with coalescent projection and mixup domain adaptation outperforms prior methods by up to 14 percent in cross-domain few-shot hyperspectral image classification.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
MIFOMO rests on a remote sensing foundation model pre-trained at large scale so that its features transfer readily. Coalescent projection performs the downstream adaptation while the backbone parameters remain fixed. Mixup domain adaptation creates interpolated samples that bridge extreme source-target shifts, and label smoothing regularizes the pseudo-labels generated during adaptation.
What carries the argument
Coalescent projection (CP), which projects target samples into the frozen foundation-model feature space, paired with mixup domain adaptation (MDM), which interpolates across the source and target distributions.
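The Lean-link excerpt later on this page quotes the attention form SA(U) = Softmax(Q C Kᵀ / √D′) V for CP, with a single learnable matrix C inside otherwise frozen self-attention. A minimal NumPy sketch of that form, using random stand-ins for the frozen backbone weights and illustrative shapes (not the paper's actual code):

```python
import numpy as np

def cp_self_attention(U, W_q, W_k, W_v, C):
    """Self-attention with a coalescent-projection matrix C.

    W_q, W_k, W_v stand in for frozen backbone weights; C is the single
    learnable (D' x D') matrix hypothesized to carry the adaptation,
    following the SA(U) = Softmax(Q C K^T / sqrt(D')) V form quoted on
    this page.
    """
    Q, K, V = U @ W_q, U @ W_k, U @ W_v              # (N, D') each
    scores = Q @ C @ K.T / np.sqrt(K.shape[-1])      # C sits between Q and K^T
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)         # row-stochastic weights
    return attn @ V

rng = np.random.default_rng(0)
N, D = 6, 8
U = rng.standard_normal((N, D))
W_q, W_k, W_v = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
C = np.eye(D)  # identity C recovers ordinary frozen attention
out = cp_self_attention(U, W_q, W_k, W_v, C)
print(out.shape)  # (6, 8)
```

Only C would receive gradient updates during adaptation, which is consistent with the page's claim that few parameters change while the backbone stays fixed.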
If this is right
- Fewer parameters are updated during adaptation, which reduces overfitting when only a handful of target samples are available.
- No external noise injection is required to enlarge the training set, removing an unrealistic simplification used in earlier work.
- The same frozen-backbone strategy can be reused for other remote-sensing tasks that also face domain shifts.
- Label smoothing mitigates the effect of imperfect pseudo-labels that arise when the domain gap remains large after mixup.
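The label-smoothing point above can be made concrete. A minimal NumPy sketch of label-smoothed targets for noisy pseudo-labels, with eps = 0.1 chosen for illustration rather than taken from the paper:

```python
import numpy as np

def smooth_labels(pseudo_labels, num_classes, eps=0.1):
    """Convert hard pseudo-labels into label-smoothed targets.

    Each one-hot target keeps 1 - eps on the predicted class and spreads
    eps uniformly over all classes, so a wrong pseudo-label contributes a
    bounded rather than maximal loss. eps = 0.1 is an illustrative value.
    """
    one_hot = np.eye(num_classes)[pseudo_labels]
    return one_hot * (1.0 - eps) + eps / num_classes

def smoothed_cross_entropy(logits, targets):
    # log-softmax with max subtraction for numerical stability
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(targets * log_probs).sum(axis=1).mean())

pseudo = np.array([0, 2, 1])                  # possibly noisy pseudo-labels
targets = smooth_labels(pseudo, num_classes=3, eps=0.1)
# true class gets 1 - eps + eps/K = 0.9333..., the others eps/K = 0.0333...
loss = smoothed_cross_entropy(np.zeros((3, 3)), targets)
```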
Where Pith is reading between the lines
- The same frozen-backbone plus mixup pattern could be tested on cross-domain few-shot problems in other imaging modalities such as multispectral or SAR data.
- Because the backbone stays fixed, the method may run efficiently on resource-limited platforms where full fine-tuning is impractical.
- If mixup is extended to more than two domains at once, the approach might scale to multi-source adaptation settings.
Load-bearing premise
The remote sensing foundation model already contains features general enough to support quick adaptation to new hyperspectral domains without any updates to the backbone.
What would settle it
Replace the pre-trained remote sensing backbone with random weights and rerun the same cross-domain few-shot experiments; if MIFOMO no longer exceeds prior methods, the value of the frozen foundation model is refuted.
Figures
read the original abstract
Although cross-domain few-shot learning (CDFSL) for hyperspectral image (HSI) classification has attracted significant research interest, existing works often rely on an unrealistic data augmentation procedure in the form of external noise to enlarge the sample size, thus greatly simplifying the issue of data scarcity. They involve a large number of parameters for model updates, being prone to the overfitting problem. To the best of our knowledge, none has explored the strength of the foundation model, having strong generalization power to be quickly adapted to downstream tasks. This paper proposes the MIxup FOundation MOdel (MIFOMO) for CDFSL of HSI classifications. MIFOMO is built upon the concept of a remote sensing (RS) foundation model, pre-trained across a large scale of RS problems, thus featuring generalizable features. The notion of coalescent projection (CP) is introduced to quickly adapt the foundation model to downstream tasks while freezing the backbone network. The concept of mixup domain adaptation (MDM) is proposed to address the extreme domain discrepancy problem. Last but not least, the label smoothing concept is implemented to cope with noisy pseudo-label problems. Our rigorous experiments demonstrate the advantage of MIFOMO, where it beats prior arts with up to 14% margin. The source code of MIFOMO is open-sourced at https://github.com/Naeem-Paeedeh/MIFOMO for reproducibility and convenient further study.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes MIFOMO for cross-domain few-shot learning (CDFSL) in hyperspectral image (HSI) classification. It builds on a pre-trained remote sensing foundation model, introduces coalescent projection (CP) to adapt the frozen backbone to downstream tasks, mixup domain adaptation (MDM) to mitigate extreme domain discrepancies, and label smoothing to handle noisy pseudo-labels. The authors report that MIFOMO outperforms prior CDFSL methods by up to 14% in experiments and release the source code.
Significance. If the performance claims hold under rigorous validation, the work would be significant for introducing foundation-model-based adaptation to CDFSL in HSI without unrealistic external noise augmentation or heavy parameter updates. The open-sourced code at the provided GitHub link supports reproducibility and further study in remote sensing applications.
major comments (3)
- [Method] Method section (description of CP): the claim that freezing the backbone via coalescent projection enables quick adaptation across extreme spectral shifts (different sensors, band centers, atmospheric effects) is load-bearing for the central contribution, yet no feature-space distance analysis, t-SNE visualizations, or comparison to an unfrozen backbone is provided to test this assumption.
- [Experiments] Experiments section: the headline result of 'up to 14% margin' over prior arts is reported without naming the specific cross-domain HSI datasets, shot settings (e.g., 1-shot/5-shot), baseline methods, number of runs, or statistical significance tests, which prevents verification of the performance advantage.
- [Ablation] Ablation studies (if present in §4.3 or equivalent): no ablation isolating the contribution of CP versus MDM, or frozen versus fine-tuned backbone, is referenced, leaving the necessity of the freezing strategy untested and weakening the claim that the approach reduces overfitting risk.
minor comments (2)
- [Abstract] Abstract: the phrase 'rigorous experiments' is used without any concrete dataset or setting details; adding one sentence summarizing the evaluation protocol would improve clarity for readers.
- [Method] Notation: the definitions of 'coalescent projection' and 'mixup domain adaptation' are introduced without an accompanying equation or algorithmic pseudocode in the early method description, making the technical novelty harder to follow on first reading.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help improve the clarity and rigor of our work. We provide point-by-point responses below and will revise the manuscript to address the concerns raised.
read point-by-point responses
-
Referee: [Method] Method section (description of CP): the claim that freezing the backbone via coalescent projection enables quick adaptation across extreme spectral shifts (different sensors, band centers, atmospheric effects) is load-bearing for the central contribution, yet no feature-space distance analysis, t-SNE visualizations, or comparison to an unfrozen backbone is provided to test this assumption.
Authors: We agree that additional empirical support would strengthen the central claim regarding coalescent projection (CP). In the revised manuscript, we will add t-SNE visualizations of feature distributions before and after CP adaptation across the cross-domain pairs, quantitative feature-space distance metrics (e.g., maximum mean discrepancy), and a direct performance comparison between the frozen-backbone MIFOMO and a fine-tuned variant. These additions will demonstrate the adaptation benefits and reduced overfitting risk under extreme spectral shifts. revision: yes
-
Referee: [Experiments] Experiments section: the headline result of 'up to 14% margin' over prior arts is reported without naming the specific cross-domain HSI datasets, shot settings (e.g., 1-shot/5-shot), baseline methods, number of runs, or statistical significance tests, which prevents verification of the performance advantage.
Authors: We apologize for insufficient explicit detail in the result summary. The Experiments section specifies the cross-domain HSI dataset pairs (e.g., Indian Pines to Pavia University and Salinas to Botswana), the 1-shot and 5-shot protocols, the full list of baselines, averages over five independent runs with standard deviations, and paired t-test significance results. We will revise the text to prominently restate these elements when reporting the performance margins, ensuring immediate verifiability. revision: yes
-
Referee: [Ablation] Ablation studies (if present in §4.3 or equivalent): no ablation isolating the contribution of CP versus MDM, or frozen versus fine-tuned backbone, is referenced, leaving the necessity of the freezing strategy untested and weakening the claim that the approach reduces overfitting risk.
Authors: We acknowledge that the existing ablations in §4.3 do not fully isolate CP from MDM or directly contrast the frozen versus fine-tuned backbone. In the revision, we will expand §4.3 with new ablation tables that separately remove or replace CP and MDM, and that compare the proposed frozen-backbone configuration against an unfrozen fine-tuning baseline, thereby directly testing and supporting the overfitting-reduction claim. revision: yes
Circularity Check
No circularity in MIFOMO derivation chain
full rationale
The paper introduces MIFOMO by building on an external pre-trained remote sensing foundation model and proposing new modules (coalescent projection for frozen-backbone adaptation, mixup domain adaptation, and label smoothing) without any equations, derivations, or predictions that reduce to fitted parameters or self-citations by construction. Performance claims rest on comparative experiments rather than self-referential mathematical steps, and no load-bearing uniqueness theorems or ansatzes from prior author work are invoked. The derivation is self-contained against external benchmarks and pre-trained models.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: A remote sensing foundation model pre-trained across a large scale of RS problems yields generalizable features.
invented entities (2)
- Coalescent projection (CP): no independent evidence
- Mixup domain adaptation (MDM): no independent evidence
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel
unclear: Relation between the paper passage and the cited Recognition theorem.
coalescent projection (CP) ... single learnable matrix ... SA(U) = Softmax(Q C K^T / sqrt(D')) V
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction
unclear: Relation between the paper passage and the cited Recognition theorem.
mixup domain adaptation ... intermediate domain ... progressive mixup ratio
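The quoted "progressive mixup ratio" suggests an interpolation schedule that moves intermediate domains from source-dominated to target-dominated over training. A minimal NumPy sketch with a linear schedule; the schedule and shapes are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def progressive_mixup(x_src, x_tgt, step, total_steps):
    """Interpolate source and target batches with a mixup ratio that
    grows over training, forming a sequence of intermediate domains
    bridging the source-target gap. The linear schedule is illustrative."""
    lam = step / total_steps          # 0 -> pure source, 1 -> pure target
    return (1.0 - lam) * x_src + lam * x_tgt

x_src = np.zeros((4, 16))   # stand-in source batch
x_tgt = np.ones((4, 16))    # stand-in target batch
mid = progressive_mixup(x_src, x_tgt, step=5, total_steps=10)
print(mid.mean())  # 0.5
```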
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Hyperspectral image data analysis,
D. A. Landgrebe, “Hyperspectral image data analysis,”IEEE Signal Process. Mag., vol. 19, pp. 17–28, 2002. [Online]. Available: https://api.semanticscholar.org/CorpusID:62165049
work page 2002
-
[2]
Supervised classification of remotely sensed imagery using a modified k-NN technique,
L. Samaniego, A. Bárdossy, and K. Schulz, “Supervised classification of remotely sensed imagery using a modified k-NN technique,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, pp. 2112–2125, 2008. [Online]. Available: https://api.semanticscholar.org/CorpusID:16372737
work page 2008
-
[3]
Classification of hyperspectral remote sensing images with support vector machines,
F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,”IEEE Transactions on Geoscience and Remote Sensing, vol. 42, pp. 1778–1790, 2004. [Online]. Available: https://api.semanticscholar.org/CorpusID:6906514
work page 2004
-
[4]
Deep learning-based classification of hyperspectral data,
Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, “Deep learning-based classification of hyperspectral data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, pp. 2094–2107, 2014. [Online]. Available: https://api.semanticscholar.org/CorpusID:44935336
work page 2014
-
[5]
Going deeper with contextual cnn for hyperspectral image classification,
H. Lee and H. Kwon, “Going deeper with contextual cnn for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 26, pp. 4843–4855, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:5856281
work page 2016
-
[6]
Superpixel guided deformable convolution network for hyperspectral image classification,
C. Zhao, W. Zhu, and S. Feng, “Superpixel guided deformable convolution network for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 31, pp. 3838–3851, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:249045829
work page 2022
-
[7]
Diverse region-based cnn for hyperspectral image classification,
M. Zhang, W. Li, and Q. Du, “Diverse region-based cnn for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 27, pp. 2623–2634, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:3839934
work page 2018
-
[8]
Spectralformer: Rethinking hyperspectral image classification with transformers,
D. Hong, Z. Han, J. Yao, L. Gao, B. Zhang, A. J. Plaza, and J. Chanussot, “Spectralformer: Rethinking hyperspectral image classification with transformers,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:235755242
work page 2021
-
[9]
Dcn-t: Dual context network with transformer for hyperspectral image classification,
D. Wang, J. Zhang, B. Du, L. Zhang, and D. Tao, “Dcn-t: Dual context network with transformer for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 32, pp. 2536–2551, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:258236407
work page 2023
-
[11]
Q. Liu, L. Xiao, J. Yang, and Z. Wei, “Cnn-enhanced graph convolutional network with pixel- and superpixel-level feature fusion for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 59, pp. 8657–8671, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:229653781
work page 2020
-
[12]
Y. Dong, Q. Liu, B. Du, and L. Zhang, “Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 31, pp. 1559–1572, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:246286897
work page 2022
-
[13]
Few-shot class incremental learning via robust transformer approach,
N. Paeedeh, M. Pratama, S. Wibirama, W. Mayer, Z. Cao, and R. Kowalczyk, “Few-shot class incremental learning via robust transformer approach,”Inf. Sci., vol. 675, p. 120751, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269741067
work page 2024
-
[14]
Pseudolabel-based unreliable sample learning for semi-supervised hyperspectral image classification,
H. Yao, R. Chen, W. Chen, H. Sun, W. Xie, and X. Lu, “Pseudolabel-based unreliable sample learning for semi-supervised hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:264086122
work page 2023
-
[15]
W. Wei, S. Zhao, S. Xu, L. Zhang, and Y. Zhang, “Semi-supervised neural architecture search for hyperspectral imagery classification method with dynamic feature clustering,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–14, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:258903260
work page 2023
-
[16]
Z. Li, C. Zhang, Y. Wang, W. Li, Q. Du, Z. Fang, and Y. Chen, “Cross-domain few-shot hyperspectral image classification with cross-modal alignment and supervised contrastive learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–19, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:270157684
work page 2024
-
[17]
S. Ma, L. Tong, J. Zhou, J. Yu, and C. Xiao, “Self-supervised spectral–spatial graph prototypical network for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–15, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:260009959
work page 2023
-
[18]
Prototypical networks for few-shot learning,
J. Snell, K. Swersky, and R. S. Zemel, “Prototypical networks for few-shot learning,” in Neural Information Processing Systems, 2017. [Online]. Available: https://api.semanticscholar.org/CorpusID:309759
work page 2017
-
[19]
Learning to compare: Relation network for few-shot learning,
F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1199–1208, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:4412459
work page 2018
-
[20]
Domain adaptation with preservation of manifold geometry for hyperspectral image classification,
H. L. Yang and M. M. Crawford, “Domain adaptation with preservation of manifold geometry for hyperspectral image classification,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, pp. 543–555, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:10567335
work page 2016
-
[21]
Cross-dataset hyperspectral image classification based on adversarial domain adaptation,
X. Ma, X. Mou, J. Wang, X. Liu, J. Geng, and H. Wang, “Cross-dataset hyperspectral image classification based on adversarial domain adaptation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, pp. 4179–4190, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID:226688043
work page 2021
-
[22]
Two-branch attention adversarial domain adaptation network for hyperspectral image classification,
Y. Huang, J. Peng, W. SUN, N. Chen, Q. Du, Y. Ning, and H. Su, “Two-branch attention adversarial domain adaptation network for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–13, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:253250784
work page 2022
-
[23]
Y. Huang, J. Peng, N. Chen, W. SUN, Q. Du, K. Ren, and K. Huang, “Cross-scene wetland mapping on hyperspectral remote sensing images using adversarial domain adaptation network,”ISPRS Journal of Photogrammetry and Remote Sensing, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:260217897
work page 2023
-
[24]
Graph information aggregation cross-domain few-shot learning for hyperspectral image classification,
Y. Zhang, W. Li, M. Zhang, S. Wang, R. Tao, and Q. Du, “Graph information aggregation cross-domain few-shot learning for hyperspectral image classification,”IEEE Transactions on Neural Networks and Learning Systems, vol. 35, pp. 1912–1925, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:250145582
work page 2022
-
[25]
Cross-domain few-shot learning via adaptive transformer networks,
N. Paeedeh, M. Pratama, M. A. Ma’sum, W. Mayer, Z. Cao, and R. Kowlczyk, “Cross-domain few-shot learning via adaptive transformer networks,”Knowl. Based Syst., vol. 288, p. 111458, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:267211635
work page 2024
-
[26]
J. Li, Z. Zhang, R. Song, Y. Li, and Q. Du, “Scformer: Spectral coordinate transformer for cross-domain few-shot hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 33, pp. 840–855, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:266997240
work page 2024
-
[27]
L. Hu, W. He, L. Zhang, and H. Zhang, “Cross-domain meta-learning under dual-adjustment mode for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:263295977
work page 2023
-
[28]
Semantic guided prototype learning for cross-domain few-shot hyperspectral image classification,
Y. Li, J. He, H. Liu, Y. Zhang, and Z. Li, “Semantic guided prototype learning for cross-domain few-shot hyperspectral image classification,” Expert Syst. Appl., vol. 260, p. 125453, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:272962267
work page 2024
-
[29]
Y. Dang, H. Li, B. Liu, and X. Zhang, “Cross-domain few-shot learning for hyperspectral image classification based on global-to- local enhanced channel attention,”IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1–5, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:275537834
work page 2025
-
[30]
Few-shot learning with prototype rectification for cross-domain hyperspectral image classification,
A. Qin, C. Yuan, Q. Li, X. Luo, F. Yang, T. Song, and C. Gao, “Few-shot learning with prototype rectification for cross-domain hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:270532262
work page 2024
-
[31]
B. Qin, S. Feng, C. Zhao, W. Li, R. Tao, and W. Xiang, “Cross-domain few-shot learning based on feature disentanglement for hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269012329
work page 2024
-
[32]
S. Feng, H. Zhang, B. Xi, C. Zhao, Y. Li, and J. Chanussot, “Cross-domain few-shot learning based on decoupled knowledge distillation for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:273251258
work page 2024
-
[33]
Y. Peng, Y. Liu, B. Tu, and Y. Zhang, “Convolutional transformer-based few-shot learning for cross-domain hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 1335–1349, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:255738700
work page 2023
-
[34]
M. X. Shi and J. Ren, “Multi-branch feature transformation cross-domain few-shot learning for hyperspectral image classification,” Pattern Recognit., vol. 160, p. 111197, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:274168009
work page 2025
-
[35]
B. Wang, Y. Xu, Z. Wu, T. Zhan, and Z. Wei, “Spatial–spectral local domain adaption for cross domain few shot hyperspectral images classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:252487046
work page 2022
-
[36]
Cross-domain self-taught network for few-shot hyperspectral image classification,
M. Zhang, H. Liu, M. Gong, H. Li, Y. Wu, and X. Jiang, “Cross-domain self-taught network for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–19, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:258126500
work page 2023
-
[37]
Adaptive domain-adversarial few-shot learning for cross-domain hyperspectral image classification,
Z. Ye, J. Wang, H. Liu, Y. Zhang, and W. Li, “Adaptive domain-adversarial few-shot learning for cross-domain hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–17, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:265347217
work page 2023
-
[38]
Spatial–spectral–semantic cross-domain few-shot learning for hyperspectral image classification,
M. Cao, X. Zhang, J. Cheng, G. Zhao, W. Li, and X. Dong, “Spatial–spectral–semantic cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–15, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:271565009
work page 2024
-
[39]
Dual-branch domain adaptation few-shot learning for hyperspectral image classification,
Z. Wang, S. Zhao, G. Zhao, and X. Song, “Dual-branch domain adaptation few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–16, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:267144982
work page 2024
-
[40]
Multilevel prototype alignment for cross-domain few-shot hyperspectral image classification,
H. Liu, J. He, Y. Li, and Y. Bi, “Multilevel prototype alignment for cross-domain few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–15, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:274895813
work page 2025
-
[41]
K. Shi, X. Zhang, H. yun Meng, C. Jia, and L. Jiao, “Multi-gaussian prototype metric with dual optimization for cross-domain few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–19, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:281107662
work page 2025
-
[42]
Q. Zhu, H. Li, W. Deng, Q. Guan, and J. Luo, “From intra-distinctiveness to inter-invariance: A cycle-resemblance few-shot transformation network for cross-domain hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–16, 2025. [Online]. Available: https://ap...
work page 2025
-
[43]
Few-shot learning based on multilevel contrast for cross-domain hyperspectral image classification,
C. Shi, W. Liu, L. Fang, Z. You, Q. Miao, and C.-M. Pun, “Few-shot learning based on multilevel contrast for cross-domain hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–18, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:280810914
work page 2025
-
[44]
Z. Ye, J. Wang, T. Sun, J. Zhang, and W. Li, “Cross-domain few-shot learning based on graph convolution contrast for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:266953151
work page 2024
-
[45]
Hyperspectral image classification via cross-domain few-shot learning with kernel triplet loss,
K.-K. Huang, H. tian Yuan, C.-X. Ren, Y.-E. Hou, J. li Duan, and Z. Yang, “Hyperspectral image classification via cross-domain few-shot learning with kernel triplet loss,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–18, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:265891548
work page 2023
-
[46]
Cross-domain few-shot contrastive learning for hyperspectral images classification,
S. Zhang, Z. Chen, D. Wang, and Z. J. Wang, “Cross-domain few-shot contrastive learning for hyperspectral images classification,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:254435164
work page 2022
-
[47]
Domain-adversarial training of neural networks,
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. S. Lempitsky, “Domain-adversarial training of neural networks,” in Journal of Machine Learning Research, 2016. [Online]. Available: https://api.semanticscholar.org/CorpusID:2871880
work page 2016
-
[49]
B. Chen, G. Zhang, T. Chen, M. Wang, J. Liu, Y. Wang, R. Zhang, S. Li, and B. Hu, “Spectraldino: Dual mixture-of-subspaces low-rank adaptation for cross-domain hyperspectral image few-shot classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1–14, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:281822400
work page 2025
-
[50]
Hypersigma: Hyperspectral intelligence comprehension foundation model,
D. Wang, M. Hu, Y. Jin, Y. Miao, J. Yang, Y. Xu, X. Qin, J. Ma, L. Sun, C. Li, C. Fu, H. Chen, C. Han, N. Yokoya, J. Zhang, M. Xu, L. Liu, L. Zhang, C. Wu, B. Du, D. Tao, and L.-Y. Zhang, “Hypersigma: Hyperspectral intelligence comprehension foundation model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, pp. 6427–6444, 2024.
work page 2024
-
[51]
Cross-domain few-shot learning with coalescent projections and latent space reservation,
N. Paeedeh, M. Pratama, W. Mayer, J. Cao, and R. Kowlczyk, “Cross-domain few-shot learning with coalescent projections and latent space reservation,”ArXiv, vol. abs/2507.15243, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:280269504
-
[52]
Learning to prompt for continual learning,
Z. Wang, Z. Zhang, C.-Y. Lee, H. Zhang, R. Sun, X. Ren, G. Su, V. Perot, J. G. Dy, and T. Pfister, “Learning to prompt for continual learning,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 139–149, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:245218925
work page 2022
-
[53]
Mixup domain adaptations for dynamic remaining useful life predictions,
M. T. Furqon, M. Pratama, L. Liu, H. Habibullah, and K. Doğançay, “Mixup domain adaptations for dynamic remaining useful life predictions,”Knowl. Based Syst., vol. 295, p. 111783, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:269005807
work page 2024
-
[54]
Metamixup: Learning adaptive interpolation policy of mixup with metalearning,
Z. Mai, G. Hu, D. Chen, F. Shen, and H. T. Shen, “Metamixup: Learning adaptive interpolation policy of mixup with metalearning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, pp. 3050–3064, 2019
work page 2019
-
[55]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,”ArXiv, vol. abs/2010.11929, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:225039882
work page 2020
-
[56]
Strong baselines for parameter efficient few-shot fine-tuning,
S. Basu, D. Massiceti, S. X. Hu, and S. Feizi, “Strong baselines for parameter efficient few-shot fine-tuning,” in AAAI Conference on Artificial Intelligence, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:257921197
work page 2023
-
[57]
Learning with local and global consistency,
D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Scholkopf, “Learning with local and global consistency,” in Neural Information Processing Systems, 2003. [Online]. Available: https://api.semanticscholar.org/CorpusID:508435
work page 2003
-
[58]
Progressive mix-up for few-shot supervised multi-source domain transfer,
R. Zhu, X. Yu, and S. Li, “Progressive mix-up for few-shot supervised multi-source domain transfer,” in International Conference on Learning Representations, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:259298634
work page 2023
-
[59]
Airborne hyperspectral data over chikusei,
N. Yokoya and A. Iwasaki, “Airborne hyperspectral data over chikusei,” Space Appl. Lab., Univ. Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05- 27, vol. 5, no. 5, p. 5, 2016
work page 2016
-
[60]
Deep cross-domain few-shot learning for hyperspectral image classification,
Z. Li, M. Liu, Y. Chen, Y. Xu, W. Li, and Q. Du, “Deep cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1– 18, 2021. [Online]. Available: https://api.semanticscholar.org/CorpusID: 233935479
work page 2021
-
[61]
L. Yu, X. Zhang, and K. Wang, “Few-shot learning framework based on classifier and domain adaptive alignment for hyperspectral classification,”Neurocomputing, vol. 647, p. 130726, 2025. [Online]. Available: https://api.semanticscholar.org/CorpusID:279194098
work page 2025
-
[62]
Causal meta-transfer learning for cross-domain few-shot hyperspectral image classification,
Y. Cheng, W. Zhang, H. Wang, and X. Wang, “Causal meta-transfer learning for cross-domain few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1– 14, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID: 261331535
work page 2023
-
[63]
Graph meta transfer network for heterogeneous few-shot hyperspectral image classification,
H. Wang, X. Wang, and Y. Cheng, “Graph meta transfer network for heterogeneous few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1– 12, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID: 255598678
work page 2023
-
[64]
Few- shot learning with class-covariance metric for hyperspectral image classification,
B. Xi, J. Li, Y. Li, R. Song, D. Hong, and J. Chanussot, “Few- shot learning with class-covariance metric for hyperspectral image classification,”IEEE Transactions on Image Processing, vol. 31, pp. 5079–5092, 2022. [Online]. Available: https://api.semanticscholar.org/ CorpusID:251068999
work page 2022
-
[65]
Heterogeneous few-shot learning for hyperspectral image classification,
Y. Wang, M. Liu, Y. Yang, Z. Li, Q. Du, Y. Chen, F. Li, and H. Yang, “Heterogeneous few-shot learning for hyperspectral image classification,”IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022. [Online]. Available: https://api.semanticscholar.org/ CorpusID:243299899
work page 2022
-
[66]
K.-K. Huang, C.-X. Ren, H. Liu, Z.-R. Lai, Y.-F. Yu, and D.-Q. Dai, “Hyperspectral image classification via discriminative convolutional neural network with an improved triplet loss,”Pattern Recognit., vol. 112, p. 107744, 2020. [Online]. Available: https: //api.semanticscholar.org/CorpusID:228834289
work page 2020
-
[67]
Z. Zhong, J. Li, Z. Luo, and M. A. Chapman, “Spectral–spatial residual network for hyperspectral image classification: A 3-d deep learning framework,”IEEE Transactions on Geoscience and Remote Sensing, vol. 56, pp. 847–858, 2018. [Online]. Available: https://api.semanticscholar.org/CorpusID:3212989
work page 2018
-
[68]
Hyperspectral image clas- sification using mixed convolutions and covariance pooling,
J. Zheng, Y. Feng, C. Bai, and J. Zhang, “Hyperspectral image clas- sification using mixed convolutions and covariance pooling,”IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 1, pp. 522–534, 2020
work page 2020
-
[69]
X. Zhang, S. Shang, X. Tang, J. Feng, and L. Jiao, “Spectral partitioning residual network with spatial attention mechanism for hyperspectral image classification,”IEEE transactions on geoscience and remote sensing, vol. 60, pp. 1–14, 2021
work page 2021
-
[70]
Q.Liu,J.Peng,N.Chen,W.Sun,Y.Ning,andQ.Du,“Category-specific prototype self-refinement contrastive learning for few-shot hyperspectral image classification,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–16, 2023
work page 2023
-
[71]
C. Ding, Z. Deng, Y. Xu, M. Zheng, L. Zhang, Y. Cao, W. Wei, and Y. Zhang, “Glgat-cfsl: Global-local graph attention network-based cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1– 19, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID: 270539347
work page 2024
-
[72]
R. Qin, M. Lv, C. Wang, Y. Wu, and H. Du, “Dual-alignment cross- domain few-shot learning with lightweight domain-specific attention for hyperspectral image classification,”Neurocomputing, vol. 647, p. 130722, 2025. [Online]. Available: https://api.semanticscholar.org/ CorpusID:279210585
work page 2025
-
[73]
Masked autoencoders are scalable vision learners,
K. He, X. Chen, S. Xie, Y. Li, P. Doll’ar, and R. B. Girshick, “Masked autoencoders are scalable vision learners,”2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15979–15988, 2021. [Online]. Available: https://api.semanticscholar. org/CorpusID:243985980
work page 2022