SSDA: Bridging Spectral and Structural Gaps via Dual Adaptation for Vision-Based Time Series Forecasting
Pith reviewed 2026-05-14 21:36 UTC · model grok-4.3
The pith
SSDA adapts pre-trained vision models for time series by closing spectral and structural gaps in rendered images.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Large vision models can forecast time series when the data is rendered as images, yet spectral and structural gaps still separate those images from the natural images the models were pre-trained on. The spectral gap appears as a markedly shallower power spectrum; the structural gap arises because 1D sequences reshaped into 2D grids create spurious spatial adjacencies and break genuine temporal order. SSDA closes both gaps through a dual-branch architecture: the Spectral Magnitude Aligner applies a 2D FFT to selectively enhance magnitude spectra while preserving phase, and Structural-Guided Low-Rank Adaptation injects position-aware temporal encodings into patch embeddings and performs low-rank updates on attention; the two branches are adaptively fused to produce the final forecast.
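To make the data-level operation concrete, here is a minimal NumPy sketch of magnitude-only spectral alignment, assuming a simple global 1/f target and a scalar blend weight; the paper's SMA is selective and presumably learned, so `target_slope` and `blend` are illustrative assumptions, not its actual parameters.

```python
import numpy as np

def spectral_magnitude_align(img, target_slope=-1.0, blend=0.5):
    """Hypothetical SMA sketch: pull the magnitude spectrum toward a
    1/f-like power law while leaving the phase spectrum untouched."""
    F = np.fft.fft2(img)
    mag, phase = np.abs(F), np.angle(F)

    # Radial frequency of every FFT bin (DC set to a small nonzero value).
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)
    radius[0, 0] = 1.0 / max(h, w)

    # Target magnitude |F| ~ f^target_slope, rescaled to the image's energy.
    target = radius ** target_slope
    target *= mag.sum() / target.sum()

    # Blend magnitudes; recombine with the ORIGINAL phase.
    new_mag = (1.0 - blend) * mag + blend * target
    return np.real(np.fft.ifft2(new_mag * np.exp(1j * phase)))
```

Because only the magnitude is blended, inverting with the original phase preserves the spatial arrangement of edges, which is where the rendered line's temporal ordering is encoded.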
What carries the argument
A dual-branch network whose Spectral Magnitude Aligner (SMA) realigns image spectra toward natural-image statistics and whose Structural-Guided Low-Rank Adaptation (SG-LoRA) injects temporal encodings and adapts attention via low-rank updates.
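A minimal PyTorch sketch of the model-level half, assuming the adapted weight is a single frozen query projection and the temporal encoding is a learned per-patch vector; the rank, the injection point, and which projections are adapted are assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class SGLoRASketch(nn.Module):
    """Hypothetical SG-LoRA sketch: add position-aware temporal encodings
    to patch embeddings, then adapt a frozen projection with a low-rank update."""
    def __init__(self, dim: int, n_patches: int, rank: int = 8):
        super().__init__()
        self.frozen_q = nn.Linear(dim, dim, bias=False)
        self.frozen_q.weight.requires_grad_(False)  # pre-trained weight stays fixed
        # Low-rank factors: the effective update is B @ A, with rank << dim.
        self.lora_A = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(dim, rank))
        # One learnable vector per patch position, restoring the 1D order
        # that the 2D reshaping obscures.
        self.temporal_pe = nn.Parameter(torch.zeros(1, n_patches, dim))

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        x = patches + self.temporal_pe                 # structural guidance
        delta = x @ self.lora_A.T @ self.lora_B.T      # low-rank adaptation path
        return self.frozen_q(x) + delta
```

Initializing `lora_B` to zero means the adapted model starts exactly at the pre-trained behavior, the standard LoRA trick for stable fine-tuning.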
If this is right
- SSDA outperforms both LVM- and LLM-based baselines on seven real-world time series benchmarks under full-shot and few-shot regimes.
- The dual adaptation works at both data level (spectrum alignment) and model level (temporal-aware low-rank updates) without requiring full retraining.
- Adaptive fusion of the spectral and structural branches yields the final forecast (a gated sketch follows this list).
- The method remains effective when only limited labeled time series data is available.
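For the fusion bullet above, a minimal sketch, assuming a sigmoid gate over the concatenated branch outputs; the paper does not spell out its fusion rule here, so this parameterization is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Hypothetical gate that blends the spectral and structural branch
    forecasts; the paper's actual fusion rule may differ."""
    def __init__(self, horizon: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * horizon, horizon), nn.Sigmoid())

    def forward(self, y_spec: torch.Tensor, y_struct: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([y_spec, y_struct], dim=-1))  # per-step weight in (0, 1)
        return g * y_spec + (1.0 - g) * y_struct
```

A per-step gate lets the model lean on the spectral branch where frequency structure dominates and on the structural branch elsewhere.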
Where Pith is reading between the lines
- Similar spectral and structural mismatches may appear when other 1D signals are rendered as images for vision models, suggesting the same dual-adaptation pattern could apply to audio or sensor data.
- If the phase-preserving property of SMA proves robust, it may allow further spectrum manipulations without destroying temporal ordering information.
- Testing whether SG-LoRA's position encodings remain beneficial when the underlying vision backbone changes would clarify how general the structural fix is.
Load-bearing premise
The gaps identified in spectrum and spatial structure are the main factors limiting transfer from natural-image pre-training.
What would settle it
Running the same LVM backbones with and without SMA plus SG-LoRA on the seven benchmarks and finding no consistent accuracy gain would falsify the claim that closing these gaps is what unlocks better forecasts.
Original abstract
Large vision models (LVMs) have recently proven to be surprisingly effective time series forecasters, simply by rendering temporal data as images. This success, however, rests on a largely unexamined premise: the rendered time series images are sufficiently close to natural images for knowledge in pre-trained models to transfer effectively. We argue that two gaps still remain, i.e., spectral and structural gaps, fundamentally limiting the potential of LVMs for time series forecasting. Spectrally, we systematically reveal that rendered time series images exhibit a markedly shallower power spectrum than the natural images LVMs are pre-trained to recognize. Structurally, reshaping 1D temporal sequences into 2D grids fabricates spurious spatial adjacencies while severing genuine temporal continuities, misleading the spatial inductive biases of pre-trained LVMs. To bridge these gaps, we propose SSDA, a dual-branch network that spectrally and structurally adapts to unlock the full potential of LVMs for time series forecasting. At the data level, a Spectral Magnitude Aligner (SMA) applies 2D FFT to selectively enhance the magnitude spectrum toward natural-image statistics while preserving phase. At the model level, a Structural-Guided Low-Rank Adaptation (SG-LoRA) injects position-aware temporal encodings into patch embeddings and adapts attention via low-rank updates. The two branches are further adaptively fused to produce the final forecast. Extensive experiments on seven real-world benchmarks demonstrate that SSDA consistently outperforms strong LVM- and LLM-based baselines under both full-shot and few-shot settings. Code is publicly available at https://anonymous.4open.science/r/SSDA-8C5B.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper argues that rendering time series as images for pre-trained large vision models (LVMs) is limited by two gaps: a spectral gap (shallower power spectrum than natural images) and a structural gap (fabricated spatial adjacencies that break temporal continuity). It proposes SSDA, a dual-branch architecture with a Spectral Magnitude Aligner (SMA) that uses 2D FFT to enhance magnitude spectra toward natural-image statistics while preserving phase, and Structural-Guided Low-Rank Adaptation (SG-LoRA) that injects position-aware temporal encodings into patch embeddings and performs low-rank attention updates. The branches are fused adaptively, and the method is shown to outperform LVM- and LLM-based baselines on seven real-world benchmarks under both full-shot and few-shot regimes.
Significance. If the gains are causally attributable to closing the identified gaps rather than generic increases in adaptation capacity, the work would provide a principled way to improve transfer from natural-image pre-training to time series forecasting, with particular value in few-shot regimes. The public code release is a positive factor for reproducibility.
major comments (3)
- [Experiments] Experiments section: the reported outperformance on seven benchmarks lacks controlled ablations that hold total parameter count fixed when adding SMA and SG-LoRA; without such controls it remains unclear whether gains arise from spectral/structural alignment or from the dual-branch architecture's extra capacity.
- [Method] Method section (SMA description): no quantitative diagnostics (e.g., pre- vs. post-SMA power-spectrum slope comparisons or Kolmogorov-Smirnov tests against natural-image statistics) are provided to confirm that the magnitude enhancement actually reduces the claimed spectral gap.
- [Introduction] Introduction and §3: the structural-gap claim (that fabricated 2D adjacencies dominate transfer failure) is central but untested; a minimal diagnostic would be to compare a temporally contiguous reshaping baseline against the standard grid reshaping while keeping all other components fixed.
minor comments (3)
- [Abstract] Abstract: 'how ever' should be 'however'.
- [Abstract] Abstract: 'at tention' should be 'attention'.
- [Abstract] Abstract: the anonymous code link should be replaced with a permanent repository URL upon acceptance.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. The comments highlight opportunities to strengthen the empirical support for our claims regarding the spectral and structural gaps. We address each major comment below and will revise the manuscript accordingly to incorporate the suggested controls and diagnostics.
Point-by-point responses
Referee: [Experiments] Experiments section: the reported outperformance on seven benchmarks lacks controlled ablations that hold total parameter count fixed when adding SMA and SG-LoRA; without such controls it remains unclear whether gains arise from spectral/structural alignment or from the dual-branch architecture's extra capacity.
Authors: We agree that controlling for parameter count is necessary to isolate the contribution of the proposed adaptations. In the revised manuscript, we will add new ablation experiments that match the total parameter budget of SSDA. These will include comparisons against a dual-branch baseline using generic (non-spectral, non-structural) adaptations of equivalent capacity, such as standard LoRA without temporal encodings and random magnitude perturbations, to demonstrate that performance gains are attributable to the targeted gap-closing mechanisms rather than added capacity alone. revision: yes
Referee: [Method] Method section (SMA description): no quantitative diagnostics (e.g., pre- vs. post-SMA power-spectrum slope comparisons or Kolmogorov-Smirnov tests against natural-image statistics) are provided to confirm that the magnitude enhancement actually reduces the claimed spectral gap.
Authors: We acknowledge that direct quantitative evidence for the spectral alignment effect would strengthen the method description. In the revision, we will add power-spectrum analysis figures and tables comparing the slope and distribution statistics (including Kolmogorov-Smirnov distances) of rendered time series images before and after SMA application against natural-image references, confirming the reduction in the spectral gap. revision: yes
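A minimal sketch of the slope half of that diagnostic, assuming a log-log least-squares fit to the radially averaged power spectrum; bin counts and edges are illustrative choices. The promised Kolmogorov-Smirnov comparison would then run `scipy.stats.ks_2samp` on slope samples from rendered versus natural images.

```python
import numpy as np

def spectrum_slope(img, n_bins=32):
    """Slope of log radially-averaged power vs. log spatial frequency.
    Natural images sit near -2 (power ~ 1/f^2); white noise near 0."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.logspace(0.0, np.log10(r.max()), n_bins + 1)
    log_f, log_p = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        if mask.any():
            log_f.append(np.log(np.sqrt(lo * hi)))   # geometric bin center
            log_p.append(np.log(power[mask].mean()))
    return float(np.polyfit(log_f, log_p, 1)[0])

rng = np.random.default_rng(0)
print(spectrum_slope(rng.standard_normal((64, 64))))  # white noise: near 0
```

The spectral gap the paper describes would show up here as rendered-image slopes sitting well above the natural-image reference, with post-SMA slopes moving toward it.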
Referee: [Introduction] Introduction and §3: the structural-gap claim (that fabricated 2D adjacencies dominate transfer failure) is central but untested; a minimal diagnostic would be to compare a temporally contiguous reshaping baseline against the standard grid reshaping while keeping all other components fixed.
Authors: We appreciate this suggestion for validating the structural gap. In the revised version, we will include a controlled experiment that replaces the standard grid reshaping with a temporally contiguous reshaping baseline (preserving sequential order more explicitly in the 2D layout) while keeping the model architecture, training protocol, and all other components identical. Results from this diagnostic will be reported to quantify the impact of fabricated adjacencies. revision: yes
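A minimal sketch of the two layouts under comparison, assuming a boustrophedon ("snake") ordering as the temporally contiguous baseline; the paper's actual rendering pipeline may arrange pixels differently.

```python
import numpy as np

def grid_reshape(series, rows, cols):
    """Standard row-major grid: vertically adjacent pixels are `cols`
    time steps apart, fabricating adjacency the model will trust."""
    return series[: rows * cols].reshape(rows, cols)

def snake_reshape(series, rows, cols):
    """Temporally contiguous baseline: every other row is reversed, so
    each row boundary keeps true temporal neighbors touching."""
    grid = series[: rows * cols].reshape(rows, cols).copy()
    grid[1::2] = grid[1::2, ::-1].copy()
    return grid

t = np.arange(12)
print(grid_reshape(t, 3, 4))   # [[0 1 2 3] [4 5 6 7] [8 9 10 11]]
print(snake_reshape(t, 3, 4))  # [[0 1 2 3] [7 6 5 4] [8 9 10 11]] -- 3 sits above 4
```

Holding everything else fixed, any accuracy change between the two layouts isolates how much the fabricated adjacencies of row-major reshaping actually cost.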
Circularity Check
No significant circularity; new modules and empirical claims are independent of target metrics
Full rationale
The derivation introduces SMA (2D-FFT magnitude enhancement preserving phase) and SG-LoRA (position-aware encodings plus low-rank attention updates) as explicit new components whose parameters are optimized from data. The central claim of outperformance on seven benchmarks is presented as an empirical result rather than a quantity defined in terms of itself or a fitted input renamed as prediction. No equations or steps reduce by construction to the target forecasting performance, and no load-bearing self-citations or uniqueness theorems imported from the authors' prior work are invoked to force the result. The paper's claims therefore stand or fall on external benchmarks rather than on constructions internal to the method.