Recognition: unknown
Explainable Load Forecasting with Covariate-Informed Time Series Foundation Models
Pith reviewed 2026-05-07 07:41 UTC · model grok-4.3
The pith
Time series foundation models match specialized transformers in zero-shot load forecasting while providing SHAP explanations aligned with domain knowledge on weather and calendar effects.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Covariate-informed time series foundation models, when explained via an efficient temporal and covariate masking strategy for SHAP, achieve zero-shot predictive performance on day-ahead TSO load forecasting that is competitive with a supervised transformer trained on multiple years of data, and the explanations confirm appropriate use of weather and calendar covariates in line with domain expertise.
What carries the argument
A temporal and covariate masking strategy for SHAP computation that leverages TSFMs' support for variable context lengths and additional input features to enable efficient, scalable explanations of forecasts by selectively withholding time steps or covariates.
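To make the mechanism concrete, here is a minimal sketch of exact Shapley attribution over covariates via masking. The `tsfm_predict(history, covariates)` interface is hypothetical, standing in for a TSFM that accepts arbitrary covariate subsets (the flexibility the paper relies on); the paper's actual algorithm, masking granularity, and approximations may differ.

```python
# Minimal sketch: exact Shapley values over covariate subsets, where "masking"
# a covariate simply means omitting it from the model input. Tractable only
# for a handful of covariates; not the paper's exact algorithm.
from itertools import combinations
from math import factorial

import numpy as np

def shapley_covariate_values(tsfm_predict, history, covariates):
    """tsfm_predict(history, covariate_subset) -> forecast array;
    the mean forecast is the scalar payoff being attributed."""
    names = list(covariates)
    n = len(names)
    phi = dict.fromkeys(names, 0.0)
    for name in names:
        others = [c for c in names if c != name]
        for k in range(n):  # coalition sizes 0 .. n-1
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = tsfm_predict(history, {c: covariates[c] for c in subset + (name,)})
                without_i = tsfm_predict(history, {c: covariates[c] for c in subset})
                phi[name] += weight * float(np.mean(with_i) - np.mean(without_i))
    return phi
```

Temporal masking works analogously, with blocks of past time steps playing the role of features; the variable-context property of TSFMs is what makes withholding time steps a valid model input at all.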
Load-bearing premise
The temporal and covariate masking strategy yields faithful and unbiased SHAP explanations of the TSFM predictions without introducing artifacts from the masking process or approximation errors.
What would settle it
A case where the SHAP values from the masking method assign high importance to irrelevant features or contradict known physical relationships, such as temperature effects on load, or where the explained predictions deviate substantially from the model's actual outputs when inputs are masked.
Original abstract
Time Series Foundation Models (TSFMs) have recently emerged as general-purpose forecasting models and show considerable potential for applications in energy systems. However, applications in critical infrastructure like power grids require transparency to ensure trust and reliability and cannot rely on pure black-box models. To enhance the transparency of TSFMs, we propose an efficient algorithm for computing Shapley Additive Explanations (SHAP) tailored to these models. The proposed approach leverages the flexibility of TSFMs with respect to input context length and provided covariates. This property enables efficient temporal and covariate masking (selectively withholding inputs), allowing for a scalable explanation of model predictions using SHAP. We evaluate two TSFMs - Chronos-2 and TabPFN-TS - on a day-ahead load forecasting task for a transmission system operator (TSO). In a zero-shot setting, both models achieve predictive performance competitive with a Transformer model trained specifically on multiple years of TSO data. The explanations obtained through our proposed approach align with established domain knowledge, particularly as the TSFMs appropriately use weather and calendar information for load prediction. Overall, we demonstrate that TSFMs can serve as transparent and reliable tools for operational energy forecasting.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces an efficient algorithm for computing SHAP explanations for Time Series Foundation Models (TSFMs) by using temporal and covariate masking to handle variable context lengths. It evaluates two TSFMs, Chronos-2 and TabPFN-TS, on a zero-shot day-ahead load forecasting task using data from a transmission system operator (TSO). The models are shown to achieve predictive performance competitive with a Transformer trained on multiple years of data, and the explanations are claimed to align with domain knowledge, particularly in the use of weather and calendar information.
Significance. If the central claims hold, this work is significant because it addresses the transparency requirement for using foundation models in critical infrastructure applications like energy load forecasting. The proposed masking-based SHAP method exploits the unique properties of TSFMs to provide scalable explanations without the need for model-specific adaptations. This could encourage the use of pretrained models in operational settings where both accuracy and interpretability are essential. The zero-shot capability is a notable strength if supported by robust comparisons.
major comments (3)
- [Section 3.2] The temporal and covariate masking strategy for SHAP computation assumes that the TSFM's predictive function remains additive and consistent under masking. However, for models like Chronos-2, which are autoregressive and pretrained on fixed or variable contexts, selectively withholding past time steps or covariates changes the effective input distribution; the model may impute or re-encode the context in ways that violate SHAP's assumptions. The manuscript does not provide quantitative fidelity checks, such as recovery of known feature importances on synthetic data or perturbation-based faithfulness metrics (see the sketch after this list), to rule out systematic artifacts in the reported explanations.
- [Section 4.1] The performance comparison in the zero-shot setting is described as 'competitive' but lacks specific quantitative metrics (e.g., MAE, RMSE), error bars, or statistical significance tests against the supervised Transformer baseline. Without these, it is difficult to verify the claim that the TSFMs perform on par with a model trained on multiple years of TSO data, which is central to the reliability argument.
- [Section 4.3] The alignment of explanations with domain knowledge is presented only qualitatively. To strengthen the claim that TSFMs are 'transparent and reliable,' the paper should include an ablation or sensitivity analysis showing that the SHAP values are stable and not sensitive to the choice of baseline or masking order.
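One concrete form the requested faithfulness metric could take is a deletion curve. A minimal sketch, again assuming the hypothetical `tsfm_predict(history, covariates)` interface and attributions `phi`; this illustrates the kind of check the report asks for, not a method from the paper.

```python
# Deletion-style faithfulness check: remove covariates in order of attributed
# importance and track how the mean forecast shifts. `tsfm_predict` and `phi`
# are placeholders for the model interface and its SHAP attributions.
import numpy as np

def deletion_curve(tsfm_predict, history, covariates, phi):
    """Mean forecast as covariates are masked in decreasing |phi| order."""
    order = sorted(covariates, key=lambda c: abs(phi[c]), reverse=True)
    kept = dict(covariates)
    curve = [float(np.mean(tsfm_predict(history, kept)))]
    for name in order:
        kept.pop(name)  # mask the next most important covariate
        curve.append(float(np.mean(tsfm_predict(history, kept))))
    return np.array(curve)
```

A curve that moves toward the covariate-free forecast much faster than a random-order control indicates the attributions track what the model actually uses.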
minor comments (3)
- [Figure 2] The explanation plots would benefit from clearer legends indicating the meaning of positive and negative SHAP values in the context of load forecasting.
- [Notation] Define the notation for the masked inputs more explicitly in the method section to avoid ambiguity in how the baseline is chosen for SHAP.
- [References] Consider adding references to recent work on explainability for time series models and foundation models to better position the contribution.
Simulated Author's Rebuttal
We thank the referee for their constructive comments on our work. These suggestions will help us clarify the methodological assumptions, strengthen the empirical comparisons, and provide additional robustness checks for the explanations. Below, we respond to each major comment and indicate the revisions we plan to incorporate.
Point-by-point responses
- Referee: [Section 3.2] The temporal and covariate masking strategy for SHAP computation assumes that the TSFM's predictive function remains additive and consistent under masking. However, for models like Chronos-2, which are autoregressive and pretrained on fixed or variable contexts, selectively withholding past time steps or covariates changes the effective input distribution; the model may impute or re-encode the context in ways that violate SHAP's assumptions. The manuscript does not provide quantitative fidelity checks, such as recovery of known feature importances on synthetic data or perturbation-based faithfulness metrics, to rule out systematic artifacts in the reported explanations.
Authors: We thank the referee for highlighting this important consideration regarding the application of SHAP to autoregressive TSFMs. Our masking strategy is intended to exploit the models' flexibility with variable context lengths, but we recognize that autoregressive decoding may introduce non-additive effects. To address this rigorously, we will include quantitative fidelity checks in the revised manuscript: experiments on synthetic datasets with known ground-truth importances, and perturbation-based faithfulness metrics to verify that the explanations contain no systematic artifacts (a sketch of one such synthetic check follows below). Revision: yes.
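A sketch of what such a synthetic check could look like, with an invented linear temperature-to-load relationship. For a linear model with independent features, exact SHAP values are available in closed form, so recovery of the ground truth can be verified directly; the data-generating process below is purely illustrative.

```python
# Synthetic fidelity check: load depends linearly on temperature only, so a
# faithful explainer must give the irrelevant covariate ~zero attribution.
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.normal(10.0, 5.0, size=n)       # truly predictive covariate
irrelevant = rng.normal(0.0, 1.0, size=n)  # pure-noise covariate
load = 50.0 - 1.5 * temp + rng.normal(0.0, 0.5, size=n)  # known ground truth

# For a fitted linear model, exact SHAP values are w_i * (x_i - E[x_i]).
X = np.column_stack([temp, irrelevant, np.ones(n)])
w, *_ = np.linalg.lstsq(X, load, rcond=None)
phi_temp = w[0] * (temp - temp.mean())
phi_irrelevant = w[1] * (irrelevant - irrelevant.mean())
print(f"mean |phi_temp|       = {np.abs(phi_temp).mean():.2f}")   # large
print(f"mean |phi_irrelevant| = {np.abs(phi_irrelevant).mean():.2f}")  # ~0
```

Any masking-based explainer applied to the same data should reproduce this ordering; a noise covariate receiving large attributions would signal artifacts from the masking procedure.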
- Referee: [Section 4.1] The performance comparison in the zero-shot setting is described as 'competitive' but lacks specific quantitative metrics (e.g., MAE, RMSE), error bars, or statistical significance tests against the supervised Transformer baseline. Without these, it is difficult to verify the claim that the TSFMs perform on par with a model trained on multiple years of TSO data, which is central to the reliability argument.
Authors: We agree that more detailed quantitative metrics are necessary to support the competitiveness claim. We will revise Section 4.1 to include specific MAE and RMSE values for all models, along with error bars derived from multiple evaluation runs and statistical significance tests (such as Wilcoxon signed-rank tests, sketched below) comparing the TSFMs to the supervised Transformer baseline. Revision: yes.
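A sketch of the planned significance test on paired per-day errors; the arrays below are random placeholders, not results from the paper.

```python
# Wilcoxon signed-rank test on paired per-day MAEs (placeholder data).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
days = 365
tsfm_mae = rng.gamma(shape=2.0, scale=50.0, size=days)         # per-day MAE, TSFM
transformer_mae = tsfm_mae + rng.normal(0.0, 10.0, size=days)  # paired baseline MAEs

stat, p_value = wilcoxon(tsfm_mae, transformer_mae)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
# A large p-value gives no evidence of a systematic difference between the
# two models' daily errors, consistent with a "competitive" reading.
```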
- Referee: [Section 4.3] The alignment of explanations with domain knowledge is presented only qualitatively. To strengthen the claim that TSFMs are 'transparent and reliable,' the paper should include an ablation or sensitivity analysis showing that the SHAP values are stable and not sensitive to the choice of baseline or masking order.
Authors: We acknowledge that the current analysis of explanation alignment is primarily qualitative. To provide stronger evidence for the reliability of the explanations, we will add an ablation and sensitivity analysis to the revised Section 4.3, testing the stability of SHAP values across different baseline choices and variations in masking order, with quantitative metrics to demonstrate robustness (a sketch follows below). Revision: yes.
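One way such a stability analysis could be operationalized, assuming a placeholder `compute_shap(baseline, seed)` wrapping the masking-based explainer:

```python
# Stability check: recompute attributions under different baselines and
# masking orders (via seeds) and compare feature rankings with Spearman
# correlation. `compute_shap` is a placeholder, not the paper's API.
import numpy as np
from scipy.stats import spearmanr

def explanation_stability(compute_shap, baselines, seeds):
    """Mean Spearman rank correlation of attributions across reruns.

    compute_shap(baseline, seed) -> 1-D array of per-feature SHAP values.
    """
    runs = [np.asarray(compute_shap(b, s)) for b in baselines for s in seeds]
    reference = runs[0]
    rhos = [spearmanr(reference, run).correlation for run in runs[1:]]
    return float(np.mean(rhos))  # near 1.0 => rankings robust to the setup
```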
Circularity Check
No significant circularity; claims rest on independent empirical evaluation and external domain knowledge
Full rationale
The paper proposes a masking-based SHAP procedure for TSFMs, then reports zero-shot forecasting performance that is compared to an independently trained Transformer baseline on TSO data and validates that the resulting attributions align with established external domain knowledge on weather/calendar effects. No equations or steps reduce the central claims (performance competitiveness or explanation faithfulness) to self-definition, fitted inputs renamed as predictions, or load-bearing self-citations. The derivation chain consists of a new algorithmic adaptation followed by standard empirical benchmarking; it does not contain self-referential reductions.
Axiom & Free-Parameter Ledger
axioms (2)
- standard math: SHAP values provide additive feature attributions that sum to the difference between the model's prediction and a baseline expectation
- domain assumption: TSFMs flexibly accept varying context lengths and covariate inputs
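The first axiom is SHAP's efficiency (local accuracy) property. A standard statement, following Lundberg and Lee [35], with feature set F and value function f_S evaluated on masked inputs:

```latex
% Efficiency: attributions sum to the gap between the prediction and the baseline.
f(x) = \mathbb{E}[f(X)] + \sum_{i=1}^{|F|} \phi_i,
\qquad
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
  \left[ f_{S \cup \{i\}}(x) - f_S(x) \right].
```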
Reference graph
Works this paper leans on
- [1] Taha Aksu, Gerald Woo, Juncheng Liu, Xu Liu, Chenghao Liu, Silvio Savarese, Caiming Xiong, and Doyen Sahoo. 2024. GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation. doi:10.48550/arXiv.2410.10393 arXiv:2410.10393 [cs]
- [2] Abdul Fatir Ansari, Oleksandr Shchur, Jaris Küken, Andreas Auer, Boran Han, Pedro Mercado, Syama Sundar Rangapuram, Huibin Shen, Lorenzo Stella, Xiyuan Zhang, Mononito Goswami, Shubham Kapoor, Danielle C. Maddix, M. Hertel, A. Nikoltchovska, Pablo Guerron, Tony Hu, Junming Yin, Nick Erickson, Prateek Mutalik Desai, Hao Wang, Huzefa Rangwala, et al. 2025. Chronos-2: From Univariate to Universal Forecasting. doi:10.48550/arXiv.2510.15821
- [3] Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, Hao Wang, Michael W. Mahoney, Kari Torkkola, Andrew Gordon Wilson, Michael Bohlke-Schneider, and Yuyang Wang. 2024. Chronos: Learning the Language of Time Series. doi:10.48550/arXiv.2403.07815
- [4] Lukas Baur, Konstantin Ditschuneit, Maximilian Schambach, Can Kaymakci, Thomas Wollmann, and Alexander Sauer. 2024. Explainability and Interpretability in Electric Load Forecasting Using Machine Learning Techniques – A Review. Energy and AI 16 (May 2024), 100358. doi:10.1016/j.egyai.2024.100358
- [5] João Bento, Pedro Saleiro, André F. Cruz, Mário A.T. Figueiredo, and Pedro Bizarro. 2021. TimeSHAP: Explaining Recurrent Models through Sequence Perturbations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD '21). Association for Computing Machinery, New York, NY, USA, 2565–2573. doi:10.1145/3447548.3467166
- [6] Adrien Bibal, Rémi Cardon, David Alfter, Rodrigo Wilkens, Xiaoou Wang, Thomas François, and Patrick Watrin. 2022. Is Attention Explanation? An Introduction to the Debate. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguistics.
- [7] Matthieu Boileau, Philippe Helluy, Jeremy Pawlus, and Svitlana Vyetrenko. 2025. Towards Interpretable Time Series Foundation Models. doi:10.48550/arXiv.2507.07439 arXiv:2507.07439 [cs]
- [8] G. E. P. Box and D. R. Cox. 1964. An Analysis of Transformations. Journal of the Royal Statistical Society: Series B (Methodological) 26, 2 (July 1964), 211–243. doi:10.1111/j.2517-6161.1964.tb00553.x
- [9] Ben Cohen, Emaad Khwaja, Youssef Doubli, Salahidine Lemaachi, Chris Lettieri, Charles Masson, Hugo Miccinilli, Elise Ramé, Qiqi Ren, Afshin Rostamizadeh, Jean Ogier du Terrail, Anna-Monica Toon, Kan Wang, Stephan Xie, Zongzhe Xu, Viktoriya Zhukova, David Asker, Ameet Talwalkar, and Othmane Abou-Amal. 2025. This Time is Different: An Observability Perspec...
- [10]
- [11] Björn Deiseroth, Mayukh Deb, Samuel Weinbach, Manuel Brack, Patrick Schramowski, and Kristian Kersting. 2023. ATMAN: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation. Advances in Neural Information Processing Systems 36 (Dec. 2023), 63437–63460. https://proceedings.neurips.cc/paper_files/paper/2023/hash/c83bc020a020cd...
- [12] ENTSO-E. 2025. Transparency platform. https://transparency.entsoe.eu
- [13] European Commission. 2024. Artificial Intelligence Act. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689#art_13
- [14] Elena Giacomazzi, Felix Haag, and Konstantin Hopf. 2023. Short-Term Electricity Load Forecasting Using the Temporal Fusion Transformer: Effect of Grid Hierarchies and Data Sources. In Proceedings of the 14th ACM International Conference on Future Energy Systems. ACM, Orlando FL USA, 353–360. doi:10.1145/3575813.3597345
- [15] Miha Grabner, Yi Wang, Qingsong Wen, Boštjan Blažič, and Vitomir Štruc. 2023. A Global Modeling Framework for Load Forecasting in Distribution Networks. IEEE Transactions on Smart Grid 14, 6 (Nov. 2023), 4927–4941. doi:10.1109/TSG.2023.3264525
- [16] Stephen Haben, Siddharth Arora, Georgios Giasemidis, Marcus Voss, and Danica Vukadinović Greetham. 2021. Review of low voltage load forecasting: Methods, applications, and recommendations. Applied Energy 304 (Dec. 2021), 117798. doi:10.1016/j.apenergy.2021.117798
- [17] Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, András Horányi, Joaquín Muñoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, Adrian Simmons, Cornel Soci, Saleh Abdalla, Xavier Abellan, Gianpaolo Balsamo, Peter Bechtold, Gionata Biavati, Jean Bidlot, Massimo Bonavita, Giovanna De Chiara, Per Dahlgren, Dick Dee, Michail D...
- [18] Matthias Hertel, Lara Ambrosius, Manuel Treutlein, Ralf Mikut, and Veit Hagenmeyer. 2025. A comparison of local, cluster-specific and global Transformer models for forecasting electrical loads of individual buildings and substations. In 2025 IEEE Kiel PowerTech. IEEE, Kiel, Germany, 1–8. doi:10.1109/PowerTech59965.2025.11180482
- [19] Matthias Hertel, Maximilian Beichter, Benedikt Heidrich, Oliver Neumann, Benjamin Schäfer, Ralf Mikut, and Veit Hagenmeyer. 2023. Transformer training strategies for forecasting multiple load time series. Energy Informatics 6, 1 (Oct. 2023), 20. doi:10.1186/s42162-023-00278-z
- [20] Matthias Hertel, Simon Ott, Benjamin Schäfer, Ralf Mikut, Veit Hagenmeyer, and Oliver Neumann. 2022. Evaluation of Transformer Architectures for Electrical Load Time-Series Forecasting. In Proceedings 32. Workshop Computational Intelligence, Vol. 1. 93. https://library.oapen.org/bitstream/handle/20.500.12657/59840/external_content.pdf?sequence=1#page=105
- [21] Matthias Hertel, Sebastian Pütz, Ralf Mikut, Veit Hagenmeyer, and Benjamin Schäfer. 2025. Explainable time-series forecasting with sampling-free SHAP for Transformers. doi:10.48550/arXiv.2512.20514 arXiv:2512.20514 [cs]
- [22] Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. 2023. TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. doi:10.48550/arXiv.2207.01848 arXiv:2207.01848 [cs]
- [23] Tao Hong. 2014. Energy Forecasting: Past, Present, and Future. Foresight: The International Journal of Applied Forecasting 32 (2014), 43–48. https://ideas.repec.org//a/for/ijafaa/y2014i32p43-48.html
- [24] Tao Hong, Pierre Pinson, Yi Wang, Rafał Weron, Dazhi Yang, and Hamidreza Zareipour. 2020. Energy Forecasting: A Review and Outlook. IEEE Open Access Journal of Power and Energy 7 (2020), 376–388. doi:10.1109/OAJPE.2020.3029979
- [25] Shi Bin Hoo, Samuel Müller, David Salinas, and Frank Hutter. 2025. From Tables to Time: How TabPFN-v2 Outperforms Specialized Time Series Forecasting Models. doi:10.48550/arXiv.2501.02945 arXiv:2501.02945 [cs]
- [26] Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Jill Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computational Linguistics, Minne...
- [27] Nouha Karaouli, Denis Coquenet, Elisa Fromont, Martial Mermillod, and Marina Reyboz. 2025. How Foundational are Foundation Models for Time Series Forecasting? doi:10.48550/arXiv.2510.00742 arXiv:2510.00742 [cs]
- [28] Ahsan Raza Khan, Anzar Mahmood, Awais Safdar, Zafar A. Khan, and Naveed Ahmed Khan. 2016. Load forecasting, dynamic pricing and DSM in smart grid: A review. Renewable and Sustainable Energy Reviews 54 (Feb. 2016), 1311–1322. doi:10.1016/j.rser.2015.10.117
- [29] Marian Klobasa, Gerhard Angerer, Arne Lüllmann, Joachim Schleich, Tim Buber, Anna Gruber, Marie Hünecke, and Serafin von Roon. 2014. Load management as a way of covering peak demand in Southern Germany. Final Report 040/04-S-2014/EN. Fraunhofer Institute for Systems and Innovation Research (ISI) and Forschungsgesellschaft für Energiewirtschaft mbH. doi:10...
- [30] Siva Rama Krishna Kottapalli, Karthik Hubli, Sandeep Chandrashekhara, Garima Jain, Sunayana Hubli, Gayathri Botla, and Ramesh Doddaiah. 2025. Foundation Models for Time Series: A Survey. doi:10.48550/arXiv.2504.04011 arXiv:2504.04011 [cs]
- [31] Alexander Kreusel, Matthias Hertel, Moritz Noskiewicz, Heiko Maaß, Ralf Mikut, and Veit Hagenmeyer. 2025. Evaluating Time-Series Foundation Models for Cooling Demand Forecasting with Little Data. In Proceedings 35. Workshop Computational Intelligence. Berlin, 301–322.
- [32] Yuxuan Liang, Haomin Wen, Yuqi Nie, Yushan Jiang, Ming Jin, Dongjin Song, Shirui Pan, and Qingsong Wen. 2024. Foundation Models for Time Series Analysis: A Tutorial and Survey. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 6555–6565. doi:10.1145/3637528.3671451 arXiv:2403.14735 [cs]
- [33] Nan Lin, Dong Yun, Weijie Xia, Peter Palensky, and Pedro P. Vergara. 2025. Comparative Analysis of Zero-Shot Capability of Time-Series Foundation Models in Short-Term Load Prediction. In 2025 IEEE Kiel PowerTech. IEEE, Kiel, Germany, 1–6. doi:10.1109/PowerTech59965.2025.11180251
- [34] Chenghao Liu, Taha Aksu, Juncheng Liu, Xu Liu, Hanshu Yan, Quang Pham, Silvio Savarese, Doyen Sahoo, Caiming Xiong, and Junnan Li. 2026. Moirai 2.0: When Less Is More for Time Series Forecasting. doi:10.48550/arXiv.2511.11698 arXiv:2511.11698 [cs]
- [35] Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, Vol. 30. Canada, 4768–4777. https://dl.acm.org/doi/proceedings/10.5555/3295222
- [36] R. Machlev, L. Heistrene, M. Perl, K. Y. Levy, J. Belikov, S. Mannor, and Y. Levron. 2022. Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities. Energy and AI 9 (Aug. 2022), 100169. doi:10.1016/j.egyai.2022.100169
- [37]
- [38]
- [39] Marcel Meyer, David Zapata, Sascha Kaltenpoth, and Oliver Müller. 2024. Benchmarking Time Series Foundation Models for Short-Term Household Electricity Load Forecasting. doi:10.48550/arXiv.2410.09487 arXiv:2410.09487
- [40] Pablo Montero-Manso and Rob J. Hyndman. 2021. Principles and algorithms for forecasting groups of time series: Locality and globality. International Journal of Forecasting 37, 4 (Oct. 2021), 1632–1653. doi:10.1016/j.ijforecast.2021.03.004
- [41] Amin Nayebi, Sindhu Tipirneni, Chandan K. Reddy, Brandon Foreman, and Vignesh Subbian. 2023. WindowSHAP: An efficient framework for explaining time-series classifiers based on Shapley values. Journal of Biomedical Informatics 144 (Aug. 2023), 104438. doi:10.1016/j.jbi.2023.104438
- [42] Alexandra Nikoltchovska, Sebastian Pütz, Xiao Li, Veit Hagenmeyer, and Benjamin Schäfer. 2025. Probabilistic and Explainable Machine Learning for Tabular Power Grid Data. In Proceedings of the 16th ACM International Conference on Future and Sustainable Energy Systems (E-Energy '25). Association for Computing Machinery, New York, NY, USA, 213–231. doi:10...
- [43] Atharva Pandey, Abhilash Neog, and Gautam Jajoo. 2025. On the Internal Semantics of Time-Series Foundation Models. doi:10.48550/arXiv.2511.15324
- [44] Kashif Rasul, Arjun Ashok, Andrew Robert Williams, Hena Ghonia, Rishika Bhagwatkar, Arian Khorasani, Mohammad Javad Darvishi Bayazi, George Adamopoulos, Roland Riachi, Nadhir Hassen, Marin Biloš, Sahil Garg, Anderson Schneider, Nicolas Chapados, Alexandre Drouin, Valentina Zantedeschi, Yuriy Nevmyvaka, and Irina Rish. 2023. Lag-Llama: Towards Foundati...
- [45] David Rundel, Julius Kobialka, Constantin von Crailsheim, Matthias Feurer, Thomas Nagler, and David Rügamer. 2024. Interpretable Machine Learning for TabPFN. In Explainable Artificial Intelligence, Luca Longo, Sebastian Lapuschkin, and Christin Seifert (Eds.). Vol. 2154. Springer Nature Switzerland, Cham, 465–476. doi:10.1007/978-3-031-63797-1_23
- [46] Frederik vom Scheidt, Hana Medinová, Nicole Ludwig, Bent Richter, Philipp Staudt, and Christof Weinhardt. 2020. Data analytics in the electricity sector – A quantitative and qualitative literature review. Energy and AI 1 (Aug. 2020), 100009. doi:10.1016/j.egyai.2020.100009
- [47] Lloyd S. Shapley. 1953. A Value for n-Person Games. Contributions to the Theory of Games, Annals of Mathematical Studies 28. Princeton University Press, 307–317. doi:10.1515/9781400881970-018
- [48]
- [49] U.S. Department of Homeland Security. 2024. Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure. Technical Report. U.S. Department of Homeland Security. https://www.dhs.gov/sites/default/files/2024-11/24_1114_dhs_ai-roles-and-responsibilities-framework-508.pdf
- [50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc., 5998–6008.
- [51] Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, and Doyen Sahoo. 2024. Unified Training of Universal Time Series Forecasting Transformers. doi:10.48550/arXiv.2402.02592 arXiv:2402.02592 [cs]
- [52] Yuyi Zhang, Qiushi Sun, Dongfang Qi, Jing Liu, Ruimin Ma, and Ovanes Petrosian. 2024. ShapTime: A General XAI Approach for Explainable Time Series Forecasting. In Intelligent Systems and Applications, Kohei Arai (Ed.). Springer Nature Switzerland, Cham, 659–673. doi:10.1007/978-3-031-47721-8_45
- [53] Florian Ziel. 2018. Modeling public holidays in load forecasting: a German case study. Journal of Modern Power Systems and Clean Energy 6, 2 (March 2018), 191–207. doi:10.1007/s40565-018-0385-5