FETS Benchmark: Foundation Models Outperform Dataset-specific Machine Learning in Energy Time Series Forecasting
Pith reviewed 2026-05-08 12:13 UTC · model grok-4.3
The pith
Foundation models outperform dataset-specific machine learning models in energy time series forecasting across all benchmarked settings and data categories.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Foundation models consistently outperform dataset-specific optimized machine learning approaches across all settings and data categories in energy time series forecasting, despite the machine learning models having seen the full historic target data during training. Covariate-informed foundation models achieve the strongest performance. Further analysis reveals a strong correlation between predictive performance and spectral entropy, performance saturation beyond a certain context length, and improved performance at higher aggregation levels such as national load, district heating, and power grid data.
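The reported spectral-entropy correlation is easy to probe on one's own series. Below is a minimal sketch, not the paper's code; the function name and the normalization by the log of the number of frequency bins are illustrative choices. Values near 0 indicate strongly periodic, easier-to-forecast series; values near 1 indicate noise-like series.

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy of a 1-D series (0 = pure tone, 1 = white noise)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the mean so the DC bin is negligible
    psd = np.abs(np.fft.rfft(x)) ** 2     # periodogram via the real FFT
    psd = psd[1:]                         # drop the DC bin
    p = psd / psd.sum()                   # normalize to a probability distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))
```

Correlating this quantity with per-dataset forecast error (e.g., with a Spearman rank correlation) reproduces the kind of analysis the core claim describes.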
What carries the argument
The FETS benchmark, which organizes energy forecasting use cases by stakeholders, attributes, and data categories and compares foundation models to classical machine learning on 54 collected datasets.
Load-bearing premise
The foundation models were not exposed to any of the evaluation datasets during pretraining and the benchmark comparisons are free of data leakage or selection effects.
What would settle it
Showing that any foundation model was pretrained on data from the 54 evaluation datasets or finding a dataset-specific machine learning model that matches or exceeds the foundation models after exhaustive tuning.
Original abstract
Driven by the transition towards a climate-neutral energy system, accurate energy time series forecasting is critical for planning and operation. Yet, it remains largely a dataset-specific task, requiring comprehensive training data, limiting scalability, and resulting in high model development and maintenance effort. Recently, foundation models that aim to learn generalizable patterns via extensive pretraining have shown superior performance in multiple prediction tasks. Despite their success and strong potential to address challenges in energy forecasting, their application in this domain remains largely unexplored. We address this gap by presenting the Foundation Models in Energy Time Series Forecasting (FETS) benchmark. We (1) provide a structured overview of energy forecasting use cases along three main dimensions: stakeholders, attributes, and data categories; (2) collect and analyze 54 datasets across 9 data categories, guided by typical stakeholder interests; (3) benchmark foundation models against classical machine learning approaches across different forecasting settings. Foundation models consistently outperform dataset-specific optimized machine learning approaches across all settings and data categories, despite the latter having seen the full historic target data during training. In particular, covariate-informed foundation models achieve the strongest performance. Further analysis reveals a strong correlation between predictive performance and spectral entropy, performance saturation beyond a certain context length, and improved performance at higher aggregation levels such as national load, district heating, and power grid data. Overall, our findings highlight the strong potential of foundation models as scalable and generalizable forecasting solutions for the energy domain, particularly in data-constrained and privacy-sensitive settings.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the FETS benchmark for energy time series forecasting. It structures use cases along stakeholders, attributes, and data categories; compiles and analyzes 54 public datasets across 9 categories; and empirically compares foundation models (including covariate-informed variants) against dataset-specific classical ML baselines. The central claim is that foundation models consistently outperform the baselines across all settings and categories, even though the baselines see the full historic target data, with additional findings on correlations with spectral entropy, context-length saturation, and better performance at higher aggregation levels.
Significance. If the outperformance holds after controlling for data leakage, the work provides concrete evidence that pretrained foundation models can serve as scalable, generalizable alternatives to per-dataset training in energy forecasting. This would be particularly valuable for data-constrained or privacy-sensitive applications. The structured taxonomy of use cases and the scale of the 54-dataset collection constitute a useful resource for the community.
Major comments (2)
- [Section 4 and Section 5] Section 4 (Benchmark Setup) and Section 5 (Results): No explicit audit, overlap analysis, or deduplication check is reported between the 54 evaluation datasets and the pretraining corpora of the foundation models (e.g., Chronos, TimesFM). Because many public energy, load, and weather series are known to appear in large-scale time-series pretraining data, the observed superiority could reflect memorization rather than the claimed generalization; this directly undermines the headline claim that foundation models succeed 'despite the latter having seen the full historic target data.'
- [Section 5] Section 5 (Results): The manuscript reports consistent outperformance but does not supply per-dataset or aggregate quantitative metrics (e.g., MAE, RMSE, or normalized scores), statistical significance tests across the 54 datasets, or details on the hyperparameter search and model-selection protocol used for the classical ML baselines. Without these, the strength and robustness of the central empirical claim cannot be verified.
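The normalized scores the referee asks for would typically be scale-free metrics so that results are comparable across 54 datasets with different units and magnitudes. A minimal sketch of one standard choice, MASE (MAE scaled by the in-sample seasonal-naive error); this is illustrative, not the paper's evaluation code, and the default `season=24` simply assumes hourly data with daily seasonality:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between two aligned sequences."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def seasonal_naive_mase(y_train, y_true, y_pred, season=24):
    """MASE: out-of-sample MAE scaled by the in-sample seasonal-naive MAE,
    so scores are comparable across datasets with different units."""
    y_train = np.asarray(y_train, dtype=float)
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return mae(y_true, y_pred) / scale
```

A MASE below 1 means the model beats the seasonal-naive baseline on that dataset, which makes cross-dataset averages and rank tests meaningful.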
Minor comments (2)
- [Abstract and Section 3] Abstract and Section 3: The specific foundation models evaluated are not named in the abstract or early sections, forcing the reader to infer them from later text.
- [Figures 4-7] Figure and table captions: Several figures comparing performance across data categories lack error bars or confidence intervals, making it difficult to judge the practical significance of the reported differences.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our FETS benchmark paper. The comments correctly identify areas where additional transparency and detail will strengthen the empirical claims. We address each major comment below, indicating the revisions we will incorporate.
Point-by-point responses
-
Referee: [Section 4 and Section 5] Section 4 (Benchmark Setup) and Section 5 (Results): No explicit audit, overlap analysis, or deduplication check is reported between the 54 evaluation datasets and the pretraining corpora of the foundation models (e.g., Chronos, TimesFM). Because many public energy, load, and weather series are known to appear in large-scale time-series pretraining data, the observed superiority could reflect memorization rather than the claimed generalization; this directly undermines the headline claim that foundation models succeed 'despite the latter having seen the full historic target data.'
Authors: We acknowledge that the manuscript does not include an explicit overlap or deduplication analysis between the 54 evaluation datasets and the pretraining corpora. This is a valid concern, as some public energy time series may appear in large-scale pretraining data. We will add a dedicated subsection in Section 4 that (1) summarizes publicly available information on the pretraining corpora of Chronos, TimesFM, and other evaluated models, (2) reports any known overlaps with our datasets, and (3) provides sensitivity results excluding datasets with confirmed overlap where possible. We note that full verification remains limited because complete pretraining data lists are not always disclosed by model developers; however, we will qualify the generalization claims accordingly and emphasize that the foundation models still operate in a zero-shot regime without access to the target series during inference. revision: partial
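Where pretraining corpora are at least partially available, one pragmatic way to approximate such an overlap audit is to fingerprint z-normalized windows of each series and intersect the hash sets, so that rescaled copies of the same data are still matched. This is a rough sketch, not the authors' procedure; the window length, step, and rounding tolerance are arbitrary choices here:

```python
import hashlib
import numpy as np

def window_fingerprints(series, win=128, step=64):
    """Hash z-normalized, rounded windows so a rescaled copy of the same
    series still produces identical fingerprints."""
    x = np.asarray(series, dtype=float)
    out = set()
    for i in range(0, len(x) - win + 1, step):
        w = x[i:i + win]
        sd = w.std()
        if sd == 0:
            continue  # constant windows carry no signal
        z = np.round((w - w.mean()) / sd, 2)  # rounding absorbs tiny numeric noise
        out.add(hashlib.sha1(z.tobytes()).hexdigest())
    return out

def overlap_ratio(eval_series, pretrain_series):
    """Fraction of the evaluation series' windows also found in the corpus series."""
    a = window_fingerprints(eval_series)
    b = window_fingerprints(pretrain_series)
    return len(a & b) / max(len(a), 1)
```

A nonzero ratio on any (evaluation dataset, pretraining corpus) pair would flag that dataset for the sensitivity analysis the authors propose.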
-
Referee: [Section 5] Section 5 (Results): The manuscript reports consistent outperformance but does not supply per-dataset or aggregate quantitative metrics (e.g., MAE, RMSE, or normalized scores), statistical significance tests across the 54 datasets, or details on the hyperparameter search and model-selection protocol used for the classical ML baselines. Without these, the strength and robustness of the central empirical claim cannot be verified.
Authors: We agree that the current presentation of results relies primarily on figures and qualitative statements. We will revise Section 5 to include: (1) tables reporting per-dataset MAE, RMSE, and normalized scores for all models and settings, (2) aggregate statistics (mean, median, standard deviation) across the full 54 datasets and broken down by the 9 data categories, (3) statistical significance tests (e.g., Wilcoxon signed-rank tests with p-values) comparing foundation models against the best baseline on each dataset, and (4) a detailed appendix describing the hyperparameter search protocol for the classical ML baselines, including the grid or random search ranges, validation strategy, and model selection criterion. These additions will allow readers to fully verify the reported outperformance. revision: yes
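The Wilcoxon signed-rank comparison promised above can be sketched as follows. This is an illustrative normal-approximation implementation (in practice one would call `scipy.stats.wilcoxon`), under the simplifying assumptions of paired per-dataset errors with no zero differences and no tie correction; the error values in the usage below are hypothetical, not the paper's results.

```python
import numpy as np
from math import erfc, sqrt

def wilcoxon_signed_rank(a, b):
    """One-sided Wilcoxon signed-rank test (normal approximation) of whether
    paired values in `a` tend to exceed those in `b`.
    Assumes no zero differences; applies no tie correction."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1  # ranks 1..n of |d|
    w_pos = ranks[d > 0].sum()                 # rank sum of positive differences
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    p = 0.5 * erfc(z / sqrt(2))                # one-sided normal tail probability
    return int(w_pos), p

# Hypothetical per-dataset errors (e.g., MASE) for 54 datasets.
baseline = 1.0 + 0.01 * np.arange(54)
foundation = baseline - 0.05  # foundation model consistently lower error
w_stat, p_value = wilcoxon_signed_rank(baseline, foundation)
```

A small p-value here would support the claim of consistent outperformance across the benchmark rather than a win driven by a few datasets.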
- Exhaustive deduplication against every foundation-model pretraining corpus is not feasible, as some training-data details remain proprietary or only partially documented by model developers.
Circularity Check
No circularity: pure empirical benchmark with direct comparisons
Full rationale
The paper is an empirical benchmark study that collects 54 public datasets, evaluates foundation models against optimized ML baselines on forecasting tasks, and reports performance metrics. No equations, derivations, fitted parameters, or first-principles claims are present in the abstract or described structure. The central claim rests on observed outperformance numbers rather than any reduction to inputs by construction. No self-citations are load-bearing for a mathematical result, and no ansatz or uniqueness theorem is invoked. The data-leakage concern raised by the skeptic is a question of experimental validity and external verification, not an internal circularity in any derivation chain.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
- [1] Hossain E, Khan I, Un-Noor F, Sikander SS, Sunny MSH. Application of Big Data and Machine Learning in Smart Grid, and Associated Security Concerns: A Review. IEEE Access 2019; 7:13960–13988. https://doi.org/10.1109/ACCESS.2019.2894819
- [2] Rolnick D, Donti PL, Kaack LH, Kochanski K, Lacoste A, Sankaran K, et al. Tackling Climate Change with Machine Learning. ACM Comput Surv 2022; 55(2):42:1–96. https://doi.org/10.1145/3485128
- [3] Antonopoulos I, Robu V, Couraud B, et al. Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review. Renewable and Sustainable Energy Reviews 2020; 130:109899. https://doi.org/10.1016/j.rser.2020.109899
- [4] Browell J, van der Meer D, Kälvegren H, et al. The hybrid renewable energy forecasting and trading competition 2024. International Journal of Forecasting 2025. https://doi.org/10.1016/j.ijforecast.2025.10.005
- [6] Wahl M, Bayer D, Rausch S, Pruckner M. Benchmarking Transformer and xLSTM for Time-Series Forecasting of Heat Consumption. In: IEEE Conference on Technologies for Sustainability (SusTech) 2026, San Diego. IEEE; 2026. (to appear)
- [7] Bayer D, Haag F, Pruckner M, Hopf K. Electricity Demand Forecasting in Future Grid States: A Digital Twin-Based Simulation Study. In: 9th International Conference on Smart and Sustainable Technologies (SpliTech) 2024, Split. IEEE; 2024. https://doi.org/10.23919/SpliTech61897.2024.10612563
- [8] Donti PL, Kolter JZ. Machine Learning for Sustainable Energy Systems. Annual Review of Environment and Resources 2021; 46:719–47. https://doi.org/10.1146/annurev-environ-020220-061831
- [9] Aslam S, Herodotou H, Mohsin SM, Javaid N, Ashraf N, Aslam S. A survey on deep learning methods for power load and renewable energy forecasting in smart microgrids. Renewable and Sustainable Energy Reviews 2021; 144:110992. https://doi.org/10.1016/j.rser.2021.110992
- [10] Pan SJ, Yang Q. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering 2010; 22(10):1345–59. https://doi.org/10.1109/TKDE.2009.191
- [11] Ditzler G, Roveri M, Alippi C, Polikar R. Learning in Nonstationary Environments: A Survey. IEEE Computational Intelligence Magazine 2015; 10(4):12–25. https://doi.org/10.1109/MCI.2015.2471196
- [12] Haselbeck F, Grimm DG. EVARS-GPR: EVent-Triggered Augmented Refitting of Gaussian Process Regression for Seasonal Data. In: Edelkamp S, Möller R, Rueckert E, editors. KI 2021: Advances in Artificial Intelligence. Springer International Publishing; 2021. https://doi.org/10.1007/978-3-030-87626-5_11
- [13] Ansari AF, Stella L, Turkmen C, Zhang X, Mercado P, Shen H, et al. Chronos: Learning the Language of Time Series. arXiv preprint 2024; arXiv:2403.07815. https://doi.org/10.48550/arXiv.2403.07815
- [14] Ansari AF, Shchur O, Küken J, Zhang X, Zschiegner J, Hoof H, et al. Chronos-2: From Univariate to Universal Forecasting. arXiv preprint 2025; arXiv:2510.15821. https://doi.org/10.48550/arXiv.2510.15821
- [15] Aksu T, Woo G, Liu J, Liu X, Liu C, Savarese S, et al. GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation. arXiv preprint 2024; arXiv:2410.10393. https://doi.org/10.48550/arXiv.2410.10393
- [16] Shchur O, Ansari AF, Turkmen C, Zhang X, Mercado P, Shen H, et al. fev-bench: A Realistic Benchmark for Time Series Forecasting. arXiv preprint 2025; arXiv:2509.26468. https://doi.org/10.48550/arXiv.2509.26468
- [17] Meyer M, Zapata Gonzalez D, Kaltenpoth S, Müller O. Benchmarking Time Series Foundation Models for Short-Term Household Electricity Load Forecasting. IEEE Access 2025; 13:218141–218153. https://doi.org/10.1109/ACCESS.2025.3648056
- [18] Sartipi MJ, Hornek A, Tchappi I, Fridgen G. Benchmarking Pre-Trained Time Series Models for Electricity Price Forecasting. Proc 21st Int Conf European Energy Market (EEM) 2025; 1–7. https://doi.org/10.1109/EEM64765.2025.11050326
- [19] Simeone L. Time Series Foundation Models for Energy Load Forecasting on Consumer Hardware: A Multi-Dimensional Zero-Shot Benchmark. arXiv preprint 2026; arXiv:2602.10848. https://doi.org/10.48550/arXiv.2602.10848
- [20] Ferdaus MM, Dam T, Sarkar MR, Uddin M, Anavatti SG. Foundation Models for Clean Energy Forecasting: A Comprehensive Review. arXiv preprint 2025; arXiv:2507.23147. https://doi.org/10.48550/arXiv.2507.23147
- [21] Wu J. Improving the Writing of Research Papers: IMRAD and Beyond. Landscape Ecology 2011; 26(10):1345–49. https://doi.org/10.1007/s10980-011-9674-3
- [22] Wirth R, Hipp J. CRISP-DM: Towards a Standard Process Model for Data Mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining 2000; 29–39. https://www.semanticscholar.org/paper/48b9293cfd4297f855867ca278f7069abc6a9c24
- [23] Weron R. Electricity price forecasting: A review of the state-of-the-art with a look into the future. International Journal of Forecasting 2014; 30(4):1030–1081. https://doi.org/10.1016/j.ijforecast.2014.08.008
- [24] Hong T, Fan S. Probabilistic electric load forecasting: A tutorial review. International Journal of Forecasting 2016; 32(3):914–938. https://doi.org/10.1016/j.ijforecast.2015.11.011
- [25] Chen T, Guestrin C. XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016; 785–794. https://doi.org/10.1145/2939672.2939785
- [26] Breiman L. Random Forests. Machine Learning 2001; 45(1):5–32. https://doi.org/10.1023/A:1010933404324
- [27] Lago J, De Ridder F, De Schutter B. Forecasting spot electricity prices: Deep learning approaches and empirical comparison of traditional algorithms. Applied Energy 2018; 221:386–405. https://doi.org/10.1016/j.apenergy.2018.02.069
- [28] Das A, Kong W, Sen R, Zhou Y. TimesFM: A Decoder-Only Foundation Model for Time-Series Forecasting. arXiv preprint 2024; arXiv:2310.10688. https://doi.org/10.48550/arXiv.2310.10688
- [29] Liu C, Aksu T, Liu J, Woo G, Zhang X, Savarese S, et al. Moirai 2.0: When Less Is More for Time Series Forecasting. arXiv preprint 2025; arXiv:2511.11698. https://doi.org/10.48550/arXiv.2511.11698
- [30] Graf L, Ortner T, Woźniak S, Pantazi A. FlowState: Sampling Rate Invariant Time Series Forecasting. arXiv preprint 2025; arXiv:2508.05287. https://doi.org/10.48550/arXiv.2508.05287
- [31] Auer A, Podest P, Klotz D, Böck S, Klambauer G, Hochreiter S. TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning. arXiv preprint 2025; arXiv:2505.23719. https://doi.org/10.48550/arXiv.2505.23719
- [32] Beck M, Pöppel K, Spanring M, Auer A, Prudnikova O, Kopp M, et al. xLSTM: Extended Long Short-Term Memory. Advances in Neural Information Processing Systems 2024; 37:107547–603. https://doi.org/10.52202/079017-3417
- [33] Grinsztajn L, Flöge K, Key O, Hollmann N, Müller S, Hutter F, et al. TabPFN-2.5: Advancing the State of the Art in Tabular Foundation Models. arXiv preprint 2025; arXiv:2511.08667. https://doi.org/10.48550/arXiv.2511.08667
- [34] Hoo SB, Müller S, Salinas D, Hutter F. TabPFN-TS: From Tables to Time: How TabPFN-v2 Outperforms Specialized Time Series Forecasting Models. arXiv preprint 2025; arXiv:2501.02945. https://doi.org/10.48550/arXiv.2501.02945
- [35] Zhou Z, Lu X, Xiao Y, Zhang Y, Zheng X, Wang C, et al. SDWPF: A Dataset for Spatial Dynamic Wind Power Forecasting Challenge at KDD Cup 2022. Sci Data 2024; 11(1):649. https://doi.org/10.1038/s41597-024-03427-5
- [36] Meyer M, Kaltenpoth S, Albers H, Zalipski K, Müller O. TS-Arena: A Live Forecast Pre-Registration Platform. arXiv preprint 2025; arXiv:2512.20761. https://doi.org/10.48550/arXiv.2512.20761
- [37] ENTSO-E, EFET, ebIX. The Harmonised Electricity Market Role Model, Version 2022-01. European Network of Transmission System Operators for Electricity; 2022. https://eepublicdownloads.entsoe.eu/clean-documents/EDI/Library/HRM/Harmonised_Role_Model_2022-01.pdf (Accessed: 23 November 2025)
- [38] BDEW Bundesverband der Energie- und Wasserwirtschaft e.V. Energy Flow Diagram Germany 2023. BDEW; 2024. https://www.bdew.de/media/documents/1_Energieflussbild_Deutschland_2023_TWh_detailliert.svg (Accessed: 29 March 2026)
- [39] Murza S, Siripanich P, Yakovets A. Holidays: Open World Holidays Framework. Zenodo 2025. https://doi.org/10.5281/zenodo.17944479
- [40] European Network of Transmission System Operators for Electricity. Transparency Platform. ENTSO-E; 2025. https://transparency.entsoe.eu/ (Accessed: 29 December 2025)
- [41] de Bont K, Hoogsteen G, Hurink J, Vaessen P, Morren J. Electric vehicle charging session data of large office parking lot. 4TU.ResearchData; 2024. https://doi.org/10.4121/80EF3824-3F5D-4E45-8794-3B8791EFBD13.V1
- [42] Mobilithek. Mobilitätsdaten Deutschland. Mobilithek; 2025. https://mobilithek.info/offers/714073450865197056 (Accessed: 29 March 2026)
- [43] Li H, Qu H, Tan X, You L, Zhu R, Fan W. UrbanEV: An open benchmark dataset for urban electric vehicle charging demand prediction. Sci Data 2025; 12(1):523. https://doi.org/10.1038/s41597-025-04874-4
- [44] Department for Transport. Electric chargepoint analysis 2017: domestics. London: Department for Transport; 2025. https://www.data.gov.uk/dataset/5438d88d-695b-4381-a5f2-6ea03bf3dcf0/electric-chargepoint-analysis-2017-domestics (Accessed: 29 March 2026)
- [45] Brudermüller T, Fleisch E, González Vayá M, Staake T. HEAPO – An Open Dataset for Heat Pump Optimization with Smart Electricity Meter Data and On-Site Inspection Protocols. ETH Zurich; 2025. https://doi.org/10.5281/zenodo.15056919
- [46] Freißmann J, Fritz M, Tuschy I, Stadtwerke Flensburg GmbH. Network Data of the District Heating System for the City of Flensburg from 2020–2024. Zenodo; 2025. https://doi.org/10.5281/zenodo.17177421
- [47] Browell J. Hybrid Energy Forecasting and Trading Competition Data. Zenodo; 2025. https://doi.org/10.5281/zenodo.13950764
- [48] Clerc A, Lingkan E. Hill of Towie wind farm open dataset. Zenodo; 2025. https://doi.org/10.5281/zenodo.14870023
- [49] Lin Z, Zhou Q, Wang Z, Wang C, Bookhart DB, Leung-Shea M. A High-Resolution Three-Year Dataset Supporting Rooftop Photovoltaics (PV) Generation Analytics. Sci Data 2025; 12(1):63. https://doi.org/10.1038/s41597-025-04397-y
- [50] Kanter M. gridstatus: A Python library for accessing US electricity grid data. Grid Status; 2025. https://github.com/gridstatus/gridstatus (Accessed: 29 March 2026)
- [51] California ISO. Open Access Same-Time Information System (OASIS). CAISO; 2025. http://oasis.caiso.com (Accessed: 29 March 2026)
- [52] Huber J, Klempp N, Becker J, Weinhardt C. Electricity consumption of 28 German companies in 15-min resolution. Karlsruhe Institute of Technology; 2023. https://doi.org/10.35097/1192
- [53] Tiemann PH. 5359 Industrial VEA Load Profiles. Zenodo; 2024. https://doi.org/10.5281/zenodo.13910298
- [54] Schlemminger M, Ohrdes T, Schneider E, Knoop M. Dataset on Electrical Single-Family House and Heat Pump Load Profiles in Germany. Sci Data 2022; 9(1):56. https://doi.org/10.1038/s41597-022-01156-1
- [55] Tjaden T. Repräsentative elektrische Lastprofile für Wohngebäude in Deutschland auf 1-sekündiger Datenbasis. ResearchGate; n.d. https://doi.org/10.13140/RG.2.1.5112.0080/1
- [56] Netztransparenz.de. Regelzonenübergreifender einheitlicher Bilanzausgleichsenergiepreis (reBAP). Netztransparenz.de; 2025. https://www.netztransparenz.de/Regelenergie/Ausgleichsenergiepreis/reBAP (Accessed: 29 December 2025)
- [57] EPEX SPOT SE. Continuous Intraday Market. EPEX SPOT; 2024. https://www.epexspot.com/en/marketdata/intraday (accessed via Montel Analytics)
- [58] 50Hertz Transmission GmbH. Grid Load in the Control Area. 50Hertz Transmission GmbH; 2025. https://www.50hertz.com/Transparenz/Kennzahlen/Netzdaten/Netzbelastung (Accessed: 29 December 2025)
- [59] Bayernwerk Netz GmbH. Grid Structure Characteristics – Statutory Publication Requirements (Electricity). Bayernwerk Netz GmbH; 2025. https://www.bayernwerk-netz.de/de/bayernwerk-netz-gmbh/netzinformation/veroeffentlichungspflichten/strom/netzstrukturmerkmale.html (Accessed: 29 March 2026)
- [60] Treutlein M, Bothe P, Schmidt M, Fichtner W, Suriyah MR, Leibfried T, et al. Real-World Energy Data of 200 Feeders from Low-Voltage Grids with Metadata in Germany over Two Years [Dataset]. Zenodo; 2025. https://doi.org/10.5281/zenodo.17831177
- [61] Treutlein M, Bothe P, Schmidt M, Fichtner W, Suriyah MR, Leibfried T, et al. Real-World Energy Data of 200 Feeders from Low-Voltage Grids with Metadata in Germany over Two Years. arXiv preprint 2026; arXiv:2602.03521. https://doi.org/10.48550/arXiv.2602.03521
- [62] Netztransparenz.de. Activated Control Reserve. Netztransparenz.de; 2025. https://www.netztransparenz.de/de-de/Regelenergie/Daten-Regelreserve/Aktivierte-Regelleistung (Accessed: 29 December 2025)
- [63] Open-Meteo. Open-Meteo Weather API. Zenodo; 2024. https://doi.org/10.5281/zenodo.14582479
- [64] Deutscher Wetterdienst (DWD). ICON-D2: High-Resolution Numerical Weather Prediction Model for Germany. DWD; 2025. https://www.dwd.de/EN/ourservices/nwp_forecast_data/nwp_forecast_data.html (Accessed: 29 December 2025)
- [65] International Electrotechnical Commission (IEC). IEC 62196-2:2022 – Plugs, Socket-Outlets, Vehicle Connectors and Vehicle Inlets: Conductive Charging of Electric Vehicles – Part 2: Dimensional Compatibility Requirements for AC Pin and Contact-Tube Accessories. Geneva: IEC; 2022. https://webstore.iec.ch/en/publication/64364
- [66] Obermeier M. FETS Dataset: Foundation Models Outperform Dataset-specific Machine Learning in Energy Time Series Forecasting [Dataset]. Zenodo; 2026. https://doi.org/10.5281/zenodo.19418721
- [67] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is All you Need. Advances in Neural Information Processing Systems 2017; 30. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
- [68] Gu A, Goel K, Ré C. Efficiently Modeling Long Sequences with Structured State Spaces. arXiv preprint 2022; arXiv:2111.00396. https://doi.org/10.48550/arXiv.2111.00396
- [69] Smith JTH, Warrington A, Linderman SW. Simplified State Space Layers for Sequence Modeling. arXiv preprint 2023; arXiv:2208.04933. https://doi.org/10.48550/arXiv.2208.04933
- [70] Wang Y, Cheng W, Jin Y, Li J, Yang Y, Hu S. An XGBoost-SHAP Model for Energy Demand Prediction With Boruta-Lasso Feature Selection. IEEE Access 2025; 13:135806–135821. https://doi.org/10.1109/ACCESS.2025.3593558
- [71] Wen Y, Guo R, Duan Z, Li J, Yang Y, Hu S, et al. Machine Learning Model Optimization with Optuna for Accurate Prediction of Strength and Crack Behavior in Prestressed Concrete Beams. Sci Rep 2026; 16(1):5822. https://doi.org/10.1038/s41598-026-36692-y
- [72] Zanotti M. On the Retraining Frequency of Global Models in Retail Demand Forecasting. Mach Learn Appl 2025; 22:100769. https://doi.org/10.1016/j.mlwa.2025.100769
- [73] Shannon CE. Prediction and Entropy of Printed English. Bell System Technical Journal 1951; 30(1):50–64. https://doi.org/10.1002/j.1538-7305.1951.tb01366.x
- [74] Kang Y, Hyndman RJ, Smith-Miles K. Visualising Forecasting Algorithm Performance Using Time Series Instance Spaces. International Journal of Forecasting 2017; 33(2):345–358. https://doi.org/10.1016/j.ijforecast.2016.09.004
- [75] Karaouli N, Coquenet D, Fromont E, Mermillod M, Reyboz M. Are Time Series Foundation Models Susceptible to Catastrophic Forgetting? arXiv preprint 2025; arXiv:2510.00809. https://doi.org/10.48550/arXiv.2510.00809
- [76] VDEW (Verband der Elektrizitätswirtschaft). Repräsentative VDEW-Lastprofile. VDEW Materialien, Frankfurt am Main; 1999. https://www.bdew.de/media/documents/1999_Repraesentative-VDEW-Lastprofile.pdf (Accessed: 23 November 2025)
- [77] BDEW (Bundesverband der Energie- und Wasserwirtschaft). Standardlastprofile Strom. Berlin; 2025. https://www.bdew.de/energie/standardlastprofile-strom/ (Accessed: 29 March 2026)