Recognition: no theorem link
U-STS-LLM: A Unified Spatio-Temporal Steered Large Language Model for Traffic Prediction and Imputation
Pith reviewed 2026-05-13 06:37 UTC · model grok-4.3
The pith
A steered LLM unifies long-horizon traffic forecasting and high-missing-rate imputation on cellular networks.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
U-STS-LLM claims that a single LLM, steered by a Dynamic Spatio-Temporal Attention Bias Generator (which synthesizes a persistent functional graph with transient nodal states) and adapted via LoRA and gated fusion, learns a unified representation that outperforms prior STGNNs on both long-horizon forecasting and high-missing-rate imputation across real-world cellular datasets, while converging stably with far fewer trainable parameters.
What carries the argument
Dynamic Spatio-Temporal Attention Bias Generator that synthesizes a persistent functional graph with transient nodal states to produce explicit attention biases for the LLM.
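The excerpt gives no equation for how the persistent graph and the transient states are combined. A minimal sketch of one plausible additive-bias form (all names, shapes, and the outer-product fusion are assumptions, not taken from the paper):

```python
import numpy as np

def attention_bias(persistent_adj, transient_states, w_s=1.0, w_t=1.0):
    """Fuse a fixed functional graph with per-step node states into an
    additive attention bias (hypothetical form; the paper gives no equation).

    persistent_adj:   (N, N) static functional-graph weights
    transient_states: (T, N) per-timestep node activity
    Returns a (T, N, N) tensor added to the LLM's attention logits.
    """
    # Transient affinity: outer product of node states at each timestep.
    transient = np.einsum('ti,tj->tij', transient_states, transient_states)
    # Broadcast the persistent graph over time and combine the two terms.
    return w_s * persistent_adj[None, :, :] + w_t * transient

# Toy example: 3 nodes, 2 timesteps.
adj = np.eye(3)
states = np.ones((2, 3))
bias = attention_bias(adj, states)
```

Adding the bias to pre-softmax logits (rather than gating values) is what makes the steering "explicit": it shifts where attention mass goes without touching the frozen backbone's weights.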
If this is right
- A single model trained on the combined objective learns representations that transfer between forecasting and imputation without task-specific fine-tuning.
- Partial freezing plus low-rank adaptation keeps parameter count low while preserving the LLM's pre-trained sequence knowledge.
- Explicit graph-based steering removes the need for full retraining or heavy architectural changes when moving LLMs into structured domains.
- The same bias generator mechanism supports both short- and long-horizon predictions under the same training run.
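A unified multi-task objective of the kind these points presuppose can be sketched as a weighted sum of a forecasting term and a masked imputation term (the weighting scheme and loss choice are assumptions; the excerpt gives no formula):

```python
import numpy as np

def unified_loss(pred_forecast, true_forecast, pred_imputed, true_series,
                 missing_mask, alpha=0.5):
    """Single training objective combining forecasting and imputation.

    missing_mask: 1 where a value was artificially removed and must be
    imputed; the imputation term is scored only at those positions, so the
    model is never rewarded for copying observed values.
    """
    l_forecast = np.mean(np.abs(pred_forecast - true_forecast))
    masked = missing_mask.astype(bool)
    l_impute = np.mean(np.abs(pred_imputed[masked] - true_series[masked]))
    return alpha * l_forecast + (1.0 - alpha) * l_impute
```

Training both tasks under one scalar loss is what would force the shared representation the first bullet describes.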
Where Pith is reading between the lines
- The same steering approach could be tested on other spatio-temporal sensor streams such as road traffic counts or power-grid loads.
- If the bias generator proves robust, it reduces the incentive to build entirely new graph neural architectures for every new domain.
- Future work could measure whether the learned representations generalize across cities or operators without retraining the full model.
Load-bearing premise
The generator's combination of fixed graph structure and changing node states is sufficient to steer the LLM's attention toward stable convergence and better accuracy on cellular traffic without needing separate architectures for each task.
What would settle it
On a new cellular traffic dataset of comparable scale and missing patterns, the claim would be refuted if the model fails to match the best prior STGNN on either forecasting error or imputation error, or if training proves unstable, for example with a diverging loss after the initial epochs.
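Such a test requires a reproducible missing-rate protocol and a metric scored only on hidden entries. A minimal sketch, assuming Bernoulli point-missingness (the paper's exact missing pattern is not stated in the excerpt):

```python
import numpy as np

def make_missing_mask(shape, missing_rate, seed=0):
    """Bernoulli point-missing mask: True entries are hidden from the
    model and later scored (assumed protocol)."""
    rng = np.random.default_rng(seed)
    return rng.random(shape) < missing_rate

def masked_mae(pred, truth, mask):
    """Imputation error computed only on the hidden entries."""
    return float(np.mean(np.abs(pred[mask] - truth[mask])))

# "High missing rate" here means e.g. 90% of entries hidden.
mask = make_missing_mask((10000,), 0.9)
```

Fixing the seed makes the mask identical across all compared models, which is the precondition for the head-to-head comparison this criterion demands.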
Original abstract
The efficient operation of modern cellular networks hinges on the accurate analysis of spatio-temporal traffic data. Mastering these patterns is essential for core network functions, chiefly forecasting future load to pre-empt congestion and imputing missing values caused by sensor failures or transmission errors to ensure data continuity. While deeply connected, forecasting and imputation have historically evolved as separate sub-fields. The dominant paradigm, Spatio-Temporal Graph Neural Networks (STGNNs), while effective, are often specialized, computationally intensive, and exhibit limited generalization. Concurrently, adapting large pre-trained language models (LLMs) offers a powerful alternative for sequence modeling, yet existing approaches provide weak structural guidance, leading to unstable convergence and a narrow focus on forecasting. To bridge these gaps, we propose U-STS-LLM, a unified framework built on a spatio-temporally steered LLM. Our core innovation is a Dynamic Spatio-Temporal Attention Bias Generator that synthesizes a persistent functional graph with transient nodal states to explicitly steer the LLM's attention. Coupled with a partially frozen backbone tuned via Low-Rank Adaptation (LoRA) and a Gated Adaptive Fusion mechanism, the model achieves stable, parameter-efficient adaptation. Trained under a unified multi-task objective, U-STS-LLM learns a holistic data representation. Extensive experiments on real-world cellular datasets demonstrate that U-STS-LLM establishes new state-of-the-art performance in both long-horizon forecasting and high-missing-rate imputation, while maintaining remarkable training efficiency and stability, offering a novel blueprint for harnessing foundation models in structured, non-linguistic domains.
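The abstract names LoRA as the parameter-efficiency lever. A minimal sketch of the standard low-rank update it refers to (the standard LoRA form from Hu et al., 2022, not this paper's specific configuration):

```python
import numpy as np

def lora_forward(x, W_frozen, A, B, scaling=1.0):
    """LoRA: the frozen weight W is augmented by a trainable low-rank
    update B @ A; only A and B receive gradients.

    x: (d_in,), W_frozen: (d_out, d_in), A: (r, d_in), B: (d_out, r)
    """
    return W_frozen @ x + scaling * (B @ (A @ x))

# With rank r, the adapter adds r * (d_in + d_out) parameters per layer
# instead of the d_in * d_out a full fine-tune would train.
```

B is initialized to zero in standard LoRA, so at the start of adaptation the model behaves exactly like the frozen backbone, which is one plausible source of the training stability the paper emphasizes.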
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes U-STS-LLM, a unified framework adapting a pre-trained LLM for spatio-temporal traffic forecasting and imputation on cellular data. Its core component is a Dynamic Spatio-Temporal Attention Bias Generator that synthesizes a persistent functional graph with transient nodal states to steer LLM attention; this is paired with a partially frozen backbone adapted via LoRA, a Gated Adaptive Fusion module, and training under a single multi-task objective. The authors report that the resulting model achieves state-of-the-art performance on long-horizon forecasting and high-missing-rate imputation while exhibiting stable training and parameter efficiency, positioning it as a blueprint for foundation models in non-linguistic structured domains.
Significance. If the experimental claims are substantiated, the work offers a concrete demonstration of how structural priors can be injected into LLMs via attention biasing to handle spatio-temporal tasks without full task-specific redesigns. The combination of LoRA-based parameter-efficient tuning, a unified multi-task loss, and the bias generator addresses both the specialization and instability issues noted for prior STGNNs and LLM adaptations. Successful validation would strengthen the case for reusable foundation models in network management and similar domains.
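The Gated Adaptive Fusion module is only named, never specified, in the text under review. A common gating form consistent with that name (an assumption, not the paper's confirmed design):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(h_llm, h_st, W_g, b_g):
    """A learned gate over the concatenated features decides, per
    dimension, how much of the LLM stream vs. the spatio-temporal
    stream to keep (standard gating form; module details unspecified)."""
    g = sigmoid(np.concatenate([h_llm, h_st], axis=-1) @ W_g + b_g)
    return g * h_llm + (1.0 - g) * h_st
```

With zero-initialized gate parameters this reduces to an even average of the two streams, another plausible contributor to the stable adaptation the report credits to the framework.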
major comments (3)
- [Section 4] Section 4 (Experiments) and associated tables: the central SOTA claim for both long-horizon forecasting and high-missing-rate imputation rests on quantitative results, yet the manuscript provides no explicit ablation isolating the Dynamic Spatio-Temporal Attention Bias Generator from the contributions of LoRA adaptation and the unified multi-task objective. Without such controls (e.g., a variant with the generator disabled), it is impossible to confirm that the generator is the load-bearing mechanism for the reported stability and generalization.
- [Section 3.2] Section 3.2 (Dynamic Spatio-Temporal Attention Bias Generator): the description states that the generator 'explicitly steers' LLM attention by synthesizing persistent and transient components, but no attention-map visualizations, gradient analyses, or quantitative metrics (e.g., attention entropy before/after) are supplied to verify that the synthesized bias actually alters attention patterns in the intended way rather than being overridden by the frozen backbone or LoRA updates.
- [Section 4.3] Section 4.3 (Ablation studies): the reported training stability and efficiency gains are attributed to the overall framework, yet no comparison is given against a pure LoRA-tuned LLM baseline or against STGNNs under identical data splits and missing-rate protocols; this omission weakens the assertion that the proposed steering mechanism is necessary for the observed improvements.
minor comments (3)
- [Section 3] Notation in Section 3: the symbols for the persistent functional graph and transient nodal states are introduced without an explicit equation linking them to the attention bias tensor; a single defining equation would improve traceability.
- [Figure 2] Figure 2 (model architecture): the diagram does not clearly distinguish the flow of the bias generator output into the LLM layers versus the Gated Adaptive Fusion; adding arrows or a legend would reduce ambiguity.
- [Related Work] Related work section: several recent LLM-for-time-series papers are cited, but the discussion does not explicitly contrast the proposed attention-bias approach with prior graph-augmented LLM methods; a short comparative paragraph would strengthen positioning.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. We have addressed each major point by incorporating additional experiments, visualizations, and baseline comparisons into the revised manuscript. Our responses are provided point by point below.
Point-by-point responses
Referee: [Section 4] Section 4 (Experiments) and associated tables: the central SOTA claim for both long-horizon forecasting and high-missing-rate imputation rests on quantitative results, yet the manuscript provides no explicit ablation isolating the Dynamic Spatio-Temporal Attention Bias Generator from the contributions of LoRA adaptation and the unified multi-task objective. Without such controls (e.g., a variant with the generator disabled), it is impossible to confirm that the generator is the load-bearing mechanism for the reported stability and generalization.
Authors: We agree that an explicit ablation isolating the Dynamic Spatio-Temporal Attention Bias Generator is necessary to substantiate its specific contribution. In the revised manuscript, we have added a dedicated ablation study in Section 4 that compares the full U-STS-LLM against a controlled variant where the generator is disabled (replaced by standard self-attention) while retaining LoRA adaptation and the unified multi-task objective. The new results, presented in an updated table, show clear degradation in both forecasting accuracy and training stability when the generator is removed, confirming its load-bearing role. revision: yes
Referee: [Section 3.2] Section 3.2 (Dynamic Spatio-Temporal Attention Bias Generator): the description states that the generator 'explicitly steers' LLM attention by synthesizing persistent and transient components, but no attention-map visualizations, gradient analyses, or quantitative metrics (e.g., attention entropy before/after) are supplied to verify that the synthesized bias actually alters attention patterns in the intended way rather than being overridden by the frozen backbone or LoRA updates.
Authors: We acknowledge the importance of direct empirical verification that the bias steers attention as described. In the revised Section 3.2, we have added attention-map visualizations for representative layers, along with quantitative metrics including attention entropy computed before and after bias application. These analyses demonstrate that the synthesized bias reduces entropy and shifts focus toward spatio-temporal structures, indicating it is not overridden by the frozen backbone or LoRA updates. revision: yes
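The entropy metric the authors invoke has a standard definition: Shannon entropy of the softmax attention distribution, where a drop after adding the bias indicates more focused attention. A sketch (the authors' exact measurement protocol is not shown):

```python
import numpy as np

def attention_entropy(logits, axis=-1):
    """Shannon entropy of the softmax attention distribution.
    Lower entropy after adding the bias = more concentrated attention."""
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=axis, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=axis)
```

Uniform logits give the maximum entropy log(N); a sharply peaked distribution gives a value near zero, so comparing the statistic before and after bias application directly tests the steering claim.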
Referee: [Section 4.3] Section 4.3 (Ablation studies): the reported training stability and efficiency gains are attributed to the overall framework, yet no comparison is given against a pure LoRA-tuned LLM baseline or against STGNNs under identical data splits and missing-rate protocols; this omission weakens the assertion that the proposed steering mechanism is necessary for the observed improvements.
Authors: We agree that direct comparisons under identical protocols are required. We have added a pure LoRA-tuned LLM baseline (without the attention bias generator) to Section 4.3, trained and evaluated on the exact same data splits and missing-rate settings. We have also re-executed the STGNN baselines under these identical protocols. The updated tables show that the full model outperforms both the pure LoRA baseline and the STGNNs, particularly in high-missing-rate imputation, supporting the necessity of the steering mechanism. revision: yes
Circularity Check
No circularity: framework claims rest on experimental results, not self-referential derivations
Full rationale
The paper introduces U-STS-LLM via a high-level architectural description (Dynamic Spatio-Temporal Attention Bias Generator synthesizing persistent functional graph with transient nodal states, combined with LoRA and gated fusion under a unified objective). No equations, derivation steps, or parameter-fitting procedures are exhibited in the provided text that would reduce any 'prediction' or 'steering' claim to a fitted input or self-citation by construction. Performance assertions are grounded in experiments on cellular datasets rather than tautological redefinitions or load-bearing self-citations. The central innovation is therefore presented as an independent empirical contribution.
Axiom & Free-Parameter Ledger
free parameters (2)
- LoRA rank and scaling
- Attention bias generator hyperparameters
axioms (1)
- domain assumption Pre-trained LLM weights contain transferable sequence knowledge that can be steered by external bias for non-linguistic spatio-temporal data.