pith · machine review for the scientific record

arxiv: 2605.04902 · v2 · submitted 2026-05-06 · 💻 cs.DB

Recognition: unknown

AegisTS: A Hierarchical Agent System with Reinforcement Learning for Multivariate Time Series Data Cleaning

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 16:21 UTC · model grok-4.3

classification 💻 cs.DB
keywords: multivariate time series · data cleaning · reinforcement learning · hierarchical agents · data quality issues · time series analytics

The pith

A hierarchical reinforcement learning agent system can jointly optimize the processing order and method selection to clean multiple quality issues in multivariate time series data without needing ground truth.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper seeks to demonstrate that a new agent-based approach can address the common problem of multiple simultaneous data quality issues in multivariate time series, where traditional methods handle only isolated problems and often require ground truth that is rarely available. The approach uses reinforcement learning with a hierarchical setup: one agent sequences the fixes while another chooses specific cleaners, and a combined reward encourages both good cleaning and better results in follow-on tasks. A sympathetic reader would care because this would remove a major barrier to reliable analytics on real-world sensor or financial data, which is almost always imperfect. If the approach works, it points toward more automated and effective data preparation pipelines.

Core claim

The paper establishes that framing multivariate time series cleaning as a joint optimization of issue order and cleaning model selection, solved through a hierarchical agent architecture with a dual-stage reward that couples cleaning quality and downstream performance, enables effective navigation of the cleaning pipeline space and yields results superior to existing limited-scope methods.

What carries the argument

The hierarchical agent architecture, consisting of a high-level agent that determines the order for processing data quality issues and a low-level agent that selects appropriate cleaning methods for each, directed by a dual-stage reward mechanism linking upstream cleaning to downstream analytics performance.
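
To make the division of labor concrete, here is a minimal runnable sketch of the two-level decision loop, collapsed to a tabular epsilon-greedy simplification. The issue names, cleaner names, toy reward, and bandit-style updates are all assumptions made for illustration; the paper's actual agents, operator library, and training algorithm are not reproduced here.

import itertools
import random

# Hypothetical issue types and candidate cleaners; the paper's real
# operator library is not published at this level of detail.
ISSUES = ["missing_values", "outliers", "constraint_violations"]
CLEANERS = {
    "missing_values": ["interpolate", "knn_impute"],
    "outliers": ["iqr_filter", "median_smooth"],
    "constraint_violations": ["clip_to_bounds", "constraint_repair"],
}

def toy_reward(pipeline):
    # Stand-in for the dual-stage reward so the loop runs end to end:
    # a fixed score per (issue, cleaner) step plus a little noise.
    base = {("missing_values", "knn_impute"): 0.9,
            ("outliers", "iqr_filter"): 0.8}
    return sum(base.get(step, 0.5) for step in pipeline) + random.gauss(0, 0.05)

orderings = list(itertools.permutations(ISSUES))            # high-level action space
q_high = {o: 0.0 for o in orderings}                        # value of each ordering
q_low = {(i, c): 0.0 for i in ISSUES for c in CLEANERS[i]}  # value of each cleaner
n_high = {o: 0 for o in orderings}
n_low = {k: 0 for k in q_low}

for episode in range(500):
    explore = lambda: random.random() < 0.2                 # epsilon-greedy
    # High-level agent: pick the order in which issues are processed.
    order = random.choice(orderings) if explore() else max(orderings, key=q_high.get)
    pipeline = []
    # Low-level agent: pick a concrete cleaner for each issue in that order.
    for issue in order:
        opts = CLEANERS[issue]
        cleaner = (random.choice(opts) if explore()
                   else max(opts, key=lambda c: q_low[(issue, c)]))
        pipeline.append((issue, cleaner))
    r = toy_reward(pipeline)
    # Incremental-mean value updates shared by both levels.
    n_high[order] += 1
    q_high[order] += (r - q_high[order]) / n_high[order]
    for step in pipeline:
        n_low[step] += 1
        q_low[step] += (r - q_low[step]) / n_low[step]

print("learned order:", max(orderings, key=q_high.get))
print("learned cleaners:",
      {i: max(CLEANERS[i], key=lambda c: q_low[(i, c)]) for i in ISSUES})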

If this is right

  • The system can manage co-occurring issues such as missing values, outliers, and constraint violations in a single pipeline.
  • Cleaning quality improves by up to 96% and downstream task performance by up to 27% over prior methods.
  • Optimization proceeds without ground truth data or domain-specific rules, making it suitable for practical applications.
  • Joint decision-making on order and methods allows efficient exploration of the combinatorially large space of cleaning sequences (with k issue types and m candidate cleaners per issue there are k!·m^k candidate pipelines, e.g. 3!·3³ = 162 for three issues with three cleaners each).

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Similar hierarchical decision structures might apply to cleaning other data formats like images or text.
  • Emphasizing end-task performance in rewards could become standard for designing unsupervised data improvement systems.
  • Future work could test the approach on streaming time series where issues arise dynamically.

Load-bearing premise

The dual-stage reward mechanism can reliably guide the hierarchical agents toward optimal cleaning pipelines even when no ground truth data is available.
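
Figures 4 and 5 report parameter studies of λ and μ, which hints that the reward is a weighted combination of stage-level scores. The sketch below is a hedged guess at that shape, assuming a ground-truth-free upstream proxy and a downstream score delta; the paper's exact formulation and the precise roles of λ and μ may differ.

def dual_stage_reward(upstream_proxy, downstream_gain, lam=0.5, mu=0.5):
    """Assumed shape of the dual-stage reward (not the paper's equation):
    a weighted sum of (1) an upstream cleaning proxy computable without
    ground truth, e.g. the fraction of row/column constraints satisfied
    after cleaning, and (2) the change in downstream task score, e.g. a
    drop in forecasting NMSE or a gain in classification accuracy."""
    return lam * upstream_proxy + mu * downstream_gain

If the premise fails and the proxy term correlates only weakly with true quality, the agents can maximize this objective without actually improving the data, which is exactly the failure mode the referee report below presses on.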

What would settle it

Running the system on a benchmark dataset where ground truth is available and checking whether its cleaning quality and downstream improvements match those of methods that use the ground truth for supervision; clearly falling short of the supervised methods would undermine the load-bearing premise. A sketch of such a harness follows.
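
A minimal harness for that test, assuming corrupted/ground-truth pairs and a downstream scorer exist; every name here is a placeholder rather than the paper's benchmark code.

import numpy as np

def nmse(cleaned, truth):
    # Normalized mean squared error against the withheld ground truth.
    return float(np.mean((cleaned - truth) ** 2) / np.var(truth))

def settle_it(clean_unsupervised, clean_supervised, corrupted, truth, downstream_score):
    """Compare the ground-truth-free cleaner against a supervised baseline on
    the same corrupted series. If the unsupervised system does not approach
    the supervised bound on either metric, the premise is in doubt."""
    out_u = clean_unsupervised(corrupted)
    out_s = clean_supervised(corrupted, truth)  # baseline may consult the truth
    return {
        "nmse_unsupervised": nmse(out_u, truth),
        "nmse_supervised": nmse(out_s, truth),
        "task_unsupervised": downstream_score(out_u),
        "task_supervised": downstream_score(out_s),
    }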

Figures

Figures reproduced from arXiv: 2605.04902 by Lu Chen, Mourad Khayati, Tianyi Li, Yuanyuan Yao, Yuhan Shi.

Figure 1. Impact of cleaning execution order. view at source ↗
Figure 2. The Overall Framework of AegisTS. view at source ↗
Figure 3. Ablation Evaluation. view at source ↗
Figure 4. Parameter Study of λ values. Panels (a) and (b): NMSE versus λ₁ and λ₂ on the ETTh1 and IDF_OilTemp datasets. view at source ↗
Figure 5. Parameter Study of μ values. view at source ↗
read the original abstract

Multivariate time series (MTS) are frequently affected by co-occurring quality issues, such as missing values, outliers, and constraint violations, which significantly undermine downstream analytics. Existing cleaning approaches fix only a limited set of such issues, making them ill-suited for scenarios where multiple quality problems arise simultaneously. Furthermore, these methods commonly depend on the availability of ground truth data or domain-specific rules, both of which are rarely accessible in real-world applications. In this paper, we introduce AegisTS, an agent system with reinforcement learning designed to clean multiple data quality issues in MTS. We cast the cleaning process as a joint optimization problem that simultaneously handles quality issue order and cleaning model selection, allowing efficient navigation of the large space of possible cleaning pipelines. Our framework relies on a hierarchical agent architecture, where a high-level agent determines the order in which data quality issues should be processed, while a low-level agent identifies the most suitable cleaning method for each issue. To guide the agent toward an optimal cleaning pipeline, we propose a dual-stage reward mechanism that couples upstream (cleaning) and downstream performance, enabling effective optimization without relying on ground truth. Our experimental results show that AegisTS consistently outperforms existing methods, achieving up to 96% improvement in data cleaning quality and 27% improvement in downstream performance.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes AegisTS, a hierarchical reinforcement learning agent system for cleaning multivariate time series data affected by co-occurring issues such as missing values, outliers, and constraint violations. It models the problem as joint optimization over issue processing order (high-level agent) and method selection (low-level agent), guided by a dual-stage reward that combines upstream cleaning signals with downstream task performance to enable optimization without ground truth. Experiments are reported to show consistent outperformance, with up to 96% gains in cleaning quality and 27% in downstream performance.

Significance. If the dual-stage reward reliably proxies true cleaning quality and the reported gains prove robust across datasets and baselines, the work could advance automated, ground-truth-free cleaning pipelines for complex MTS, with applications in sensor networks, finance, and IoT where multiple quality issues co-occur and manual rules are unavailable. The hierarchical RL framing provides a principled way to search large pipeline spaces.

major comments (2)
  1. [Abstract and §4 (Experiments)] Abstract and experimental evaluation: the central claim of up to 96% cleaning-quality and 27% downstream improvement rests on the dual-stage reward guiding agents to superior pipelines, yet no information is supplied on the datasets used, the baselines compared, the statistical significance tests performed, or the precise formulation and weighting of the upstream proxy component of the reward. This prevents evaluation of whether the upstream signal actually correlates with true quality metrics on data with injected errors.
  2. [Methodology (dual-stage reward)] The dual-stage reward mechanism (described in the abstract and methodology): the upstream component is asserted to serve as a reliable proxy for cleaning quality (missing values, outliers, constraints) without ground truth, but the manuscript provides no controlled validation—e.g., on synthetic MTS with known injected errors—showing that proxy scores correlate with actual post-cleaning quality or downstream gains. If this correlation is weak, the joint optimization over order and method selection will optimize for the proxy rather than real quality, undermining the reported improvements.
minor comments (1)
  1. [Abstract] The abstract claims 'consistent outperformance' but does not name the specific existing methods used as baselines; adding this list would improve clarity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below and commit to revisions that will improve clarity and provide the requested validation.

read point-by-point responses
  1. Referee: [Abstract and §4 (Experiments)] Abstract and experimental evaluation: the central claim of up to 96% cleaning-quality and 27% downstream improvement rests on the dual-stage reward guiding agents to superior pipelines, yet no information is supplied on the datasets used, the baselines compared, the statistical significance tests performed, or the precise formulation and weighting of the upstream proxy component of the reward. This prevents evaluation of whether the upstream signal actually correlates with true quality metrics on data with injected errors.

    Authors: We agree that greater transparency is needed for independent evaluation. In the revised manuscript we will expand the abstract to summarize the datasets (both real-world sensor and financial MTS as well as synthetic series with controlled co-occurring errors), list the full set of baselines, and report statistical significance (paired t-tests with p-values). We will also move the exact mathematical definition of the upstream proxy—including its component scores for missing-value imputation, outlier detection, and constraint satisfaction together with the weighting coefficients—into the main methodology section and add a short correlation analysis between proxy values and ground-truth quality metrics on held-out data. revision: yes

  2. Referee: [Methodology (dual-stage reward)] The dual-stage reward mechanism (described in the abstract and methodology): the upstream component is asserted to serve as a reliable proxy for cleaning quality (missing values, outliers, constraints) without ground truth, but the manuscript provides no controlled validation—e.g., on synthetic MTS with known injected errors—showing that proxy scores correlate with actual post-cleaning quality or downstream gains. If this correlation is weak, the joint optimization over order and method selection will optimize for the proxy rather than real quality, undermining the reported improvements.

    Authors: We acknowledge that an explicit controlled validation of the upstream proxy would strengthen the central claim. Although the current experiments already compare the dual-stage reward against single-stage variants and show consistent gains in both cleaning metrics and downstream performance, we did not include a dedicated synthetic-data study. We will add such experiments in the revision: synthetic MTS will be generated with known injected missing values, outliers, and constraint violations; the upstream proxy will be computed without access to ground truth; and we will report Pearson/Spearman correlations between proxy scores and both true post-cleaning quality and downstream task improvement. These results will be presented in a new subsection of the experimental evaluation. revision: yes
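
The promised correlation analysis is straightforward to implement. A minimal sketch follows, with fabricated placeholder arrays standing in for the authors' measurements; only the scipy calls are real.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholders: per-run true post-cleaning quality on synthetic MTS with
# injected errors (e.g. 1 - NMSE against the clean truth), and the
# ground-truth-free proxy score computed on the same runs.
true_quality = rng.uniform(0.0, 1.0, size=50)
proxy_scores = true_quality + rng.normal(0.0, 0.1, 50)

pearson_r, pearson_p = stats.pearsonr(proxy_scores, true_quality)
spearman_r, spearman_p = stats.spearmanr(proxy_scores, true_quality)
print(f"Pearson r={pearson_r:.3f} (p={pearson_p:.1e}), "
      f"Spearman rho={spearman_r:.3f} (p={spearman_p:.1e})")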

Circularity Check

0 steps flagged

No circularity: hierarchical RL design and dual-stage reward are independent proposals validated by external experiments

full rationale

The paper presents AegisTS as a proposed hierarchical agent architecture with a dual-stage reward that explicitly incorporates downstream task performance as an external optimization signal, without ground truth. No equations, self-definitions, or fitted parameters are shown that would reduce the claimed cleaning quality or downstream gains to the inputs by construction. The reported improvements (up to 96% and 27%) are framed as empirical comparisons against existing methods rather than derived predictions. The framework relies on standard RL components and external benchmarks, leaving the central claims testable through falsifiable experiments rather than tautological.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that reinforcement learning agents can discover effective cleaning pipelines when guided solely by a dual-stage reward that does not require ground truth; no free parameters or invented physical entities are described.

axioms (1)
  • domain assumption: A dual-stage reward that combines cleaning quality and downstream task performance can guide agents to optimal pipelines without ground truth data.
    This assumption is required for the framework to operate in real-world settings where ground truth is unavailable, as stated in the abstract.

pith-pipeline@v0.9.0 · 5543 in / 1358 out tokens · 77830 ms · 2026-05-08T16:21:01.387848+00:00 · methodology

discussion (0)

