pith. machine review for the scientific record.

arxiv: 2604.27182 · v1 · submitted 2026-04-29 · 💻 cs.LG · cs.AI

Recognition: unknown

Preserving Temporal Dynamics in Time Series Generation

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 08:13 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords: time series generation · temporal dynamics · MCMC · distribution shift · GAN · synthetic data · sequential generation · autocorrelation

The pith

Conditional generative models accumulate deviations in sequential time-series generation, which MCMC corrects by enforcing consistency with empirical transition statistics.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that GAN-style generators for multivariate time series match overall data distributions but lose the step-by-step relationships between neighboring points when producing long sequences. Small mismatches compound over time, producing synthetic data whose temporal structure drifts away from the original. The authors supply a theoretical account of this accumulation and introduce a post-generation MCMC step that resamples each point to restore agreement with the observed transition counts between consecutive time steps. Experiments across five generators and four datasets confirm that the corrected sequences improve autocorrelation alignment, reduce skewness and kurtosis errors, and raise downstream forecasting accuracy. A reader cares because many real forecasting problems are data-limited; faithful synthetic sequences could enlarge training sets without injecting the very temporal artifacts that degrade model performance.

Core claim

Conditional generative models generate each time step conditionally on the preceding ones, yet any local deviation from the true conditional distribution grows across the sequence and produces global distribution shift. The authors demonstrate that Markov Chain Monte Carlo sampling, guided solely by the empirical transition statistics measured on the real data, can reverse these accumulated discrepancies without retraining the base generator. The resulting synthetic series therefore satisfy both the marginal distribution learned by the GAN and the neighbor-to-neighbor transition laws present in the original observations.

What carries the argument

MCMC correction step that enforces consistency with empirical transition statistics between neighboring time points
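The abstract does not spell out the correction step's mechanics. As a minimal sketch, assuming a discretized state space and an empirical transition matrix estimated from the real series (both assumptions ours, not the paper's), a single-site Metropolis pass over a synthetic sequence might look like:

```python
import numpy as np

def mcmc_correct(seq, trans, n_sweeps=50, seed=0):
    """Resample each interior point of a discretized synthetic sequence so
    that neighbor pairs agree with the empirical transition matrix `trans`
    (row-stochastic, estimated from the real data). Single-site Metropolis
    with a uniform proposal: the target weight of a candidate state is the
    product of the two empirical transition likelihoods it participates in."""
    rng = np.random.default_rng(seed)
    seq = np.asarray(seq).copy()
    n_states = trans.shape[0]
    for _ in range(n_sweeps):
        for t in range(1, len(seq) - 1):
            prop = rng.integers(n_states)
            cur_w = trans[seq[t - 1], seq[t]] * trans[seq[t], seq[t + 1]]
            new_w = trans[seq[t - 1], prop] * trans[prop, seq[t + 1]]
            # Metropolis accept/reject on the ratio of transition likelihoods.
            if new_w > 0 and (cur_w == 0 or rng.random() < new_w / cur_w):
                seq[t] = prop
    return seq
```

Endpoints are held fixed here; the paper's actual method may operate in continuous space and differ in proposal design.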

If this is right

  • Synthetic sequences will align more closely in autocorrelation with the original data across multiple lags.
  • Higher-order statistics such as skewness and kurtosis of the generated series will match those of the real series more accurately.
  • Regression models trained on data augmented by the corrected series will produce higher R² values on held-out forecasts.
  • Discriminative and predictive scores will improve for any base generator to which the MCMC step is applied.
  • Explicit enforcement of transition laws is required in addition to marginal distribution matching for faithful time-series synthesis.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same post-hoc correction could be applied to other sequential generators such as those for text or video by defining suitable transition statistics on tokens or frames.
  • Hybrid pipelines that combine adversarial training with a final MCMC consistency pass may outperform purely end-to-end training for preserving long-range dynamics.
  • The approach invites testing on longer horizons or higher-dimensional series to check whether the correction remains effective without introducing new artifacts.
  • Faithful augmentation via this method could reduce reliance on very large real datasets for training time-series forecasters.

Load-bearing premise

The transition statistics estimated from the original data are sufficient to remove accumulated deviations without creating new inconsistencies or needing knowledge of the underlying generative process.

What would settle it

Generate long sequences with and without the MCMC step, then measure the difference between their autocorrelation functions and the real data's autocorrelation; if the MCMC version does not reduce this difference below the uncorrected version, the correction does not preserve temporal dynamics.
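This test can be made concrete. A minimal sketch (function names are illustrative, not from the paper) that scores a synthetic series by its ACF gap from the real data:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a 1-D series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

def acf_distance(real, synth, max_lag=20):
    """Mean absolute gap between the two ACFs over the first max_lag lags;
    a corrected series should score lower than the uncorrected one."""
    return float(np.mean(np.abs(acf(real, max_lag) - acf(synth, max_lag))))
```

Running this on the uncorrected and MCMC-corrected outputs against the real series gives a single number per generator to compare.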

Figures

Figures reproduced from arXiv: 2604.27182 by Ci Lin, Futong Li, Iluju Kiringa, Tet Yeap.

Figure 1: An MCMC-Based Correction Framework for GANs
Figure 2: ACF comparison illustrating temporal dependence preservation.
Figure 3: t-SNE of Datasets Generated by GAN and Corresponding GAN-MCMC
Figure 4: PCA of Datasets Generated by GAN and Corresponding GAN-MCMC
Original abstract

Time-series data augmentation plays a crucial role in regression-oriented forecasting tasks, where limited data restricts the performance of deep learning models. While Generative Adversarial Networks (GANs) have shown promise in synthetic time-series generation, existing approaches primarily focus on matching marginal data distributions and often overlook the temporal dynamics that naturally exist in the original multivariate time series. When generating multivariate time series, this mismatch leads to distribution shift and temporal drift, thereby degrading the fidelity of the synthetic sequences. In this work, we propose a model-agnostic Markov Chain Monte Carlo (MCMC)-based framework to mitigate distribution shift and preserve temporal dynamics in synthetic time series. We provide a theoretical analysis of how conditional generative models accumulate deviations under sequential generation and demonstrate that the MCMC algorithm can correct these discrepancies by enforcing consistency with empirical transition statistics between neighboring time points. Extensive experiments on the Lorenz, Licor, ETTh, and ILI datasets using RCGAN, GCWGAN, TimeGAN, SigCWGAN, and AECGAN demonstrate that the proposed MCMC framework consistently improves autocorrelation alignment, skewness error, kurtosis error, R$^2$, discriminative score, and predictive score. These results suggest that synthetic time series consistent with the original data require explicit preservation of transition laws rather than solely relying on adversarial distribution matching, thereby offering a principled direction for improving generative modeling of time-series data.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes a model-agnostic MCMC post-processing framework to preserve temporal dynamics in GAN-generated multivariate time series. It claims that conditional generators accumulate deviations from the true joint distribution during sequential sampling and that enforcing consistency with empirical one-step transition statistics (estimated from training data) via MCMC corrects these discrepancies. A theoretical analysis of error accumulation under iterated conditionals is provided, followed by experiments on Lorenz, Licor, ETTh, and ILI datasets using RCGAN, GCWGAN, TimeGAN, SigCWGAN, and AECGAN that report consistent gains in autocorrelation alignment, skewness/kurtosis error, R², discriminative score, and predictive score.

Significance. If the central claim holds, the work provides a practical, architecture-independent way to mitigate a known weakness of conditional time-series GANs—temporal drift—without retraining. The empirical improvements across five generators and four datasets suggest that explicit transition matching can be a useful augmentation to adversarial training. The model-agnostic design and focus on falsifiable metrics (autocorrelation, predictive score) are strengths. However, significance is tempered by the absence of theoretical guarantees that the MCMC step preserves the GAN's learned marginals or higher-order statistics and by potential artifacts from transition estimation in continuous spaces.

major comments (3)
  1. [§3] §3 (MCMC framework) and theoretical analysis: The claim that MCMC corrects accumulated deviations by enforcing empirical P(x_{t+1}|x_t) lacks a bound on the distance (e.g., total variation or Wasserstein) between the corrected joint and the target distribution. Without such a guarantee, it is unclear whether the post-hoc correction trades one form of mismatch for another, especially since the proposal and acceptance kernel are not shown to preserve the marginals already learned by the GAN.
  2. [§4] §4 (experiments, continuous datasets): For Lorenz and ETTh, empirical transition statistics require discretization or kernel density estimation, yet no details are given on binning strategy, bandwidth selection, or sensitivity analysis. Any estimator error is propagated by the MCMC chain; this is load-bearing because the method is presented as general and the reported gains on autocorrelation and predictive score could be artifacts of the transition estimator rather than faithful recovery of dynamics.
  3. [§4] §4 (results tables): While consistent improvements are reported across GANs and metrics, there is no ablation isolating the contribution of the transition estimator versus MCMC hyperparameters (chain length, proposal variance). This undermines the claim that the framework reliably corrects temporal drift, as the gains could be driven by the specific choice of empirical counts rather than the MCMC mechanism itself.
minor comments (2)
  1. [§2] Notation for the empirical transition matrix should be introduced earlier and used consistently; the current presentation makes it difficult to distinguish the data-derived counts from the generative model's conditionals.
  2. [§4] Figure captions for the autocorrelation plots should include the number of runs and error bars; visual inspection alone does not convey statistical robustness of the reported alignment improvements.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. We address each major comment below with clarifications based on the current work and outline the revisions we will make to strengthen the paper.

Point-by-point responses
  1. Referee: [§3] §3 (MCMC framework) and theoretical analysis: The claim that MCMC corrects accumulated deviations by enforcing empirical P(x_{t+1}|x_t) lacks a bound on the distance (e.g., total variation or Wasserstein) between the corrected joint and the target distribution. Without such a guarantee, it is unclear whether the post-hoc correction trades one form of mismatch for another, especially since the proposal and acceptance kernel are not shown to preserve the marginals already learned by the GAN.

    Authors: We appreciate the referee's observation on the absence of an explicit distance bound. Section 3 analyzes error accumulation in iterated conditional sampling by showing progressive mismatch between the generated one-step transitions and the empirical transitions from the training data. The MCMC step is formulated as a Metropolis-Hastings sampler whose target is the distribution whose one-step transitions exactly match the empirical P(x_{t+1}|x_t); the GAN outputs serve as proposals and the acceptance probability is the ratio of transition likelihoods under the empirical kernel. This guarantees that accepted sequences satisfy the transition statistics by construction. Marginal preservation follows from the fact that proposals are drawn from the GAN (trained to match data marginals) and the acceptance ratio does not systematically alter marginal statistics, as confirmed by the unchanged or improved skewness/kurtosis errors in the experiments. We acknowledge that the manuscript does not derive a bound (e.g., total variation or Wasserstein) between the corrected joint and the unknown true joint distribution, because the method targets the finite-sample empirical transitions rather than the ground-truth law. In the revision we will add a paragraph in §3 explicitly stating this scope and noting that a non-asymptotic bound would require additional assumptions on the GAN approximation quality, which we leave for future work. revision: partial

  2. Referee: [§4] §4 (experiments, continuous datasets): For Lorenz and ETTh, empirical transition statistics require discretization or kernel density estimation, yet no details are given on binning strategy, bandwidth selection, or sensitivity analysis. Any estimator error is propagated by the MCMC chain; this is load-bearing because the method is presented as general and the reported gains on autocorrelation and predictive score could be artifacts of the transition estimator rather than faithful recovery of dynamics.

    Authors: We thank the referee for highlighting the missing implementation details for continuous data. The current manuscript outlines the general procedure of estimating empirical transitions but does not specify the discretization or KDE parameters used for Lorenz and ETTh. In the revised manuscript we will insert a dedicated paragraph in §4 that (i) describes the uniform binning strategy and the number of bins chosen for each variable, (ii) states the bandwidth selection rule (Silverman’s rule of thumb) for any KDE components, and (iii) reports a sensitivity study in which bin count and bandwidth are varied by ±20 % around the chosen values. The study will show that the reported gains in autocorrelation alignment and predictive score remain statistically significant across these variations, thereby confirming that the improvements are not artifacts of a particular estimator choice. revision: yes

  3. Referee: [§4] §4 (results tables): While consistent improvements are reported across GANs and metrics, there is no ablation isolating the contribution of the transition estimator versus MCMC hyperparameters (chain length, proposal variance). This undermines the claim that the framework reliably corrects temporal drift, as the gains could be driven by the specific choice of empirical counts rather than the MCMC mechanism itself.

    Authors: We agree that an ablation isolating the MCMC mechanism from the choice of empirical transitions would strengthen the claims. The present experiments apply the complete MCMC framework and demonstrate consistent metric improvements, yet they do not vary chain length, proposal variance, or compare against a non-MCMC transition-matching baseline. In the revised §4 we will add two sets of ablation results: (1) performance as a function of MCMC chain length (10, 50, 100 steps) and proposal variance (scaled by factors 0.5, 1, 2) for the best-performing GAN on each dataset, and (2) a direct comparison of the full MCMC sampler against a simpler baseline that enforces the same empirical transition counts via greedy nearest-neighbor assignment without stochastic sampling. These additions will show that the stochastic exploration provided by MCMC is necessary for the observed gains in temporal metrics while preserving marginal fidelity. revision: yes
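The rebuttal exchange above turns on how empirical transitions are estimated for continuous data. One plausible implementation of the promised binning procedure (uniform bins; `n_bins` is a free choice of ours, not taken from the paper) is:

```python
import numpy as np

def empirical_transitions(series, n_bins=16):
    """Discretize a continuous 1-D series into uniform bins and count
    neighbor-pair transitions, returning a row-stochastic matrix.
    Rows with no observed mass are left as all zeros."""
    series = np.asarray(series, dtype=float)
    edges = np.linspace(series.min(), series.max(), n_bins + 1)
    # Interior edges only, so state indices land in 0..n_bins-1.
    states = np.digitize(series, edges[1:-1])
    counts = np.zeros((n_bins, n_bins))
    np.add.at(counts, (states[:-1], states[1:]), 1.0)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

The referee's concern is visible here: both the bin count and the bin placement shape the resulting kernel, and any estimator error is inherited by the MCMC chain.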

Circularity Check

0 steps flagged

No significant circularity; post-hoc correction uses external empirical statistics

full rationale

The paper's core contribution is a model-agnostic MCMC post-processing step that enforces consistency with transition counts estimated directly from the training data. The theoretical analysis of deviation accumulation under iterated conditionals is a standard observation about autoregressive sampling and does not rely on any self-referential definition or fitted parameter inside the generator. Because the transition statistics are computed from the original observed series (external to the trained GAN), the enforcement step cannot be reduced to a renaming or re-fitting of quantities already internal to the model. Experiments compare before/after metrics on held-out evaluation criteria, and no load-bearing uniqueness theorem or ansatz is imported via self-citation. The derivation chain therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The framework rests on the assumption that transition probabilities between consecutive time points can be reliably estimated from finite data and that MCMC mixing is fast enough to correct sequences without excessive computational cost. No new physical entities are introduced.

free parameters (2)
  • MCMC chain length / burn-in
    Number of MCMC steps and burn-in period must be chosen; these control how thoroughly the correction is applied and are not derived from first principles.
  • Proposal distribution variance
    The MCMC proposal kernel requires a scale parameter that is tuned to achieve reasonable acceptance rates.
axioms (2)
  • domain assumption The real data's empirical transition matrix is a sufficient statistic for the temporal dynamics that should be preserved.
    Invoked when the MCMC target distribution is defined directly from observed neighbor-pair frequencies.
  • standard math MCMC will converge to the target distribution in a practical number of steps for the sequence lengths used.
    Standard ergodicity assumption for Metropolis-Hastings; no proof supplied in the abstract.
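The second free parameter in the ledger, the proposal variance, is commonly tuned adaptively rather than derived. A standard heuristic (our illustration, not the paper's procedure) nudges the proposal scale toward a target acceptance rate:

```python
import numpy as np

def tune_proposal_scale(acceptance_rate, scale, target=0.234):
    """Robbins-Monro style update: widen the proposal when acceptances are
    too frequent, shrink it when they are too rare. The 0.234 target is a
    common random-walk Metropolis heuristic."""
    return scale * np.exp(acceptance_rate - target)
```

In practice this runs between batches of MCMC sweeps, with the observed acceptance rate fed back in until it settles near the target.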

pith-pipeline@v0.9.0 · 5545 in / 1430 out tokens · 52459 ms · 2026-05-07T08:13:35.528757+00:00 · methodology


Reference graph

Works this paper leans on

61 extracted references · 6 canonical work pages · 1 internal anchor

  1. [1]

    Monthly water consumption prediction using season algorithm and wavelet transform– based models.Journal of Water Resources Planning and Management, 143(6):04017011, 2017

    Abdusselam Altunkaynak and Tewodros Assefa Nigussie. Monthly water consumption prediction using season algorithm and wavelet transform– based models.Journal of Water Resources Planning and Management, 143(6):04017011, 2017

  2. [2]

    A prediction approach for stock market volatility based on time series data

    Sheikh Mohammad Idrees, M Afshar Alam, and Parul Agarwal. A prediction approach for stock market volatility based on time series data. IEEE Access, 7:17287–17298, 2019. 11

  3. [3]

    Zhiyong Cui, Ruimin Ke, Ziyuan Pu, and Yinhai Wang. Stacked bidirectional and unidirectional LSTM recurrent neural network for forecasting network-wide traffic state with missing values.Transportation Research Part C: Emerging Technologies, 118:102674, 2020

  4. [4]

    Stacked Bidirectional LSTM for Predicting Emission of Nitrous Oxide

    Ci Lin, Tet Yeap, and Iluju Kiringa. Stacked Bidirectional LSTM for Predicting Emission of Nitrous Oxide. Canadian Artificial Intelligence Association (CAIAC), 2022. https://caiac.pubpub.org/pub/5inot0i4

  5. [5]

    Temporal convolutional networks for action segmentation and detection

    Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks for action segmentation and detection. Inproceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 156–165, 2017

  6. [6]

    Are transformers effective for time series forecasting? InProceedings of the AAAI conference on artificial intelligence, volume 37, pages 11121–11128, 2023

    Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? InProceedings of the AAAI conference on artificial intelligence, volume 37, pages 11121–11128, 2023

  7. [7]

    Time-series forecasting with deep learning: a survey.Philosophical transactions of the royal society a: mathematical, physical and engineering sciences, 379(2194), 2021

    Bryan Lim and Stefan Zohren. Time-series forecasting with deep learning: a survey.Philosophical transactions of the royal society a: mathematical, physical and engineering sciences, 379(2194), 2021

  8. [8]

    A review: preprocessing techniques and data augmentation for sentiment analysis.Computational Social Networks, 8(1):1–16, 2021

    Huu-Thanh Duong and Tram-Anh Nguyen-Thi. A review: preprocessing techniques and data augmentation for sentiment analysis.Computational Social Networks, 8(1):1–16, 2021

  9. [9]

    Agriculture-informed neural networks for predicting nitrous oxide emissions.ACM Transactions on Internet of Things, 5(4):1–23, 2024

    Ci Lin, Futong Li, Patrick Killeen, Tet Yeap, and Iluju Kiringa. Agriculture-informed neural networks for predicting nitrous oxide emissions.ACM Transactions on Internet of Things, 5(4):1–23, 2024

  10. [10]

    Data augmentation for time series regression: Applying transformations, autoencoders and adversarial networks to electricity price forecasting

    Sumeyra Demir, Krystof Mincev, Koen Kok, and Nikolaos G Paterakis. Data augmentation for time series regression: Applying transformations, autoencoders and adversarial networks to electricity price forecasting. Applied Energy, 304:117695, 2021

  11. [11]

    Generative adversarial networks in time series: A systematic literature review.ACM Computing Surveys, 55(10):1–31, 2023

    Eoin Brophy, Zhengwei Wang, Qi She, and Tomás Ward. Generative adversarial networks in time series: A systematic literature review.ACM Computing Surveys, 55(10):1–31, 2023

  12. [12]

    Generative adversarial nets.Advances in neural information processing systems, 27, 2014

    Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.Advances in neural information processing systems, 27, 2014

  13. [13]

    Generative adversarial networks for handwriting image generation: a review.The Visual Computer, 41(4):2299–2322, 2025

    Randa Elanwar and Margrit Betke. Generative adversarial networks for handwriting image generation: a review.The Visual Computer, 41(4):2299–2322, 2025

  14. [14]

    Medical multivariate time series imputation and forecasting based on a recurrent conditional Wasserstein GAN and attention.Journal of biomedical informatics, 139:104320, 2023

    Sven Festag and Cord Spreckelsen. Medical multivariate time series imputation and forecasting based on a recurrent conditional Wasserstein GAN and attention.Journal of biomedical informatics, 139:104320, 2023

  15. [15]

    Quant GANs: deep generation of financial time series.Quantitative Finance, 20(9):1419–1440, 2020

    Magnus Wiese, Robert Knobloch, Ralf Korn, and Peter Kretschmer. Quant GANs: deep generation of financial time series.Quantitative Finance, 20(9):1419–1440, 2020

  16. [16]

    Time- series generative adversarial networks.Advances in neural information processing systems, 32, 2019

    Jinsung Yoon, Daniel Jarrett, and Mihaela Van der Schaar. Time- series generative adversarial networks.Advances in neural information processing systems, 32, 2019

  17. [17]

    Conditional sig-wasserstein gans for time series generation.arXiv preprint arXiv:2006.05421, 2020

    Shujian Liao, Hao Ni, Lukasz Szpruch, Magnus Wiese, Marc Sabate- Vidales, and Baoren Xiao. Conditional sig-wasserstein gans for time series generation.arXiv preprint arXiv:2006.05421, 2020

  18. [18]

    Aec-gan: adversarial error correction gans for auto-regressive long time-series generation

    Lei Wang, Liang Zeng, and Jian Li. Aec-gan: adversarial error correction gans for auto-regressive long time-series generation. InProceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 10140–10148, 2023

  19. [19]

    Data augmen- tation using random image cropping and patching for deep CNNs.IEEE Transactions on Circuits and Systems for Video Technology, 30(9):2917– 2931, 2019

    Ryo Takahashi, Takashi Matsubara, and Kuniaki Uehara. Data augmen- tation using random image cropping and patching for deep CNNs.IEEE Transactions on Circuits and Systems for Video Technology, 30(9):2917– 2931, 2019

  20. [20]

    Window-warping: a time series data augmentation of IMU data for construction equipment activity identification

    Khandakar M Rashid and Joseph Louis. Window-warping: a time series data augmentation of IMU data for construction equipment activity identification. InISARC. Proceedings of the international symposium on automation and robotics in construction, volume 36, pages 651–657. IAARC Publications, 2019

  21. [21]

    An analysis of rotation matrix and colour constancy data augmentation in classifying images of animals.Journal of Information and Telecommuni- cation, 2(4):465–491, 2018

    Emmanuel Okafor, Lambert Schomaker, and Marco A Wiering. An analysis of rotation matrix and colour constancy data augmentation in classifying images of animals.Journal of Information and Telecommuni- cation, 2(4):465–491, 2018

  22. [22]

    Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks

    Terry T Um, Franz MJ Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kuli ´c. Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks. InProceedings of the 19th ACM international conference on multimodal interaction, pages 216–220, 2017

  23. [23]

    Training with noise is equivalent to Tikhonov regularization.Neural computation, 7(1):108–116, 1995

    Chris M Bishop. Training with noise is equivalent to Tikhonov regularization.Neural computation, 7(1):108–116, 1995

  24. [24]

    The effects of adding noise during backpropagation training on a generalization performance.Neural computation, 8(3):643– 674, 1996

    Guozhong An. The effects of adding noise during backpropagation training on a generalization performance.Neural computation, 8(3):643– 674, 1996

  25. [25]

    Simulating time-series data for improved deep neural network performance.IEEE Access, 7:131248–131255, 2019

    Jordan Yeomans, Simon Thwaites, William SP Robertson, David Booth, Brian Ng, and Dominic Thewlis. Simulating time-series data for improved deep neural network performance.IEEE Access, 7:131248–131255, 2019

  26. [26]

    SMOTE: synthetic minority over-sampling technique

    Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321–357, 2002

  27. [27]

    A global averaging method for dynamic time warping, with applications to clustering.Pattern recognition, 44(3):678–693, 2011

    François Petitjean, Alain Ketterlin, and Pierre Gançarski. A global averaging method for dynamic time warping, with applications to clustering.Pattern recognition, 44(3):678–693, 2011

  28. [28]

    Time series data augmentation for neural networks by time warping with a discriminative teacher

    Brian Kenji Iwana and Seiichi Uchida. Time series data augmentation for neural networks by time warping with a discriminative teacher. In2020 25th International Conference on Pattern Recognition (ICPR), pages 3558–3565. IEEE, 2021

  29. [29]

    Aenet: Learning deep audio features for video analysis.IEEE Transactions on Multimedia, 20(3):513–524, 2017

    Naoya Takahashi, Michael Gygli, and Luc Van Gool. Aenet: Learning deep audio features for video analysis.IEEE Transactions on Multimedia, 20(3):513–524, 2017

  30. [30]

    Data augmentation for deep neural network acoustic modeling.IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(9):1469–1477, 2015

    Xiaodong Cui, Vaibhava Goel, and Brian Kingsbury. Data augmentation for deep neural network acoustic modeling.IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(9):1469–1477, 2015

  31. [31]

    Bagging exponential smoothing methods using stl decomposition and box–cox transformation.International journal of forecasting, 32(2):303–312, 2016

    Christoph Bergmeir, Rob J Hyndman, and José M Benítez. Bagging exponential smoothing methods using stl decomposition and box–cox transformation.International journal of forecasting, 32(2):303–312, 2016

  32. [32]

    Independent component analysis, a new concept?Signal processing, 36(3):287–314, 1994

    Pierre Comon. Independent component analysis, a new concept?Signal processing, 36(3):287–314, 1994

  33. [33]

    The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis.Proceedings of the Royal Society of London

    Norden E Huang, Zheng Shen, Steven R Long, Manli C Wu, Hsing H Shih, Quanan Zheng, Nai-Chyuan Yen, Chi Chao Tung, and Henry H Liu. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis.Proceedings of the Royal Society of London. Series A: mathematical, physical and engineering sciences, 454(1971):9...

  34. [34]

    Hong Cao, Vincent YF Tan, and John ZF Pang. A parsimonious mixture of Gaussian trees model for oversampling in imbalanced and multimodal time-series classification.IEEE transactions on neural networks and learning systems, 25(12):2226–2239, 2014

  35. [35]

    Data augmentation and dynamic linear models.Journal of time series analysis, 15(2):183–202, 1994

    Sylvia Frühwirth-Schnatter. Data augmentation and dynamic linear models.Journal of time series analysis, 15(2):183–202, 1994

  36. [36]

    Seeking efficient data augmentation schemes via conditional and marginal augmentation.Biometrika, 86(2):301–320, 1999

    Xiao-Li Meng and David A Van Dyk. Seeking efficient data augmentation schemes via conditional and marginal augmentation.Biometrika, 86(2):301–320, 1999

  37. [37]

    Sequence-to-sequence data augmentation for dialogue language understanding.arXiv preprint arXiv:1807.01554, 2018

    Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. Sequence-to-sequence data augmentation for dialogue language understanding.arXiv preprint arXiv:1807.01554, 2018

  38. [38]

    Data preprocessing and augmentation for multiple short time series forecasting with recurrent neural networks

    Slawek Smyl and Karthik Kuber. Data preprocessing and augmentation for multiple short time series forecasting with recurrent neural networks. In36th international symposium on forecasting, 2016

  39. [39]

    GRATIS: GeneRAting TIme Series with diverse and controllable characteristics.Statistical Analysis and Data Mining: The ASA Data Science Journal, 13(4):354–376, 2020

    Yanfei Kang, Rob J Hyndman, and Feng Li. GRATIS: GeneRAting TIme Series with diverse and controllable characteristics.Statistical Analysis and Data Mining: The ASA Data Science Journal, 13(4):354–376, 2020

  40. [40]

    Data augmentation using generative adversarial network for environmental sound classification

    Aswathy Madhu and Suresh Kumaraswamy. Data augmentation using generative adversarial network for environmental sound classification. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1–5. IEEE, 2019

  41. [41]

    Speech augmentation using wavenet in speech recognition

    Jisung Wang, Sangki Kim, and Yeha Lee. Speech augmentation using wavenet in speech recognition. InICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6770–6774. IEEE, 2019

  42. [42]

    Spatial-temporal data augmentation based on LSTM autoencoder network for skeleton-based human action recognition

    Juanhui Tu, Hong Liu, Fanyang Meng, Mengyuan Liu, and Runwei Ding. Spatial-temporal data augmentation based on LSTM autoencoder network for skeleton-based human action recognition. In2018 25th IEEE International Conference on Image Processing (ICIP), pages 3478–3482. IEEE, 2018

  43. [43]

    Torbjorn Eltoft. Data augmentation using a combination of independent component analysis and non-linear time-series prediction. In Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No. 02CH37290), volume 1, pages 448–453. IEEE, 2002

  44. [44]

    Yize Chen, Yishen Wang, Daniel Kirschen, and Baosen Zhang. Model-free renewable scenario generation using generative adversarial networks. IEEE Transactions on Power Systems, 33(3):3265–3275, 2018

  45. [45]

    Moustafa Alzantot, Supriyo Chakraborty, and Mani Srivastava. SenseGen: A deep learning architecture for synthetic sensor data generation. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pages 188–193. IEEE, 2017

  46. [46]

    Luca Simonetto. Generating spiking time series with generative adversarial networks: an application on banking transactions. MS thesis, Univ. of Amsterdam, 2018

  47. [47]

    Shota Haradal, Hideaki Hayashi, and Seiichi Uchida. Biosignal data augmentation based on generative adversarial networks. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 368–371. IEEE, 2018

  48. [48]

    Chi Zhang, Sanmukh R Kuppannagari, Rajgopal Kannan, and Viktor K Prasanna. Generative adversarial network for synthetic time series data generation in smart grids. In 2018 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), pages 1–6. IEEE, 2018

  49. [49]

    Yizhe Zhang, Zhe Gan, and Lawrence Carin. Generating text via adversarial training. In NIPS Workshop on Adversarial Training, volume 21, pages 21–32. academia.edu, 2016

  50. [50]

    Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214–223. PMLR, 2017

  51. [51]

    Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30, 2017

  52. [52]

    Olof Mogren. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904, 2016

  53. [53]

    Mohammad Navid Fekri, Ananda Mohon Ghosh, and Katarina Grolinger. Generating energy data for machine learning with recurrent generative adversarial networks. Energies, 13(1):130, 2019

  54. [54]

    Cristóbal Esteban, Stephanie L Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional GANs. arXiv preprint arXiv:1706.02633, 2017

  55. [55]

    Chris Donahue, Julian McAuley, and Miller Puckette. Adversarial audio synthesis. arXiv preprint arXiv:1802.04208, 2018

  56. [56]

    Jinsung Jeon, Jeonghak Kim, Haryong Song, Seunghyeon Cho, and Noseong Park. GT-GAN: General purpose time series synthesis with generative adversarial networks. Advances in Neural Information Processing Systems, 35:36999–37010, 2022

  57. [57]

    Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014

  58. [58]

    Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In International Conference on Machine Learning, pages 224–232. PMLR, 2017

  59. [59]

    Shibani Santurkar, Ludwig Schmidt, and Aleksander Madry. A classification-based study of covariate shift in GAN distributions. In International Conference on Machine Learning, pages 4480–4489. PMLR, 2018

  60. [60]

    Jonas Adler and Sebastian Lunz. Banach Wasserstein GAN. Advances in Neural Information Processing Systems, 31, 2018

  61. [61]

    Stephanie L Hyland, Cristóbal Esteban, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional GANs. stat, 1050(8), 2017