pith. machine review for the scientific record.

arxiv: 2605.00069 · v1 · submitted 2026-04-30 · 💻 cs.LG

Recognition: unknown

Soft-MSM: Differentiable Context-Aware Elastic Alignment for Time Series

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 20:36 UTC · model grok-4.3

classification 💻 cs.LG
keywords time series · elastic alignment · differentiable loss · Move-Split-Merge · dynamic time warping · barycentre · clustering · classification

The pith

Soft-MSM replaces the hard split and merge penalties in MSM with a smooth gated surrogate to produce a fully differentiable elastic alignment loss.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops Soft-MSM, a differentiable relaxation of the Move-Split-Merge distance that keeps the original context-dependent transition costs. MSM cannot be used directly inside gradient-based training because its split and merge operations are piecewise and non-differentiable. The authors replace those operations with a smooth gated surrogate, derive the forward and backward recursions, the soft alignment matrix, and a closed-form gradient, and add a divergence correction. On 112 UCR datasets the resulting loss produces lower MSM barycentre error than prior MSM barycentre methods and yields better clustering and nearest-centroid classification accuracy than Soft-DTW alternatives.
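For orientation, here is the hard recursion the relaxation starts from: the standard Move-Split-Merge dynamic programme of Stefan, Athitsos and Das, whose split/merge cost contains the piecewise, non-differentiable branch the paper targets. A minimal NumPy sketch (variable names are ours):

```python
import numpy as np

def msm_cost(u, v, w, c):
    """Hard MSM split/merge cost: a flat penalty c when the new point u
    lies between its neighbours v and w, otherwise c plus the distance
    to the nearer neighbour. The if-branch is the non-differentiable part."""
    if min(v, w) <= u <= max(v, w):
        return c
    return c + min(abs(u - v), abs(u - w))

def msm_distance(x, y, c=1.0):
    """Classic MSM dynamic programme over two univariate series."""
    n, m = len(x), len(y)
    D = np.zeros((n, m))
    D[0, 0] = abs(x[0] - y[0])
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + msm_cost(x[i], x[i - 1], y[0], c)
    for j in range(1, m):
        D[0, j] = D[0, j - 1] + msm_cost(y[j], x[0], y[j - 1], c)
    for i in range(1, n):
        for j in range(1, m):
            D[i, j] = min(
                D[i - 1, j - 1] + abs(x[i] - y[j]),               # move
                D[i - 1, j] + msm_cost(x[i], x[i - 1], y[j], c),  # split/merge in x
                D[i, j - 1] + msm_cost(y[j], x[i], y[j - 1], c),  # split/merge in y
            )
    return D[n - 1, m - 1]
```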

Core claim

Soft-MSM is obtained by inserting a smooth gated surrogate for MSM's piecewise split/merge cost into the standard dynamic-programming recurrence. The construction yields the forward recursion, backward recursion, soft alignment matrix, closed-form gradient, limiting behaviour, and a divergence-corrected formulation, which together make the full MSM distance usable as a gradient-based loss while preserving its context-aware transition structure.

What carries the argument

The smooth gated surrogate for MSM's piecewise split/merge cost, which routes gradients through both the dynamic-programming recursion and the local context-dependent transition costs.
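The abstract does not reproduce the surrogate's exact form, so the following is an illustrative sketch only: the hard between-test is replaced by a product of sigmoids and the hard min by a log-sum-exp soft-min, the standard Soft-DTW-style device. A single temperature gamma controls both here, for brevity.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def softmin(values, gamma):
    """Smooth minimum -gamma*log(sum(exp(-v/gamma))); recovers the hard
    min as gamma -> 0."""
    v = -np.asarray(values, dtype=float) / gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_msm_cost(u, v, w, c, gamma):
    """Illustrative gated surrogate for the piecewise split/merge cost
    (not the authors' exact form). The sigmoid product g smoothly
    approximates the indicator 'u lies between v and w': g ~ 1 gives the
    flat penalty c, g ~ 0 recovers c + min(|u-v|, |u-w|) via the soft-min."""
    lo, hi = min(v, w), max(v, w)
    g = expit((u - lo) / gamma) * expit((hi - u) / gamma)
    return c + (1.0 - g) * softmin([abs(u - v), abs(u - w)], gamma)

def soft_msm(x, y, c=1.0, gamma=0.1):
    """Forward recursion with every hard min replaced by softmin."""
    n, m = len(x), len(y)
    D = np.zeros((n, m))
    D[0, 0] = abs(x[0] - y[0])
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + soft_msm_cost(x[i], x[i - 1], y[0], c, gamma)
    for j in range(1, m):
        D[0, j] = D[0, j - 1] + soft_msm_cost(y[j], x[0], y[j - 1], c, gamma)
    for i in range(1, n):
        for j in range(1, m):
            D[i, j] = softmin([
                D[i - 1, j - 1] + abs(x[i] - y[j]),                           # move
                D[i - 1, j] + soft_msm_cost(x[i], x[i - 1], y[j], c, gamma),  # split/merge in x
                D[i, j - 1] + soft_msm_cost(y[j], x[i], y[j - 1], c, gamma),  # split/merge in y
            ], gamma)
    return D[n - 1, m - 1]
```

Every operation is smooth, so gradients flow through the recursion and through the gate; routing gradients through the gate is what distinguishes this from a plain Soft-DTW relaxation.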

If this is right

  • Soft-MSM barycentres achieve lower MSM loss than existing non-differentiable MSM barycentre constructions.
  • Clustering with Soft-MSM centroids outperforms clustering that uses Soft-DTW centroids.
  • Nearest-centroid classification with Soft-MSM centroids outperforms the same classifier built on Soft-DTW centroids.
  • The closed-form gradient and divergence correction allow direct use of MSM-style alignment inside any gradient-based time-series pipeline (a barycentre sketch follows this list).
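A hedged sketch of that last point: a barycentre fitted by plain gradient descent on the illustrative soft loss above, rewritten in PyTorch so that autograd stands in for the paper's closed-form gradient. The gated cost is the same assumed form as before, not the authors' exact surrogate, and the data are toy sine waves.

```python
import torch

def softmin(vals, gamma):
    # -gamma * logsumexp(-v / gamma): smooth, differentiable minimum
    return -gamma * torch.logsumexp(torch.stack(vals) / -gamma, dim=0)

def soft_cost(u, v, w, c, gamma):
    # Same illustrative sigmoid-gated split/merge cost as the NumPy sketch
    lo, hi = torch.minimum(v, w), torch.maximum(v, w)
    g = torch.sigmoid((u - lo) / gamma) * torch.sigmoid((hi - u) / gamma)
    return c + (1.0 - g) * softmin([torch.abs(u - v), torch.abs(u - w)], gamma)

def soft_msm(x, y, c=1.0, gamma=0.1):
    n, m = len(x), len(y)
    D = {(0, 0): torch.abs(x[0] - y[0])}
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + soft_cost(x[i], x[i - 1], y[0], c, gamma)
    for j in range(1, m):
        D[0, j] = D[0, j - 1] + soft_cost(y[j], x[0], y[j - 1], c, gamma)
    for i in range(1, n):
        for j in range(1, m):
            D[i, j] = softmin([
                D[i - 1, j - 1] + torch.abs(x[i] - y[j]),
                D[i - 1, j] + soft_cost(x[i], x[i - 1], y[j], c, gamma),
                D[i, j - 1] + soft_cost(y[j], x[i], y[j - 1], c, gamma),
            ], gamma)
    return D[n - 1, m - 1]

# Barycentre of three toy series by direct gradient descent on the soft loss.
batch = [torch.sin(torch.linspace(0, 6, 20) + s) for s in (0.0, 0.3, 0.6)]
z = batch[0].clone().requires_grad_(True)      # initialise at one series
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    loss = sum(soft_msm(z, s) for s in batch)
    loss.backward()                            # autograd stands in for the
    opt.step()                                 # paper's closed-form gradient
```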

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Soft-MSM could be dropped into existing deep sequence models wherever Soft-DTW is currently used, provided the model can tolerate the extra cost of the gated transition logic.
  • The same gating trick may generalise to other elastic distances whose transition costs also depend on local alignment context.
  • Because the method supplies both a soft alignment matrix and a scalar loss, it could support new regularisation schemes that penalise particular alignment patterns during training (a minimal sketch follows this list).
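As a purely hypothetical instance of the last bullet, a regulariser that reads a soft alignment matrix A (such as a Soft-MSM-style backward pass would produce) and penalises alignment mass far from the diagonal:

```python
import numpy as np

def off_diagonal_penalty(A):
    """Penalise soft alignment mass far from the diagonal of an n x m
    alignment matrix A, discouraging extreme warping. A is assumed to
    come from a Soft-MSM-style backward pass; this exact regulariser is
    our illustration, not a scheme proposed in the paper."""
    n, m = A.shape
    i = np.arange(n)[:, None] / max(n - 1, 1)
    j = np.arange(m)[None, :] / max(m - 1, 1)
    W = np.abs(i - j)              # 0 on the diagonal, near 1 in the corners
    return float((A * W).sum())

# training objective sketch: loss = soft_msm(x, y) + lam * off_diagonal_penalty(A)
```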

Load-bearing premise

The smooth gated surrogate produces alignments and gradients that stay faithful to the original hard MSM distance across the sequences and parameter settings used in practice.

What would settle it

A side-by-side comparison on sequences containing clear split or merge decisions, showing either that the Soft-MSM alignment matrix differs substantially from the hard MSM alignment matrix, or that gradient descent with Soft-MSM fails to reduce the true (non-smooth) MSM distance. Either outcome would undermine the faithfulness premise; their absence across such cases would support it.
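At the level of the surrogate cost alone, such a check is cheap to run. The sketch below measures the worst-case gap between the illustrative gated cost from the earlier sketches and the hard piecewise cost as the temperature shrinks; the paper's harder question, whether full alignment matrices agree on real sequences, needs the complete dynamic programme.

```python
import numpy as np
from scipy.special import expit            # numerically stable sigmoid

def hard_cost(u, v, w, c=1.0):
    # Original piecewise MSM split/merge cost
    return c if min(v, w) <= u <= max(v, w) else c + min(abs(u - v), abs(u - w))

def soft_cost(u, v, w, c=1.0, gamma=0.1):
    # Same illustrative gated surrogate, written in a stable form
    lo, hi = min(v, w), max(v, w)
    g = expit((u - lo) / gamma) * expit((hi - u) / gamma)
    a, b = abs(u - v), abs(u - w)
    soft_min_ab = min(a, b) - gamma * np.log(1.0 + np.exp(-abs(a - b) / gamma))
    return c + (1.0 - g) * soft_min_ab

rng = np.random.default_rng(0)
triples = rng.normal(size=(5000, 3))
for gamma in (1.0, 0.1, 0.01, 0.001):
    gap = max(abs(soft_cost(u, v, w, gamma=gamma) - hard_cost(u, v, w))
              for u, v, w in triples)
    print(f"gamma={gamma:<6} worst |soft - hard| cost gap = {gap:.4f}")
```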

Figures

Figures reproduced from arXiv: 2605.00069 by Anthony Bagnall, Christopher Holder.

Figure 1: Visualisation of optimal alignment paths between two time series for the Euclidean and …
Figure 2: Two different representations of the optimal alignment path through a cost matrix.
Figure 3: Example prototypes formed for class 12 (six runs) of the C…
Figure 4: Average ranks of four clustering algorithms.
Figure 5: Summary performance of four clustering algorithms on the UCR datasets.
Figure 6: Critical difference diagrams of test data performance for four classification algorithms on …
Figure 7: Scatter plots of accuracy for two versions of k-centroids. The first variant (left) uses the …
Original abstract

Elastic distances like dynamic time warping (DTW) are central to time series machine learning because they compare sequences under local temporal misalignment. Soft-DTW is an adaptation of DTW that can be used as a gradient-based loss by replacing the hard minimum in its dynamic-programming recursion with a smooth relaxation. However, this approach does not directly extend to elastic distances whose transition costs depend on the local alignment context. Move-Split-Merge (MSM) is one such distance: it uses context-aware split and merge penalties and has often outperformed DTW in supervised and unsupervised time series machine learning tasks such as classification and clustering. We introduce Soft-MSM, a smooth relaxation of MSM and an elastic alignment loss with context-aware transition costs. Central to the formulation is a smooth gated surrogate for MSM's piecewise split/merge cost, which enables gradients through both the dynamic-programming recursion and the local transition structure. We derive the forward recursion, backward recursion, soft alignment matrix, closed-form gradient, limiting behaviour, and divergence-corrected formulation. Experiments on 112 UCR datasets show that Soft-MSM gives lower MSM barycentre loss than existing MSM barycentre methods, and yields significantly better clustering and nearest-centroid classification performance than Soft-DTW-based alternatives. An implementation is available in the open-source aeon toolkit.

Editorial analysis

A structured set of objections, weighed in public.

A referee report, a simulated authors' rebuttal, a circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript introduces Soft-MSM, a smooth differentiable relaxation of the Move-Split-Merge (MSM) elastic distance. It replaces MSM's piecewise context-dependent split/merge costs with a smooth gated surrogate inside the dynamic-programming recursion, derives the forward/backward passes, soft alignment matrix, closed-form gradient, limiting behaviour, and a divergence-corrected formulation. Experiments on 112 UCR datasets report lower MSM barycentre loss than existing MSM barycentre methods and significantly better clustering and nearest-centroid classification performance than Soft-DTW alternatives, with an open-source implementation in the aeon toolkit.

Significance. If the gated surrogate remains faithful to MSM, the work would usefully extend the family of gradient-based elastic losses beyond Soft-DTW to context-aware distances that have previously shown advantages in time-series tasks. The explicit derivation of gradients through both the recursion and the local transition structure, together with the open implementation, are concrete strengths that would support adoption in barycentre, clustering, and classification pipelines.

major comments (2)
  1. [§3] §3 (smooth gated surrogate definition): No analytic error bounds are derived for the approximation error between the soft gated surrogate and MSM's original piecewise split/merge cost, nor are there systematic empirical checks (e.g., alignment or distance error as a function of sequence length, amplitude scale, or c-parameter) across the regimes present in the 112 UCR experiments. This is load-bearing for the central claim that reported gains reflect faithful elastic alignment rather than surrogate artifacts.
  2. [§5] §5 (experiments): The claims of 'significantly better' clustering and nearest-centroid classification lack reported statistical tests, p-values, effect sizes, or multiple-comparison corrections. Ablations on the smoothing temperature and the divergence-corrected formulation are also absent, preventing assessment of whether the performance edge is robust or sensitive to these choices.
minor comments (1)
  1. [§3] The notation for the soft alignment matrix and the forward/backward recursions would benefit from an explicit variable glossary or additional inline definitions to improve readability for readers unfamiliar with the MSM formulation.

Simulated Authors' Rebuttal

2 responses · 1 unresolved

We thank the referee for their constructive feedback and positive evaluation of the manuscript. We address the major comments point by point below.

Point-by-point responses
  1. Referee: [§3] §3 (smooth gated surrogate definition): No analytic error bounds are derived for the approximation error between the soft gated surrogate and MSM's original piecewise split/merge cost, nor are there systematic empirical checks (e.g., alignment or distance error as a function of sequence length, amplitude scale, or c-parameter) across the regimes present in the 112 UCR experiments. This is load-bearing for the central claim that reported gains reflect faithful elastic alignment rather than surrogate artifacts.

    Authors: We agree that analytic error bounds would be desirable but are difficult to derive due to the recursive structure of the dynamic programme and its dependence on variable sequence properties. In the revised manuscript, we will incorporate systematic empirical checks, including quantitative assessments of the approximation error for alignment paths and distances as functions of sequence length, amplitude, and the c-parameter, using representative datasets from the UCR archive. This will support the claim that the observed performance gains are not due to surrogate artifacts. revision: partial

  2. Referee: [§5] §5 (experiments): The claims of 'significantly better' clustering and nearest-centroid classification lack reported statistical tests, p-values, effect sizes, or multiple-comparison corrections. Ablations on the smoothing temperature and the divergence-corrected formulation are also absent, preventing assessment of whether the performance edge is robust or sensitive to these choices.

    Authors: We will revise the experimental section to include statistical significance tests, such as Wilcoxon signed-rank tests with p-values, effect sizes, and appropriate multiple-comparison corrections (e.g., Bonferroni). Additionally, we will add ablations on the smoothing temperature parameter and the divergence-corrected formulation to demonstrate the robustness of the results. revision: yes
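For illustration, the committed analysis could look like the sketch below. The accuracy arrays are placeholders (random numbers, not results from the paper) and the number of pairwise comparisons is assumed:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-dataset accuracies over the 112 UCR datasets
# (placeholder values; real numbers would come from the experiments)
rng = np.random.default_rng(1)
soft_msm_acc = rng.uniform(0.6, 0.95, size=112)
soft_dtw_acc = soft_msm_acc - rng.normal(0.02, 0.03, size=112)

stat, p = wilcoxon(soft_msm_acc, soft_dtw_acc)   # paired signed-rank test
n_comparisons = 3                                # e.g. vs. three baselines (assumed)
p_bonferroni = min(1.0, p * n_comparisons)

# A simple paired effect size: median per-dataset accuracy difference
effect = np.median(soft_msm_acc - soft_dtw_acc)
print(f"Wilcoxon p = {p:.2e}, Bonferroni-corrected = {p_bonferroni:.2e}, "
      f"median accuracy difference = {effect:.3f}")
```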

standing simulated objections (unresolved)
  • Deriving analytic error bounds for the approximation error of the soft gated surrogate to the original MSM piecewise costs.

Circularity Check

0 steps flagged

Soft-MSM derivation is a direct, independent relaxation of MSM with no load-bearing reductions

Full rationale

The paper introduces Soft-MSM by replacing MSM's piecewise context-dependent split/merge costs with an explicitly derived smooth gated surrogate inside the DP recursion. It then derives the forward recursion, backward recursion, soft alignment matrix, closed-form gradient, limiting behaviour, and divergence correction directly from this construction. No equation reduces to a fitted parameter renamed as a prediction, no uniqueness theorem is imported from the authors' prior work, and no ansatz is smuggled via self-citation. The performance claims on 112 UCR datasets are empirical evaluations of the resulting loss and alignments rather than tautological consequences of the formulation itself. The derivation chain is therefore self-contained, and the empirical claims are tested against external benchmarks rather than against the formulation itself.

Axiom & Free-Parameter Ledger

1 free parameter · 0 axioms · 0 invented entities

The central claim rests on the correctness of the smooth approximation to MSM's piecewise costs and the validity of the dynamic-programming derivations; no free parameters or invented entities are mentioned in the abstract, but a smoothing temperature parameter is expected in any soft-min construction.

free parameters (1)
  • smoothing temperature
    Standard in soft-DTW-style relaxations; required to control the softness of the min and gated surrogate, though its specific value or selection method is not stated in the abstract.
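A minimal demonstration of why this parameter matters: the log-sum-exp soft-min sits below the hard minimum at large temperature and converges to it as gamma shrinks, so the choice of gamma trades gradient smoothness against fidelity to hard MSM.

```python
import numpy as np

def softmin(v, gamma):
    # Stable soft-min: min(v) - gamma * log(sum(exp(-(v - min)/gamma)))
    v = np.asarray(v, dtype=float)
    m = v.min()
    return m - gamma * np.log(np.exp(-(v - m) / gamma).sum())

vals = [1.0, 1.2, 3.0]                 # hard minimum is 1.0
for gamma in (1.0, 0.1, 0.01):
    print(f"gamma={gamma:<4} softmin={softmin(vals, gamma):.4f}")
```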

pith-pipeline@v0.9.0 · 5530 in / 1121 out tokens · 29217 ms · 2026-05-09T20:36:20.101253+00:00 · methodology

