pith. machine review for the scientific record.

arxiv: 2604.10490 · v1 · submitted 2026-04-12 · 💻 cs.HC

Recognition: unknown

Make it Simple, Make it Dance: Dance Motion Simplification to Support Novices' Dance Learning

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 16:28 UTC · model grok-4.3

classification 💻 cs.HC
keywords dance motion simplification · novice dance learning · motion complexity factors · rule-based simplification · learning-based simplification · dance education technology · choreography adaptation

The pith

Dance motions can be automatically simplified to help novices learn without losing naturalness or style.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates how to simplify complex dance movements for beginners who struggle with online tutorials. Through surveys of novices and focus groups with professional choreographers, the authors identify five complexity factors and build both rule-based and learning-based algorithms to reduce them. They validate the approach in three evaluations covering technical accuracy, choreographer judgments of naturalness and style, and novice measures of workload, self-efficacy, and performance. If the methods work, online dance learning could become more approachable by adapting motions to skill level while retaining core movement qualities.

Core claim

The authors establish that dance motion complexity can be quantified through five factors identified from expert choreographers. They further show that rule-based and learning-based simplification methods can produce versions that maintain motion naturalness, preserve stylistic elements, and enhance learning effectiveness for novices, as measured by workload, self-efficacy, and objective performance.

What carries the argument

Five complexity factors derived from choreographer strategies, automated via rule-based methods and learning-based models to simplify dance motions.
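The paper's actual rule-based pipeline is not reproduced here. As a hedged illustration of what one such rule can look like, the sketch below smooths a joint-angle trajectory with a moving average, damping the high-frequency motion that often makes choreography hard for novices. The `smooth_trajectory` helper and its window size are hypothetical, not the authors' algorithm.

```python
def smooth_trajectory(angles, window=5):
    """Moving-average smoothing of a 1-D joint-angle trajectory.

    A hypothetical stand-in for one rule-based simplification step:
    damping frame-to-frame jitter while keeping the overall shape.
    """
    half = window // 2
    smoothed = []
    for i in range(len(angles)):
        lo = max(0, i - half)
        hi = min(len(angles), i + half + 1)
        smoothed.append(sum(angles[lo:hi]) / (hi - lo))
    return smoothed

# A jittery trajectory: the smoothed version varies less frame to frame.
raw = [0, 10, 0, 10, 0, 10, 0, 10]
print(smooth_trajectory(raw, window=3))
```

A real system would apply rules like this per joint and per criterion; this sketch only shows the shape of the computation.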

If this is right

  • Professional choreographers rate the simplified motions as adequately simplified, natural, and style-preserving.
  • Novices report lower workload, higher self-efficacy, and better objective performance with simplified versions.
  • Technical evaluations confirm that the complexity measures accurately reflect reductions achieved by the algorithms.
  • The methods support dance education by making tutorials more approachable without altering essential characteristics.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar simplification techniques could be applied to other skill-based physical activities like sports training or yoga instruction.
  • Real-time adaptation of dance motions based on user performance data might become feasible with further development.
  • Personalized simplification levels could be created by integrating user skill assessments into the algorithms.

Load-bearing premise

The strategies identified from choreographers can be automated reliably while still preserving the naturalness, style, and educational value of the original motions for novices.

What would settle it

A study in which novices report equivalent or higher perceived difficulty, and show no objective performance gains, when practicing with the simplified motions rather than the originals would falsify the learning-benefits claim.
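Such a study would turn on a paired comparison of per-participant scores across conditions. As an illustrative sketch (pure Python, not the authors' analysis code), a paired t statistic over difference scores looks like this; the workload numbers are hypothetical.

```python
import math

def paired_t(before, after):
    """Paired t statistic for per-participant scores under two conditions.

    Illustrative of the kind of test the novice study reports
    (e.g. workload under original vs. simplified motions).
    """
    assert len(before) == len(after) and len(before) > 1
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1  # (t statistic, degrees of freedom)

# Hypothetical NASA-TLX workload scores; lower after simplification
# gives a positive t on the before-minus-after differences.
t, df = paired_t([78, 71, 80, 65], [52, 50, 60, 47])
print(round(t, 2), df)
```

A falsifying result would be a t statistic near zero (or negative) on these paired differences.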

Figures

Figures reproduced from arXiv: 2604.10490 by Hyunyoung Han, Murad Eynizada, Sang Ho Yoon, Son Xuan Nghiem.

Figure 1: We present a dance motion simplification approach developed through extensive user research. First, …
Figure 2: Overview of the entire process of our study.
Figure 3: Overview of the focus group session and data collection.
Figure 4: Original and simplified motion sequences for each simplification criterion.
Figure 5: The custom annotation tool used to construct the dance motion simplification dataset. The interface …
Figure 6: Overall and criterion-wise (C1–C5) comparisons between original and simplified conditions using notched boxplots with significance annotations (*p < .05, **p < .01, ***p < .001): (a) workload (NASA-TLX, ↓); (b) self-efficacy (↑); (c) objective performance (motion similarity; DTW cost, ↓); (d) perceived difficulty (↓). Significant reductions for C2 (78.06 ± 9.86 to 51.63 ± 18.60; t(15) = −5.55, p < .001, dz …).
Figure 7: Overview of the rule-based approach. Beginning with an original dance sequence, the model detects …
Figure 8: Overview of the learning-based approach. Given a pair of …
Figure 9: Overall and criterion-wise expert assessments. Notched boxplots (…).
Figure 10: Overall and criterion-wise (C1–C5) comparisons between original and simplified (GT, rule-based, and learning-based) conditions using notched boxplots with significance annotations (*p < .05, **p < .01, ***p < .001). Significance annotations on each boxplot indicate within-method paired tests (vs. each participant's original or, for objective performance, tests of Δ against zero): (a) workload (NASA-TLX); …
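Figure 6 reports objective performance as a dynamic time warping (DTW) cost between a learner's motion and the reference. The paper's feature space and distance function are not specified here; as a hedged sketch, a minimal DTW cost over 1-D pose features:

```python
def dtw_cost(a, b):
    """Minimal dynamic time warping cost between two 1-D sequences.

    A sketch of the kind of motion-similarity measure reported in
    Figure 6; the paper's actual features and distance may differ.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical sequences cost 0; a time-shifted copy stays cheap,
# which is why DTW suits comparing dancers at different tempi.
print(dtw_cost([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
print(dtw_cost([0, 1, 2, 3], [0, 0, 1, 2, 3]))
```

Real motion data would use multi-joint feature vectors per frame; the recurrence is the same with a vector distance in place of `abs`.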
read the original abstract

Online dance tutorials have gained widespread popularity. However, many novices encounter difficulties when dance motion complexity exceeds their skill level, potentially leading to discouragement. This study explores dance motion simplification to address this challenge. We surveyed 30 novices to identify challenging movements, then conducted focus groups with 30 professional choreographers across 10 genres to explore simplification strategies and collect paired original-simplified dance datasets. We identified five complexity factors and developed automated simplification methods using both rule-based and learning-based approaches. We validated our approach through three evaluations. Technical evaluation confirmed our complexity measures and algorithms. 20 professional choreographers assessed motion naturalness, simplification adequacy, and style preservation. 18 novices evaluated learning effectiveness through workload, self-efficacy, objective performance, and perceived difficulty. This work contributes to dance education technology by proposing methods that help make choreography more approachable for beginners while preserving essential characteristics.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper claims that dance motion complexity can be characterized by five factors identified via novice surveys (n=30) and choreographer focus groups (n=30 across 10 genres), that paired original-simplified datasets enable both rule-based and learning-based automated simplification algorithms, and that these algorithms were validated in three studies: a technical evaluation of the complexity measures and algorithms, ratings by 20 choreographers on naturalness, simplification adequacy and style preservation, and a novice study (n=18) measuring workload, self-efficacy, objective performance and perceived difficulty.

Significance. If the central claims hold, the work offers a practical, expert-informed pipeline for making online dance tutorials more accessible to beginners without sacrificing motion naturalness or stylistic integrity. The mixed-methods design—combining empirical factor elicitation, dual automation strategies, and layered validation (technical, expert, user)—provides a template for similar simplification problems in other embodied learning domains. The explicit collection of paired datasets and the inclusion of both rule-based and data-driven methods are particular strengths that support reproducibility and allow comparison of automation trade-offs.

minor comments (3)
  1. Abstract: The abstract states that three evaluations were performed but reports no quantitative outcomes (e.g., agreement scores, statistical tests, or effect sizes). Adding one or two key results would strengthen the summary and help readers gauge the magnitude of the reported benefits.
  2. The five complexity factors are introduced in the abstract and methods but their precise operational definitions, measurement scales, and inter-rater reliability statistics are not summarized in a single table or figure early in the paper; this makes it harder for readers to quickly grasp the core contribution.
  3. Section describing the learning-based model: the manuscript should clarify the exact input representation (e.g., joint angles, velocity features), training/validation split, and any hyper-parameter search procedure so that the learning-based results can be reproduced or compared with future work.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive and encouraging review, which accurately summarizes our contributions and highlights the practical value of the mixed-methods pipeline for dance motion simplification. We appreciate the recognition of our empirical factor elicitation, dual automation approaches, and layered validation strategy. Since the report recommends minor revision but lists no specific major comments requiring changes, we have no points to address point-by-point at this stage.

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper's central claims rest on newly collected empirical data: a survey of 30 novices to identify challenging movements, focus groups with 30 choreographers to gather simplification strategies and paired datasets, identification of five complexity factors, development of rule-based and learning-based automation methods, and three independent validations (technical evaluation of measures/algorithms, choreographer ratings of naturalness/style/adequacy, and novice study on workload/self-efficacy/performance). No load-bearing step reduces to a self-definition, fitted input renamed as prediction, self-citation chain, imported uniqueness theorem, smuggled ansatz, or renaming of a known result. The evidential chain is grounded in the collected data and in separate evaluations rather than in self-reference.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical applied research paper in human-computer interaction; the abstract does not specify any free parameters, mathematical axioms, or newly invented entities.

pith-pipeline@v0.9.0 · 5458 in / 1143 out tokens · 83706 ms · 2026-05-10T16:28:56.959833+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

102 extracted references · 10 canonical work pages · 1 internal anchor
