AnimationDiff: A Visual Comparison Tool for Generated 3D Character Animations
Pith reviewed 2026-05-09 18:26 UTC · model grok-4.3
The pith
AnimationDiff is a visualization tool that compares generated 3D character animations by placing them in scene context and using Temporal Lenses to provide overview, alignment, and comparison of motion over time.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
AnimationDiff enables contextual comparison in the intended scene and camera angle; embeds spatial information using established animation visualization techniques, with easy switching between overlaid and side-by-side views; supports filtering to manage information overload; and introduces Temporal Lenses, which visualize entire animations over time for overview, alignment, and comparison. A user study demonstrates the tool's efficacy for animation comparison and yields design insights for comparing motion.
What carries the argument
Temporal Lenses, which render the complete animation sequence as a time-based visual summary to support overview, alignment, and direct comparison, combined with scene-context embedding and overlay/side-by-side switching.
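Not from the paper — a minimal pure-Python sketch of the kind of per-frame difference and alignment computation a temporal-lens-style overview could rest on. The pose representation (flat lists of joint coordinates per frame) and the brute-force offset search are illustrative assumptions, not the authors' method.

```python
import math

def pose_distance(a, b):
    # Euclidean distance between two flattened poses (lists of joint coordinates).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def difference_curve(anim_a, anim_b):
    # Per-frame distance between two equal-length animations -- the raw
    # signal a time-based visual summary could plot for comparison.
    return [pose_distance(a, b) for a, b in zip(anim_a, anim_b)]

def best_offset(anim_a, anim_b, max_shift=10):
    # Brute-force time offset minimizing mean pose distance -- a stand-in
    # for the temporal alignment such an overview helps users perform.
    best, best_cost = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        pairs = [(anim_a[i], anim_b[i + shift])
                 for i in range(len(anim_a))
                 if 0 <= i + shift < len(anim_b)]
        if not pairs:
            continue
        cost = sum(pose_distance(a, b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best, best_cost = shift, cost
    return best
```

For two variants that are identical except for a three-frame delay, the offset search recovers the shift, and the difference curve of the realigned sequences flattens to zero — the effect the paper attributes to aligning animations before comparing them.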
If this is right
- Animators and designers can select preferred outputs from generative systems more quickly and with greater confidence.
- Temporal misalignment between animation variants becomes easier to detect and correct through the timeline overviews.
- Spatial information overload is reduced, allowing focus on key differences rather than raw motion data.
- The same visualization patterns may guide the design of comparison interfaces for other time-varying 3D content.
Where Pith is reading between the lines
- The approach could extend beyond character animation to comparing other temporal 3D data such as robotic trajectories or physics simulations.
- Automatic alignment suggestions based on the temporal lens data could further reduce manual effort in future versions.
- The filtering and context techniques might apply to non-animation domains like comparing multiple video recordings of the same event.
Load-bearing premise
The combination of scene context, view switching, filtering, and Temporal Lenses is enough to overcome temporal misalignment and spatial data overload when users compare animations.
What would settle it
A controlled test comparing AnimationDiff against a standard side-by-side viewer that lacks the new features: if participants show no improvement in accuracy or speed when selecting the best animation, the claimed efficacy fails.
Figures
Original abstract
Creating 3D character animations traditionally requires significant time and effort from the animator. Advancements in generative methods now enable easy creation of multiple character animation variations for use or further editing. However, this capability introduces a new challenge in comparing character animations to select the best animation, which is challenging due to temporal misalignment and the large amount of spatial data. We present AnimationDiff, a visual comparison tool for generated character animations. AnimationDiff enables contextual comparisons in the intended scene and camera angle, and embedding of spatial information by combining established animation visualization techniques and easy switching between overlaid and side-by-side comparisons. AnimationDiff also supports filtering to handle information overload, and Temporal Lenses that visualize entire animations over time for overview, alignment, and comparison. We evaluated AnimationDiff in a user study, showcasing its efficacy in animation comparison and providing design insights for comparing motion.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents AnimationDiff, a visual comparison tool for generated 3D character animations. It addresses challenges of temporal misalignment and spatial data overload by enabling contextual comparisons within the intended scene and camera angle, combining established visualization techniques with easy switching between overlaid and side-by-side views, supporting filtering for information overload, and introducing Temporal Lenses to visualize entire animations over time for overview, alignment, and comparison. The work evaluates the tool through a user study that claims to demonstrate its efficacy in animation comparison while providing design insights for motion comparison.
Significance. If the user study provides robust evidence of practical utility, the work offers moderate significance as a tool contribution in HCI and computer graphics. It integrates established methods (overlays, side-by-side, filtering) with the novel Temporal Lenses element to support animation workflows, which could inform reusable design patterns for visual comparison of motion data. The paper's strength lies in its focus on a concrete software artifact and applied evaluation rather than formal derivations.
Major comments (1)
- [Abstract and evaluation section] User study evaluation (as referenced in the abstract and presumably detailed in the evaluation section): The manuscript asserts that the user study showcases efficacy, but the provided description supplies no information on participant numbers, specific tasks performed, metrics collected, statistical analysis methods, or quantitative results. This absence makes it impossible to assess whether the claimed practical efficacy holds beyond the study setting or addresses the weakest assumption that the described features sufficiently solve temporal misalignment and spatial overload.
Minor comments (2)
- [Abstract] The abstract would be strengthened by briefly noting key quantitative outcomes or participant scale from the user study to better support the efficacy claim without requiring readers to reach the full evaluation section.
- Consider adding a dedicated figure or diagram illustrating the Temporal Lenses feature in operation, as the textual description alone may not fully convey how it enables overview, alignment, and comparison across entire animations.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our manuscript. We address the major comment point by point below.
Point-by-point responses
Referee: [Abstract and evaluation section] User study evaluation (as referenced in the abstract and presumably detailed in the evaluation section): The manuscript asserts that the user study showcases efficacy, but the provided description supplies no information on participant numbers, specific tasks performed, metrics collected, statistical analysis methods, or quantitative results. This absence makes it impossible to assess whether the claimed practical efficacy holds beyond the study setting or addresses the weakest assumption that the described features sufficiently solve temporal misalignment and spatial overload.
Authors: We agree that the current manuscript does not supply the requested details on the user study. The evaluation section references the study and its outcomes at a high level but omits participant counts, task specifications, metrics, statistical methods, and quantitative findings. This limits readers' ability to judge the strength of evidence for efficacy, and for how the tool's features (contextual viewing, overlay/side-by-side modes, filtering, and Temporal Lenses) mitigate temporal misalignment and spatial overload. In the revised manuscript we will expand the evaluation section to include these elements, allowing a clearer assessment of the study's scope and results.

Revision: yes
Circularity Check
No significant circularity
Full rationale
The paper presents a software tool for animation comparison by combining established visualization methods (overlays, side-by-side views, filtering, temporal lenses) and evaluates it through a user study. No equations, derivations, fitted parameters, or load-bearing self-citations appear in the provided text or abstract. The central claims rest on the described artifact and empirical results rather than any internal reduction to inputs by construction, making the contribution self-contained for an HCI tool paper.