pith. machine review for the scientific record. sign in

arxiv: 2605.10014 · v1 · submitted 2026-05-11 · 💻 cs.HC · cs.GR

Recognition: no theorem link

Elemental Alchemist: A Generative Interface for Semantic Control of Particle Systems Across Dynamic Levels of Abstraction

Authors on Pith no claims yet

Pith reviewed 2026-05-12 03:29 UTC · model grok-4.3

classification 💻 cs.HC cs.GR
keywords particle systemsvisual effectsgenerative interfacessemantic controluser intentVFX editingabstraction levelshuman-computer interaction
0
0 comments X

The pith

Elemental Alchemist turns high-level creative goals into contextual controls and abstracted parameters for particle-system visual effects.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Particle systems used in digital visual effects contain many interdependent parameters, making it hard for users to translate artistic intentions such as an angry fire into the right technical settings. The paper presents Elemental Alchemist, a generative interface that adds two components to address this gap. A contextual brush palette creates tools suited to the current scene, while a generative control panel surfaces relevant low-level parameters and abstracts them into mid-level semantic attributes and high-level conceptual controls. User studies with ten novices and five expert VFX practitioners indicate that these features help people move from intent to working particle settings. If the approach holds, it reduces the systematic parameter exploration that currently slows down creative work in storytelling and animation pipelines.

Core claim

The paper claims that a generative interface equipped with a contextual brush palette and a generative control panel can interpret user intent at multiple levels of abstraction, generate appropriate tools and semantic controls, and thereby support the translation of high-level creative goals into usable particle-system parameters, as shown by participants who successfully produced desired visual outcomes during evaluation.

What carries the argument

The generative interface built around a contextual brush palette that produces scene-specific tools and a generative control panel that surfaces technical parameters while abstracting them into semantic attributes and conceptual controls.

If this is right

  • Users can specify effects through high-level concepts instead of manually locating and adjusting dozens of low-level sliders.
  • Both novices and experts gain access to relevant controls without first building a complete mental model of the entire parameter space.
  • Creative iteration speeds up because mid-level semantic attributes and high-level conceptual controls are generated on demand from scene context.
  • The same particle system can be edited at different abstraction levels without losing direct access to the underlying technical parameters.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The abstraction-layer approach could be tested on other simulation domains such as fluid or cloth systems where parameter-to-visual mappings are equally complex.
  • Integrating user feedback loops into the generative panel might allow the system to refine its abstractions over repeated sessions with the same artist.
  • If the brush-palette generation proves reliable, similar context-aware tool creation could reduce interface clutter in broader creative software beyond VFX.

Load-bearing premise

The generative components can correctly interpret varied user intents and produce accurate, non-limiting mappings to parameters and abstractions across different creative tasks.

What would settle it

A controlled study in which participants repeatedly fail to achieve intended particle effects or report that the surfaced controls do not correspond to their stated goals would show the mapping mechanism does not work as claimed.

Figures

Figures reproduced from arXiv: 2605.10014 by Evan Atherton, George Fitzmaurice, Kyzyl Monteiro, Qian Zhou.

Figure 1
Figure 1. Figure 1: Elemental Alchemist proposes a generative interface for semantic control grounded in task context and user intent. It generates a contextual brush palette with scene-aware tools for sketch-based semantic edits. From a sketch or text prompt, it generates a control panel that decomposes intent into synchronized controls across three abstraction levels: conceptual, semantic, and technical. Together, these com… view at source ↗
Figure 2
Figure 2. Figure 2: The two core components of Elemental Alchemist. (a) The contextual brush palette, which generates scene-aware [PITH_FULL_IMAGE:figures/full_fig_p005_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Elemental Alchemist decomposes user intent into a three-level hierarchy of controls. A) Example decomposition of [PITH_FULL_IMAGE:figures/full_fig_p006_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Elemental Alchemist scaffolds the n-dimensional parameter space into an intent-aligned subspace, shaped by concepts, [PITH_FULL_IMAGE:figures/full_fig_p007_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: System pipeline of Elemental Alchemist: contextual brush palette generation is triggered when a scene is uploaded, [PITH_FULL_IMAGE:figures/full_fig_p009_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Scenes with particle effects used in the novice user [PITH_FULL_IMAGE:figures/full_fig_p010_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Responses to the post-task questionnaire across all three tasks. [PITH_FULL_IMAGE:figures/full_fig_p011_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: An overview of the responses for Creativity Support Index and System Usability Scale questionnaire [PITH_FULL_IMAGE:figures/full_fig_p011_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Responses to the overall user experience question [PITH_FULL_IMAGE:figures/full_fig_p011_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Distribution of participants’ interaction traces across generative control levels during editing sessions. [PITH_FULL_IMAGE:figures/full_fig_p012_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Action transition frequency matrix showing nav [PITH_FULL_IMAGE:figures/full_fig_p012_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Semantic similarity scores between user prompts [PITH_FULL_IMAGE:figures/full_fig_p012_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Various novices’ creations in exploratory task for the superhero scene. They selected an animated character from the [PITH_FULL_IMAGE:figures/full_fig_p013_13.png] view at source ↗
read the original abstract

Editing particle-system visual effects (VFX) is vital for digital storytelling, but achieving controllable, art-directable results remains challenging due to their multi-dimensional nature. Given a large collection of parameters, users must find the ones relevant to their creative goals -- a task that requires a systematic understanding of the particle system and how parameters map to high-level intents, such as making a fire look angry. Elemental Alchemist is a generative interface that transforms user intent into contextualized controls for semantic editing of particle systems. The system introduces two components: a contextual brush palette that generates tools based on scene context, and a generative control panel that surfaces relevant technical parameters and abstracts them to generate mid-level semantic attributes and high-level conceptual controls. An evaluation with 10 novice and 5 expert VFX practitioners shows the system supported users in translating high-level creative goals into particle system parameters.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The manuscript presents Elemental Alchemist, a generative interface for semantic editing of particle systems in VFX. It introduces a contextual brush palette that generates tools from scene context and a generative control panel that surfaces technical parameters while abstracting them into mid-level semantic attributes and high-level conceptual controls. The central claim is that an evaluation with 10 novice and 5 expert VFX practitioners demonstrates the system supports users in translating high-level creative goals into particle system parameters.

Significance. If the evaluation can be substantiated with full methodological details, objective metrics, and baselines, the work would offer a meaningful advance in HCI for creative tools by addressing the parameter-mapping challenge in complex particle systems. The dynamic abstraction approach has potential applicability beyond VFX to other generative design domains. The integration of generative components for intent interpretation is a timely contribution given current interest in AI-assisted interfaces.

major comments (1)
  1. [Evaluation] Evaluation section (and abstract): The reported user study with 15 participants claims positive support for translating high-level goals but provides no details on study design, tasks, metrics (e.g., intent-match accuracy or parameter error rates), statistical analysis, or comparison to baselines. This directly undermines verification of the central claim that the contextual brush palette and generative control panel accurately interpret diverse intents without introducing mapping errors or expressiveness limits.
minor comments (1)
  1. The abstract could more explicitly state the specific generative techniques or models underlying the control panel to allow readers to assess technical novelty without reading the full methods.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive review and for recognizing the potential significance of Elemental Alchemist in addressing parameter-mapping challenges in particle systems. We address the major comment below and will incorporate the necessary changes to strengthen the manuscript.

read point-by-point responses
  1. Referee: [Evaluation] Evaluation section (and abstract): The reported user study with 15 participants claims positive support for translating high-level goals but provides no details on study design, tasks, metrics (e.g., intent-match accuracy or parameter error rates), statistical analysis, or comparison to baselines. This directly undermines verification of the central claim that the contextual brush palette and generative control panel accurately interpret diverse intents without introducing mapping errors or expressiveness limits.

    Authors: We agree that the Evaluation section in the current manuscript lacks the methodological detail required to fully substantiate the central claims. In the revised version we will expand this section to include a complete description of the study design and protocol, the tasks assigned to participants, the specific metrics collected (including intent-match accuracy and parameter error rates), the statistical analyses performed, and direct comparisons to baseline interfaces. These additions will enable verification that the system supports translation of high-level goals into parameters while limiting mapping errors and preserving expressiveness. We will also ensure the abstract accurately reflects the expanded evaluation. revision: yes

Circularity Check

0 steps flagged

No significant circularity; system description and user study are self-contained

full rationale

The paper presents a generative interface (contextual brush palette and generative control panel) for particle-system editing and evaluates it via a 15-participant user study with VFX practitioners. No mathematical derivations, equations, fitted parameters, or first-principles claims appear in the provided text. The central claim—that the system supports translating high-level goals into parameters—rests on the described evaluation rather than reducing to self-definition, self-citation chains, or renamed inputs. The study is framed as independent, with no load-bearing self-citations or ansatzes that collapse the result to its own assumptions by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a description of a software prototype and user study in human-computer interaction. No mathematical free parameters, axioms, or new physical entities are introduced or required for the central claim.

pith-pipeline@v0.9.0 · 5460 in / 1149 out tokens · 55208 ms · 2026-05-12T03:29:29.462348+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

90 extracted references · 90 canonical work pages · 1 internal anchor

  1. [1]

    Tyler Angert, Miroslav Suzara, Jenny Han, Christopher Pondoc, and Hariharan Subramonyam. 2023. Spellburst: A node-based interface for exploratory creative coding with natural language prompts. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–22

  2. [2]

    Rahul Arora, Rubaiat Habib Kazi, Danny M Kaufman, Wilmot Li, and Karan Singh

  3. [3]

    InProceedings of the 32nd annual ACM symposium on user interface software and technology

    Magicalhands: Mid-air hand gestures for animating in vr. InProceedings of the 32nd annual ACM symposium on user interface software and technology. 463–477

  4. [4]

    2025.Bifrost for Maya: Simulate Dynamic Effects

    Autodesk. 2025.Bifrost for Maya: Simulate Dynamic Effects. https: //help.autodesk.com/view/MAYAUL/2024/ENU/?guid=Bifrost_MayaPlugin_ bifrost_for_maya_html Accessed 2025-09-11

  5. [5]

    Aaron Bangor, Philip Kortum, and James Miller. 2009. Determining what indi- vidual SUS scores mean: Adding an adjective rating scale.Journal of usability studies4, 3 (2009), 114–123

  6. [6]

    Benjamin B Bederson and James D Hollan. 1994. Pad++ a zooming graphical interface for exploring alternate interface physics. InProceedings of the 7th annual ACM symposium on User interface software and technology. 17–26

  7. [7]

    Samuelle Bourgault, Li-Yi Wei, Jennifer Jacobs, and Rubaiat Habib Kazi. 2025. Narrative Motion Blocks: Combining Direct Manipulation and Natural Language Interactions for Animation Creation. InProceedings of the 2025 ACM Designing Interactive Systems Conference. 1366–1386

  8. [8]

    Stephen Brade, Bryan Wang, Mauricio Sousa, Sageev Oore, and Tovi Gross- man. 2023. Promptify: Text-to-image generation through interactive prompt exploration with large language models. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–14

  9. [9]

    John Brooke et al. 1996. SUS-A quick and dirty usability scale.Usability evaluation in industry189, 194 (1996), 4–7

  10. [10]

    Sofie Busch, Nikoline Sander, and Mário Barros. 2023. Decoding design briefs: The role of abstraction levels in textual and visual stimuli. InNordes 2023. Linköping University

  11. [11]

    Ricardo Cabello and Contributors. 2010. three.js: JavaScript 3D Library. https: //threejs.org/. https://threejs.org/ Open-source under MIT license

  12. [12]

    Yining Cao, Peiling Jiang, and Haijun Xia. 2025. Generative and Malleable User Interfaces with Generative and Evolving Task-Driven Data Model. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–20

  13. [13]

    2006.Constructing grounded theory: A practical guide through qualitative analysis

    Kathy Charmaz. 2006.Constructing grounded theory: A practical guide through qualitative analysis. sage

  14. [14]

    Jiaqi Chen, Yanzhe Zhang, Yutong Zhang, Yijia Shao, and Diyi Yang. 2025. Gen- erative interfaces for language models.arXiv preprint arXiv:2508.19227(2025)

  15. [15]

    Yiru Chen and Eugene Wu. 2022. Pi2: End-to-end interactive visualization inter- face generation from queries. InProceedings of the 2022 International Conference on Management of Data. 1711–1725

  16. [16]

    Ruijia Cheng, Titus Barik, Alan Leung, Fred Hohman, and Jeffrey Nichols. 2024. BISCUIT: Scaffolding LLM-generated code with ephemeral UIs in computational notebooks. In2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 13–23

  17. [17]

    Erin Cherry and Celine Latulipe. 2014. Quantifying the creativity support of digital tools through the creativity support index.ACM Transactions on Computer- Human Interaction (TOCHI)21, 4 (2014), 1–25

  18. [18]

    John Joon Young Chung and Eytan Adar. 2023. Promptpaint: Steering text-to- image generation through paint medium-like interactions. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–17

  19. [19]

    John Joon Young Chung and Max Kreminski. 2024. Patchview: Llm-powered worldbuilding with generative dust and magnet visualization. InProceedings of the 37th Annual ACM Symposium on User Interface Software and Technology. 1–19

  20. [20]

    CreativeLifeForm. 2021. three-nebula: A Particle System Engine for Three.js. https://three-nebula.org/. https://three-nebula.org/ MIT licensed, includes JSON support and GUI editor

  21. [21]

    Hai Dang, Frederik Brudy, George Fitzmaurice, and Fraser Anderson. 2023. World- smith: Iterative and expressive prompting for world building with a generative ai. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–17

  22. [22]

    Richard Lee Davis, Thiemo Wambsganss, Wei Jiang, Kevin Gonyop Kim, Tanja Käser, and Pierre Dillenbourg. 2024. Fashioning creative expertise with generative AI: Graphical interfaces for design space exploration better support ideation than text prompts. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–26

  23. [23]

    2025.Overview of Niagara Effects for Unreal Engine

    Epic Games. 2025.Overview of Niagara Effects for Unreal Engine. https://dev.epicgames.com/documentation/en-us/unreal-engine/overview-of- niagara-effects-for-unreal-engine Accessed 2025-09-11

  24. [24]

    Leah Findlater and Joanna McGrenere. 2004. A comparison of static, adaptive, and adaptable menus. InProceedings of the SIGCHI conference on Human factors in computing systems. 89–96

  25. [25]

    Leah Findlater, Karyn Moffatt, Joanna McGrenere, and Jessica Dawson. 2009. Ephemeral adaptation: The use of gradual onset to improve menu selection per- formance. Inproceedings of the SIGCHI conference on human factors in computing systems. 1655–1664

  26. [26]

    Krzysztof Gajos and Daniel S Weld. 2004. SUPPLE: automatically generating user interfaces. InProceedings of the 9th international conference on Intelligent user interfaces. 93–100

  27. [27]

    Krzysztof Z Gajos, Katherine Everitt, Desney S Tan, Mary Czerwinski, and Daniel S Weld. 2008. Predictability and accuracy in adaptive user interfaces. InProceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1271–1274

  28. [28]

    Rohit Gandikota, Joanna Materzyńska, Tingrui Zhou, Antonio Torralba, and David Bau. 2024. Concept sliders: Lora adaptors for precise control in diffusion models. InEuropean Conference on Computer Vision. Springer, 172–188

  29. [29]

    2009.Elemental Magic: The Art of Special Effects Animation

    Joseph Gilland. 2009.Elemental Magic: The Art of Special Effects Animation. Vol. 1. Taylor & Francis

  30. [30]

    2012.Elemental Magic, Volume 2: The Technique of Special Effects Animation

    Joseph Gilland. 2012.Elemental Magic, Volume 2: The Technique of Special Effects Animation. Routledge

  31. [31]

    Björn Hartmann, Loren Yu, Abel Allison, Yeonsoo Yang, and Scott R Klemmer

  32. [32]

    InProceedings of the 21st annual ACM symposium on User interface software and technology

    Design as exploration: creating interface alternatives through parallel authoring and runtime tuning. InProceedings of the 21st annual ACM symposium on User interface software and technology. 91–100

  33. [33]

    1990.Language in thought and action

    Samuel Ichiyé Hayakawa and Alan R Hayakawa. 1990.Language in thought and action. Houghton Mifflin Harcourt

  34. [34]

    Zhuangze Hou, Jingze Tian, Nianlong Li, Farong Ren, and Can Liu. 2025. EchoLad- der: Progressive AI-Assisted Design of Immersive VR Scenes. InProceedings of the 38th Annual ACM Symposium on User Interface Software and Technology. 1–22

  35. [35]

    Rahul Jain, Amit Goel, Koichiro Niinuma, and Aakar Gupta. 2025. AdaptiveSliders: User-aligned Semantic Slider-based Editing of Text-to-Image Model Output. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–27

  36. [36]

    Jun Kato, Tomoyasu Nakano, and Masataka Goto. 2015. TextAlive: Integrated design environment for kinetic typography. InProceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 3403–3412

  37. [37]

    Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, and George Fitzmaurice

  38. [38]

    InProceedings of the 27th annual ACM symposium on User interface software and technology

    Kitty: sketching dynamic and interactive illustrations. InProceedings of the 27th annual ACM symposium on User interface software and technology. 395–405

  39. [39]

    Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, Shengdong Zhao, and George Fitzmaurice. 2014. Draco: bringing life to illustrations with kinetic Preprint, , Kyzyl Monteiro, Evan Atherton, George Fitzmaurice, and Qian Zhou textures. InProceedings of the SIGCHI Conference on Human Factors in Computing Systems. 351–360

  40. [40]

    Yuki Koyama, Issei Sato, and Masataka Goto. 2020. Sequential gallery for interac- tive visual design optimization.ACM Transactions on Graphics (TOG)39, 4 (2020), 88–1

  41. [41]

    Jaewook Lee, Filippo Aleotti, Diego Mazala, Guillermo Garcia-Hernando, Sara Vicente, Oliver James Johnston, Isabel Kraus-Liang, Jakub Powierza, Donghoon Shin, Jon E Froehlich, et al. 2025. Imaginatear: Ai-assisted in-situ authoring in augmented reality. InProceedings of the 38th Annual ACM Symposium on User Interface Software and Technology. 1–21

  42. [42]

    Jingyi Li, Eric Rawn, Jacob Ritchie, Jasper Tran O’Leary, and Sean Follmer. 2023. Beyond the artifact: power as a lens for creativity support tools. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–15

  43. [43]

    Haichuan Lin, Yilin Ye, Jiazhi Xia, and Wei Zeng. 2025. SketchFlex: Facilitating Spatial-Semantic Coherence in Text-to-Image Generation with Region-Based Sketches. InProceedings of the 2025 CHI Conference on Human Factors in Comput- ing Systems. 1–19

  44. [44]

    Ryan Louie, Andy Coenen, Cheng Zhi Huang, Michael Terry, and Carrie J Cai

  45. [45]

    InProceedings of the 2020 CHI conference on human factors in computing systems

    Novice-AI music co-creation via AI-steering tools for deep generative models. InProceedings of the 2020 CHI conference on human factors in computing systems. 1–13

  46. [46]

    Damien Masson, Jo Vermeulen, George Fitzmaurice, and Justin Matejka. 2022. Supercharging trial-and-error for learning complex software applications. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–13

  47. [47]

    Justin Matejka, Wei Li, Tovi Grossman, and George Fitzmaurice. 2009. Com- munityCommands: command recommendations for software applications. In Proceedings of the 22nd annual ACM symposium on User interface software and technology. 193–202

  48. [48]

    Nora McDonald, Sarita Schoenebeck, and Andrea Forte. 2019. Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice.Proceedings of the ACM on human-computer interaction3, CSCW (2019), 1–23

  49. [49]

    Bryan Min, Allen Chen, Yining Cao, and Haijun Xia. 2025. Malleable Overview- Detail Interfaces. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–25

  50. [50]

    Afshin Mobramaein and Jim Whitehead. 2019. A methodology for designing natural language interfaces for procedural content generation. InProceedings of the 14th International Conference on the Foundations of Digital Games. 1–9

  51. [51]

    Jeffrey Nichols, Brad A Myers, Michael Higgins, Joseph Hughes, Thomas K Harris, Roni Rosenfeld, and Mathilde Pignol. 2002. Generating remote control interfaces for complex appliances. InProceedings of the 15th annual ACM symposium on User interface software and technology. 161–170

  52. [52]

    Elisabeth Pacherie. 2008. The phenomenology of action: A conceptual framework. Cognition107, 1 (2008), 179–217

  53. [53]

    2012.Computer Animation: Algorithms and Techniques(3rd ed.)

    Rick Parent. 2012.Computer Animation: Algorithms and Techniques(3rd ed.). Morgan Kaufmann

  54. [54]

    William T Reeves. 1998. Particle systems—a technique for modeling a class of fuzzy objects. InSeminal graphics: pioneering efforts that shaped the field. 203–220

  55. [55]

    Giuseppe Riva, John A Waterworth, Eva L Waterworth, and Fabrizia Mantovani

  56. [56]

    From intention to action: The role of presence.New Ideas in Psychology29, 1 (2011), 24–37

  57. [57]

    Nazmus Saquib, Rubaiat Habib Kazi, Li-yi Wei, Gloria Mark, and Deb Roy. 2021. Constructing embodied algebra by sketching. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–16

  58. [58]

    Andrew Sears and Ben Shneiderman. 1994. Split menus: effectively using selection frequency to organize menus.ACM Transactions on Computer-Human Interaction (TOCHI)1, 1 (1994), 27–51

  59. [59]

    Evan Shimizu, Matthew Fisher, Sylvain Paris, James McCann, and Kayvon Fa- tahalian. 2020. Design adjectives: a framework for interactive model-guided exploration of parameterized design spaces. InProceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. 261–278

  60. [60]

    Ben Shneiderman. 2002. Promoting universal usability with multi-layer interface design.ACM SIGCAPH computers and the physically handicapped73-74 (2002), 1–8

  61. [61]

    2025.POP Solver (Houdini) — particle update/forces solver

    SideFX. 2025.POP Solver (Houdini) — particle update/forces solver. https://www. sidefx.com/docs/houdini/ Accessed 2025-09-11

  62. [62]

    Ruben Smelik, Krzysztof Galka, Klaas Jan De Kraker, Frido Kuijper, and Rafael Bidarra. 2011. Semantic constraints for procedural generation of virtual worlds. InProceedings of the 2nd International Workshop on Procedural Content Generation in Games. 1–4

  63. [63]

    Sangho Suh, Meng Chen, Bryan Min, Toby Jia-Jun Li, and Haijun Xia. 2024. Lumi- nate: Structured generation and exploration of design space with large language models for human-ai co-creation. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–26

  64. [64]

    Sangho Suh, Bryan Min, Srishti Palani, and Haijun Xia. 2023. Sensecape: En- abling multilevel exploration and sensemaking with large language models. In Proceedings of the 36th annual ACM symposium on user interface software and technology. 1–18

  65. [65]

    Ryo Suzuki, Parastoo Abtahi, Chen Zhu-Tian, Mustafa Doga Dogan, Andrea Colaco, Eric J Gonzalez, Karan Ahuja, and Mar Gonzalez-Franco. 2025. Pro- grammable reality.Frontiers in Virtual Reality6 (2025), 1649785

  66. [66]

    Yuki Tatsukawa, I-Chao Shen, Mustafa Doga Dogan, Anran Qi, Yuki Koyama, Ariel Shamir, and Takeo Igarashi. 2025. FontCraft: Multimodal Font Design Using Interactive Bayesian Optimization. InProceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–14

  67. [67]

    Adrien Treuille, Antoine McNamara, Zoran Popović, and Jos Stam. 2003. Keyframe control of smoke simulations. InACM SIGGRAPH 2003 Papers. 716–723

  68. [68]

    2025.Visual Effect Graph Manual — Node-based author- ing

    Unity Technologies. 2025.Visual Effect Graph Manual — Node-based author- ing. https://docs.unity3d.com/Packages/com.unity.visualeffectgraph@latest/ Accessed 2025-09-11

  69. [69]

    Priyan Vaithilingam, Elena L Glassman, Jeevana Priya Inala, and Chenglong Wang. 2024. Dynavis: Dynamically synthesized ui widgets for visualization editing. InProceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–17

  70. [70]

    Bret Victor. 2011. Up and down the ladder of abstraction.Retrieved September2 (2011), 2015

  71. [71]

    Zhijie Wang, Yuheng Huang, Da Song, Lei Ma, and Tianyi Zhang. 2024. Promptcharm: Text-to-image generation through multi-modal prompting and refinement. InProceedings of the 2024 CHI Conference on Human Factors in Com- puting Systems. 1–21

  72. [72]

    Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. [n. d.]. Writ- Large: Ink Unleashed by Unified Scope, Action, & Zoom

  73. [73]

    Zhijie Xia, Kyzyl Monteiro, Kevin Van, and Ryo Suzuki. 2023. RealityCanvas: Augmented Reality Sketching for Embedded and Responsive Scribble Animation Effects. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–14

  74. [74]

    Liwenhan Xie, Yanna Lin, Can Liu, Huamin Qu, and Xinhuan Shu. 2025. DataWink: Reusing and Adapting SVG-based Visualization Examples with Large Multimodal Models.arXiv preprint arXiv:2507.17734(2025)

  75. [75]

    Liwenhan Xie, Chengbo Zheng, Haijun Xia, Huamin Qu, and Chen Zhu-Tian. 2024. Waitgpt: Monitoring and steering conversational llm agent in data analysis with on-the-fly code visualization. InProceedings of the 37th Annual ACM Symposium on User Interface Software and Technology. 1–14

  76. [76]

    Jun Xing, Rubaiat Habib Kazi, Tovi Grossman, Li-Yi Wei, Jos Stam, and George Fitzmaurice. 2016. Energy-brushes: Interactive tools for illustrating stylized elemental dynamics. InProceedings of the 29th Annual Symposium on User Interface Software and Technology. 755–766

  77. [77]

    Lei Zhang, Jin Pan, Jacob Gettig, Steve Oney, and Anhong Guo. 2024. Vrcopilot: Authoring 3d layouts with generative ai models in vr. InProceedings of the 37th Annual ACM Symposium on User Interface Software and Technology. 1–13

  78. [78]

    add " ,

    Yifei Zhang, Lin-Ping Yuan, Yuheng Zhao, Jielin Feng, and Siming Chen. 2025. KinemaFX: A Kinematic-Driven Interactive System for Particle Effect Exploration and Customization.arXiv preprint arXiv:2507.19782(2025). Elemental Alchemist: A Generative Interface for Semantic Control of Particle Systems Preprint, , A LLM Prompt Structure This appendix documents...

  79. [79]

    ** CONCEPTUAL LAYER **: Abstract concepts or properties extracted from the user's natural language , usually similar in vocabulary to the user's request

  80. [80]

    ** SEMANTIC LAYER **: Interpretable , grounded attributes derived from concepts and linkable to technical parameters

Showing first 80 references.