Recognition: no theorem link
Elemental Alchemist: A Generative Interface for Semantic Control of Particle Systems Across Dynamic Levels of Abstraction
Pith reviewed 2026-05-12 03:29 UTC · model grok-4.3
The pith
Elemental Alchemist turns high-level creative goals into contextual controls and abstracted parameters for particle-system visual effects.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper claims that a generative interface equipped with a contextual brush palette and a generative control panel can interpret user intent at multiple levels of abstraction and generate appropriate tools and semantic controls. On this basis, it supports translating high-level creative goals into usable particle-system parameters, as shown by participants who produced their desired visual outcomes during the evaluation.
What carries the argument
The generative interface, built around a contextual brush palette that produces scene-specific tools and a generative control panel that surfaces technical parameters while abstracting them into semantic attributes and conceptual controls.
If this is right
- Users can specify effects through high-level concepts instead of manually locating and adjusting dozens of low-level sliders.
- Both novices and experts gain access to relevant controls without first building a complete mental model of the entire parameter space.
- Creative iteration speeds up because mid-level semantic attributes and high-level conceptual controls are generated on demand from scene context.
- The same particle system can be edited at different abstraction levels without losing direct access to the underlying technical parameters.
Where Pith is reading between the lines
- The abstraction-layer approach could be tested on other simulation domains such as fluid or cloth systems where parameter-to-visual mappings are equally complex.
- Integrating user feedback loops into the generative panel might allow the system to refine its abstractions over repeated sessions with the same artist.
- If the brush-palette generation proves reliable, similar context-aware tool creation could reduce interface clutter in broader creative software beyond VFX.
Load-bearing premise
The generative components can correctly interpret varied user intents and produce accurate, non-limiting mappings to parameters and abstractions across different creative tasks.
What would settle it
A controlled study in which participants repeatedly fail to achieve intended particle effects or report that the surfaced controls do not correspond to their stated goals would show the mapping mechanism does not work as claimed.
Original abstract
Editing particle-system visual effects (VFX) is vital for digital storytelling, but achieving controllable, art-directable results remains challenging due to their multi-dimensional nature. Given a large collection of parameters, users must find the ones relevant to their creative goals -- a task that requires a systematic understanding of the particle system and how parameters map to high-level intents, such as making a fire look angry. Elemental Alchemist is a generative interface that transforms user intent into contextualized controls for semantic editing of particle systems. The system introduces two components: a contextual brush palette that generates tools based on scene context, and a generative control panel that surfaces relevant technical parameters and abstracts them to generate mid-level semantic attributes and high-level conceptual controls. An evaluation with 10 novice and 5 expert VFX practitioners shows the system supported users in translating high-level creative goals into particle system parameters.
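The abstract's layered-control idea (high-level concepts such as "angry" resolving to mid-level semantic attributes, which in turn drive many low-level parameters) can be sketched in a few lines. All parameter names, ranges, and mappings below are hypothetical illustrations of the idea, not the paper's implementation:

```python
from dataclasses import dataclass

# Hypothetical low-level parameters and mappings; the paper does not
# specify its parameter set or mapping functions.

@dataclass
class ParticleParams:
    """Low-level technical parameters of a particle emitter."""
    emission_rate: float = 50.0   # particles per second
    lifetime: float = 2.0         # seconds
    start_speed: float = 1.0      # units per second
    turbulence: float = 0.1       # noise strength

def apply_intensity(params: ParticleParams, intensity: float) -> ParticleParams:
    """Mid-level semantic attribute: one 0..1 knob drives several parameters."""
    params.emission_rate = 20.0 + 180.0 * intensity
    params.start_speed = 0.5 + 2.5 * intensity
    params.turbulence = 0.05 + 0.6 * intensity
    return params

# High-level conceptual control: a word resolves to semantic attributes.
CONCEPT_TO_SEMANTICS = {
    "angry": {"intensity": 0.9},
    "calm": {"intensity": 0.2},
}

def apply_concept(params: ParticleParams, concept: str) -> ParticleParams:
    for attribute, value in CONCEPT_TO_SEMANTICS[concept].items():
        if attribute == "intensity":
            params = apply_intensity(params, value)
    return params

fire = apply_concept(ParticleParams(), "angry")
```

The point of the layering is that the user edits one named control while retaining direct access to the underlying fields, which is the property the review's "If this is right" section highlights.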
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents Elemental Alchemist, a generative interface for semantic editing of particle systems in VFX. It introduces a contextual brush palette that generates tools from scene context and a generative control panel that surfaces technical parameters while abstracting them into mid-level semantic attributes and high-level conceptual controls. The central claim is that an evaluation with 10 novice and 5 expert VFX practitioners demonstrates the system supports users in translating high-level creative goals into particle system parameters.
Significance. If the evaluation can be substantiated with full methodological details, objective metrics, and baselines, the work would offer a meaningful advance in HCI for creative tools by addressing the parameter-mapping challenge in complex particle systems. The dynamic abstraction approach has potential applicability beyond VFX to other generative design domains. The integration of generative components for intent interpretation is a timely contribution given current interest in AI-assisted interfaces.
Major comments (1)
- [Evaluation] Evaluation section (and abstract): The reported user study with 15 participants claims positive support for translating high-level goals but provides no details on study design, tasks, metrics (e.g., intent-match accuracy or parameter error rates), statistical analysis, or comparison to baselines. This directly undermines verification of the central claim that the contextual brush palette and generative control panel accurately interpret diverse intents without introducing mapping errors or expressiveness limits.
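To make the referee's request concrete, the two named metrics are straightforward to compute once per-trial outcomes are logged. The trial data and field names below are invented for illustration; the paper reports no such log:

```python
# Hypothetical per-trial log: whether the surfaced controls matched the
# stated intent, and a normalized error between achieved and target parameters.
trials = [
    {"intent": "angry fire",  "matched": True,  "param_error": 0.12},
    {"intent": "calm smoke",  "matched": True,  "param_error": 0.05},
    {"intent": "gentle rain", "matched": False, "param_error": 0.40},
]

intent_match_accuracy = sum(t["matched"] for t in trials) / len(trials)
mean_param_error = sum(t["param_error"] for t in trials) / len(trials)
```

Reporting these alongside a baseline interface would let readers verify the central claim rather than take it on the study's framing.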
Minor comments (1)
- The abstract could more explicitly state the specific generative techniques or models underlying the control panel to allow readers to assess technical novelty without reading the full methods.
Simulated Author's Rebuttal
We thank the referee for their constructive review and for recognizing the potential significance of Elemental Alchemist in addressing parameter-mapping challenges in particle systems. We address the major comment below and will incorporate the necessary changes to strengthen the manuscript.
Point-by-point responses
Referee: [Evaluation] Evaluation section (and abstract): The reported user study with 15 participants claims positive support for translating high-level goals but provides no details on study design, tasks, metrics (e.g., intent-match accuracy or parameter error rates), statistical analysis, or comparison to baselines. This directly undermines verification of the central claim that the contextual brush palette and generative control panel accurately interpret diverse intents without introducing mapping errors or expressiveness limits.
Authors: We agree that the Evaluation section in the current manuscript lacks the methodological detail required to fully substantiate the central claims. In the revised version we will expand this section to include a complete description of the study design and protocol, the tasks assigned to participants, the specific metrics collected (including intent-match accuracy and parameter error rates), the statistical analyses performed, and direct comparisons to baseline interfaces. These additions will enable verification that the system supports translation of high-level goals into parameters while limiting mapping errors and preserving expressiveness. We will also ensure the abstract accurately reflects the expanded evaluation.
Revision: yes
Circularity Check
No significant circularity; system description and user study are self-contained
Full rationale
The paper presents a generative interface (contextual brush palette and generative control panel) for particle-system editing and evaluates it via a 15-participant user study with VFX practitioners. No mathematical derivations, equations, fitted parameters, or first-principles claims appear in the provided text. The central claim—that the system supports translating high-level goals into parameters—rests on the described evaluation rather than reducing to self-definition, self-citation chains, or renamed inputs. The study is framed as independent, with no load-bearing self-citations or ansatzes that collapse the result to its own assumptions by construction.
[79]
** CONCEPTUAL LAYER **: Abstract concepts or properties extracted from the user's natural language , usually similar in vocabulary to the user's request
-
[80]
** SEMANTIC LAYER **: Interpretable , grounded attributes derived from concepts and linkable to technical parameters
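The two-layer response described above could be parsed defensively before driving the control panel. The JSON schema and field names here are assumptions for illustration, not the paper's actual prompt format:

```python
import json

# Hypothetical layered LLM response: conceptual layer (user-vocabulary
# concepts) plus semantic layer (grounded, normalized attributes).
raw_response = json.dumps({
    "conceptual": ["angry"],
    "semantic": {"flame_intensity": 0.9, "flicker_rate": 0.7},
})

def parse_layers(text: str) -> tuple[list[str], dict[str, float]]:
    """Split a response into conceptual and semantic layers, with basic checks."""
    data = json.loads(text)
    concepts = [c for c in data.get("conceptual", []) if isinstance(c, str)]
    # Keep only attributes that parse as numbers in the normalized 0..1 range.
    semantics = {
        k: float(v)
        for k, v in data.get("semantic", {}).items()
        if 0.0 <= float(v) <= 1.0
    }
    return concepts, semantics

concepts, semantics = parse_layers(raw_response)
```

Validation of this kind matters for the review's load-bearing premise: a malformed or out-of-range mapping is exactly the failure mode the referee wants measured.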