Prop-Chromeleon: Adaptive Haptic Props in Mixed Reality through Generative Artificial Intelligence
Pith reviewed 2026-05-09 18:33 UTC · model grok-4.3
The pith
Generative AI can align virtual assets to the shapes of physical objects, turning those objects into adaptive passive haptic props for mixed reality.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Prop-Chromeleon is an MR system in which a generative AI pipeline creates and anchors virtual assets to conform to the shapes of physical props according to user text prompts. A generation study with quantitative shape similarity metrics and qualitative prompt analysis, together with a user study, shows higher perceived realism, immersion, and enjoyment than static baselines. The results indicate that shape-aware generation enables both effective passive haptic feedback and creative engagement.
What carries the argument
The generative AI pipeline, which creates and geometrically anchors virtual assets to match physical prop shapes under prompt-based control.
Load-bearing premise
Generative AI models can reliably create virtual assets whose geometry aligns closely enough with arbitrary physical props to produce effective passive haptic feedback without unacceptable visual-tactile mismatches.
What would settle it
A user study in which participants rate the adaptive system no higher than static baselines in realism and immersion, or report frequent visual-tactile mismatches when interacting with common objects.
Original abstract
Mixed Reality (MR) aims to blend digital and physical worlds, but the absence of haptic feedback often breaks visual-tactile consistency. We introduce Prop-Chromeleon, an MR system based on generative artificial intelligence (AI) that dynamically transforms everyday objects into adaptive passive haptic props through user-provided text prompts. Our AI pipeline performs generation and anchoring of virtual assets that align with the shape of physical props, allowing us to study how virtual content generation behaves under geometric and prompt-based constraints. We evaluate Prop-Chromeleon's effectiveness through a generation study using varied object shapes and user prompts, combining quantitative shape similarity metrics with qualitative prompt fidelity analysis. Our user study further showcases Prop-Chromeleon's improvements in perceived realism, immersion, and enjoyment compared to static baselines. These results show that shape-aware generation can support both believable haptic interaction and creative engagement in MR.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Prop-Chromeleon, an MR system that uses generative AI to dynamically transform everyday physical objects into adaptive passive haptic props via user text prompts. The AI pipeline generates and anchors virtual assets to match prop geometry under geometric and prompt constraints. Evaluation includes a generation study with quantitative shape similarity metrics and qualitative prompt fidelity analysis, plus a user study showing gains in perceived realism, immersion, and enjoyment over static baselines. The central claim is that shape-aware generation enables believable haptic interaction and creative engagement in MR.
Significance. If the results hold with proper validation, the work could advance MR haptics by demonstrating how generative AI enables flexible, prompt-driven adaptation of arbitrary physical props without specialized hardware. Strengths include the empirical focus on both technical metrics (shape similarity) and experiential outcomes (user study), providing a concrete test of alignment between virtual generation and passive feedback. This could support broader applications in creative MR interfaces.
Major comments (2)
- [Abstract and Generation Study] The manuscript reports quantitative shape similarity metrics for varied object shapes but provides no numerical thresholds (e.g., a maximum Chamfer distance or surface deviation below which passive haptic feedback remains believable), nor any perceptual calibration linking the metrics to tactile mismatch tolerance. This is load-bearing for the central claim: global metrics may miss local protrusions that are salient to touch, which prevents clear attribution of the user-study gains to successful shape-aware haptics rather than to visual novelty.
- [User Study] No sample sizes, statistical tests, or details of the alignment error measurement are reported, despite the claimed improvements over static baselines in realism and immersion. Without these, the evidence for effective haptic interaction is difficult to assess, weakening support for the conclusion that the system delivers believable passive haptics.
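The metric at issue in the first comment can be made concrete. Below is a minimal, dependency-free sketch of a symmetric Chamfer distance between two sampled point clouds; the `prop` and `asset` data are synthetic stand-ins invented for illustration, not the paper's actual meshes.

```python
import random

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point clouds (lists of (x, y, z)).

    Uses brute-force nearest-neighbour search over squared distances,
    which is fine for a few hundred points.
    """
    def mean_min_sq(src, dst):
        total = 0.0
        for p in src:
            total += min(sum((pi - qi) ** 2 for pi, qi in zip(p, q)) for q in dst)
        return total / len(src)
    return mean_min_sq(a, b) + mean_min_sq(b, a)

rng = random.Random(0)
# Hypothetical sampled surface of a physical prop inside a unit box.
prop = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
# Hypothetical generated asset: the same surface with small Gaussian noise.
asset = [(x + rng.gauss(0, 0.01), y + rng.gauss(0, 0.01), z + rng.gauss(0, 0.01))
         for x, y, z in prop]
print(f"chamfer = {chamfer_distance(prop, asset):.6f}")
```

A threshold on this value, relative to object size and perceptually calibrated, is exactly the kind of number the referee asks the authors to report.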
Minor comments (1)
- [Abstract] The abstract would be strengthened by including at least one key numerical result (e.g., average shape similarity score) to summarize the quantitative findings.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback. The comments highlight important areas for strengthening the link between our technical metrics and haptic outcomes, as well as improving the reporting of our user study. We address each major comment below and indicate the corresponding revisions to the manuscript.
Point-by-point responses
Referee: [Abstract and Generation Study] The manuscript reports quantitative shape similarity metrics for varied object shapes but provides no numerical thresholds (e.g., a maximum Chamfer distance or surface deviation below which passive haptic feedback remains believable), nor any perceptual calibration linking the metrics to tactile mismatch tolerance. This is load-bearing for the central claim: global metrics may miss local protrusions that are salient to touch, which prevents clear attribution of the user-study gains to successful shape-aware haptics rather than to visual novelty.
Authors: We agree that explicit numerical thresholds and a direct perceptual calibration study would make the connection between shape similarity metrics and believable passive haptics more robust. Our generation study uses established metrics (e.g., Chamfer distance) drawn from the 3D generation literature, supplemented by qualitative inspection of local geometry and prompt fidelity. The user study then supplies the perceptual validation through direct interaction and ratings. To address the concern, we will revise the Generation Study section to discuss the range of observed metric values, cite prior passive-haptics work on tolerable surface deviations, and explicitly note the limitations of global metrics while describing how local features were reviewed qualitatively. These additions clarify the attribution of user-study gains to shape-aware generation rather than to visual novelty alone. Revision: yes.
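The referee's worry that global metrics miss local protrusions can be illustrated directly: averaging dilutes a single large deviation, while a worst-case (directed Hausdorff) measure exposes it. The point clouds below are synthetic, invented for this sketch; a one-point protrusion barely moves the mean nearest-neighbour distance but dominates the maximum.

```python
import math
import random

def nn_dists(a, b):
    """Nearest-neighbour distance from each point of cloud a to cloud b."""
    return [min(math.dist(p, q) for q in b) for p in a]

rng = random.Random(1)
# Hypothetical prop surface: 500 random points in a unit box.
surface = [(rng.random(), rng.random(), rng.random()) for _ in range(500)]
spiked = list(surface)
x, y, z = spiked[0]
spiked[0] = (x, y, z + 0.3)  # one local protrusion on an otherwise identical shape

d = nn_dists(spiked, surface)
mean_gap = sum(d) / len(d)   # global average: diluted by the 499 perfect matches
worst_gap = max(d)           # directed Hausdorff: set by the protrusion alone
print(f"mean = {mean_gap:.5f}, worst = {worst_gap:.5f}")
```

A fingertip would find the protrusion immediately, yet the mean-based score stays near zero — which is why reporting only global averages cannot certify believable passive haptics.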
Referee: [User Study] No sample sizes, statistical tests, or details of the alignment error measurement are reported, despite the claimed improvements over static baselines in realism and immersion. Without these, the evidence for effective haptic interaction is difficult to assess, weakening support for the conclusion that the system delivers believable passive haptics.
Authors: We acknowledge the reporting omission; the submitted manuscript did not include these details in the User Study section. In the revised version we have expanded the section to report the sample size, the statistical tests performed (including test type and significance values), and a full description of the alignment error measurement method together with the observed error statistics. These additions supply the quantitative foundation needed to evaluate the improvements in realism, immersion, and enjoyment relative to the static baselines. Revision: yes.
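To illustrate the kind of reporting the revision promises, here is a hedged, stdlib-only sketch of an exact one-sided sign test on paired realism ratings (a simpler alternative to the Wilcoxon signed-rank test the authors might actually use). All ratings and the sample size are invented for illustration.

```python
import math

def sign_test_p(adaptive, static):
    """One-sided exact sign test: p-value for adaptive ratings exceeding static.

    Tied pairs are dropped, as in the standard sign test; the p-value is
    P(X >= wins) for X ~ Binomial(n, 0.5) under the null of no difference.
    """
    diffs = [a - s for a, s in zip(adaptive, static) if a != s]
    n, wins = len(diffs), sum(d > 0 for d in diffs)
    return sum(math.comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Hypothetical 7-point realism ratings for N = 12 participants (illustrative only).
adaptive = [6, 5, 7, 6, 6, 5, 7, 6, 5, 6, 7, 5]
static   = [4, 5, 5, 4, 3, 4, 5, 5, 4, 4, 5, 4]
p = sign_test_p(adaptive, static)
print(f"N = {len(adaptive)}, sign test p = {p:.4f}")
```

Reporting this triple — sample size, test type, and p-value — is the minimum the revised User Study section would need for the baseline comparison to be assessable.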
Circularity Check
No circularity: empirical system evaluation with no derivations or self-referential fits
Full rationale
The paper introduces an MR system using generative AI to create virtual assets aligned to physical props, evaluated through a generation study (shape similarity metrics plus prompt fidelity) and a user study (perceived realism, immersion, enjoyment vs. static baselines). No equations, fitted parameters, predictions, or derivation chains appear in the abstract or described content. Central claims rest on external empirical results from user studies and quantitative metrics rather than any quantity defined in terms of itself or reduced by construction to inputs. Self-citations are not invoked as load-bearing uniqueness theorems or ansatzes. The work is self-contained against its reported benchmarks.