pith. machine review for the scientific record.

arxiv: 2604.23129 · v1 · submitted 2026-04-25 · 💻 cs.HC · cs.AI · cs.IR · cs.MA

Recognition: unknown

MindTrellis: Co-Creating Knowledge Structures with AI through Interactive Visual Exploration

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 07:32 UTC · model grok-4.3

classification 💻 cs.HC · cs.AI · cs.IR · cs.MA
keywords human-AI collaboration · knowledge graphs · interactive visualization · knowledge synthesis · user study · cognitive load · information organization · document exploration

The pith

MindTrellis lets users and AI jointly build and edit dynamic knowledge graphs from documents, producing better organized structures with lower cognitive load than retrieval-only tools.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Knowledge workers need to turn scattered documents into coherent mental models, yet most tools either let AI answer queries without letting users shape the overall structure or let users build diagrams without AI assistance. MindTrellis closes the gap by presenting a shared visual knowledge graph that both the user and the AI can modify in real time. Users retrieve document-grounded facts, add or remove concepts, change links, and rearrange hierarchy as their understanding changes. A study with twelve participants who built slide decks found that the system produced higher expert-rated content coverage and structural quality while reducing measured cognitive load compared with plain retrieval baselines.

Core claim

MindTrellis is an interactive visual system in which a user and an AI collaboratively construct an evolving knowledge graph: the user queries the graph for information grounded in source documents, introduces new concepts, alters relationships, and reorganizes the hierarchy to match developing understanding; a controlled study showed that this joint construction produced slide decks rated higher by experts on coverage and structure and imposed less cognitive load than retrieval-only interfaces.

What carries the argument

The dynamic knowledge graph that functions as the single editable, queryable artifact jointly maintained by the user and the AI.
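The paper does not publish its data model, but the idea of a "single editable, queryable artifact" can be made concrete with a minimal sketch. Everything below is hypothetical — names, methods, and structure are invented for illustration, not taken from MindTrellis:

```python
# Minimal sketch of a jointly editable knowledge graph (all names hypothetical;
# the paper does not specify its internal data model).

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}      # concept name -> document-grounded description
        self.parents = {}    # concept name -> parent concept (hierarchy)
        self.links = set()   # unordered {a, b} relationship pairs

    def add_concept(self, name, description, parent=None):
        self.nodes[name] = description
        if parent is not None:
            self.parents[name] = parent

    def link(self, a, b):
        self.links.add(frozenset((a, b)))

    def move(self, name, new_parent):
        # Reorganize the hierarchy as the user's understanding changes.
        self.parents[name] = new_parent

    def remove_concept(self, name):
        self.nodes.pop(name, None)
        self.parents.pop(name, None)
        self.links = {l for l in self.links if name not in l}

    def children(self, name):
        return [c for c, p in self.parents.items() if p == name]

# Both parties mutate one shared instance: the user via direct manipulation,
# the AI via retrieval-grounded insertions.
g = KnowledgeGraph()
g.add_concept("retrieval", "document-grounded lookup")
g.add_concept("RAG", "retrieval-augmented generation", parent="retrieval")
g.link("RAG", "retrieval")
```

The point of the sketch is that every operation named in the pith — query, add, relink, reorganize — is a method on the same object, which is what makes the artifact load-bearing.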

If this is right

  • Expert ratings of content coverage rise when users can directly edit the shared graph.
  • Structural quality of the final knowledge representation improves through iterative user-AI reorganization.
  • Reported cognitive load falls during the synthesis task.
  • Users can refine their mental models by manipulating concepts and links in the same visual space used by the AI.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same joint-editing pattern could support team-based knowledge work where multiple people edit the graph alongside the AI.
  • Document-grounded graphs of this kind might reduce factual drift in long-form outputs by keeping every node traceable to sources.
  • The approach could be tested in domains that require frequent model revision, such as literature reviews or policy analysis, to see whether the benefits persist beyond slide-deck creation.

Load-bearing premise

The assumption that expert ratings of slide decks created by twelve participants will indicate reliable gains in knowledge synthesis for other users and tasks.

What would settle it

A follow-up study with more participants or a different synthesis task such as report writing in which expert ratings of coverage and structure show no advantage for the interactive system over retrieval baselines.
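"More participants" can be made concrete with a power calculation. As a back-of-envelope sketch — a normal approximation, not anything the paper reports — the sample size needed for a two-sided paired comparison at 80% power is:

```python
# Back-of-envelope power calculation for a paired (within-subjects) design.
# Normal approximation; a t-based calculation would give a slightly larger n.
import math
from statistics import NormalDist

def paired_sample_size(effect_size, alpha=0.05, power=0.80):
    """Participants needed to detect a given Cohen's d on paired differences."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# A medium paired effect (d = 0.5) needs roughly 32 participants under this
# approximation, versus the study's 12.
n = paired_sample_size(0.5)
```

So unless the true effect is large, a 12-person study is underpowered for a confirmatory claim — which is exactly the gap a follow-up study would close.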

Figures

Figures reproduced from arXiv: 2604.23129 by Can Liu, Cara Li, Emily Kuang, Jian Zhao, Xiang Li.

Figure 1: MindTrellis is an interactive visual system that enables human-AI collaborative knowledge construction, where … (view at source ↗)
Figure 2: Key functionalities of MindTrellis demonstrated through the user scenario. (a) Initial exploration generating a … (view at source ↗)
Figure 3: Overview of the MindTrellis multi-agent pipeline supporting bidirectional interaction. User input is routed by the … (view at source ↗)
Figure 4: The Adaptive Retriever pipeline consists of four … (view at source ↗)
Figure 5: Participants’ ratings on usability (UMUX), task support, and depth and breadth of information exploration. Usability … (view at source ↗)
Figure 6: Participants’ ratings on MindTrellis’s effectiveness … (view at source ↗)
Figure 7: Baseline system interfaces used in the user study. (view at source ↗)
original abstract

Knowledge workers face increasing challenges in synthesizing information from multiple documents into structured conceptual understanding. This process is inherently iterative: users explore content, identify relationships between concepts, and continuously reorganize their mental models. However, current approaches offer limited support. LLM-based systems let users query information but not shape how knowledge is organized; manual tools like mind maps support structure creation but lack intelligent assistance. This leaves an open opportunity: supporting collaborative construction where users and AI jointly develop an evolving knowledge representation. We present MindTrellis, an interactive visual system where users and AI collaboratively build a dynamic knowledge graph. Users can query the graph to retrieve document-grounded information, and contribute by introducing new concepts, modifying relationships, and reorganizing the hierarchy to reflect their developing understanding. In a user study where 12 participants created slide decks, MindTrellis outperformed retrieval-only baselines in knowledge organization and cognitive load, as measured by expert ratings of content coverage and structural quality.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper presents MindTrellis, an interactive visual system enabling users and AI to collaboratively construct and evolve a dynamic knowledge graph from multiple documents. Users query for grounded information, introduce concepts, edit relationships, and reorganize hierarchies to reflect developing understanding. A within-subjects user study with 12 participants creating slide decks reports that MindTrellis outperformed retrieval-only baselines on expert ratings of content coverage and structural quality, interpreted as superior knowledge organization and reduced cognitive load.

Significance. If the empirical results hold under more rigorous evaluation, the work advances HCI research on knowledge synthesis by demonstrating the value of interactive co-creation mechanisms over pure retrieval interfaces. The system design for visual graph manipulation paired with LLM assistance is a concrete contribution, and the comparative study provides initial evidence that such tools can improve structured output in document synthesis tasks.

major comments (3)
  1. [User Study] User Study section: The central claim of outperformance rests on a within-subjects study with only 12 participants and expert ratings of final slide decks; without reported statistical tests, effect sizes, power analysis, or inter-rater reliability, it is unclear whether the differences in content coverage and structural quality are robust or generalizable beyond the specific task and rater pool.
  2. [Evaluation] Evaluation subsection: Expert ratings of static final artifacts serve as the sole proxy for both knowledge organization and cognitive load, yet no process logs, interaction traces, or validated subjective instruments (e.g., NASA-TLX) are described; this indirect measure does not directly substantiate claims about the benefits of the iterative user-AI reorganization process.
  3. [Abstract and User Study] Abstract and User Study: The reported superiority lacks details on expert selection, blinding procedures, rating scales, or how conditions were counterbalanced, raising the possibility that observed differences reflect task-specific biases or rater expectations rather than system effects.
minor comments (2)
  1. [System Overview] Figure captions and system description would benefit from additional screenshots illustrating a complete user-AI collaboration sequence to clarify the visual interaction mechanics.
  2. [Related Work] Related work section could more explicitly contrast MindTrellis with recent LLM-augmented mind-mapping tools to sharpen the novelty claim.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment below and have made revisions to improve the reporting of the user study where possible.

point-by-point responses
  1. Referee: [User Study] User Study section: The central claim of outperformance rests on a within-subjects study with only 12 participants and expert ratings of final slide decks; without reported statistical tests, effect sizes, power analysis, or inter-rater reliability, it is unclear whether the differences in content coverage and structural quality are robust or generalizable beyond the specific task and rater pool.

    Authors: We agree that the modest sample size of 12 participants and absence of formal statistical reporting limit the strength of claims about robustness and generalizability. The study was designed as an initial exploration of the co-creation paradigm rather than a confirmatory experiment. In the revised manuscript we will add an explicit limitations subsection that discusses the sample size, notes the lack of a priori power analysis, reports any post-hoc effect sizes calculable from the rating data, and includes inter-rater reliability statistics for the expert evaluations. These additions will clarify the scope of the evidence without overstating its statistical foundation. revision: partial

  2. Referee: [Evaluation] Evaluation subsection: Expert ratings of static final artifacts serve as the sole proxy for both knowledge organization and cognitive load, yet no process logs, interaction traces, or validated subjective instruments (e.g., NASA-TLX) are described; this indirect measure does not directly substantiate claims about the benefits of the iterative user-AI reorganization process.

    Authors: We acknowledge that final-artifact ratings provide only an indirect proxy for the benefits of the iterative reorganization process and that process logs or validated instruments such as NASA-TLX would offer stronger process-level evidence. Our choice of outcome ratings was motivated by the task goal of producing high-quality slide decks, which we view as a downstream indicator of effective knowledge organization. In revision we will explicitly label the ratings as outcome-based proxies, discuss their relationship to the iterative process described in the system section, and note the absence of direct process measures as a limitation for future studies. revision: partial

  3. Referee: [Abstract and User Study] Abstract and User Study: The reported superiority lacks details on expert selection, blinding procedures, rating scales, or how conditions were counterbalanced, raising the possibility that observed differences reflect task-specific biases or rater expectations rather than system effects.

    Authors: We agree that these procedural details are necessary for assessing potential bias. The revised User Study section will include the expert selection criteria, blinding procedures employed, the specific rating scales used, and the counterbalancing method for conditions. We will also update the abstract to reference the improved methodological transparency if space permits. revision: yes
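The statistics promised in these responses — post-hoc effect sizes and inter-rater reliability — are routine to compute once the per-participant ratings are available. A minimal sketch with invented 1–7 ratings (nothing below is from the paper):

```python
# Illustrative only: post-hoc effect size (paired Cohen's d) and two-rater
# agreement (Cohen's kappa) for small within-subjects rating data.
from statistics import mean, stdev

def cohens_d_paired(a, b):
    """Paired Cohen's d: mean of per-participant differences / SD of differences."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / stdev(diffs)

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa for two raters over categorical ratings."""
    n = len(r1)
    observed = sum(x == y for x, y in zip(r1, r2)) / n
    expected = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical expert ratings for 12 participants, system vs. baseline.
system   = [6, 5, 7, 6, 5, 6, 7, 6, 5, 6, 6, 7]
baseline = [5, 4, 6, 5, 5, 5, 6, 5, 4, 5, 5, 6]
d = cohens_d_paired(system, baseline)
```

With n = 12 the confidence interval around such a d would be wide, which is the referee's point: reporting the effect size is cheap, but only a larger sample makes it informative.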

Circularity Check

0 steps flagged

No circularity: empirical user study with no derivations or fitted parameters

full rationale

The paper describes an interactive system for collaborative knowledge graph construction and reports results from a within-subjects user study (n=12) comparing MindTrellis to retrieval baselines on expert-rated slide-deck quality. No equations, parameter fitting, uniqueness theorems, or self-citational derivations appear in the text; the claims rest on direct experimental measurements rather than any reduction of outputs to inputs by construction. The evaluation is therefore self-contained as an empirical design study.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the design of the MindTrellis interface and the validity of a small-scale user study. No free parameters or new invented entities are introduced. The key domain assumption is that collaborative visual interaction with AI improves knowledge synthesis over retrieval-only methods.

axioms (1)
  • domain assumption Collaborative human-AI visual interaction improves knowledge organization and reduces cognitive load compared with retrieval-only baselines
    This is the core hypothesis directly tested and claimed in the user study results.

pith-pipeline@v0.9.0 · 5476 in / 1255 out tokens · 29477 ms · 2026-05-08T07:32:04.918573+00:00 · methodology

discussion (0)

