pith. machine review for the scientific record. sign in

arxiv: 2604.10545 · v1 · submitted 2026-04-12 · 💻 cs.HC

Recognition: unknown

Enhanced Self-Learning with Epistemologically-Informed LLM Dialogue

Authors on Pith no claims yet

Pith reviewed 2026-05-10 16:15 UTC · model grok-4.3

classification 💻 cs.HC
keywords self-learningLLM dialogueepistemologyAristotle Four CausesHCIeducational agentscognitive supportdialogue systems
0
0 comments X

The pith

Incorporating Aristotle's Four Causes into LLM prompts creates more engaging and insightful self-learning dialogues.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that adding classical epistemological structure to AI conversations can help independent learners handle complex material without getting lost or superficial. After observing how people actually use LLMs for self-study, the authors built CausaDisco, a system that automatically weaves questions about material, efficient cause, formal cause, and final cause into the dialogue. A controlled test with 36 users found this produced livelier back-and-forth, more thorough probing of ideas, and consideration of several angles at once. If the pattern holds, it offers a concrete method for turning general-purpose chatbots into steadier companions for solo learning.

Core claim

CausaDisco integrates Aristotle's Four Causes framework into LLM prompts to generate coherent and contextually appropriate follow-up questions, guiding self-learning by reducing cognitive load and producing more engaging interactions, sophisticated exploration, and multifaceted perspectives, as measured in a controlled study of 36 participants.

What carries the argument

CausaDisco, the dialogue system that embeds Aristotle's Four Causes into LLM prompts to automatically generate follow-up questions during self-learning sessions.

If this is right

  • Learners report more engaging interactions than with standard LLM tools.
  • The approach prompts more sophisticated exploration of the material.
  • Users consider multiple perspectives on the topics they study.
  • Educational AI designers gain a template for adding cognitive scaffolding without constant human input.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same prompt technique could be tried with other epistemological lenses to match different learning preferences.
  • Over repeated sessions such systems might reduce the chance of learners stopping at surface-level understanding of difficult subjects.
  • Future versions could blend multiple frameworks and adapt their depth according to how far a user has progressed.

Load-bearing premise

That Aristotle's Four Causes can be translated directly into prompts to structure natural dialogue without feeling forced or artificial.

What would settle it

A larger, more diverse replication study across varied topics that finds no measurable gain in engagement, exploration depth, or perspective-taking compared with ordinary LLM chats.

Figures

Figures reproduced from arXiv: 2604.10545 by David Gotz, Huamin Qu, Kento Shigyo, Weijia Liu, Xiyuan Wang, Yang Wang, Yi-Fan Cao, Yitong Gu, Zhilan Zhou.

Figure 1
Figure 1. Figure 1: CausaDisco provides users with original learning materials (A) and a core concept graph (B) to facilitate self-learning. Once a user initiates a dialogue, the LLM chatbot offers preliminary answers and then automatically generates four epistemologically-informed follow-up questions (C) to encourage deeper exploration. Concurrently, CausaDisco creates a query tree map (D) to assist users in managing their c… view at source ↗
Figure 2
Figure 2. Figure 2: Formative Study Workflow (N=26): A three-phase investigation of LLM interaction patterns during self-learning. (A) Data Collection: Gathered dialogue records, quiz scores, survey responses, and transcripts of semi-structured interviews. (B) Thematic Analysis: Classified four distinct LLM interaction patterns (proactive, validation-seeking, content-focused, receptive). (C) Epistemological Analysis and Desig… view at source ↗
Figure 3
Figure 3. Figure 3: This figure presents our six-step process for analyzing participants’ interaction behaviors and epistemological frameworks based on data collected from our formative study. strategies) to AI-interactive (high interaction frequency; diverse exploration dimensions and conversation strategies) (see Fig. 3B). To further contextualize these patterns, the analysts conducted a detailed examination of each partici… view at source ↗
Figure 4
Figure 4. Figure 4: This 2x2 matrix presents four distinct interaction patterns, categorized by AI interactivity and reflective-mindedness: Prompt-Naive, Confirmatory-Oriented, Reflective-Minded, and AI-Interactive. This classification reflects the mental models associated with different learning strategies observed during self-learning. 4 FINDINGS This section unpacks four interaction patterns and their epistemological diffe… view at source ↗
Figure 5
Figure 5. Figure 5: A substantial portion of participants reported challenges using the LLM-supported chatbot for self-study, particularly with question formulation, response clarity, and the achievement of deep understanding. need for systematic query approaches and metacognitive support to help learners examine their thinking processes and develop more comprehensive exploration strategies. 4.2.3 Design Requirements. The cha… view at source ↗
Figure 6
Figure 6. Figure 6: System Overview: The interface features four main views: A) [PITH_FULL_IMAGE:figures/full_fig_p014_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: This figure presents participants’ subjective ratings (seven-point Likert scale) comparing [PITH_FULL_IMAGE:figures/full_fig_p018_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Within-subject comparison of participants’ subjective ratings of key measures (engagement, efficiency, comprehen [PITH_FULL_IMAGE:figures/full_fig_p020_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: This figure illustrates participants’ subjective ratings of usability and design for [PITH_FULL_IMAGE:figures/full_fig_p021_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Design of the probe system for self-learning tasks in the formative study. This includes 1) reading materials; 2) a [PITH_FULL_IMAGE:figures/full_fig_p029_10.png] view at source ↗
read the original abstract

Large Language Models (LLMs) have advanced self-learning tools, enabling more personalized interactions. However, learners struggle to engage in meaningful dialogue and process complex information. To alleviate this, we incorporate epistemological frameworks within an LLM-based approach to self-learning, reducing the cognitive load on learners and fostering deeper engagement and holistic understanding. Through a formative study (N=26), we identified epistemological differences in self-learner interaction patterns. Building upon these findings, we present \textit{CausaDisco}, a dialogue-based interactive system that integrates Aristotle's \textit{Four Causes} framework into LLM prompts to enhance cognitive support for self-learning. This approach guides learners' self-learning journeys by automatically generating coherent and contextually appropriate follow-up questions. A controlled study (N=36) demonstrated that, compared to baseline, \textit{CausaDisco} fostered more engaging interactions, inspired sophisticated exploration, and facilitated multifaceted perspectives. This research contributes to HCI by expanding the understanding of LLMs as educational agents and providing design implications for this emerging class of tools.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces CausaDisco, an LLM-based interactive system that embeds Aristotle's Four Causes epistemological framework into prompts to automatically generate coherent follow-up questions for self-learners. It reports a formative study (N=26) that identified epistemological differences in interaction patterns and a controlled study (N=36) claiming that CausaDisco produced more engaging interactions, inspired sophisticated exploration, and facilitated multifaceted perspectives relative to a baseline condition. The work positions this as a contribution to HCI on LLMs as educational agents with associated design implications.

Significance. If the empirical results hold after proper reporting, the paper would offer a concrete design approach for reducing cognitive load in LLM self-learning dialogues by drawing on classical epistemology, with potential to improve engagement and perspective-taking. The use of iterative user studies to ground the system is a positive aspect, and the focus on follow-up question generation addresses a practical challenge in conversational agents.

major comments (3)
  1. [Controlled study] Controlled study section: The central claim that CausaDisco outperformed the baseline rests on the N=36 study, yet the manuscript provides no description of the outcome measures (e.g., engagement scales, coded dialogue depth, perspective diversity rubrics), statistical tests, effect sizes, power analysis, or inter-rater reliability for any qualitative analysis. Without these, attribution to the Four Causes framework versus confounds such as prompt structure or verbosity cannot be evaluated.
  2. [Controlled study] Controlled study section: The baseline prompt is not specified in detail, making it impossible to determine whether observed differences arise from the epistemological content or from any structured prompting approach; this is load-bearing for the claim that the Four Causes integration is responsible for the reported benefits.
  3. [Formative study] Formative study section: The N=26 study is presented as identifying 'epistemological differences in self-learner interaction patterns' that directly informed CausaDisco, but no analysis methods, coding scheme, or reliability metrics are reported, weakening the link between the formative findings and the system design choices.
minor comments (2)
  1. [Abstract] Abstract: The phrase 'positive outcomes' could be replaced with a brief indication of the measured constructs to improve precision.
  2. [Overall] The manuscript would benefit from a table summarizing the two studies' designs, sample sizes, and key variables for quick reference.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which identifies key areas where additional methodological transparency will strengthen the manuscript. We address each major comment below and will revise the paper to incorporate the requested clarifications while preserving the core contributions.

read point-by-point responses
  1. Referee: [Controlled study] Controlled study section: The central claim that CausaDisco outperformed the baseline rests on the N=36 study, yet the manuscript provides no description of the outcome measures (e.g., engagement scales, coded dialogue depth, perspective diversity rubrics), statistical tests, effect sizes, power analysis, or inter-rater reliability for any qualitative analysis. Without these, attribution to the Four Causes framework versus confounds such as prompt structure or verbosity cannot be evaluated.

    Authors: We agree that these details are necessary for rigorous evaluation of the results. The original manuscript prioritized high-level findings within space constraints, but we will expand the Controlled study section in the revision to fully describe the outcome measures (including engagement scales, dialogue depth coding, and perspective diversity rubrics), the statistical tests and their results, effect sizes, power analysis, and inter-rater reliability metrics. This will enable readers to assess potential confounds and the specific contribution of the Four Causes framework. revision: yes

  2. Referee: [Controlled study] Controlled study section: The baseline prompt is not specified in detail, making it impossible to determine whether observed differences arise from the epistemological content or from any structured prompting approach; this is load-bearing for the claim that the Four Causes integration is responsible for the reported benefits.

    Authors: We acknowledge that greater specificity is required to isolate the effect of the epistemological framework. The baseline consisted of a generic prompt for generating follow-up questions without the Four Causes structure. In the revised manuscript we will provide the exact wording of both the CausaDisco and baseline prompts, together with a comparison table highlighting their structural differences, to support the attribution of benefits to the integration of Aristotle's framework rather than prompting in general. revision: yes

  3. Referee: [Formative study] Formative study section: The N=26 study is presented as identifying 'epistemological differences in self-learner interaction patterns' that directly informed CausaDisco, but no analysis methods, coding scheme, or reliability metrics are reported, weakening the link between the formative findings and the system design choices.

    Authors: We agree that the formative study section requires more methodological detail to demonstrate how its findings shaped the system. We will revise this section to include the qualitative analysis methods, the coding scheme for identifying epistemological differences in interaction patterns, the thematic analysis process, and inter-coder reliability metrics. This will clarify the direct connection between the formative results and the design decisions underlying CausaDisco. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical claims rest on external user-study data with no derivations or self-referential fits

full rationale

The paper contains no mathematical derivations, equations, fitted parameters, or predictions that reduce to inputs by construction. Its central claims derive from two separate empirical studies (formative N=26 and controlled N=36) involving external participants, not from self-citations, ansatzes, or renamed known results. The design of CausaDisco incorporates Aristotle's Four Causes as an external epistemological framework into prompts, but this is a design choice evaluated via user data rather than a self-defining loop. Standard HCI evaluation structure (formative insights informing prototype, then controlled comparison) does not constitute circularity under the specified patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical HCI study; no free parameters, formal axioms, or new invented entities are introduced. The epistemological framework is imported from classical philosophy.

pith-pipeline@v0.9.0 · 5505 in / 1059 out tokens · 65794 ms · 2026-05-10T16:15:43.162443+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

116 extracted references · 13 canonical work pages · 1 internal anchor

  1. [1]

    Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, Hélène Sauzéon, and Pierre-Yves Oudeyer. 2024. GPT-3-Driven Pedagogical Agents to Train Children’s Curious Question-Asking Skills.International Journal of Artificial Intelligence in Education34, 2 (2024), 483–518

  2. [2]

    Mubashra Akhtar, Julia Neidhardt, and Hannes Werthner. 2019. The Potential of Chatbots: Analysis of Chatbot Conversations. In2019 IEEE 21st Conference on Business Informatics (CBI), Vol. 01. 397–404. https://doi.org/10.1109/CBI.2019.00052

  3. [3]

    Nilgun Aksan. 2009. A Descriptive Study: Epistemological Beliefs and Self-regulated Learning.Procedia-Social and Behavioral Sciences 1, 1 (2009), 896–901. , Vol. 1, No. 1, Article . Publication date: April 2026. Epistemologically-Informed LLM for Self-Learning 25

  4. [4]

    Mohammad Aliannejadi, Leif Azzopardi, Hamed Zamani, Evangelos Kanoulas, Paul Thomas, and Nick Craswell. 2021. Analysing mixed initiatives and search strategies during conversational search. InProceedings of the 30th ACM International Conference on Information & Knowledge Management. 16–26

  5. [5]

    Amanda R Baker and Lynley H Anderman. 2020. Are Epistemic Beliefs and Motivation Associated with Belief Revision Among Postsecondary Service-learning Participants?Learning and Individual Differences78 (2020), 101843

  6. [6]

    Margherita Bernabei, Silvia Colabianchi, Andrea Falegnami, and Francesco Costantino. 2023. Students’ Use of LLM in Engineering Education: A Case Study on Technology Acceptance, Perceptions, Efficacy, and Detection chances.Computers and Education: Artificial Intelligence5 (2023), 100172

  7. [7]

    Ivar Bråten. 2010. Personal Epistemology in Education: Concepts, Issues, and Implications. (2010)

  8. [8]

    Virginia Braun and Victoria Clarke. 2019. Reflecting on Reflexive Thematic Analysis.Qualitative Research in Sport, Exercise and Health 11, 4 (2019), 589–597

  9. [9]

    Virginia Braun and Victoria Clarke. 2021. One Size Fits All? What Counts as Quality Practice in (Reflexive) Thematic Analysis? Qualitative Research in Psychology18, 3 (2021), 328–352

  10. [10]

    2014.Make it Stick: The Science of Successful Learning

    Peter C Brown, Henry L Roediger III, and Mark A McDaniel. 2014.Make it Stick: The Science of Successful Learning. Harvard University Press

  11. [11]

    Joanne Brownlee, Gillian Boulton-Lewis, and Nola Purdie. 2002. Core Beliefs About Knowing and Peripheral Beliefs About Learning: Developing an Holistic Conceptualisation of Epistemological Beliefs.Australian Journal of Educational and Developmental Psychology2 (2002), 1–16

  12. [12]

    Kirsten R Butcher and Tamara Sumner. 2011. Self-Directed Learning and the Sensemaking Paradox.Human–Computer Interaction26, 1-2 (2011), 123–159

  13. [13]

    Nancy Carter. 2014. The Use of Triangulation in Qualitative Research.Number 5/September 201441, 5 (2014), 545–547

  14. [14]

    Guido Cassinadri. 2024. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Philosophy & Technology37, 1 (2024), 1–28

  15. [15]

    1983.Aristotle’s Physics: Books I and II

    William Charlton et al. 1983.Aristotle’s Physics: Books I and II. Oxford University Press

  16. [16]

    Siraprapa Chavanayarn. 2023. Navigating Ethical Complexities Through Epistemological Analysis of ChatGPT.Bulletin of Science, Technology & Society43, 3-4 (2023), 105–114

  17. [17]

    John Chen, Xi Lu, Yuzhou Du, Michael Rejtig, Ruth Bagley, Mike Horn, and Uri Wilensky. 2024. Learning Agent-Based Modeling with LLM Companions: Experiences of Novices and Experts Using ChatGPT & NetLogo Chat. InProceedings of the CHI Conference on Human Factors in Computing Systems. 1–18

  18. [18]

    Weihao Chen, Chun Yu, Huadong Wang, Zheng Wang, Lichen Yang, Yukun Wang, Weinan Shi, and Yuanchun Shi. 2023. From Gap to Synergy: Enhancing Contextual Understanding through Human-Machine Collaboration in Personalized Systems. InProceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–15

  19. [19]

    Yu Chen, Scott Jensen, Leslie J Albert, Sambhav Gupta, and Terri Lee. 2023. Artificial Intelligence (AI) Student Assistants in the Classroom: Designing Chatbots to Support Student Success.Information Systems Frontiers25, 1 (2023), 161–182

  20. [20]

    Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, and Huamin Qu. 2024. StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions.arXiv preprint arXiv:2407.12423(2024)

  21. [21]

    Michelene TH Chi. 2009. Active-Constructive-Interactive: A Conceptual Framework for Differentiating Learning Activities.Topics in Cognitive Science1, 1 (2009), 73–105

  22. [22]

    Michelene TH Chi, Joshua Adams, Emily B Bogusch, Christiana Bruchok, Seokmin Kang, Matthew Lancaster, Roy Levy, Na Li, Katherine L McEldoon, Glenda S Stump, et al. 2018. Translating the ICAP Theory of Cognitive Engagement into Practice.Cognitive Science42, 6 (2018), 1777–1832

  23. [23]

    Michelene TH Chi and Ruth Wylie. 2014. The ICAP Framework: Linking Cognitive Engagement to Active Learning Outcomes. Educational psychologist49, 4 (2014), 219–243

  24. [24]

    Victoria Clarke and Virginia Braun. 2017. Thematic Analysis.The Journal of Positive Psychology12, 3 (2017), 297–298

  25. [25]

    S Marc Cohen and Charles DC Reeve. 2000. Aristotle’s Metaphysics. (2000)

  26. [26]

    Clayton Cohn, Caitlin Snyder, Justin Montenegro, and Gautam Biswas. 2024. Towards A Human-in-the-Loop LLM Approach to Collaborative Discourse Analysis. InInternational Conference on Artificial Intelligence in Education. Springer, 11–19

  27. [27]

    2023.NFTs Explained: A Must-Read Guide to Everything Non-Fungible

    Jolene Creighton. 2023.NFTs Explained: A Must-Read Guide to Everything Non-Fungible. Retrieved Feb 05, 2024 from https://nftnow. com/guides/what-is-nft-meaning/

  28. [28]

    Veronica Cucuiat and Jane Waite. 2024. Feedback Literacy: Holistic Analysis of Secondary Educators’ Views of LLM Explanations of Program Error Messages. InProceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1. 192–198

  29. [29]

    Mehmet Demirbag. 2021. Modeling the Relations Among Argumentativeness, Epistemological Beliefs and Self-Regulation Skills. International Journal of Progressive Education17, 4 (2021), 327–340

  30. [30]

    Yang Deng, Wenqiang Lei, Minlie Huang, and Tat-Seng Chua. 2023. Rethinking Conversational Agents in the Era of LLMs: Proactivity, Non-Collaborativity, and Beyond. InProceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region. ACM, New York, NY, USA. http://dx.doi.org/10.1145/...

  31. [31]

    Richard A Duschl, Richard Hamilton, and Richard E Grandy. 1990. Psychology and Epistemology: Match or Mismatch When Applied to Science Education?International Journal of Science Education12, 3 (1990), 230–243

  32. [32]

    Andrea Falcon. 2006. Aristotle on Causality. (2006)

  33. [33]

    Alexander J Fiannaca, Chinmay Kulkarni, Carrie J Cai, and Michael Terry. 2023. Programming Without a Programming Language: Challenges and Opportunities for Designing Developer Tools for Prompt Programming. InExtended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. 1–7

  34. [34]

    Nuzulira Janeusse Fratiwi, Achmad Samsudin, Taufik Ramlan Ramalis, Antomi Saregar, Rahma Diani, Konstantinos Ravanis, et al. 2020. Developing MeMoRI on Newton’s Laws: For Identifying Students’ Mental Models.European Journal of Educational Research9, 2 (2020), , Vol. 1, No. 1, Article . Publication date: April 2026. 26 Cao et al. 699–708

  35. [35]

    Yao Fu, Zhenjie Weng, and Jiaxi Wang. 2024. Examining AI Use in Educational Contexts: A Scoping Meta-Review and Bibliometric Analysis.International Journal of Artificial Intelligence in Education(2024), 1–57

  36. [36]

    Paul Galdas. 2017. Revisiting Bias in Qualitative Research: Reflections on its Relationship with Funding and Impact. , 1609406917748992 pages

  37. [37]

    Wensheng Gan, Zhenlian Qi, Jiayang Wu, and Jerry Chun-Wei Lin. 2023. LLMs in education: Vision and opportunities. In2023 IEEE international conference on big data (BigData). IEEE, 4776–4785

  38. [38]

    Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Ieong, Yi Sun, Rory James Zauner, Zhongyu Wei, and Siming Chen

  39. [39]

    Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education.arXiv preprint arXiv:2407.20570(2024)

  40. [40]

    Katy Ilonka Gero, Chelse Swoopes, Ziwei Gu, Jonathan K Kummerfeld, and Elena L Glassman. 2024. Supporting Sensemaking of Large Language Model Outputs at Scale. InProceedings of the CHI Conference on Human Factors in Computing Systems. 1–21

  41. [41]

    Elena L Glassman, Jeremy Scott, Rishabh Singh, Philip J Guo, and Robert C Miller. 2015. OverCode: Visualizing Variation in Student Solutions to Programming Problems at Scale.ACM Transactions on Computer-Human Interaction (TOCHI)22, 2 (2015), 1–35

  42. [42]

    Alex Goslen, Yeo Jin Kim, Jonathan Rowe, and James Lester. 2024. LLM-based Student Plan Generation for Adaptive Scaffolding in Game-based Learning Environments.International Journal of Artificial Intelligence in Education(2024), 1–26

  43. [43]

    Edward Groenland. 2018. Employing the Matrix Method as a Tool for the Analysis of Qualitative Research Data in the Business Domain. International Journal of Business and Globalisation21, 1 (2018), 119–134

  44. [44]

    2011.Applied Thematic Analysis

    Greg Guest, Kathleen M MacQueen, and Emily E Namey. 2011.Applied Thematic Analysis. sage publications

  45. [45]

    Mengtian Guo, Zhilan Zhou, David Gotz, and Yue Wang. 2023. Grafs: Graphical Faceted Search System to Support Conceptual Understanding in Exploratory Search.ACM Transactions on Interactive Intelligent Systems13, 2 (2023), 1–36

  46. [46]

    Reza Hadi Mogavi, Yuanhao Zhang, Ehsan-Ul Haq, Yongjin Wu, Pan Hui, and Xiaojuan Ma. 2022. What do Users Think of Promotional Gamification Schemes? a Qualitative Case Study in a Question Answering Website.Proceedings of the ACM on Human-Computer Interaction6, CSCW2 (2022), 1–34

  47. [47]

    Karin Hammarberg, Maggie Kirkman, and Sheryl de Lacey. 2016. Qualitative Research Methods: When to Use Them and How to Judge Them.Human reproduction31, 3 (2016), 498–501

  48. [48]

    Kendall Hartley, Merav Hayak, and Un Hyeok Ko. 2024. Artificial Intelligence Supporting Independent Student Learning: An Evaluative Case Study of ChatGPT and Learning to Code.Education Sciences14, 2 (2024), 120

  49. [49]

    Helen Ai He, Jagoda Walny, Sonja Thoma, Sheelagh Carpendale, and Wesley Willett. 2023. Enthusiastic and Grounded, Avoidant and Cautious: Understanding Public Receptivity to Data and Visualizations.IEEE Transactions on Visualization and Computer Graphics (2023)

  50. [50]

    Max Hocutt. 1974. Aristotle’s Four Becauses.Philosophy49, 190 (1974), 385–399

  51. [51]

    Barbara K Hofer and Paul R Pintrich. 1997. The Development of Epistemological Theories: Beliefs about Knowledge and Knowing and Their Relation to Learning.Review of Educational Research67, 1 (1997), 88–140

  52. [52]

    Jiaxiong Hu, Jingya Guo, Ningjing Tang, Xiaojuan Ma, Yuan Yao, Changyuan Yang, and Yingqing Xu. 2024. Designing the Conversational Agent: Asking Follow-up Questions for Information Elicitation.Proceedings of the ACM on Human-Computer Interaction8, CSCW1 (2024), 1–30

  53. [53]

    Chiao Ling Huang, Chuxiang Wu, and Shu Ching Yang. 2023. How Students View Online Knowledge: Epistemic Beliefs, Self-Regulated Learning and Academic Misconduct.Computers & Education200 (2023), 104796

  54. [54]

    Weijiao Huang, Khe Foon Hew, and Luke K Fryer. 2022. Chatbots for Language Learning—Are They Really Useful? A Systematic Review of Chatbot-supported Language Learning.Journal of Computer Assisted Learning38, 1 (2022), 237–257

  55. [55]

    Stefan E Huber, Kristian Kiili, Steve Nebel, Richard M Ryan, Michael Sailer, and Manuel Ninaus. 2024. Leveraging the Potential of LLMs in Education through Playful and Game-based Learning.Educational Psychology Review36, 1 (2024), 25

  56. [56]

    Hyoungwook Jin, Seonghee Lee, Hyungyu Shin, and Juho Kim. 2024. Teach AI How to Code: Using Large Language Models as Teachable Agents for Programming Education. InProceedings of the CHI Conference on Human Factors in Computing Systems. 1–28

  57. [57]

    Stuart A Karabenick and Myron H Dembo. 2011. Understanding and Facilitating Self-Regulated Help Seeking.New Directions for Teaching and Learning2011, 126 (2011), 33–43

  58. [58]

    Seong-Gon Kim. 2023. Using ChatGPT for Language Editing in Scientific Articles.Maxillofacial plastic and reconstructive surgery45, 1 (2023), 13

  59. [59]

    Ben Kotzee. 2018. Applied Epistemology of Education. InThe Routledge Handbook of Applied Epistemology. Routledge, 211–230

  60. [60]

    Susanne Krasmann. 2020. The Logic of the Surface: on the Epistemology of Algorithms in Times of Big Data.Information, Communication & Society23, 14 (2020), 2096–2109

  61. [61]

    Harsh Kumar, Ilya Musabirov, Mohi Reza, Jiakai Shi, Anastasia Kuzminykh, Joseph Jay Williams, and Michael Liut. 2023. Impact of Guidance and Interaction Strategies for LLM use on Learner Performance and Perception.arXiv preprint arXiv:2310.13712(2023)

  62. [62]

    Arthur Lacerda, Sergio Antonio Andrade Freitas, and Cristiane Soares Ramos. 2024. Gamified Chatbot Management Process: A Way to Build Gamified Chatbots. InIntelligent Systems Conference. Springer, 18–36

  63. [63]

    Sam Yu-Te Lee and Kwan-Liu Ma. 2024. HINTs: Sensemaking on Large Collections of Documents with Hypergraph Visualization and Intelligent Agents.arXiv preprint arXiv:2403.02752(2024)

  64. [64]

    Unggi Lee, Haewon Jung, Younghoon Jeon, Younghoon Sohn, Wonhee Hwang, Jewoong Moon, and Hyeoncheol Kim. 2024. Few-Shot is Enough: Exploring ChatGPT Prompt Engineering Method for Automatic Question Feneration in English Education.Education and Information Technologies29, 9 (2024), 11483–11515

  65. [65]

    Hang Li, Tianlong Xu, Chaoli Zhang, Eason Chen, Jing Liang, Xing Fan, Haoyang Li, Jiliang Tang, and Qingsong Wen. 2024. Bringing generative AI to adaptive learning in education.arXiv preprint arXiv:2402.14601(2024). , Vol. 1, No. 1, Article . Publication date: April 2026. Epistemologically-Informed LLM for Self-Learning 27

  66. [66]

    Zhui Li, Fenghe Li, Qining Fu, Xuehu Wang, Hong Liu, Yu Zhao, and Wei Ren. 2024. LLMs and Medical Education: a Paradigm Shift in Educator Roles.Smart Learning Environments11, 1 (2024), 26

  67. [67]

    Zhuoyang Li, Minhui Liang, Hai Trung Le, Ray Lc, and Yuhan Luo. 2023. Exploring Design Opportunities for Reflective Conversational Agents to Reduce Compulsive Smartphone Use. InProceedings of the 5th International Conference on Conversational User Interfaces. ACM, New York, NY, USA, 1–6. http://dx.doi.org/10.1145/3571884.3604305

  68. [68]

    Pongsakorn Limna, Tanpat Kraiwanit, Kris Jangjarat, Prapasiri Klayklung, and Piyawatjana Chocksathaporn. 2023. The Use of ChatGPT in the Digital Era: Perspectives on Chatbot Implementation.Journal of Applied Learning and Teaching6, 1 (2023)

  69. [69]

    Xi Lin. 2023. Exploring the Role of ChatGPT As a Facilitator for Motivating Self-Directed Learning Among Adult Learners.Adult Learning(2023), 10451595231184928

  70. [70]

    Michael Xieyang Liu, Tongshuang Wu, Tianying Chen, Franklin Mingzhe Li, Aniket Kittur, and Brad A Myers. 2024. Selenite: Scaffolding Online Sensemaking with Comprehensive Overviews Elicited from Large Language Models. InProceedings of the CHI Conference on Human Factors in Computing Systems. 1–26

  71. [71]

    Yiren Liu, Pranav Sharma, Mehul Jitendra Oswal, Haijun Xia, and Yun Huang. 2024. Personaflow: Boosting Research Ideation with LLM-Simulated Expert Personas.arXiv preprint arXiv:2409.12538(2024)

  72. [72]

    Zhengyuan Liu, Stella Xin Yin, Carolyn Lee, and Nancy F Chen. 2024. Scaffolding Language Learning via Multi-Modal Tutoring Systems with Pedagogical Instructions.arXiv preprint arXiv:2404.03429(2024)

  73. [73]

    Bei Luo, Raymond YK Lau, Chunping Li, and Yain-Whar Si. 2022. A Critical Review of State-of-the-Art Chatbot Designs and Applications. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery12, 1 (2022), e1434

  74. [74]

    Xiao Ma, Swaroop Mishra, Ariel Liu, Sophie Ying Su, Jilin Chen, Chinmay Kulkarni, Heng-Tze Cheng, Quoc Le, and Ed Chi. 2024. Beyond Chatbots: Explorellm for Structured Thoughts and Personalized Model Responses. InExtended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–12

  75. [75]

    James Macallister. 2012. Virtue Epistemology and the Philosophy of Education.Journal of Philosophy of Education46, 2 (2012), 251–270

  76. [76]

    Gary Marchionini. 2019. Search, Sensemaking and Learning: Closing gaps.Information and Learning Sciences120, 1/2 (2019), 74–86

  77. [77]

    Reza Hadi Mogavi, Chao Deng, Justin Juho Kim, Pengyuan Zhou, Young D Kwon, Ahmed Hosny Saleh Metwally, Ahmed Tlili, Simone Bassanelli, Antonio Bucchiarone, Sujit Gujar, et al. 2024. ChatGPT in Education: A Blessing or a Curse? A Qualitative Study Exploring Early Adopters’ Utilization and Perceptions.Computers in Human Behavior: Artificial Humans2, 1 (2024...

  78. [78]

    Heidar Mokhtari. 2014. A Quantitative Survey on the Influence of Students’ Epistemic Beliefs on Their General Information Seeking Behavior.The Journal of Academic Librarianship40, 3-4 (2014), 259–263

  79. [79]

    Trevor T Moores and Jerry Cha-Jan Chang. 2009. Self-Efficacy, Overconfidence, and the Negative Effect on Subsequent Performance: A Field Study.Information & Management46, 2 (2009), 69–76

  80. [80]

    Carlos Herrera Pérez and Tom Ziemke. 2007. Aristotle, Autonomy and the Explanation of Behaviour.Pragmatics & cognition15, 3 (2007), 547–571

Showing first 80 references.