Thinking Less, Trusting More: GenAI's Impacts on Students' Cognitive Habits
Pith reviewed 2026-05-16 09:58 UTC · model grok-4.3
The pith
Students who trust and routinely use generative AI report significantly lower cognitive engagement in STEM coursework.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Trust-driven routine use of generative AI reduces students' cognitive engagement habits in STEM coursework, with the reduction appearing larger for students who already show high technophilia, risk tolerance, and computer self-efficacy; neither prior genAI experience nor academic experience mitigates the association, suggesting a self-reinforcing cognitive debt cycle.
What carries the argument
The Partial Least Squares Structural Equation Model that links trust and routine genAI use to measured cognitive engagement (reflection, need for understanding, and critical thinking) while testing moderation by cognitive style traits.
If this is right
- Routine genAI use is associated with lower reflection and critical thinking during coursework.
- Students with higher technophilic motivations and computer self-efficacy show greater vulnerability to this reduction.
- Prior genAI experience or academic background does not reduce the association.
- The pattern can initiate a self-reinforcing cycle of declining intellectual habits and rising dependence.
Where Pith is reading between the lines
- Design of genAI tools for education could incorporate prompts that require users to articulate their own reasoning before accepting suggestions.
- STEM curricula might add explicit practice in evaluating AI outputs to counteract the observed drop in engagement.
- The same trust-and-use mechanism could affect skill retention in other domains where professionals adopt generative tools for daily work.
Load-bearing premise
Self-reported survey answers accurately measure actual cognitive habits and the observed links reflect the causal impact of AI use rather than reverse causation or unmeasured differences among students.
What would settle it
A longitudinal or experimental study that tracks the same students' cognitive engagement scores before and after a controlled increase in routine genAI use, or that compares otherwise similar groups with and without AI access.
Original abstract
Objectives: When students use generative AI in coursework, what are its persistent effects on their intellectual development? We investigate (RQ1-How) how students' trust in and routine use of genAI affect their cognitive engagement habits in STEM coursework, and (RQ2-Who) which students are particularly vulnerable to cognitive disengagement. Method: Drawing on dual-process, cognitive offloading, and automation bias theories, we developed a statistical model explaining how and to what extent students' trust-driven routine genAI use affected their cognitive engagement -- specifically, reflection, the need for understanding, and critical thinking in coursework, and how these effects differed across students' cognitive styles. We empirically evaluated this model using Partial Least Squares Structural Equation Modeling on survey data from 299 STEM students across five North American universities. Results: Students who trusted and routinely used genAI reported significantly lower cognitive engagement. Unexpectedly, students with higher technophilic motivations, risk tolerance, and computer self-efficacy -- traits often celebrated in STEM -- were more prone to these effects. Interestingly, students' prior experience with genAI or academia did not protect them from cognitively disengaging. Implications: Our findings suggest a potential cognitive debt cycle where routine genAI use weakens students' intellectual habits, potentially driving and escalating over-reliance. This poses challenges for curricula and genAI system design, requiring interventions that actively support cognitive engagement.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that students' trust in and routine use of generative AI in STEM coursework is associated with significantly lower cognitive engagement (specifically reflection, need for understanding, and critical thinking), based on a PLS-SEM model fitted to cross-sectional survey data from 299 students across five North American universities. It further reports that students with higher technophilic motivations, risk tolerance, and computer self-efficacy are more vulnerable to these associations, that prior genAI or academic experience does not buffer the effects, and that the pattern suggests a 'cognitive debt cycle' with implications for curricula and genAI design.
Significance. If the reported associations prove robust to longitudinal designs and bias checks, the work would add empirical weight to concerns about cognitive offloading in education and could inform targeted interventions for high-risk student profiles. The unexpected moderation by traits typically viewed as STEM strengths is a potentially valuable contribution, though its policy relevance hinges on establishing directionality.
major comments (2)
- [Abstract, Results, Implications] The manuscript repeatedly frames the PLS-SEM paths as evidence that trust-driven routine genAI use 'affected' or 'impacts' cognitive engagement and posits a 'cognitive debt cycle' in which use weakens intellectual habits. Because the data are single-wave self-reports, the design supports only contemporaneous associations; it cannot distinguish causation from reverse causation (low engagement prompting greater AI reliance) or unmeasured confounders (e.g., motivation, workload). This interpretive step is load-bearing for the policy claims and should be revised to reflect correlational limits.
- [Method] The description of the PLS-SEM analysis omits key diagnostics such as full reliability and validity metrics (Cronbach's α, composite reliability, AVE), model fit indices (SRMR, χ²/df), and explicit tests for common-method bias or multicollinearity. Without these, it is difficult to evaluate whether the reported paths are statistically stable or artifactual.
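The first requested diagnostic is straightforward to compute from raw item scores. A minimal sketch, using made-up responses for three items of a single scale (not the paper's instrument):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item (same respondents, same order)."""
    k = len(items)
    item_vars = sum(statistics.variance(it) for it in items)
    total_var = statistics.variance([sum(s) for s in zip(*items)])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 8 students, 3 items of one engagement scale
items = [
    [3, 1, 4, 5, 2, 4, 1, 5],
    [4, 2, 4, 5, 1, 5, 2, 4],
    [3, 1, 5, 4, 2, 4, 1, 5],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # 0.95 here; 0.70 is the usual reporting floor
```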
minor comments (2)
- [Abstract] The parenthetical notation 'RQ1-How' and 'RQ2-Who' is nonstandard and reduces readability; spell out the research questions in plain prose.
- [Results] The abstract states 'significantly lower cognitive engagement' without reporting effect sizes, path coefficients, or confidence intervals; these should be added for interpretability.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. We agree that the cross-sectional design limits causal claims and will revise interpretive language accordingly. We will also expand the Method section with the requested diagnostics.
Point-by-point responses
Referee: [Abstract, Results, Implications] The manuscript repeatedly frames the PLS-SEM paths as evidence that trust-driven routine genAI use 'affected' or 'impacts' cognitive engagement and posits a 'cognitive debt cycle' in which use weakens intellectual habits. Because the data are single-wave self-reports, the design supports only contemporaneous associations; it cannot distinguish causation from reverse causation (low engagement prompting greater AI reliance) or unmeasured confounders (e.g., motivation, workload). This interpretive step is load-bearing for the policy claims and should be revised to reflect correlational limits.
Authors: We agree that the single-wave survey design supports only contemporaneous associations and does not permit causal claims. In the revised manuscript we will replace causal language ('affected', 'impacts') with associative phrasing ('were associated with', 'linked to') throughout the abstract, results, and implications. The 'cognitive debt cycle' will be reframed as a hypothesized mechanism suggested by the observed pattern, with explicit caveats that longitudinal data are needed to test directionality and rule out reverse causation or confounding. We will also expand the limitations section to discuss these issues directly. Revision: yes.
Referee: [Method] The description of the PLS-SEM analysis omits key diagnostics such as full reliability and validity metrics (Cronbach's α, composite reliability, AVE), model fit indices (SRMR, χ²/df), and explicit tests for common-method bias or multicollinearity. Without these, it is difficult to evaluate whether the reported paths are statistically stable or artifactual.
Authors: We acknowledge the omission. The revised Method section will include a dedicated table reporting Cronbach's α, composite reliability, and AVE for all constructs, along with model fit indices (SRMR, χ²/df, and others). We will also report variance inflation factors to assess multicollinearity and include results from Harman's single-factor test for common-method bias. These additions will allow readers to evaluate the stability of the reported paths. Revision: yes.
Circularity Check
No circularity: empirical PLS-SEM on survey data
Full rationale
The paper reports a cross-sectional survey of 299 students analyzed via Partial Least Squares Structural Equation Modeling. No mathematical derivations, equations, or first-principles results are presented that reduce to their own inputs by construction. The model is built from established external theories (dual-process, cognitive offloading, automation bias) and tested against observed associations; no fitted parameters are relabeled as independent predictions, no self-citations serve as load-bearing uniqueness theorems, and no ansatz or renaming of known results occurs. The analysis therefore remains self-contained against external benchmarks and receives the default non-circularity finding.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Dual-process, cognitive offloading, and automation bias theories apply directly to student genAI use in coursework.