pith. machine review for the scientific record.

arxiv: 2601.22430 · v2 · submitted 2026-01-30 · 💻 cs.HC

Recognition: no theorem link

Thinking Less, Trusting More: GenAI's Impacts on Students' Cognitive Habits


Pith reviewed 2026-05-16 09:58 UTC · model grok-4.3

classification 💻 cs.HC
keywords generative AI · cognitive engagement · STEM students · trust · routine use · cognitive offloading · automation bias · cognitive debt

The pith

Students who trust and routinely use generative AI report significantly lower cognitive engagement in STEM coursework.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Students' trust in generative AI combined with routine use in coursework correlates with reduced levels of reflection, need for understanding, and critical thinking. The study tested this link through a structural equation model grounded in cognitive offloading and automation bias theories, using survey responses from 299 STEM students at five North American universities. Effects were stronger among students who scored higher on technophilic motivations, risk tolerance, and computer self-efficacy, while prior AI or academic experience offered no buffer. The authors interpret the pattern as the start of a cognitive debt cycle in which lower engagement drives greater future reliance on the tool.

Core claim

Trust-driven routine use of generative AI reduces students' cognitive engagement habits in STEM coursework, with the reduction appearing larger for students who already show high technophilia, risk tolerance, and computer self-efficacy; prior experience with AI or academics does not mitigate the association, suggesting a self-reinforcing cognitive debt cycle.

What carries the argument

The Partial Least Squares Structural Equation Model that links trust and routine genAI use to measured cognitive engagement (reflection, need for understanding, and critical thinking) while testing moderation by cognitive style traits.
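The structural logic of that model can be sketched as a moderated regression: a main path from usage to engagement plus an interaction term carrying the moderation. This is an illustrative stand-in, not the authors' PLS-SEM, and every variable, coefficient, and data point below is hypothetical:

```python
# Illustrative sketch, NOT the authors' PLS-SEM: a moderated linear model
# on synthetic standardized scores, constructed to mirror the reported
# pattern (negative usage path, amplified by a cognitive-style trait).
import numpy as np

rng = np.random.default_rng(0)
n = 299  # sample size matching the study

usage = rng.standard_normal(n)   # trust-driven routine genAI use (synthetic)
trait = rng.standard_normal(n)   # e.g. technophilic motivation (synthetic)
noise = rng.standard_normal(n)
# Hypothetical data-generating process embodying the claimed pattern:
engagement = -0.4 * usage - 0.2 * usage * trait + noise

# Design matrix: main effects plus the interaction (moderation) term.
X = np.column_stack([usage, trait, usage * trait])
beta, *_ = np.linalg.lstsq(X, engagement, rcond=None)

b_usage, b_trait, b_interaction = beta
print(f"usage path:       {b_usage:+.2f}")
print(f"trait path:       {b_trait:+.2f}")
print(f"moderation (u*t): {b_interaction:+.2f}")
```

A real PLS-SEM additionally estimates the latent constructs from multiple indicators; the sketch above treats them as directly observed scores.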

If this is right

  • Routine genAI use is associated with lower reflection and critical thinking during coursework.
  • Students with higher technophilic motivations and computer self-efficacy show greater vulnerability to this reduction.
  • Prior genAI experience or academic background does not reduce the association.
  • The pattern can initiate a self-reinforcing cycle of declining intellectual habits and rising dependence.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Design of genAI tools for education could incorporate prompts that require users to articulate their own reasoning before accepting suggestions.
  • STEM curricula might add explicit practice in evaluating AI outputs to counteract the observed drop in engagement.
  • The same trust-and-use mechanism could affect skill retention in other domains where professionals adopt generative tools for daily work.

Load-bearing premise

Self-reported survey answers accurately measure actual cognitive habits and the observed links reflect the causal impact of AI use rather than reverse causation or unmeasured differences among students.

What would settle it

A longitudinal or experimental study that tracks the same students' cognitive engagement scores before and after a controlled increase in routine genAI use, or that compares otherwise similar groups with and without AI access.
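The within-student arm of such a design reduces to a paired pre/post comparison. A minimal sketch on hypothetical data, where the assumed 0.3 SD drop is an illustration of what the study would need to confirm causally, not a result from the paper:

```python
# Hypothetical pre/post paired comparison; all numbers are illustrative.
import math
import random

random.seed(1)
n = 50  # hypothetical cohort tracked before/after increased genAI use

pre = [random.gauss(0, 1) for _ in range(n)]
# Assumed effect: post scores drift down by 0.3 SD plus measurement noise.
post = [p - 0.3 + random.gauss(0, 0.5) for p in pre]

diffs = [b - a for a, b in zip(pre, post)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic

print(f"mean change: {mean_d:+.2f}, paired t = {t_stat:.2f}")
```

With AI access randomized across otherwise similar groups, the same comparison becomes a two-sample test and the causal reading strengthens.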

Figures

Figures reproduced from arXiv: 2601.22430 by Anita Sarma, Christopher Sanchez, Margaret Burnett, Rudrajit Choudhuri.

Figure 1: Proposed theoretical model and measurement specification. The blue region (RQ1-How) models how …
Figure 2: Cognitive engagement scores by genAI usage. Boxplots show standardized latent scores for Reflection, Need for Understanding, …
Figure 3: Structural model results (RQ1-How): Standardized path coefficients (H1–H4) on arrows between constructs (circles) show how …
Figure 4: Trust → usage → cognitive engagement distributions. Boxplots show standardized latent scores for Reflection, Need for Understanding, and Critical Thinking among participants grouped by genAI usage (median split) and trust in genAI (low vs. high; median split within each usage group).
Figure 5: Structural model path associations (full model: RQs 1 & 2). Standardized path coefficients show direct effects of Trust and …
Figure 6: Cognitive engagement scores by median-split groups on (a) Technophilic Motivations, (b) Risk Tolerance, and (c) Computer …
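The median-split grouping used in several of these figures reduces to a simple partition of participants at the median score. A sketch on hypothetical (usage, engagement) pairs, not the paper's latent scores:

```python
# Median-split grouping in the spirit of the figures; data are hypothetical.
import statistics

# (usage, engagement) pairs, standardized scores, invented for illustration
scores = [(-0.8, 1.2), (0.3, -0.4), (1.1, -1.0),
          (-0.2, 0.5), (0.9, -0.7), (-1.3, 0.8)]

median_usage = statistics.median(u for u, _ in scores)
low  = [e for u, e in scores if u <= median_usage]   # low-usage group
high = [e for u, e in scores if u > median_usage]    # high-usage group

print("low-usage mean engagement: ", round(statistics.mean(low), 2))
print("high-usage mean engagement:", round(statistics.mean(high), 2))
```

The paper's figures then compare the distributions (boxplots) of each latent engagement score across these groups rather than only the means.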
Original abstract

Objectives: When students use generative AI in coursework, what are its persistent effects on their intellectual development? We investigate (RQ1-How) how students' trust in and routine use of genAI affect their cognitive engagement habits in STEM coursework, and (RQ2-Who) which students are particularly vulnerable to cognitive disengagement. Method: Drawing on dual-process, cognitive offloading, and automation bias theories, we developed a statistical model explaining how and to what extent students' trust-driven routine genAI use affected their cognitive engagement -- specifically, reflection, the need for understanding, and critical thinking in coursework, and how these effects differed across students' cognitive styles. We empirically evaluated this model using Partial Least Squares Structural Equation Modeling on survey data from 299 STEM students across five North American universities. Results: Students who trusted and routinely used genAI reported significantly lower cognitive engagement. Unexpectedly, students with higher technophilic motivations, risk tolerance, and computer self-efficacy -- traits often celebrated in STEM -- were more prone to these effects. Interestingly, students' prior experience with genAI or academia did not protect them from cognitively disengaging. Implications: Our findings suggest a potential cognitive debt cycle where routine genAI use weakens students' intellectual habits, potentially driving and escalating over-reliance. This poses challenges for curricula and genAI system design, requiring interventions that actively support cognitive engagement.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that students' trust in and routine use of generative AI in STEM coursework is associated with significantly lower cognitive engagement (specifically reflection, need for understanding, and critical thinking), based on a PLS-SEM model fitted to cross-sectional survey data from 299 students across five North American universities. It further reports that students with higher technophilic motivations, risk tolerance, and computer self-efficacy are more vulnerable to these associations, that prior genAI or academic experience does not buffer the effects, and that the pattern suggests a 'cognitive debt cycle' with implications for curricula and genAI design.

Significance. If the reported associations prove robust to longitudinal designs and bias checks, the work would add empirical weight to concerns about cognitive offloading in education and could inform targeted interventions for high-risk student profiles. The unexpected moderation by traits typically viewed as STEM strengths is a potentially valuable contribution, though its policy relevance hinges on establishing directionality.

major comments (2)
  1. [Abstract, Results, Implications] Abstract, Results, and Implications sections: The manuscript repeatedly frames the PLS-SEM paths as evidence that trust-driven routine genAI use 'affected' or 'impacts' cognitive engagement and posits a 'cognitive debt cycle' in which use weakens intellectual habits. Because the data are single-wave self-reports, the design supports only contemporaneous associations; it cannot distinguish causation from reverse causation (low engagement prompting greater AI reliance) or unmeasured confounders (e.g., motivation, workload). This interpretive step is load-bearing for the policy claims and should be revised to reflect correlational limits.
  2. [Method] Method section: The description of the PLS-SEM analysis omits key diagnostics such as full reliability and validity metrics (Cronbach's α, composite reliability, AVE), model fit indices (SRMR, χ²/df), and explicit tests for common-method bias or multicollinearity. Without these, it is difficult to evaluate whether the reported paths are statistically stable or artifactual.
minor comments (2)
  1. [Abstract] Abstract: The parenthetical notation 'RQ1-How' and 'RQ2-Who' is nonstandard and reduces readability; spell out the research questions in plain prose.
  2. [Results] Results: The abstract states 'significantly lower cognitive engagement' without reporting effect sizes, path coefficients, or confidence intervals; these should be added for interpretability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We agree that the cross-sectional design limits causal claims and will revise interpretive language accordingly. We will also expand the Method section with the requested diagnostics.

Point-by-point responses
  1. Referee: [Abstract, Results, Implications] Abstract, Results, and Implications sections: The manuscript repeatedly frames the PLS-SEM paths as evidence that trust-driven routine genAI use 'affected' or 'impacts' cognitive engagement and posits a 'cognitive debt cycle' in which use weakens intellectual habits. Because the data are single-wave self-reports, the design supports only contemporaneous associations; it cannot distinguish causation from reverse causation (low engagement prompting greater AI reliance) or unmeasured confounders (e.g., motivation, workload). This interpretive step is load-bearing for the policy claims and should be revised to reflect correlational limits.

    Authors: We agree that the single-wave survey design supports only contemporaneous associations and does not permit causal claims. In the revised manuscript we will replace causal language ('affected', 'impacts') with associative phrasing ('were associated with', 'linked to') throughout the abstract, results, and implications. The 'cognitive debt cycle' will be reframed as a hypothesized mechanism suggested by the observed pattern, with explicit caveats that longitudinal data are needed to test directionality and rule out reverse causation or confounding. We will also expand the limitations section to discuss these issues directly. revision: yes

  2. Referee: [Method] Method section: The description of the PLS-SEM analysis omits key diagnostics such as full reliability and validity metrics (Cronbach's α, composite reliability, AVE), model fit indices (SRMR, χ²/df), and explicit tests for common-method bias or multicollinearity. Without these, it is difficult to evaluate whether the reported paths are statistically stable or artifactual.

    Authors: We acknowledge the omission. The revised Method section will include a dedicated table reporting Cronbach's α, composite reliability, and AVE for all constructs, along with model fit indices (SRMR, χ²/df, and others). We will also report variance inflation factors to assess multicollinearity and include results from Harman's single-factor test for common-method bias. These additions will allow readers to evaluate the stability of the reported paths. revision: yes
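The promised reliability metrics have closed-form definitions, so they are easy to sanity-check. A sketch on hypothetical item scores and loadings; none of these numbers come from the paper:

```python
# Reliability diagnostics on hypothetical data: Cronbach's alpha from item
# scores, plus composite reliability (CR) and average variance extracted
# (AVE) from standardized loadings. All values are illustrative only.
import statistics

# Hypothetical responses: rows = respondents, cols = 4 items of one construct
items = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
]
k = len(items[0])
cols = list(zip(*items))
totals = [sum(row) for row in items]
item_vars = [statistics.variance(c) for c in cols]
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))

# Hypothetical standardized loadings for the same construct
loadings = [0.82, 0.78, 0.74, 0.80]
sum_l = sum(loadings)
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
cr = sum_l**2 / (sum_l**2 + sum(1 - l * l for l in loadings))
# AVE = mean of squared loadings
ave = sum(l * l for l in loadings) / len(loadings)

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Composite reliability: {cr:.2f}")
print(f"AVE: {ave:.2f}")
```

Conventional thresholds (alpha and CR above 0.70, AVE above 0.50) are what readers would check the authors' table against.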

Circularity Check

0 steps flagged

No circularity: empirical PLS-SEM on survey data

full rationale

The paper reports a cross-sectional survey of 299 students analyzed via Partial Least Squares Structural Equation Modeling. No mathematical derivations, equations, or first-principles results are presented that reduce to their own inputs by construction. The model is built from established external theories (dual-process, cognitive offloading, automation bias) and tested against observed associations; no fitted parameters are relabeled as independent predictions, no self-citations serve as load-bearing uniqueness theorems, and no ansatz or renaming of known results occurs. The analysis therefore remains self-contained against external benchmarks and receives the default non-circularity finding.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The model rests on three established psychological theories plus standard survey measurement assumptions; no new entities or free parameters are introduced beyond the SEM estimation.

axioms (1)
  • domain assumption: Dual-process, cognitive offloading, and automation bias theories apply directly to student genAI use in coursework.
    Invoked to develop the statistical model in the Method section.

pith-pipeline@v0.9.0 · 5555 in / 998 out tokens · 23529 ms · 2026-05-16T09:58:12.030617+00:00 · methodology

