pith. machine review for the scientific record.

arxiv: 2604.13534 · v2 · submitted 2026-04-15 · 💻 cs.CY

Recognition: 2 Lean theorem links

Who Decides in AI-Mediated Learning? The Agency Allocation Framework


Pith reviewed 2026-05-12 01:44 UTC · model grok-4.3

classification 💻 cs.CY
keywords: learner agency · AI-mediated learning · decision authority · learning at scale · educational technology · agency framework · tutoring systems · automation trade-offs

The pith

Learner agency in AI education is the explicit allocation of decision authority among students, teachers, institutions, and AI systems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reframes learner agency away from engagement or self-regulation metrics and toward who actually gets to make choices about learning plans, progress, and content in automated systems. It introduces the Agency Allocation Framework to break down decisions by their distribution across actors, how available choices are designed, what evidence supports them, and how far into the future their effects reach. A review of learning-at-scale research plus a tutoring-system case shows four persistent problems: vague definitions of agency, over-reliance on behavior as a stand-in, efficiency-control trade-offs, and hidden shifts of power when AI is added. If the framework works as described, the field can move from arguing about degrees of automation to asking precise questions about when AI builds students' own decision capacity and when it simply takes over.

Core claim

The central claim is that learner agency at scale is best analyzed as the allocation of decision authority across learners, educators, institutions, and AI. The Agency Allocation Framework supplies four analytic dimensions—distribution of decisions, architecture of choices, evidential basis, and temporal horizon of consequences—to make these allocations visible. Applied to existing literature and an example tutoring system, the framework shows that AI-mediated environments routinely redistribute authority in ways current proxies miss, and it supplies a language for distinguishing scaffolding from substitution without requiring a blanket preference for more or less automation.

What carries the argument

The Agency Allocation Framework (AAF), a structured mapping tool that tracks how decision rights are assigned by recording their distribution among actors, the design of available choices, the evidence used to justify them, and the time scale over which consequences appear.
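As a concrete sketch of what such a mapping tool might record, the four dimensions can be held in a small data structure. The `DecisionPoint` fields, the `Actor` categories, and every value in the example below are illustrative assumptions, not notation taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class Actor(Enum):
    LEARNER = "learner"
    EDUCATOR = "educator"
    INSTITUTION = "institution"
    AI_SYSTEM = "ai_system"

@dataclass
class DecisionPoint:
    """One focal decision, mapped along the AAF's four dimensions (hypothetical schema)."""
    name: str
    distribution: dict[Actor, float]   # who holds what share of decision authority
    choice_architecture: list[str]     # the options actually presented
    evidential_basis: str              # evidence used to justify the allocation
    time_horizon: str                  # how far into the future consequences reach

    def dominant_actor(self) -> Actor:
        # The actor holding the largest share of decision authority.
        return max(self.distribution, key=self.distribution.get)

# Hypothetical mapping: the AI selects the next practice problem,
# and the learner can only accept or skip the recommendation.
next_problem = DecisionPoint(
    name="next practice problem",
    distribution={Actor.AI_SYSTEM: 0.8, Actor.LEARNER: 0.2},
    choice_architecture=["accept recommendation", "skip once"],
    evidential_basis="mastery estimate inferred from the response log",
    time_horizon="single session",
)
print(next_problem.dominant_actor() is Actor.AI_SYSTEM)  # True
```

Making the authority shares explicit like this is what lets two systems be compared on who decides, rather than on engagement metrics alone.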

If this is right

  • Researchers can compare AI learning systems directly on how authority is split rather than on engagement or completion rates alone.
  • Designers obtain explicit criteria for deciding whether a given AI feature adds learner capacity or removes learner choice.
  • Evaluation studies can track long-term agency effects instead of stopping at short-term efficiency gains.
  • The four recurring challenges identified in the literature become addressable through systematic authority mapping rather than repeated conceptual debates.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same mapping approach could be tested on other automated decision domains such as personalized health recommendations or financial advice tools.
  • Policy guidelines for educational AI could require public authority-allocation diagrams to increase transparency about who controls learning paths.
  • Empirical follow-up studies might measure whether systems built with explicit authority maps produce students who retain more independent decision skill after the AI is removed.
  • The framework could be extended to include power asymmetries between institutions and individual learners as an additional analytic dimension.

Load-bearing premise

That spelling out who holds decision authority will by itself produce clearer analysis and better system designs without first testing whether the framework's four dimensions actually capture the relevant trade-offs in real settings.

What would settle it

Apply the Agency Allocation Framework to a set of existing AI tutoring platforms and check whether the resulting authority maps predict measurable differences in learners' later ability to plan and choose independently; if the maps show no consistent relation to those outcomes, the framework's utility collapses.
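A minimal sketch of that check, with invented numbers standing in for real authority maps and outcome instruments (every system name and score below is hypothetical):

```python
# Hypothetical check: across tutoring systems, does the AAF's learner share
# of decision authority track learners' later independent-planning scores?
# All numbers below are invented for illustration.
systems = {
    "tutor_a": {"learner_share": 0.2, "independent_planning": 0.45},
    "tutor_b": {"learner_share": 0.5, "independent_planning": 0.60},
    "tutor_c": {"learner_share": 0.8, "independent_planning": 0.72},
}

def pearson(xs: list[float], ys: list[float]) -> float:
    # Pearson correlation, computed directly from the definition.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

shares = [v["learner_share"] for v in systems.values()]
outcomes = [v["independent_planning"] for v in systems.values()]
r = pearson(shares, outcomes)
# A consistently strong positive r across many systems would support the
# framework's predictive utility; r near zero would undercut it.
```

A real study would need many more systems and validated measures of independent planning; the point of the sketch is only that the framework's authority maps yield a quantity that can be tested against outcomes.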

Figures

Figures reproduced from arXiv: 2604.13534 by Conrad Borchers, Olga Viberg, and René F. Kizilcec.

Figure 1. The AAF treats agency as enacted through a focal decision point and proceeds through five steps.

Read the original abstract

As AI-mediated learning systems increasingly shape how learners plan, make decisions, and progress through education, learner agency is becoming both more consequential and harder to conceptualize at scale. Existing research often treats agency as a proxy for engagement and self-regulation, leaving unclear who actually holds decision-making authority in large-scale, automated learning environments. This paper reframes learner agency as the allocation of decision authority across learners, educators, institutions, and AI systems. We introduce the Agency Allocation Framework (AAF) for analyzing how decisions are distributed, how choices are architected, what evidence supports them, and over what time horizons their consequences unfold. Drawing on a focused review of Learning at Scale literature and an illustrative tutoring-system example, we identify four recurring challenges for studying learner agency at scale: (1) conceptual ambiguity, (2) reliance on behavioral proxies, (3) trade-offs between efficiency and learner control, and (4) the redistribution of agency through AI-mediated systems. Rather than advocating more or less automation, the AAF supports systematic analysis of when AI scaffolds learners' capacity to act and when it substitutes for it. By making decision authority explicit, the framework provides researchers and designers with analytic tools for studying, comparing, and evaluating agency-preserving learning systems in increasingly automated educational contexts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reframes learner agency in AI-mediated learning as the allocation of decision authority across learners, educators, institutions, and AI systems. It introduces the Agency Allocation Framework (AAF) with four dimensions—distribution of decisions, architecture of choices, supporting evidence, and time horizons of consequences—to analyze agency at scale. Drawing on a focused literature review, it identifies four recurring challenges (conceptual ambiguity, behavioral proxies, efficiency-control trade-offs, and AI-driven redistribution) and illustrates the framework via a single tutoring-system example, arguing that AAF enables systematic study, comparison, and design of agency-preserving systems without advocating more or less automation.

Significance. If the framework's categories prove consistently actionable, it could advance the field by shifting analysis from engagement proxies to explicit decision mappings, supporting more deliberate design of AI learning tools. The clear enumeration of challenges from the literature review is a strength, as is the non-prescriptive stance on automation; these elements could aid HCI and AIED researchers in addressing agency in large-scale systems.

major comments (3)
  1. [§3 (Agency Allocation Framework definition)] The four dimensions are presented as supplying 'analytic tools for studying, comparing, and evaluating' agency-preserving systems, yet no operationalization, coding scheme, or inter-rater guidelines are supplied for applying the mappings; this is load-bearing because the central claim rests on the framework enabling systematic (rather than ad-hoc) analysis.
  2. [§5 (Illustrative tutoring-system example)] The example applies the dimensions to one system but performs no cross-system comparison, derives no novel falsifiable design implication, and does not contrast AAF mappings against standard HCI or self-regulation proxies; this leaves the asserted utility for 'systematic design' unshown and is central to the contribution.
  3. [§4 (Recurring challenges)] The four challenges are logically derived from the review, but the manuscript provides no method or evidence demonstrating that AAF resolves or measures them (e.g., how the 'evidence' dimension reduces reliance on behavioral proxies); without this, the framework risks adding terminology without advancing empirical or design practice.
minor comments (2)
  1. The abstract and introduction could more explicitly contrast AAF with prior agency frameworks in HCI or self-regulated learning to clarify incremental novelty.
  2. A summary table listing the four dimensions with brief definitions and the tutoring example mappings would improve readability and allow readers to assess consistency at a glance.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback, which highlights important considerations for strengthening the presentation of the Agency Allocation Framework. We address each major comment below, with planned revisions to enhance clarity and demonstrate the framework's utility while preserving the manuscript's conceptual focus.

Point-by-point responses
  1. Referee: [§3 (Agency Allocation Framework definition)] The four dimensions are presented as supplying 'analytic tools for studying, comparing, and evaluating' agency-preserving systems, yet no operationalization, coding scheme, or inter-rater guidelines are supplied for applying the mappings; this is load-bearing because the central claim rests on the framework enabling systematic (rather than ad-hoc) analysis.

    Authors: We agree that the absence of explicit application guidance leaves the claim of systematic analysis somewhat underspecified. The AAF is introduced as a conceptual framework to structure analysis of decision authority rather than as a validated measurement instrument. To address this directly, we will revise §3 to include a new subsection with initial application guidelines: a step-by-step mapping process, examples of how to handle ambiguous decisions (e.g., shared authority between learner and AI), and notes on consistency considerations across analysts. This addition will illustrate how the dimensions support non-ad-hoc use without requiring full inter-rater protocols at this stage. revision: yes

  2. Referee: [§5 (Illustrative tutoring-system example)] The example applies the dimensions to one system but performs no cross-system comparison, derives no novel falsifiable design implication, and does not contrast AAF mappings against standard HCI or self-regulation proxies; this leaves the asserted utility for 'systematic design' unshown and is central to the contribution.

    Authors: The single-system example is deliberately illustrative to show how the four dimensions render implicit agency allocations visible in a concrete AI tutoring context, such as distinguishing AI-driven content selection from learner-initiated goal setting. While we do not conduct cross-system comparisons or formal hypothesis testing (which would exceed the paper's scope as a framework introduction), the example does differentiate AAF from standard proxies by emphasizing decision authority over engagement metrics. We will revise §5 to add an explicit contrast paragraph with common self-regulation proxies (e.g., time-on-task or quiz scores) and derive two concrete, testable design implications, such as how choice architecture might influence long-horizon learner control. This will better substantiate the utility for systematic design. revision: partial

  3. Referee: [§4 (Recurring challenges)] The four challenges are logically derived from the review, but the manuscript provides no method or evidence demonstrating that AAF resolves or measures them (e.g., how the 'evidence' dimension reduces reliance on behavioral proxies); without this, the framework risks adding terminology without advancing empirical or design practice.

    Authors: The challenges are synthesized from the literature to frame persistent issues, and the AAF is positioned to mitigate them through explicit decision mapping rather than through new empirical data in this paper. For example, the 'supporting evidence' dimension prompts examination of the basis for decisions (e.g., learner data vs. inferred behavior), which can reduce proxy reliance by design. We acknowledge that the manuscript does not empirically demonstrate resolution. We will revise §4 and the discussion to explicitly link each challenge to relevant AAF dimensions, using the tutoring example to show potential mitigation pathways, and clarify that the framework equips future empirical and design work rather than resolving the challenges itself. revision: partial

Circularity Check

0 steps flagged

No circularity: AAF is a definitional framework grounded in literature review and example

Full rationale

The paper introduces the Agency Allocation Framework as a conceptual tool for mapping decision authority in AI-mediated learning, drawing explicitly on a focused review of Learning at Scale literature and one illustrative tutoring-system example. No equations, parameter fits, predictions, or derivations appear in the provided text. The central claims rest on naming four recurring challenges drawn from external literature rather than on self-citations, self-definitions, or fitted inputs. The framework is presented as an analytic lens rather than a result derived from prior fitted values or author-specific uniqueness theorems. This is a standard non-circular conceptual contribution; its claims are checked against external literature rather than against its own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that agency is best understood through explicit allocation of decision authority; no free parameters or quantitative fitting are involved, and the framework itself is the primary new construct without independent empirical grounding in the abstract.

axioms (1)
  • domain assumption: Learner agency can be usefully reframed as allocation of decision authority across learners, educators, institutions, and AI systems.
    This is the core reframing stated in the abstract and underpins the entire framework.
invented entities (1)
  • Agency Allocation Framework (AAF): no independent evidence
    purpose: To provide analytic tools for studying, comparing, and evaluating agency in AI-mediated learning
    Newly proposed conceptual structure; no independent falsifiable evidence provided beyond the illustrative example.

pith-pipeline@v0.9.0 · 5531 in / 1294 out tokens · 46688 ms · 2026-05-12T01:44:56.664891+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Reference graph

Works this paper leans on

75 extracted references · 75 canonical work pages
