pith. machine review for the scientific record.

arxiv: 2604.16397 · v1 · submitted 2026-03-29 · 💻 cs.CY · cs.AI · cs.HC

Recognition: no theorem link

Instructor-Created Custom GPTs as Pedagogical Partners Fostering Immersion in Online Higher Education: Two Case Studies

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 21:48 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.HC
keywords custom GPTs · immersive learning · online higher education · pedagogical partners · Immersive Learning Cube · case studies · AI in education · student engagement

The pith

Instructor-created custom GPTs can serve as pedagogical partners that leverage system, narrative, and agency dimensions to foster immersion in online higher education.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how custom GPTs built by instructors can sustain student engagement in online courses by promoting deep mental involvement. It draws on the Immersive Learning Cube framework to map two case studies: a grant writing course where the GPT acts as a feedback partner and a software engineering course where it functions as a role-playing metacognitive tutor. Analysis of course artifacts shows the GPTs enhance immediacy of responses, reinforce meaningful context, and support learner ownership. The work concludes that such tools amplify rather than replace human instruction, creating more coherent and autonomous learning experiences.

Core claim

Through qualitative mapping of embedded artifacts in two courses, the custom GPTs influenced all three immersion dimensions: system immersion through immediate and permanent availability; narrative immersion through story reinforcement and diegetic role-play; and agency immersion through negotiated feedback and the scaffolding of self-regulated learning. In doing so they act as partners that increase engagement without displacing instructors.

What carries the argument

The Immersive Learning Cube framework, which defines immersion via three dimensions of system envelopment, narrative context, and agency in meaning-making.

If this is right

  • Custom GPTs can provide immediate feedback that strengthens system immersion without requiring constant instructor presence.
  • They can maintain narrative coherence by aligning responses with the evolving story of assignments such as grant proposals.
  • They can increase learner agency by letting students negotiate feedback and direct their own revisions.
  • In role-play framing they can scaffold metacognitive processes like self- and co-regulation in technical courses.
  • The approach works across different disciplines and course levels while preserving the instructor's central role.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar GPT integrations could be tested in large-enrollment courses to see whether availability scales without losing personalization.
  • Future work might combine the immersion mapping with quantitative measures such as time-on-task logs to strengthen the evidence.
  • The same three-dimension lens could be applied to other AI tools like custom agents in discussion forums or simulation environments.
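The second bullet's call for quantitative measures such as time-on-task logs can be made concrete. As a minimal, hypothetical sketch (nothing like this appears in the paper; the log format and idle cutoff are assumptions), time-on-task could be estimated from a student's timestamped GPT interactions by summing gaps between consecutive events and discarding long idle breaks:

```python
from datetime import datetime, timedelta

def time_on_task(timestamps, idle_cutoff_minutes=30):
    """Estimate time-on-task from timestamped GPT interactions:
    sum the gaps between consecutive events, treating gaps longer
    than the cutoff as breaks rather than active work."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    cutoff = timedelta(minutes=idle_cutoff_minutes)
    total = timedelta()
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap <= cutoff:
            total += gap
    return total

# Hypothetical interaction log for one student, one day
log = ["2026-03-01T10:00:00", "2026-03-01T10:12:00",
       "2026-03-01T10:20:00", "2026-03-01T14:00:00",
       "2026-03-01T14:05:00"]
print(time_on_task(log))  # → 0:25:00 (the 3h40m gap counts as a break)
```

Paired with the qualitative immersion mapping, even this crude estimate would let the authors test whether GPT availability correlates with sustained engagement.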

Load-bearing premise

The qualitative mapping of course artifacts to the three immersion dimensions accurately reflects students' actual lived experiences of immersion.

What would settle it

Direct student surveys or interaction logs that show no measurable rise in reported engagement or perceived immersion when the custom GPTs are added compared with standard online course tools.

read the original abstract

As online higher education expands, sustaining student engagement remains a critical challenge. This paper approaches immersive learning by investigating how custom GPTs foster immersion (as a state of deep mental involvement) for students and instructors. While large language models (LLMs) offer potential for enhancing feedback, little research has examined instructor-created custom GPTs designed to align with specific pedagogical goals. This paper addresses this gap, employing the Immersive Learning Cube framework, which conceptualizes immersion through three dimensions: system (envelopment by the environment), narrative (meaningful context), and agency (commitment to meaning-making). Through a qualitative analysis of two distinct case studies, an accelerated graduate grant writing course in the US and an undergraduate software engineering course in Portugal, we analyze course-embedded artifacts to map how custom GPTs influence these immersion dimensions. In the grant writing course, the custom GPT functioned as a feedback partner, fostering system immersion through its immediacy, narrative immersion by reinforcing the proposal's evolving story, and agency immersion by empowering students to negotiate feedback and take ownership of revisions. In the software engineering course, a diegetically-framed custom GPT acted as a metacognitive tutor, enhancing system immersion via its permanent availability, narrative immersion through its role-play function and agency immersion by scaffolding students' self- and co-regulated learning. Our findings demonstrate that thoughtfully integrated custom GPTs can act as powerful pedagogical partners that leverage all three dimensions of immersion. Rather than replacing human instructors, they can amplify immediacy, coherence, and learner autonomy, creating more engaging and immersive online learning environments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that instructor-created custom GPTs can serve as pedagogical partners fostering immersion in online higher education, as conceptualized by the Immersive Learning Cube's three dimensions (system, narrative, and agency). Through qualitative analysis of course-embedded artifacts from two case studies—an accelerated US graduate grant writing course where the GPT acted as a feedback partner, and a Portuguese undergraduate software engineering course where it served as a diegetically-framed metacognitive tutor—the authors conclude that such GPTs amplify immediacy, coherence, and learner autonomy without replacing human instructors.

Significance. If the interpretive mappings are robust, the work offers a timely contribution to educational technology by illustrating concrete, instructor-designed integrations of custom GPTs that support immersive online learning across distinct cultural and disciplinary contexts. The emphasis on partnership rather than replacement, combined with cross-case evidence from grant writing and software engineering, provides practical insights for sustaining engagement in expanding online higher education.

major comments (2)
  1. [Qualitative analysis of the two case studies] The central claim that custom GPTs leverage all three immersion dimensions rests on the authors' qualitative coding of artifacts (feedback threads, role-play logs) onto the Immersive Learning Cube without reported participant validation, student self-reports, behavioral logs, or inter-rater reliability checks. This leaves the mappings interpretive and potentially circular, as different coders or direct queries could yield divergent assignments (see descriptions of the two case studies in the abstract and analysis sections).
  2. [Findings and discussion] No quantitative metrics, error estimates, or cross-student consistency measures are provided to support the strength of the claimed effects on system, narrative, and agency immersion, making it difficult to assess whether the GPT integrations produced the described outcomes beyond the authors' descriptive examples.
minor comments (1)
  1. [Abstract] The abstract and introduction could more clearly delineate the boundaries of the qualitative approach and the small sample of two courses to help readers calibrate expectations for generalizability.
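Major comment 1 asks for inter-rater reliability checks. As an illustrative sketch of what such a check could look like (not the paper's method; the coder labels below are invented), Cohen's kappa over two coders' independent assignments of course artifacts to the Cube's three dimensions:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders'
    categorical labels (here, immersion-dimension assignments)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders labeled at random
    # with their own marginal frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical dimension labels from two independent coders
a = ["system", "narrative", "agency", "system", "narrative", "system"]
b = ["system", "narrative", "system", "system", "agency", "system"]
print(round(cohens_kappa(a, b), 3))  # → 0.429
```

Reporting a statistic like this for the artifact coding would directly address the circularity concern: agreement well above chance would show the mappings are not one reader's idiosyncratic interpretation.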

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comments point by point below, with proposed revisions to improve transparency in our qualitative approach.

read point-by-point responses
  1. Referee: The central claim that custom GPTs leverage all three immersion dimensions rests on the authors' qualitative coding of artifacts (feedback threads, role-play logs) onto the Immersive Learning Cube without reported participant validation, student self-reports, behavioral logs, or inter-rater reliability checks. This leaves the mappings interpretive and potentially circular, as different coders or direct queries could yield divergent assignments (see descriptions of the two case studies in the abstract and analysis sections).

    Authors: We acknowledge that the analysis is interpretive, as is standard in qualitative case study research where instructor-researchers analyze course artifacts. The mappings were derived from direct, iterative examination of the specific feedback threads and role-play logs by the authors in their dual roles. We will revise the methods and analysis sections to include a more detailed account of the artifact selection and mapping process to the Immersive Learning Cube. We will also add an explicit limitations subsection noting the absence of participant validation, inter-rater reliability, and the potential for alternative interpretations. This will clarify the exploratory nature of the claims without overstating them. revision: yes

  2. Referee: No quantitative metrics, error estimates, or cross-student consistency measures are provided to support the strength of the claimed effects on system, narrative, and agency immersion, making it difficult to assess whether the GPT integrations produced the described outcomes beyond the authors' descriptive examples.

    Authors: The study is intentionally designed as qualitative case studies to provide rich contextual descriptions of GPT integration rather than quantitative evaluation. No survey, behavioral log, or other quantitative data were collected in the original design. We will revise the findings and discussion sections to more explicitly frame the outcomes as illustrative examples drawn from the artifact analysis and to avoid any implication of measured effect sizes. This will better align the presentation with the study's scope and methods. revision: partial

Circularity Check

0 steps flagged

No circularity: external framework applied interpretively to case artifacts

full rationale

The paper invokes the Immersive Learning Cube as an established external framework with three pre-defined dimensions (system, narrative, agency) and performs qualitative mapping of course artifacts onto those dimensions. No equations, fitted parameters, predictions, or self-definitions appear; the central claim is an interpretive conclusion from the two case studies rather than a reduction to inputs by construction. The framework is treated as independent input, not derived or renamed within the paper.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the validity of the Immersive Learning Cube as a complete model of immersion and on the assumption that artifact analysis can reliably indicate students' internal states of involvement.

axioms (1)
  • domain assumption The Immersive Learning Cube framework validly conceptualizes immersion through the three dimensions of system, narrative, and agency.
    All observations and conclusions are mapped directly onto these three dimensions without independent validation.

pith-pipeline@v0.9.0 · 5592 in / 1273 out tokens · 56482 ms · 2026-05-14T21:48:03.262795+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

23 extracted references · 23 canonical work pages

  1. [1]

    MIT Press, Cambridge, Mass

    Hutchins, E.: Cognition in the wild. MIT Press, Cambridge, Mass. (2006)

  2. [2]

    RE@D - Revista de Educação a Distância e Elearning

Schlemmer, E., Morgado, L.: Inven!RA: a contribution towards platforms aligned with Digital Transformation in Education. RE@D - Revista de Educação a Distância e Elearning. e202403 (2024). https://doi.org/10.34627/REDVOL7ISS1E202403

  3. [3]

    In: 147th AES Pro Audio International Convention

    Agrawal, S., Simon, A., Bech, S.: Defining Immersion: Literature Review and Implications for Research on Immersive Audiovisual Experiences. In: 147th AES Pro Audio International Convention. p. 14. Audio Engineering Society, New York (2019)

  4. [4]

    Human Technology

Nilsson, N.C., Nordahl, R., Serafin, S.: Immersion Revisited: A review of existing definitions of immersion and their relation to different theories of presence. Human Technology. 12, 108–134 (2016). https://doi.org/10.17011/ht/urn.201611174652

  5. [5]

In: 2021 7th International Conference of the Immersive Learning Research Network (iLRN)

    Beck, D., Morgado, L., Lee, M., Gutl, C., Dengel, A., Wang, M., Warren, S., Richter, J.: Towards an Immersive Learning Knowledge Tree - a Conceptual Framework for Mapping Knowledge and Tools in the Field. In: 2021 7th International Conference of the Immersive Learning Research Network (iLRN). pp. 1–8. IEEE, Eureka, CA, USA (2021)

  6. [6]

    Sage Open

    Bawa, P.: Retention in Online Courses: Exploring Issues and Solutions —A Literature Review. Sage Open. 6, 2158244015621777 (2016). https://doi.org/10.1177/2158244015621777

  7. [7]

    The Internet and Higher Education

Martin, F., Wang, C., Sadaf, A.: Student perception of helpfulness of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. The Internet and Higher Education. 37, 52–65 (2018). https://doi.org/10.1016/j.iheduc.2018.01.003

  8. [8]

Learning and Individual Differences. 103, 102274 (Apr 2023)

    Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J., Kasneci, G.: ChatGPT for good? On opportunities and challenges of larg...

  9. [9]

    Int J Educ Technol High Educ

    Crompton, H., Burke, D.: Artificial intelligence in higher education: the state of the field. Int J Educ Technol High Educ. 20, 22 (2023). https://doi.org/10.1186/s41239-023-00392-8

  10. [10]

    Smart Learn

    Tlili, A., Shehata, B., Adarkwah, M.A., Bozkurt, A., Hickey, D.T., Huang, R., Agyemang, B.: What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10, 15 (2023). https://doi.org/10.1186/s40561-023-00237-x

  11. [11]

    Virtual Reality

    Morgado, L., Beck, D., O’Shea, P.: Bridging the gaps: an updated mapping of the uses of immersive learning environments. Virtual Reality. 29, 134 (2025). https://doi.org/10.1007/s10055-025-01208-y

  12. [12]

    Journal of Research Administration

    Porter, R.: Why Academics Have a Hard Time Writing Good Grant Proposals. Journal of Research Administration. 38, 37–43 (2007)

  13. [13]

    SAGE, Los Angeles (2014)

    Locke, L.F., Spirduso, W.W., Silverman, S.J.: Proposals that work: a guide for planning dissertations and grant proposals. SAGE, Los Angeles (2014)

  14. [14]

    Review of Educational Research

Hattie, J., Timperley, H.: The Power of Feedback. Review of Educational Research. 77, 81–112 (2007). https://doi.org/10.3102/003465430298487

  15. [15]

Hyland, K., Hyland, F.: Feedback on second language students' writing. Lang. Teach. 39, 83–101 (2006). https://doi.org/10.1017/S0261444806003399

  16. [16]

    Studies in Higher Education

    Nicol, D.J., Macfarlane‐Dick, D.: Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education. 31, 199–218 (2006). https://doi.org/10.1080/03075070600572090

  17. [17]

    Universidade Aberta, Lisbon, Portugal (2008)

    Pereira, A., Mendes, A.Q., Morgado, L., Amante, L., Bidarra, J.: Universidade Aberta’s pedagogical model for distance education: a university for the future. Universidade Aberta, Lisbon, Portugal (2008)

  18. [18]

    In: 2021 4th International Conference of the Portuguese Society for Engineering Education (CISPEE)

Pedrosa, D., Fontes, M.M., Araujo, T., Morais, C., Bettencourt, T., Pestana, P.D., Morgado, L., Cravino, J.: Metacognitive challenges to support self-reflection of students in online Software Engineering Education. In: 2021 4th International Conference of the Portuguese Society for Engineering Education (CISPEE). pp. 1–10. IEEE, Lisbon, Portugal (2021)

  19. [19]

Universidade Aberta, Lisbon, Portugal (2022)

    Pedrosa, D., Cravino, J.P., Morgado, L., eds.: e-SimProgramming: planificar, conceber e acompanhar atividades didáticas online de engenharia de software. Universidade Aberta, Lisbon, Portugal (2022)

  20. [20]

    Interactive Learning Environments

Lai, C.-L., Hwang, G.-J.: Strategies for enhancing self-regulation in e-learning: a review of selected journal publications from 2010 to 2020. Interactive Learning Environments. 31, 3757–3779 (2023). https://doi.org/10.1080/10494820.2021.1943455

  21. [21]

    ACM Trans

Loksa, D., Margulieux, L., Becker, B.A., Craig, M., Denny, P., Pettit, R., Prather, J.: Metacognition and Self-Regulation in Programming Education: Theories and Exemplars of Use. ACM Trans. Comput. Educ. 22, 1–31 (2022). https://doi.org/10.1145/3487050

  22. [22]

    IEEE Trans

Magana, A.J., Jaiswal, A., Amuah, T.L., Bula, M.Z., Ud Duha, M.S., Richardson, J.C.: Characterizing Team Cognition Within Software Engineering Teams in an Undergraduate Course. IEEE Trans. Educ. 67, 87–99 (2024). https://doi.org/10.1109/TE.2023.3327059

  23. [23]

    Smart Learn

Steinert, S., Avila, K.E., Ruzika, S., Kuhn, J., Küchemann, S.: Harnessing large language models to develop research-based learning assistants for formative feedback. Smart Learn. Environ. 11, 62 (2024). https://doi.org/10.1186/s40561-024-00354-1