pith. machine review for the scientific record.

arxiv: 2604.10851 · v1 · submitted 2026-04-12 · 💻 cs.HC

Recognition: unknown

Participatory, not Punitive: Student-Driven AI Policy Recommendations in a Design Classroom

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 14:57 UTC · model grok-4.3

classification 💻 cs.HC
keywords AI policy · participatory design · student governance · generative AI · design education · workshop series · zine

The pith

Student-driven workshops in a design classroom generate AI policies that expose double standards between students and faculty.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines a student-led process for developing university AI use guidelines instead of relying on top-down punitive rules written without input from those most affected. In a graduate design course, eight participants held discussions without faculty present, shared their actual AI practices, and produced ten policy recommendations that they turned into a zine distributed across campus. These recommendations identified issues such as the requirement that students disclose AI assistance while instructors operate under no parallel expectations. The authors present the workshops as a way to reduce student confusion and fear while building ownership over the rules that govern learning. They describe the methods used as adaptable to other settings where students could shape technology-related policies.

Core claim

Through a three-part workshop series in a graduate design course at a minority-serving university, where student leaders facilitated discussions without faculty present, eight participants shared candid accounts of their AI use, co-authored ten policy recommendations, and visualized them in a zine that circulated across campus. The resulting policies surfaced concerns absent from top-down governance, such as the double standard of requiring students to disclose or abstain from AI use while faculty face no such expectations.

What carries the argument

The student-facilitated workshop series, held without faculty present, which enabled candid accounts of AI use, co-authorship of ten policy recommendations, and the creation of a zine for campus circulation.

If this is right

  • Policies created through student participation can include balanced accountability measures that apply equally to students and faculty.
  • The workshop format can lower student fear by allowing open discussion of actual AI practices rather than enforcing secrecy.
  • Circulating recommendations as a zine offers a concrete method for sharing student-generated policies with the wider campus community.
  • The approach provides a model for shifting from punitive enforcement to inclusive governance on technology use in education.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Similar participatory workshops could be tested in undergraduate settings or STEM courses to check whether the same policy gaps appear.
  • Tracking whether participants continue to advocate for these recommendations after the course ends would test if the engagement benefit lasts beyond the workshops.
  • Administrators reviewing official AI policies could use the zine as direct evidence of student priorities when updating rules.

Load-bearing premise

That the concerns and strategies identified by eight participants in one graduate design course at a single minority-serving university are transferable and representative of student experiences across other disciplines and institutions.

What would settle it

If similar student-facilitated workshops at other universities or in non-design fields produce no comparable double-standard concerns and fail to generate engaged policy recommendations, the claim of transferable value would not hold.

Figures

Figures reproduced from arXiv: 2604.10851 by Kaoru Seki, Manisha Vijay, Yasmine Kotturi.

Figure 1. Ten graduate design students at a minority-serving university co-authored 10 policy recommendations through student
Figure 2. Ten student-driven AI policy recommendations derived from our three-part design workshop series
Figure 3. Overview of the three-part workshop series.
Figure 4. Workshop 1 focused on policy drafting through
Figure 6. The Student-Driven AI Policy Recommendation zine displayed outside of the research team’s lab (right), in the main
Original abstract

Generative AI is reshaping education, yet most university AI policies are written without students and focus on penalizing misuse. This top-down approach sidelines those most affected from decisions that shape their everyday learning, resulting in confusion and fear about acceptable use. We examine how participatory, student-driven AI policy design can address this disconnect. We report on a three-part workshop series in a graduate design course at a minority-serving university in the U.S., where two student leaders facilitated discussions without faculty present. Eight participants shared candid accounts of their AI use, co-authored ten policy recommendations, and visualized them in a zine that circulated across campus. The resulting policies surfaced concerns absent from top-down governance, such as the double standard of requiring students to disclose or abstain from AI use while faculty face no such expectations. We argue that engaging students in AI governance carries value beyond the resulting policies, and offer transferable strategies for fostering participation across disciplines -- a model for calling students in rather than calling students out.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper presents findings from a three-part workshop series held in a graduate design course at a minority-serving university in the U.S. Two student leaders facilitated discussions among eight participants without faculty involvement, allowing candid sharing of AI use experiences. Participants co-authored ten policy recommendations and visualized them in a campus-circulated zine. The work critiques top-down punitive AI policies for sidelining students and missing key issues like disclosure double standards. It posits that student-driven participatory design in AI governance provides benefits beyond the policies and offers strategies transferable to other disciplines as a model for inclusive engagement.

Significance. If validated, the participatory approach could shift university AI policy development toward more inclusive practices, reducing student confusion and fear by incorporating the voices of those affected. The identification of overlooked concerns, such as unequal expectations for AI disclosure, underscores the limitations of faculty-only governance. The paper's strength lies in its direct documentation of student outputs and its positive framing of participation. However, the asserted transferability across disciplines remains untested; addressing this gap could strengthen the paper's influence on HCI education and AI ethics discussions.

major comments (2)
  1. [Abstract] The assertion that the described workshop offers 'transferable strategies for fostering participation across disciplines' is central to the paper's contribution but is not substantiated beyond the single case study of eight participants in one design course. No mechanisms for transfer, comparative data from other contexts, or limitations discussion on generalizability are provided, making this claim load-bearing yet unsupported.
  2. [Workshop Description and Findings] The manuscript reports the co-authored policy recommendations and surfaced concerns (such as the faculty-student disclosure double standard) but provides no description of the qualitative analysis methods, coding procedures, or validation steps used to derive these from the discussions. This absence undermines evaluation of how reliably the outcomes reflect participant input and support the contrast with top-down policies.
minor comments (1)
  1. [Abstract] The phrase 'calling students in rather than calling students out' is introduced in the abstract without definition or prior reference, which may reduce clarity for readers outside specific educational discourse communities.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight important areas for strengthening the presentation of our single-case study and improving methodological transparency. We address each point below and will incorporate revisions in the next version.

Point-by-point responses
  1. Referee: [Abstract] The assertion that the described workshop offers 'transferable strategies for fostering participation across disciplines' is central to the paper's contribution but is not substantiated beyond the single case study of eight participants in one design course. No mechanisms for transfer, comparative data from other contexts, or limitations discussion on generalizability are provided, making this claim load-bearing yet unsupported.

    Authors: We agree that the claim of transferability is not empirically substantiated beyond this single exploratory case. In revision, we will qualify the language in the abstract and conclusion to describe the strategies as 'potentially transferable' based on detailed documentation of the process, rather than asserting broad applicability. We will also add a limitations subsection explicitly discussing the single-institution, design-course context and the absence of comparative data, while outlining how future studies could test adaptation in other disciplines. This preserves the core contribution of the participatory model without overclaiming. revision: partial

  2. Referee: [Workshop Description and Findings] The manuscript reports the co-authored policy recommendations and surfaced concerns (such as the faculty-student disclosure double standard) but provides no description of the qualitative analysis methods, coding procedures, or validation steps used to derive these from the discussions. This absence undermines evaluation of how reliably the outcomes reflect participant input and support the contrast with top-down policies.

    Authors: We acknowledge this omission in the current draft. The outcomes were generated through iterative, participant-led thematic synthesis during the workshops, including real-time note capture, group affinity mapping of discussion points, and consensus validation of the final ten recommendations and zine content. In the revised manuscript, we will insert a dedicated 'Data Analysis' paragraph in the Workshop Description section that details these steps, including how the disclosure double standard and other themes were identified directly from participant statements and cross-checked for fidelity to the discussions. This will allow readers to assess the grounding of the findings. revision: yes

Circularity Check

0 steps flagged

No circularity: direct reporting of single-case workshop findings

Full rationale

The paper presents a qualitative case study of a three-part workshop series involving eight participants in one graduate design course. Its central claims about the value of student-driven AI policy design and transferable participation strategies are derived directly from participant accounts, co-authored recommendations, and zine outputs without any mathematical derivations, fitted parameters, self-referential equations, or load-bearing self-citations that reduce the reported results to the inputs by construction. The analysis remains self-contained as empirical description of the observed activities and surfaced concerns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

This is a qualitative case study relying on assumptions about the effectiveness of peer facilitation for candid disclosure and the broader applicability of findings from a limited sample, without external validation data or benchmarks.

axioms (2)
  • domain assumption Peer-led facilitation without faculty presence elicits more candid student accounts of AI use than faculty-led sessions.
    Stated in the workshop design description as enabling open sharing.
  • ad hoc to paper Insights from this single design-class workshop generalize to produce transferable strategies for other disciplines.
    Used to argue for value beyond the specific policies and across fields.

pith-pipeline@v0.9.0 · 5472 in / 1308 out tokens · 54763 ms · 2026-05-10T14:57:32.599035+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

142 extracted references · 46 canonical work pages · 1 internal anchor
