pith. machine review for the scientific record.

arxiv: 2604.14007 · v1 · submitted 2026-04-15 · 💻 cs.HC

Recognition: unknown

"I'm Not Able to Be There for You": Emotional Labour, Responsibility, and AI in Peer Support

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 12:22 UTC · model grok-4.3

classification 💻 cs.HC
keywords peer support · emotional labour · AI in mental health · responsibility · accountability · institutional ambiguity · design futures · mental health care

The pith

Peer supporters judge AI by how it redistributes their emotional labor and accountability rather than by empathy or technical skill.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Peer support draws on people with lived experience who fill gaps in formal mental health care, yet their involvement rests on unclear expectations about scope and duties. Institutional gaps push emotional labor, boundary decisions, and escalation choices onto individuals without steady organizational backing. Participants assess AI not by how well it mimics empathy or performs tasks, but by whether it eases or heightens their personal risks and workloads. A reader would care because peer support is promoted as a scalable fix, but overlooking these shifts could leave volunteers more exposed instead of supported. The paper calls for AI designs that place responsibility at the center rather than treating the technology mainly as a way to increase volume.

Core claim

Interviews show that lived experience, moral commitment, and self-identification shape who participates in peer support while leaving scope, authority, and accountability unevenly defined. Institutional ambiguity concentrates emotional labour, boundary-setting, and escalation of responsibility at the individual level without consistent organisational scaffolding. Participants evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability within already fragile support roles. This leads to design futures for an AI-supported peer support ecosystem that treats responsibility as a central design concern.

What carries the argument

Institutional ambiguity in peer support, which concentrates emotional labour, boundary-setting, and responsibility on individuals and serves as the standard by which AI is judged.

Load-bearing premise

That the experiences of the interviewed peer supporters represent broader peer support practices, and that institutional ambiguity is the main reason emotional labour concentrates at the individual level.

What would settle it

A larger study finding that peer supporters rate AI mainly on empathetic quality or accuracy, or that most peer support settings have clear shared accountability structures and organisational guidelines, would challenge the central account.

read the original abstract

Peer support is increasingly positioned as a scalable response to gaps in mental health care, particularly in digitally mediated settings, yet what counts as peer support and how responsibility is distributed remain unevenly defined in practice. Drawing on interviews with peer supporters, we show how lived experience, moral commitment, and self-identification shape participation while blurring expectations around scope, authority, and accountability. Institutional ambiguity concentrates emotional labour, boundary-setting, and escalation of responsibility at the individual level, often without consistent organisational scaffolding. Participants evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability within already fragile support roles. Building on these findings, we outline design futures for an AI-supported peer support ecosystem that foregrounds responsibility as a central design concern rather than treating AI as a mechanism of scale.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper draws on interviews with peer supporters to argue that institutional ambiguity in peer support concentrates emotional labour, boundary-setting, and responsibility at the individual level. It claims participants evaluate AI primarily through its effects on redistributing risk, labour, and accountability within fragile support roles, rather than through empathy or technical capability, and proposes design futures that foreground responsibility as a central concern for AI-supported peer support ecosystems.

Significance. If the interpretive findings hold, the work contributes to HCI and CSCW by shifting focus from empathy-centric AI design in mental health to responsibility, risk, and labour redistribution in already ambiguous support structures. This could inform more accountable AI interventions in peer support and related domains.

major comments (2)
  1. [Findings / §4 (or equivalent thematic analysis section)] The central claim that participants 'evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability' (abstract and likely §4 findings) requires explicit evidence that this priority ordering emerges from the data rather than interview framing or selective theme emphasis. The manuscript should include the interview protocol, sample questions, and any negative-case analysis or comparative coding that rules out question-induced bias toward responsibility themes.
  2. [Methods section] Methods details are insufficient to assess soundness: the abstract and methods section provide no information on sample size, recruitment strategy, participant demographics, interview duration, transcription/analysis approach (e.g., reflexive thematic analysis, grounded theory), or how quotes were selected and anonymised. These are load-bearing for evaluating whether the reported themes support the stated conclusions about institutional ambiguity and AI evaluation.
minor comments (2)
  1. [Introduction] Clarify the distinction between 'peer support' as practiced by participants versus institutional definitions early in the introduction to avoid ambiguity for readers unfamiliar with the domain.
  2. [Design implications / Discussion] The design futures section would benefit from more concrete examples or scenarios illustrating how responsibility could be operationalised in AI tools, to make the recommendations actionable.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for these constructive comments, which correctly identify gaps in methodological transparency that limit assessment of our claims. We will revise the manuscript to provide the requested details and strengthen the evidential grounding for our interpretive findings.

read point-by-point responses
  1. Referee: [Findings / §4 (or equivalent thematic analysis section)] The central claim that participants 'evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability' (abstract and likely §4 findings) requires explicit evidence that this priority ordering emerges from the data rather than interview framing or selective theme emphasis. The manuscript should include the interview protocol, sample questions, and any negative-case analysis or comparative coding that rules out question-induced bias toward responsibility themes.

    Authors: We accept that the current manuscript does not sufficiently demonstrate how the priority ordering of themes was derived from the data. In revision we will add the complete interview protocol and sample questions as an appendix. We will also expand the findings and methods sections to describe our reflexive thematic analysis process, showing the iterative coding steps and providing concrete examples of how responsibility/risk/accountability codes were generated across the dataset. Where participants raised empathy or technical capability as primary concerns, we will explicitly discuss these as negative cases and explain their relation to the dominant themes. This addition will make clear that the reported emphasis reflects patterns in participant accounts rather than question framing. revision: yes

  2. Referee: [Methods section] Methods details are insufficient to assess soundness: the abstract and methods section provide no information on sample size, recruitment strategy, participant demographics, interview duration, transcription/analysis approach (e.g., reflexive thematic analysis, grounded theory), or how quotes were selected and anonymised. These are load-bearing for evaluating whether the reported themes support the stated conclusions about institutional ambiguity and AI evaluation.

    Authors: We agree the methods section is currently underspecified. The revised manuscript will expand this section to report sample size, recruitment channels, participant demographics (with a summary table), average interview duration, transcription procedures, the specific analytic approach employed, criteria used to select and present quotes, and anonymization steps. These additions will allow readers to evaluate the support for our claims regarding institutional ambiguity and participants' AI evaluations. revision: yes

Circularity Check

0 steps flagged

No significant circularity; interpretive claims grounded in interview data

full rationale

This is a qualitative HCI paper whose central claims derive from thematic analysis of interviews with peer supporters. There are no equations, fitted parameters, derivations, or mathematical predictions to audit. The key assertion, that participants evaluated AI through the redistribution of risk, labour, and accountability, is presented as emerging from participant data rather than reducing, by construction, to self-citations, prior author work, or definitional inputs. No load-bearing step matches any of the enumerated circularity patterns; the analysis rests on its interview data and can be judged against the external standards of interview-based research.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the validity of qualitative interview data for revealing systemic patterns in emotional labor; no free parameters or invented entities are introduced, and the work relies on standard domain assumptions of HCI research rather than novel postulates.

axioms (1)
  • domain assumption Qualitative interviews with peer supporters can surface generalizable insights into emotional labor, responsibility distribution, and technology evaluation.
    Core premise of the study; the abstract does not address sample limitations or generalizability checks.

pith-pipeline@v0.9.0 · 5441 in / 1237 out tokens · 70029 ms · 2026-05-10T12:22:34.102205+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

30 extracted references · 24 canonical work pages · 2 internal anchors

  1. [1]

    Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman. 2023. WhisperX: Time-Accurate Speech Transcription of Long-Form Audio. INTERSPEECH 2023 (2023).

  2. [2]

    Ananya Bhattacharjee, Joseph Jay Williams, Miranda Beltzer, Jonah Meyerhoff, Harsh Kumar, Haochen Song, David C. Mohr, Alex Mariakakis, and Rachel Kornfield. 2025. Investigating the Role of Situational Disruptors in Engagement with Digital Mental Health Tools. Proc. ACM Hum.-Comput. Interact. 9, 7 (Oct. 2025), CSCW306:1–CSCW306:35. doi:10.1145/3757487

  3. [3]

    Tianying Chen, Kristy Zhang, Robert E. Kraut, and Laura Dabbish. 2021. Scaffolding the Online Peer-support Experience: Novice Supporters’ Strategies and Challenges. Proc. ACM Hum.-Comput. Interact. 5, CSCW2 (Oct. 2021), 366:1–366:30. doi:10.1145/3479510

  4. [4]

    Sander de Jong, Ville Paananen, Benjamin Tag, and Niels van Berkel. 2025. Cognitive Forcing for Better Decision-Making: Reducing Overreliance on AI Systems Through Partial Explanations. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW048:1–CSCW048:30. doi:10.1145/3710946

  5. [5]

    Carmen C. D. Franke, Barbara C. Paton, and Lee-Anne J. Gassner. 2010. Implementing Mental Health Peer Support: A South Australian Experience. Australian Journal of Primary Health 16, 2 (2010), 179–186. doi:10.1071/py09067

  6. [6]

    Nicola K. Gale, Gemma Heath, Elaine Cameron, Sabina Rashid, and Sabi Redwood. 2013. Using the Framework Method for the Analysis of Qualitative Data in Multi-Disciplinary Health Research. BMC Medical Research Methodology 13, 1 (Sept. 2013), 117. doi:10.1186/1471-2288-13-117

  8. [8]

    Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, and Diyi Yang. 2025. Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW095:1–CSCW095:45. doi:10.1145/3710993

  9. [9]

    Zainab Iftikhar, Yumeng Ma, and Jeff Huang. 2023. “Together but Not Together”: Evaluating Typing Indicators for Interaction-Rich Communication. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–12. doi:10.1145/3544548.3581248

  10. [10]

    Zainab Iftikhar, Sean Ransom, Amy Xiao, and Jeff Huang. 2024. Therapy as an NLP Task: Psychologists’ Comparison of LLMs and Human Peers in CBT. arXiv:2409.02244 [cs] doi:10.48550/arXiv.2409.02244

  11. [11]

    Meeyun Kim, Koustuv Saha, Munmun De Choudhury, and Daejin Choi. 2023. Supporters First: Understanding Online Social Support on Mental Health from a Supporter Perspective. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 2023), 1–28. doi:10.1145/3579525

  12. [12]

    Ernesto Isaac Lara, Laura Bond, Kathryn O’Neill, Emily Ruiz, and Vikram Patel. 2026. “Peer with a P versus a p”: A Mixed-Methods Study of Peer Support Training, Service Delivery, and Supervision across Global Contexts. PLOS Mental Health 3, 1 (Jan. 2026), e0000447. doi:10.1371/journal.pmen.0000447

  14. [14]

    Ying Ying Lee, Suying Ang, Hong Choon Chua, and Mythily Subramaniam. 2019. Peer Support in Mental Health: A Growing Movement in Singapore. Annals of the Academy of Medicine, Singapore 48, 3 (March 2019), 95–97. doi:10.47102/annals-acadmedsg.V48N3p95

  15. [15]

    Anthony Poon, Vaidehi Hussain, Julia Loughman, Ariel C. Avgar, Madeline Sterling, and Nicola Dell. 2021. Computer-Mediated Peer Support Needs of Home Care Workers: Emotional Labor & the Politics of Professionalism. Proc. ACM Hum.-Comput. Interact. 5, CSCW2 (Oct. 2021), 336:1–336:32. doi:10.1145/3476077

  16. [16]

    Reham A. Hameed Shalaby and Vincent I. O. Agyapong. 2020. Peer Support in Mental Health: Literature Review. JMIR Mental Health 7, 6 (June 2020), e15572. doi:10.2196/15572

  17. [17]

    Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, and Tim Althoff. 2023. Human–AI Collaboration Enables More Empathic Conversations in Text-Based Peer-to-Peer Mental Health Support. Nature Machine Intelligence 5, 1 (Jan. 2023), 46–57. doi:10.1038/s42256-022-00593-2

  19. [19]

    James M. Shultz and David Forbes. 2014. Psychological First Aid: Rapid Proliferation and the Search for Evidence. Disaster Health 2, 1 (2014), 3–12. doi:10.4161/dish.26006

  20. [20]

    "I Said Things I Needed to Hear Myself": Peer Support as an Emotional, Organisational, and Sociotechnical Practice in Singapore

    Kellie Yu Hui Sim and Kenny Tsu Wei Choo. 2025. "I Said Things I Needed to Hear Myself": Peer Support as an Emotional, Organisational, and Sociotechnical Practice in Singapore. arXiv:2506.09362 [cs] doi:10.48550/arXiv.2506.09362

  21. [21]

    "Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions

    Kellie Yu Hui Sim, Roy Ka-Wei Lee, and Kenny Tsu Wei Choo. 2025. "Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions. arXiv:2506.09354 [cs] doi:10.48550/arXiv. 2506.09354

  22. [22]

    Phyllis Solomon. 2004. Peer Support/Peer Provided Services Underlying Processes, Benefits, and Critical Ingredients. Psychiatric Rehabilitation Journal 27, 4 (2004), 392–401. doi:10.2975/27.2004.392.401

  23. [23]

    Inhwa Song, Sachin R Pendse, Neha Kumar, and Munmun De Choudhury. 2025. The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support. Proc. ACM Hum.-Comput. Interact. 9, 7 (Oct. 2025), CSCW249:1–CSCW249:29. doi:10.1145/3757430

  24. [24]

    Ian Steenstra, Farnaz Nouraei, and Timothy Bickmore. 2025. Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, 1–22. doi:10.1145/3706598.3714014

  25. [25]

    M. Subramaniam, E. Abdin, L. Picco, S. Pang, S. Shafie, J. A. Vaingankar, K. W. Kwok, K. Verma, and S. A. Chong. 2017. Stigma towards People with Mental Disorders and Its Components – a Perspective from Multi-Ethnic Singapore. Epidemiology and Psychiatric Sciences 26, 4 (Aug. 2017), 371–382. doi:10.1017/S2045796016000159

  26. [26]

    Tony Wang, Amy S Bruckman, and Diyi Yang. 2025. The Practice of Online Peer Counseling and the Potential for AI-Powered Support Tools. Proceedings of the ACM on Human-Computer Interaction 9, 2 (May 2025), 1–33. doi:10.1145/3711089

  27. [27]

    Yizhe Yang, Palakorn Achananuparp, Heyan Huang, Jing Jiang, Nicholas Gabriel Lim, Cameron Tan Shi Ern, Phey Ling Kit, Jenny Giam Xiuhui, John Pinto, and Ee-Peng Lim. 2025. Consistent Client Simulation for Motivational Interviewing-based Counseling. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long P...

  28. [28]

    Zheng Yao, Haiyi Zhu, and Robert E. Kraut. 2022. Learning to Become a Volunteer Counselor: Lessons from a Peer-to-Peer Mental Health Community. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (Nov. 2022), 1–24. doi:10.1145/3555200

  29. [29]

    GeckHong Yeo, Gladys Loo, Matt Oon, Rachel Pang, and Dean Ho. 2023. A Digital Peer Support Platform to Translate Online Peer Support for Emerging Adult Mental Well-being: Randomized Controlled Trial. JMIR Mental Health 10, 1 (April 2023), e43956. doi:10.2196/43956

  30. [30]

    Qi Yuan, Edimansyah Abdin, Louisa Picco, Janhavi Ajit Vaingankar, Shazana Shahwan, Anitha Jeyagurunathan, Vathsala Sagayadevan, Saleha Shafie, Jenny Tay, Siow Ann Chong, and Mythily Subramaniam. 2016. Attitudes to Mental Illness and Its Demographic Correlates among General Population in Singapore. PLOS ONE 11, 11 (Nov. 2016), e0167297. doi:10.1371/journal....