"I'm Not Able to Be There for You": Emotional Labour, Responsibility, and AI in Peer Support
Pith reviewed 2026-05-10 12:22 UTC · model grok-4.3
The pith
Peer supporters judge AI by how it redistributes their emotional labour and accountability rather than by empathy or technical skill.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Interviews show that lived experience, moral commitment, and self-identification shape who participates in peer support while leaving scope, authority, and accountability unevenly defined. Institutional ambiguity concentrates emotional labour, boundary-setting, and escalation of responsibility at the individual level without consistent organisational scaffolding. Participants evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability within already fragile support roles. These findings motivate design futures for an AI-supported peer support ecosystem that treats responsibility as a central design concern.
What carries the argument
Institutional ambiguity in peer support, which concentrates emotional labour, boundary-setting, and responsibility on individuals and serves as the standard by which AI is judged.
Load-bearing premise
That the experiences of the interviewed peer supporters represent broader peer support practice, and that institutional ambiguity is the main reason emotional labour concentrates at the individual level.
What would settle it
A larger study finding that peer supporters rate AI mainly on empathetic quality or accuracy, or that most peer support settings have clear shared accountability structures and organisational guidelines, would challenge the central account.
read the original abstract
Peer support is increasingly positioned as a scalable response to gaps in mental health care, particularly in digitally mediated settings, yet what counts as peer support and how responsibility is distributed remain unevenly defined in practice. Drawing on interviews with peer supporters, we show how lived experience, moral commitment, and self-identification shape participation while blurring expectations around scope, authority, and accountability. Institutional ambiguity concentrates emotional labour, boundary-setting, and escalation of responsibility at the individual level, often without consistent organisational scaffolding. Participants evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability within already fragile support roles. Building on these findings, we outline design futures for an AI-supported peer support ecosystem that foregrounds responsibility as a central design concern rather than treating AI as a mechanism of scale.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper draws on interviews with peer supporters to argue that institutional ambiguity in peer support concentrates emotional labour, boundary-setting, and responsibility at the individual level. It claims participants evaluate AI primarily through its effects on redistributing risk, labour, and accountability within fragile support roles, rather than through empathy or technical capability, and proposes design futures that foreground responsibility as a central concern for AI-supported peer support ecosystems.
Significance. If the interpretive findings hold, the work contributes to HCI and CSCW by shifting focus from empathy-centric AI design in mental health to responsibility, risk, and labour redistribution in already ambiguous support structures. This could inform more accountable AI interventions in peer support and related domains.
major comments (2)
- [Findings / §4 (or equivalent thematic analysis section)] The central claim that participants 'evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability' (abstract and likely §4 findings) requires explicit evidence that this priority ordering emerges from the data rather than interview framing or selective theme emphasis. The manuscript should include the interview protocol, sample questions, and any negative-case analysis or comparative coding that rules out question-induced bias toward responsibility themes.
- [Methods section] Methods details are insufficient to assess soundness: the abstract and methods section provide no information on sample size, recruitment strategy, participant demographics, interview duration, transcription/analysis approach (e.g., reflexive thematic analysis, grounded theory), or how quotes were selected and anonymised. These are load-bearing for evaluating whether the reported themes support the stated conclusions about institutional ambiguity and AI evaluation.
minor comments (2)
- [Introduction] Clarify the distinction between 'peer support' as practiced by participants versus institutional definitions early in the introduction to avoid ambiguity for readers unfamiliar with the domain.
- [Design implications / Discussion] The design futures section would benefit from more concrete examples or scenarios illustrating how responsibility could be operationalised in AI tools, to make the recommendations actionable.
Simulated Author's Rebuttal
We thank the referee for these constructive comments, which correctly identify gaps in methodological transparency that limit assessment of our claims. We will revise the manuscript to provide the requested details and strengthen the evidential grounding for our interpretive findings.
read point-by-point responses
-
Referee: [Findings / §4 (or equivalent thematic analysis section)] The central claim that participants 'evaluated AI not primarily through empathy or technical capability, but through how technologies redistribute risk, labour, and accountability' (abstract and likely §4 findings) requires explicit evidence that this priority ordering emerges from the data rather than interview framing or selective theme emphasis. The manuscript should include the interview protocol, sample questions, and any negative-case analysis or comparative coding that rules out question-induced bias toward responsibility themes.
Authors: We accept that the current manuscript does not sufficiently demonstrate how the priority ordering of themes was derived from the data. In revision we will add the complete interview protocol and sample questions as an appendix. We will also expand the findings and methods sections to describe our reflexive thematic analysis process, showing the iterative coding steps and providing concrete examples of how responsibility/risk/accountability codes were generated across the dataset. Where participants raised empathy or technical capability as primary concerns, we will explicitly discuss these as negative cases and explain their relation to the dominant themes. This addition will make clear that the reported emphasis reflects patterns in participant accounts rather than question framing. revision: yes
-
Referee: [Methods section] Methods details are insufficient to assess soundness: the abstract and methods section provide no information on sample size, recruitment strategy, participant demographics, interview duration, transcription/analysis approach (e.g., reflexive thematic analysis, grounded theory), or how quotes were selected and anonymised. These are load-bearing for evaluating whether the reported themes support the stated conclusions about institutional ambiguity and AI evaluation.
Authors: We agree the methods section is currently underspecified. The revised manuscript will expand this section to report sample size, recruitment channels, participant demographics (with a summary table), average interview duration, transcription procedures, the specific analytic approach employed, criteria used to select and present quotes, and anonymisation steps. These additions will allow readers to evaluate the support for our claims regarding institutional ambiguity and participants' AI evaluations. revision: yes
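The comparative-coding audit the referee asks for can be illustrated with a minimal sketch. Everything below is hypothetical: the prompt IDs (q_ai_general, q_role, q_ai_risk) and theme codes are invented for illustration, not taken from the authors' codebook, and real input would come from an export of their qualitative-coding software.

```python
from collections import Counter

# Hypothetical coded excerpts: (interview prompt id, theme code).
coded_excerpts = [
    ("q_ai_general", "responsibility"),
    ("q_ai_general", "empathy"),
    ("q_ai_general", "responsibility"),
    ("q_role", "emotional_labour"),
    ("q_role", "responsibility"),
    ("q_ai_risk", "responsibility"),
]

# Count how often each theme appears under each prompt. If 'responsibility'
# only ever surfaces under prompts that explicitly ask about risk, that is
# evidence of question-induced bias; if it recurs across unrelated prompts,
# the reported priority ordering more plausibly reflects participant accounts.
by_prompt = {}
for prompt, code in coded_excerpts:
    by_prompt.setdefault(prompt, Counter())[code] += 1

for prompt, counts in sorted(by_prompt.items()):
    print(prompt, dict(counts))
```

A tabulation like this would not replace negative-case analysis, but it makes the cross-prompt distribution of responsibility codes inspectable by readers.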
Circularity Check
No significant circularity; interpretive claims grounded in interview data
full rationale
This is a qualitative HCI paper whose central claims derive from thematic analysis of interviews with peer supporters. There are no equations, fitted parameters, derivations, or mathematical predictions. The key assertion that participants evaluated AI through the redistribution of risk, labour, and accountability is presented as emerging from participant data rather than reducing by construction to self-citations, prior author work, or definitional inputs. No load-bearing step matches any of the enumerated circularity patterns; the analysis rests on its interview data and can be judged against the usual standards of interview-based research.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Qualitative interviews with peer supporters can surface generalisable insights into emotional labour, responsibility distribution, and technology evaluation.
Reference graph
Works this paper leans on
-
[1]
Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman. 2023. WhisperX: Time-Accurate Speech Transcription of Long-Form Audio. INTERSPEECH 2023 (2023).
-
[2]
Ananya Bhattacharjee, Joseph Jay Williams, Miranda Beltzer, Jonah Meyerhoff, Harsh Kumar, Haochen Song, David C. Mohr, Alex Mariakakis, and Rachel Kornfield. 2025. Investigating the Role of Situational Disruptors in Engagement with Digital Mental Health Tools. Proc. ACM Hum.-Comput. Interact. 9, 7 (Oct. 2025), CSCW306:1–CSCW306:35. doi:10.1145/3757487
-
[3]
Tianying Chen, Kristy Zhang, Robert E. Kraut, and Laura Dabbish. 2021. Scaffolding the Online Peer-support Experience: Novice Supporters' Strategies and Challenges. Proc. ACM Hum.-Comput. Interact. 5, CSCW2 (Oct. 2021), 366:1–366:30. doi:10.1145/3479510
-
[4]
Sander de Jong, Ville Paananen, Benjamin Tag, and Niels van Berkel. 2025. Cognitive Forcing for Better Decision-Making: Reducing Overreliance on AI Systems Through Partial Explanations. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW048:1–CSCW048:30. doi:10.1145/3710946
-
[5]
Carmen C. D. Franke, Barbara C. Paton, and Lee-Anne J. Gassner. 2010. Implementing Mental Health Peer Support: A South Australian Experience. Australian Journal of Primary Health 16, 2 (2010), 179–186. doi:10.1071/py09067
Emotional Labour, Responsibility, and AI in Peer Support. DIS Companion '26, June 13–17, 2026, Singapore, Singapore.
-
[6]
Nicola K Gale, Gemma Heath, Elaine Cameron, Sabina Rashid, and Sabi Redwood. 2013. Using the Framework Method for the Analysis of Qualitative Data in Multi-Disciplinary Health Research. BMC Medical Research Methodology 13, 1 (Sept. 2013), 117. doi:10.1186/1471-2288-13-117
-
[8]
Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, and Diyi Yang. 2025. Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW095:1–CSCW095:45. doi:10.1145/3710993
-
[9]
Zainab Iftikhar, Yumeng Ma, and Jeff Huang. 2023. "Together but Not Together": Evaluating Typing Indicators for Interaction-Rich Communication. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, 1–12. doi:10.1145/3544548.3581248
-
[10]
Zainab Iftikhar, Sean Ransom, Amy Xiao, and Jeff Huang. 2024. Therapy as an NLP Task: Psychologists’ Comparison of LLMs and Human Peers in CBT. arXiv:2409.02244 [cs] doi:10.48550/arXiv.2409.02244
-
[11]
Meeyun Kim, Koustuv Saha, Munmun De Choudhury, and Daejin Choi. 2023. Supporters First: Understanding Online Social Support on Mental Health from a Supporter Perspective. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 2023), 1–28. doi:10.1145/3579525
-
[12]
Ernesto Isaac Lara, Laura Bond, Kathryn O'Neill, Emily Ruiz, and Vikram Patel. 2026. "Peer with a P versus a p": A Mixed-Methods Study of Peer Support Training, Service Delivery, and Supervision across Global Contexts. PLOS Mental Health 3, 1 (Jan. 2026), e0000447. doi:10.1371/journal.pmen.0000447
-
[14]
Ying Ying Lee, Suying Ang, Hong Choon Chua, and Mythily Subramaniam. 2019. Peer Support in Mental Health: A Growing Movement in Singapore. Annals of the Academy of Medicine, Singapore 48, 3 (March 2019), 95–97. doi:10.47102/annals-acadmedsg.V48N3p95
-
[15]
Anthony Poon, Vaidehi Hussain, Julia Loughman, Ariel C. Avgar, Madeline Sterling, and Nicola Dell. 2021. Computer-Mediated Peer Support Needs of Home Care Workers: Emotional Labor & the Politics of Professionalism. Proc. ACM Hum.-Comput. Interact. 5, CSCW2 (Oct. 2021), 336:1–336:32. doi:10.1145/3476077
-
[16]
Reham A. Hameed Shalaby and Vincent I. O. Agyapong. 2020. Peer Support in Mental Health: Literature Review. JMIR Mental Health 7, 6 (June 2020), e15572. doi:10.2196/15572
-
[17]
Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, and Tim Althoff. 2023. Human–AI Collaboration Enables More Empathic Conversations in Text-Based Peer-to-Peer Mental Health Support. Nature Machine Intelligence 5, 1 (Jan. 2023), 46–57. doi:10.1038/s42256-022-00593-2
-
[19]
James M. Shultz and David Forbes. 2014. Psychological First Aid: Rapid Proliferation and the Search for Evidence. Disaster Health 2, 1 (2014), 3–12. doi:10.4161/dish.26006
-
[20]
Kellie Yu Hui Sim and Kenny Tsu Wei Choo. 2025. "I Said Things I Needed to Hear Myself": Peer Support as an Emotional, Organisational, and Sociotechnical Practice in Singapore. arXiv:2506.09362 [cs] doi:10.48550/arXiv.2506.09362
-
[21]
Kellie Yu Hui Sim, Roy Ka-Wei Lee, and Kenny Tsu Wei Choo. 2025. "Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions. arXiv:2506.09354 [cs] doi:10.48550/arXiv.2506.09354
-
[22]
Phyllis Solomon. 2004. Peer Support/Peer Provided Services Underlying Processes, Benefits, and Critical Ingredients. Psychiatric Rehabilitation Journal 27, 4 (2004), 392–401. doi:10.2975/27.2004.392.401
-
[23]
Inhwa Song, Sachin R Pendse, Neha Kumar, and Munmun De Choudhury. 2025. The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support. Proc. ACM Hum.-Comput. Interact. 9, 7 (Oct. 2025), CSCW249:1–CSCW249:29. doi:10.1145/3757430
-
[24]
Ian Steenstra, Farnaz Nouraei, and Timothy Bickmore. 2025. Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, 1–22. doi:10.1145/3706598.3714014
-
[25]
M. Subramaniam, E. Abdin, L. Picco, S. Pang, S. Shafie, J. A. Vaingankar, K. W. Kwok, K. Verma, and S. A. Chong. 2017. Stigma towards People with Mental Disorders and Its Components – a Perspective from Multi-Ethnic Singapore. Epidemiology and Psychiatric Sciences 26, 4 (Aug. 2017), 371–382. doi:10.1017/S2045796016000159
-
[26]
Tony Wang, Amy S Bruckman, and Diyi Yang. 2025. The Practice of Online Peer Counseling and the Potential for AI-Powered Support Tools. Proceedings of the ACM on Human-Computer Interaction 9, 2 (May 2025), 1–33. doi:10.1145/3711089
-
[27]
Yizhe Yang, Palakorn Achananuparp, Heyan Huang, Jing Jiang, Nicholas Gabriel Lim, Cameron Tan Shi Ern, Phey Ling Kit, Jenny Giam Xiuhui, John Pinto, and Ee-Peng Lim. 2025. Consistent Client Simulation for Motivational Interviewing-based Counseling. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long P...
-
[28]
Zheng Yao, Haiyi Zhu, and Robert E. Kraut. 2022. Learning to Become a Volunteer Counselor: Lessons from a Peer-to-Peer Mental Health Community. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (Nov. 2022), 1–24. doi:10.1145/3555200
-
[29]
GeckHong Yeo, Gladys Loo, Matt Oon, Rachel Pang, and Dean Ho. 2023. A Digital Peer Support Platform to Translate Online Peer Support for Emerging Adult Mental Well-being: Randomized Controlled Trial. JMIR Mental Health 10, 1 (April 2023), e43956. doi:10.2196/43956
-
[30]
Qi Yuan, Edimansyah Abdin, Louisa Picco, Janhavi Ajit Vaingankar, Shazana Shahwan, Anitha Jeyagurunathan, Vathsala Sagayadevan, Saleha Shafie, Jenny Tay, Siow Ann Chong, and Mythily Subramaniam. 2016. Attitudes to Mental Illness and Its Demographic Correlates among General Population in Singapore. PLOS ONE 11, 11 (Nov. 2016), e0167297. doi:10.1371/journal....