pith. machine review for the scientific record.

arxiv: 2506.09354 · v3 · submitted 2025-06-11 · 💻 cs.HC · cs.AI

Recognition: unknown

"Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions

Authors on Pith no claims yet
classification 💻 cs.HC cs.AI
keywords: peer support, training, care, experts, health, interactions, mental
0 comments
original abstract

Mental health is a growing global concern, prompting interest in AI-driven solutions to expand access to psychosocial support. Peer support, grounded in lived experience, offers a valuable complement to professional care. However, variability in training, effectiveness, and definitions raises concerns about quality, consistency, and safety. Large Language Models (LLMs) present new opportunities to enhance peer support, particularly in real-time, text-based interactions. We present and evaluate an AI-supported system comprising an LLM-simulated distressed client, context-sensitive LLM-generated suggestions, and real-time emotion visualisations. Two mixed-methods studies with 12 peer supporters and 6 mental health professionals (i.e., experts) examined the system's effectiveness and implications for practice. Both groups recognised its potential to enhance training and improve interaction quality. However, a key tension emerged: while peer supporters engaged meaningfully, experts consistently flagged critical issues in peer supporter responses, such as missed distress cues and premature advice-giving. This misalignment highlights potential limitations in current peer support training, especially in emotionally charged contexts where safety and fidelity to best practices are essential. Our findings underscore the need for standardised, psychologically grounded training, especially as peer support scales globally. They also demonstrate how LLM-supported systems can scaffold this development, if designed with care and guided by expert oversight. This work contributes to emerging conversations on responsible AI integration in mental health and the evolving role of LLMs in augmenting peer-delivered care.

This paper has not been read by Pith yet.

discussion (0)

Anyone can read papers without signing in; sign in with ORCID, Apple, or X to comment.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. "I'm Not Able to Be There for You": Emotional Labour, Responsibility, and AI in Peer Support

    cs.HC 2026-04 unverdicted novelty 5.0

    Peer supporters bear concentrated emotional labor from institutional ambiguity and judge AI by its effects on redistributing responsibility and risk within fragile support roles.