Pith · machine review for the scientific record

arxiv: 2603.07956 · v1 · submitted 2026-03-09 · 💻 cs.HC


From Daily Song to Daily Self: Supporting Reflective Songwriting of Deaf and Hard-of-Hearing Individuals through Generative Music AI


Pith reviewed 2026-05-15 15:30 UTC · model grok-4.3

classification 💻 cs.HC
keywords: generative AI · songwriting · Deaf and Hard-of-Hearing · emotional reflection · self-discovery · reflective journaling · music AI · human-computer interaction

The pith

Ongoing songwriting with a generative AI system produces emotional growth in self-insight, emotion regulation, and self-care attitudes for Deaf and Hard-of-Hearing users.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents SoulNote as a generative AI tool that lets Deaf and Hard-of-Hearing individuals write songs iteratively over multiple sessions, treating songwriting as a form of daily reflective journaling. Design workshops and a multi-session diary study show that repeated engagement helps participants gain clearer views of their own emotions, manage those emotions more effectively, and adopt more constructive daily habits around emotional awareness and self-care. This approach extends beyond one-time creative sessions by sustaining engagement and allowing users new to songwriting to express personal stories through music. A reader would care if the findings hold because they point to a concrete way generative tools can turn creative activity into ongoing self-reflection for a community often excluded from standard music practices.

Core claim

The central claim is that ongoing songwriting with SoulNote facilitated emotional growth across three dimensions: self-insight, emotion regulation, and everyday attitudes toward emotions and self-care. This outcome emerged from a user-centered process that included a design workshop, a preliminary study, and a multi-session diary study in which participants used the system repeatedly to build songs grounded in their personal experiences.

What carries the argument

SoulNote, a generative AI system that supports iterative songwriting by allowing users to refine and extend musical ideas across sessions based on personal input.

If this is right

  • Songwriting can function as an extended music-based journaling practice that sustains emotional reflection beyond single sessions.
  • Generative AI expands access to songwriting for users unfamiliar with it, enabling them to convey personal narratives over time.
  • Creative expression supported by AI can be transformed into a daily practice of self-discovery and reflection for marginalized communities.
  • Multi-session designs reveal emotional benefits that single-session evaluations miss.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar iterative AI tools could be tested for emotional support in other creative domains such as visual art or poetry for the same user group.
  • Real-world deployment without study incentives might show different patterns of sustained use or different emotional outcomes.
  • Incorporating explicit user controls over AI generation parameters could strengthen the observed benefits by reducing frustration with output variation.

Load-bearing premise

The multi-session diary study and participant self-reports isolate the effects of SoulNote from study participation itself, variations in AI output quality, or participants' prior interest in songwriting.

What would settle it

A follow-up study measuring the same emotional growth dimensions in a matched control group that uses non-AI journaling tools over the same number of sessions: comparable gains in the control group would undercut the claim, while a gap favoring SoulNote would support it.
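As a rough illustration of what such a follow-up would require, a standard two-sample power calculation gives the per-group sample size needed to detect a between-group difference in gain scores. This is a minimal sketch using the normal approximation; the effect sizes below are illustrative assumptions, not numbers from the paper.

```python
import math
from scipy.stats import norm

def per_group_n(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group n for a two-sided, two-sample comparison of gain scores,
    using the normal approximation: n = 2 * (z_{1-a/2} + z_{power})^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = norm.ppf(power)           # 0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Hypothetical effect sizes (Cohen's d) -- assumptions for illustration only.
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: about {per_group_n(d)} participants per group")
```

A t-distribution correction (as in statsmodels' `TTestIndPower`) adds a participant or two, but the approximation is enough to show that detecting even a medium effect needs far more participants per arm than a typical diary study enrolls.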

Figures

Figures reproduced from arXiv: 2603.07956 by Eun Young Lee, Jaeyoung Moon, Jennifer G. Kim, Jin-Hyuk Hong, Jinyoung Yoo, Yoonjae Kim, Youjin Choi.

Figure 1: Overview of the research design and procedures.
Figure 2: Interfaces and features of SoulNote. (A) Conversation interface for songwriting with four dialogue phases (highlights indicate CA strategies such as supportive and unembellished response (a), imagery-based questioning (b), and context-based music style recommendation (c)), (B) Interactive lyrics editing interface for modifying CA-generated lyrics, (C) Music appreciation interface with visual assistance, a…
Figure 3: Overall process and theory of change in CA-assisted songwriting. The upper layer (A) represents user-side psychological processes associated with interacting with the CA, organized into three dimensions: developing self-insight, experimenting with emotion regulation strategies, and shifts in everyday attitudes and behaviors. To complement these qualitatively derived mechanisms, we also analyzed behavioral …
Figure 4: Individual songwriting patterns of DHH participants across 12 sessions. The patterns are categorized into five topics, …
Figure 5: Questionnaire results of the comparative experiment across four conditions. Values in parentheses indicate standard deviations. Between-condition differences were assessed using a one-way ANOVA, followed by Tukey's HSD post-hoc tests for pairwise comparisons (*p < 0.05, **p < 0.01, ***p < 0.001).
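Figure 5 describes a one-way ANOVA followed by Tukey's HSD post-hoc tests across four conditions. The same style of analysis can be sketched on synthetic data; everything here (condition labels, score distributions, group sizes) is invented for illustration, and Bonferroni-corrected pairwise t-tests stand in for Tukey's HSD to keep the dependencies to SciPy.

```python
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

# Synthetic Likert-style questionnaire scores for four hypothetical
# conditions -- illustrative only, not the paper's data.
rng = np.random.default_rng(42)
groups = {
    "cond1": rng.normal(5.6, 0.7, 20),
    "cond2": rng.normal(5.0, 0.7, 20),
    "cond3": rng.normal(4.4, 0.7, 20),
    "cond4": rng.normal(4.1, 0.7, 20),
}

# Omnibus test: is there any between-condition difference at all?
f_stat, p_omnibus = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4g}")

# Post-hoc pairwise comparisons with a Bonferroni family-wise correction
# (Tukey's HSD, as reported in the figure, is slightly less conservative).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p_adj = {min(1.0, p * len(pairs)):.4g}")
```

The omnibus test guards the post-hoc comparisons: pairwise p-values are only worth reading when the ANOVA itself rejects the null of equal condition means.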
Original abstract

The rapid advancement of generative AI (GenAI) is expanding access to songwriting, offering a new medium of self-expression for Deaf and Hard-of-Hearing (DHH) individuals. However, emerging technologies that support DHH individuals in expressing themselves through music have largely been evaluated in single-session settings and often fall short in helping users unfamiliar with songwriting convey personal narratives or sustain engagement over time. This paper explores songwriting as an extended, music-based journaling practice that supports sustained emotional reflection over multiple sessions. We introduce SoulNote, a GenAI system enabling DHH individuals to engage in iterative songwriting. Grounded in user-centered design, including a design workshop, a preliminary study, and a multi-session diary study, our findings show that ongoing songwriting with SoulNote facilitated emotional growth across three dimensions: self-insight, emotion regulation, and everyday attitudes toward emotions and self-care. Overall, this work demonstrates how GenAI can support marginalized communities by transforming creative expression into a daily practice of self-discovery and reflection.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 0 minor

Summary. The manuscript introduces SoulNote, a generative AI system for iterative songwriting targeted at Deaf and Hard-of-Hearing (DHH) individuals. Grounded in a user-centered process (design workshop, preliminary study, and multi-session diary study), it claims that ongoing use of the system supports emotional growth across three dimensions: self-insight, emotion regulation, and everyday attitudes toward emotions and self-care, positioning songwriting as a sustained reflective practice.

Significance. If the causal claims hold, the work would meaningfully extend HCI and accessible computing by showing how GenAI can convert creative tools into daily self-reflection mechanisms for marginalized users, moving beyond single-session evaluations. It offers a concrete example of music-based journaling that could inform designs for sustained engagement and emotional support.

major comments (1)
  1. [Diary study] Diary study section (as described in abstract and methods): the design includes no control arm, no pre/post standardized emotional measures independent of the tool, and no reported stratification by prior songwriting experience. This leaves the attribution of the three emotional-growth dimensions to SoulNote vulnerable to confounds from study participation effects or pre-existing interest, directly weakening the central claim.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their detailed review and valuable comments. We have carefully considered the concern about the diary study design and provide our response below. We believe the revisions will address the issues raised while preserving the integrity of our exploratory findings.

Point-by-point responses
  1. Referee: [Diary study] Diary study section (as described in abstract and methods): the design includes no control arm, no pre/post standardized emotional measures independent of the tool, and no reported stratification by prior songwriting experience. This leaves the attribution of the three emotional-growth dimensions to SoulNote vulnerable to confounds from study participation effects or pre-existing interest, directly weakening the central claim.

    Authors: Thank you for highlighting this important methodological consideration. Our diary study was designed as a qualitative, exploratory investigation to understand how DHH users engage with SoulNote over multiple sessions in their daily lives, which is a key contribution given the scarcity of longitudinal studies in this area. We did not include a control arm or standardized measures because the focus was on capturing rich, contextual data on the songwriting process and its perceived impacts through diaries and interviews, rather than testing efficacy in a controlled manner. This choice was informed by the user-centered design process and the need to prioritize accessibility and participant burden for DHH individuals. Nevertheless, we recognize that this leaves room for alternative explanations, such as effects from participating in the study itself or participants' pre-existing motivations. In the revised manuscript, we will add a Limitations section that explicitly addresses these potential confounds, clarifies the qualitative nature of the evidence for the three dimensions, and discusses implications for future controlled studies. We will also ensure that participant demographics, including any information on prior songwriting experience, are fully reported in the Methods section. These revisions will strengthen the paper by providing a more balanced presentation of the results. revision: yes

Circularity Check

0 steps flagged

No significant circularity: empirical claims rest on user study data

Full rationale

The paper presents findings from a user-centered design process including a workshop, preliminary study, and multi-session diary study. The central claim of emotional growth across three dimensions is attributed directly to qualitative analysis of participant self-reports rather than any mathematical derivation, fitted parameters renamed as predictions, or self-citation chains that reduce the result to its own inputs. No equations, uniqueness theorems, or ansatzes are invoked; the evidence rests on user-study data judged against the usual standards of empirical HCI reporting.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The paper rests on the domain assumption that generative music models can produce usable song material for DHH users and that repeated creative interaction produces measurable emotional change; no free parameters or new physical entities are introduced.

axioms (1)
  • Domain assumption: generative music AI can produce outputs that DHH users find suitable for personal songwriting and reflection. Invoked in the design and evaluation of SoulNote.
invented entities (1)
  • SoulNote (no independent evidence). Purpose: GenAI system supporting iterative songwriting for DHH users. New system presented in the paper.

pith-pipeline@v0.9.0 · 5517 in / 1256 out tokens · 32858 ms · 2026-05-15T15:30:04.698803+00:00 · methodology


Reference graph

Works this paper leans on

136 extracted references · 136 canonical work pages · 1 internal anchor
