pith. machine review for the scientific record.

arxiv: 2601.22452 · v2 · submitted 2026-01-30 · 💻 cs.HC · cs.AI


Does My Chatbot Have an Agenda? Understanding Human and AI Agency in Human-Human-like Chatbot Interaction


Pith reviewed 2026-05-16 09:50 UTC · model grok-4.3

classification 💻 cs.HC cs.AI
keywords human-AI interaction · perceived agency · conversational agents · chatbot design · longitudinal study · co-construction · user control

The pith

Agency in human-chatbot conversations is co-constructed turn by turn rather than owned by either side.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines who holds control when people talk to chatbots built to feel like companions. In a month-long study, 22 adults chatted daily with a custom LLM companion called Day, then reviewed transcripts and discussed strategies in interviews. The authors conclude that neither the human nor the AI holds fixed agency; instead, boundaries and intentions are negotiated exchange by exchange. They present a 3-by-4 framework that tracks how humans, the AI, or both perform intention, execution, adaptation, and delimitation. The work matters because clearer understanding of these shared dynamics can guide designs that give users more visible influence over ongoing conversations.

Core claim

Agency manifests as an emergent, shared experience: as participants set boundaries and the AI steered intentions, control was co-constructed turn-by-turn. We introduce a 3-by-4 framework mapping actors (Human, AI, Hybrid) by their action (Intention, Execution, Adaptation, Delimitation), modulated by individual and environmental factors.

What carries the argument

The 3-by-4 framework that classifies agency by three actor types (Human, AI, Hybrid) performing four actions (Intention, Execution, Adaptation, Delimitation).
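The framework as summarized above can be sketched as a simple classification grid. This is an illustrative reconstruction, not the authors' code: the enum labels mirror the paper's actor and action names, but `classify_turn` and the example codings are hypothetical.

```python
from enum import Enum

class Actor(Enum):
    HUMAN = "Human"
    AI = "AI"
    HYBRID = "Hybrid"

class Action(Enum):
    INTENTION = "Intention"
    EXECUTION = "Execution"
    ADAPTATION = "Adaptation"
    DELIMITATION = "Delimitation"

# A coded conversation turn maps to one (actor, action) cell
# of the 3-by-4 grid (3 actors x 4 actions = 12 cells).
def classify_turn(actor: Actor, action: Action) -> tuple:
    return (actor.value, action.value)

# Hypothetical codings, not drawn from the paper's data:
turns = [
    classify_turn(Actor.HUMAN, Action.DELIMITATION),  # user sets a boundary
    classify_turn(Actor.AI, Action.INTENTION),        # chatbot steers the topic
    classify_turn(Actor.HYBRID, Action.ADAPTATION),   # both adjust mid-exchange
]
```

The grid itself carries no weights or parameters; it is a coding scheme, which is consistent with the ledger below reporting zero free parameters.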

If this is right

  • Designers should build translucent systems that reveal AI intentions on user demand rather than hiding them.
  • Conversational agents can be made agency self-aware by tracking and surfacing the four actions in the framework.
  • Individual differences in users and changes in conversation context will shift which actor dominates each action.
  • Sustained, longitudinal interactions reveal agency patterns that single-session tests miss.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same co-construction logic may apply to voice assistants or collaborative writing tools where control shifts rapidly.
  • Future systems could log the four actions in real time to let users replay and adjust past turns.
  • Training data for chatbots might be adjusted to balance the AI's steering tendencies against user delimitation signals.
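The second extension above — logging the four actions in real time so users can replay past turns — could be prototyped as an append-only event log. Everything in this sketch (class names, fields, example entries) is hypothetical and not part of the paper's system.

```python
from dataclasses import dataclass, field

@dataclass
class AgencyEvent:
    turn: int
    actor: str    # "Human", "AI", or "Hybrid"
    action: str   # "Intention", "Execution", "Adaptation", or "Delimitation"
    note: str = ""

@dataclass
class AgencyLog:
    events: list = field(default_factory=list)

    def record(self, turn: int, actor: str, action: str, note: str = "") -> None:
        self.events.append(AgencyEvent(turn, actor, action, note))

    def replay(self, up_to_turn: int) -> list:
        # Return the agency trace up to a given turn, so a user
        # could review who held which kind of control and when.
        return [e for e in self.events if e.turn <= up_to_turn]

# Hypothetical trace of two exchanges:
log = AgencyLog()
log.record(1, "AI", "Intention", "steered toward yesterday's topic")
log.record(2, "Human", "Delimitation", "declined to continue that topic")
```

Such a log would also support the paper's translucent-design argument: the trace stays hidden by default and is surfaced only on user demand.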

Load-bearing premise

Post-hoc interviews and strategy reveals with 22 participants accurately reflect real-time agency dynamics without distortion from the disclosure process or the specific chatbot design.

What would settle it

Record real-time conversations without later interviews or goal disclosures and check whether the observed turn-by-turn boundary setting and steering still produce the same patterns of co-constructed control.

Figures

Figures reproduced from arXiv: 2601.22452 by April Yi Wang, Bhada Yun, Evgenia Taranova.

Figure 1. Agency in Human-AI conversation. Color coding shows how conversational control shifts dynamically between …
Figure 2. Sample conversations from participants P8 and P9. Two conversation timelines showing message exchanges with …
Figure 3. Main chat interface with example of agency dynam…
Figure 4. (Stage 1) Highlighting interesting moments – Participants were presented with their full conversation history and …
Figure 5. (Stage 2) Cross-participant conversation excerpts – Participants reviewed anonymized conversations from other …
Figure 6. (Stage 3, Strategy Reveal) Participant Profiles – "Day" maintained detailed psychological profiles for each user.
Figure 7. (Stage 3, Strategy Reveal) Conversation Goals – "Day's" system included specific objectives for future conversations …
Figure 8. (Stage 3, Strategy Reveal) Communication Insights – "Day's" system included specific conversational strategies for each …
Figure 9. (Stage 3, Strategy Reveal) Shared memories – "Day's" memory system categorized significant conversations with labels …
Figure 10. (Stage 1) Conversation patterns interface – Participants reviewed their chat history through multiple visualizations: …
Figure 11. Descriptive analysis of pre- to post-study attitude shifts on AI friendship potential. Sankey diagram tracking …
Figure 12. Post-study responses on perceived agency and comfort with AI companionship. Horizontal stacked bar charts …
Original abstract

As AI chatbots shift from tools to companions, critical questions arise: who controls the conversation in human-AI chatrooms? This paper explores perceived human and AI agency in sustained conversation. We report a month-long longitudinal study with 22 adults who chatted with Day, an LLM companion we built, followed by a semi-structured interview with post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing Day's goal for each conversation. We discover agency manifests as an emergent, shared experience: as participants set boundaries and the AI steered intentions, control was co-constructed turn-by-turn. We introduce a 3-by-4 framework mapping actors (Human, AI, Hybrid) by their action (Intention, Execution, Adaptation, Delimitation), modulated by individual and environmental factors. We argue for translucent design (transparency-on-demand) and provide implications for agency self-aware conversational agents.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper reports a month-long longitudinal study with 22 adults who interacted with a custom LLM companion chatbot named Day. Through semi-structured interviews that included post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing the AI's goals, the authors claim that agency emerges as a shared, turn-by-turn co-construction between human and AI. They introduce a 3-by-4 framework mapping actors (Human, AI, Hybrid) to actions (Intention, Execution, Adaptation, Delimitation), modulated by individual and environmental factors, and advocate for translucent (transparency-on-demand) design in conversational agents.

Significance. If the empirical observations hold after addressing methodological concerns, the work contributes a qualitative framework for understanding perceived agency in sustained human-AI interactions, which could guide design of more self-aware companion chatbots. The longitudinal approach and focus on boundary-setting and intention-steering provide concrete examples that extend beyond one-shot interactions.

major comments (1)
  1. [Methods] Methods section (interview protocol and data analysis): The central claim that agency is co-constructed turn-by-turn depends on retrospective accounts collected after the strategy reveal. No pre-disclosure baseline measures, concurrent think-aloud protocols, or blinded independent coding of raw chat logs (independent of participant reinterpretation) are reported, leaving open whether the 3-by-4 framework captures observed dynamics or post-hoc sense-making induced by disclosure.
minor comments (1)
  1. [Results] The 3-by-4 framework would be clearer with an explicit table or diagram showing example turns from the chat logs mapped to each cell, rather than relying solely on narrative description.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their insightful comments on our paper. We address the methodological concern in detail below, providing clarifications and committing to revisions where appropriate.

Point-by-point responses
  1. Referee: [Methods] Methods section (interview protocol and data analysis): The central claim that agency is co-constructed turn-by-turn depends on retrospective accounts collected after the strategy reveal. No pre-disclosure baseline measures, concurrent think-aloud protocols, or blinded independent coding of raw chat logs (independent of participant reinterpretation) are reported, leaving open whether the 3-by-4 framework captures observed dynamics or post-hoc sense-making induced by disclosure.

    Authors: The referee correctly identifies a key aspect of our design. The interviews began with participants reviewing their full chat logs to identify and narrate specific turn-by-turn moments of boundary-setting and intention-steering; the strategy reveal occurred only after this elicitation phase to examine its effect on subsequent reflections. The 3-by-4 framework was developed through iterative thematic coding of these narrated moments, cross-referenced against the raw logs and compared across participants. We did not include pre-disclosure baselines or concurrent think-aloud protocols because the study prioritized ecological validity over a month-long period. We agree this leaves open the possibility of retrospective sense-making and will revise the Methods section to detail the exact interview sequence and add an explicit Limitations subsection on retrospective bias. We will also flag independent blinded coding of logs as valuable future work. This constitutes a partial revision.

Circularity Check

0 steps flagged

No circularity: framework derived from empirical observations without reduction to inputs or self-citations

full rationale

The paper reports a qualitative longitudinal study with 22 participants, deriving its 3-by-4 agency framework directly from chat logs, semi-structured interviews, and post-hoc elicitation. No equations, fitted parameters, predictions, or mathematical derivations appear in the provided text. The central claim of turn-by-turn co-construction is presented as an interpretive finding from the data rather than a quantity that reduces by construction to prior fits or self-citations. Any self-citations (if present in the full manuscript) are not invoked as load-bearing uniqueness theorems or ansatzes for the framework itself. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on the assumption that qualitative self-reports validly reveal agency dynamics and that the custom chatbot's behavior allows observable co-construction without introducing artifacts.

axioms (2)
  • domain assumption Participants' post-interview descriptions accurately reflect their lived experience of control during chats.
    The study uses interviews and strategy reveals to infer agency, assuming self-reports are reliable.
  • domain assumption The LLM companion can be configured to exhibit steerable intentions that participants can perceive and respond to.
    The chatbot was purpose-built for the study to enable observation of agency interactions.
invented entities (1)
  • 3-by-4 agency framework (no independent evidence)
    purpose: To categorize actors and actions in human-AI conversations.
    Newly proposed structure based on study observations.

pith-pipeline@v0.9.0 · 5469 in / 1420 out tokens · 41365 ms · 2026-05-16T09:50:52.120691+00:00 · methodology

