AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
Pith reviewed 2026-05-16 09:54 UTC · model grok-4.3
The pith
Thirteen of twenty participants concluded that an LLM understood their personal values after a month of casual chatbot use.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
After texting a value-aware chatbot for one month and completing a structured two-hour interview built on the Value-Alignment Perception Toolkit, thirteen of twenty participants came away convinced that the LLM had successfully extracted details of their values, could embody those values when making decisions, and could explain them back to the user.
What carries the argument
VAPT, the Value-Alignment Perception Toolkit, structures user evaluation around three capabilities: extracting value details from conversation, embodying them in simulated decisions, and explaining them back to the user.
If this is right
- Designers of conversational agents should add safeguards against creating false perceptions of value understanding.
- VAPT supplies a repeatable interview protocol for testing value alignment claims in text-based AI.
- As AI systems grow more capable at mimicking values, explicit evaluation methods become necessary to maintain transparency.
- The risk of weaponized empathy grows when users believe an agent understands their values without corresponding welfare alignment.
Where Pith is reading between the lines
- The same toolkit could be adapted to measure how long these convictions last after interaction ends.
- Similar structured evaluations might reveal whether users form comparable beliefs in non-text AI interfaces such as voice or image systems.
- If perceptions of value understanding prove common, regulators may need guidelines for labeling AI systems that simulate empathy.
Load-bearing premise
Users' self-reported convictions after a month of chatbot use and one interview accurately reflect an LLM's actual ability to extract, embody, or explain human values.
What would settle it
A follow-up test in which participants make real choices that depend on the values the AI claims to have extracted, followed by a check of whether those choices match the values reported in the VAPT interview.
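Such a behavioral check could be scored very simply. The sketch below is illustrative only: the data structures (values elicited in the interview, and the value each real-world choice expresses) and all labels are assumptions, not anything from the paper.

```python
# Hypothetical sketch: score how often a participant's real choices match
# the values the VAPT interview recorded for them. All names and data
# below are illustrative assumptions, not from the paper.

def value_match_rate(vapt_values, observed_choices):
    """Fraction of scenarios where the value expressed by the participant's
    actual choice appears among their VAPT-reported values."""
    if not observed_choices:
        return 0.0
    matches = sum(
        1 for expressed_value in observed_choices.values()
        if expressed_value in vapt_values
    )
    return matches / len(observed_choices)

# Example: values elicited in the interview vs. values expressed in choices.
vapt = {"honesty", "family", "independence"}
choices = {
    "job_offer": "independence",   # chose flexible remote role
    "gift_dilemma": "honesty",     # disclosed a pricing mistake
    "weekend_plan": "novelty",     # not among reported values
}
print(value_match_rate(vapt, choices))  # 2 of 3 scenarios match
```

A rate near chance would suggest the interview elicited perceptions rather than predictively accurate value models.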
Original abstract
Does AI understand human values? While this remains an open philosophical question, we take a pragmatic stance by introducing VAPT, the Value-Alignment Perception Toolkit, for studying how LLMs reflect people's values and how people judge those reflections. 20 participants texted a chatbot over a month, then completed a 2-hour interview with our toolkit evaluating AI's ability to extract (pull details regarding), embody (make decisions guided by), and explain (provide proof of) their values. 13 participants ultimately left our study convinced that AI can understand human values. Thus, we warn about "weaponized empathy": a design pattern that may arise in interactions with value-aware, yet welfare-misaligned conversational agents. VAPT offers a new way to evaluate value-alignment in AI systems. We also offer design implications to evaluate and responsibly build AI systems with transparency and safeguards as AI capabilities grow more inscrutable, ubiquitous, and posthuman into the future.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces VAPT (Value-Alignment Perception Toolkit) as a method for studying user perceptions of LLMs' ability to extract, embody, and explain human values from casual conversations. In a study with 20 participants who texted a chatbot for one month and then completed a 2-hour VAPT interview, 13 participants reported becoming convinced that AI can understand human values. The authors use this to warn about the risk of 'weaponized empathy' in value-aware conversational agents and provide design implications for transparent AI systems.
Significance. If the perceptual findings are taken at face value, the work contributes a new qualitative toolkit for HCI research on value alignment perceptions and surfaces a plausible design risk in future conversational agents. The contribution is primarily methodological and cautionary rather than establishing objective capabilities of LLMs.
major comments (3)
- [Results] Results section: The headline claim that 13 participants left convinced AI understands human values rests solely on self-reports after one month of chatbot interaction plus a single 2-hour VAPT session. There are no pre/post quantitative measures, no control condition, no external-rater validation of value extraction accuracy, and no behavioral tests to rule out novelty effects or demand characteristics.
- [Discussion] Discussion: The 'weaponized empathy' warning and associated design implications extrapolate from unvalidated user perceptions to claims about actual LLM value embodiment and misalignment risks, without evidence that chatbot outputs correctly reflected or were guided by participants' stated values.
- [Methods] Methods: The study design provides no objective metrics (e.g., alignment scores between LLM decisions and participant values or inter-rater agreement on extracted values) to corroborate the self-reported convictions, limiting the ability to distinguish perception from actual capability.
minor comments (2)
- [Abstract] Abstract: Clarify that the 13/20 figure reflects post-study self-reported convictions rather than verified LLM performance.
- [VAPT description] The description of VAPT components (extract, embody, explain) would benefit from a table or explicit operational definitions to improve reproducibility.
Simulated Author's Rebuttal
We thank the referee for their constructive comments. We address each major point below, clarifying that this is a qualitative study of user perceptions via the VAPT toolkit rather than an objective evaluation of LLM capabilities. Revisions have been made to strengthen limitations statements and scope the claims appropriately.
Point-by-point responses
-
Referee: [Results] Results section: The headline claim that 13 participants left convinced AI understands human values rests solely on post-interview self-reports after one month of chatbot interaction plus a single 2-hour VAPT session, with no pre/post quantitative measures, control condition, external rater validation of value extraction accuracy, or behavioral tests to rule out novelty effects or demand characteristics.
Authors: We agree the findings rest on self-reported perceptions collected through the VAPT interview process. As a qualitative methodological contribution focused on how users form convictions about AI value understanding, pre/post quantitative measures, controls, and behavioral validation fall outside the study design. In revision we have added an explicit limitations subsection noting potential novelty effects and demand characteristics, and we have rephrased the results headline to emphasize 'self-reported convictions following VAPT' to avoid overstatement. revision: partial
-
Referee: [Discussion] Discussion: The 'weaponized empathy' warning and associated design implications extrapolate from unvalidated user perceptions to claims about actual LLM value embodiment and misalignment risks, without evidence that chatbot outputs correctly reflected or were guided by participants' stated values.
Authors: The 'weaponized empathy' concept is introduced as a potential design risk stemming from users' perceptual convictions, not as evidence of actual LLM embodiment. We have revised the Discussion to explicitly separate perceived alignment from objective capability and to frame the warning as a cautionary design implication for transparency safeguards. No claims are made that the chatbot outputs were verifiably guided by participants' values; the focus remains on how such perceptions may arise and how systems can be designed to mitigate over-trust. revision: yes
-
Referee: [Methods] Methods: The study design provides no objective metrics (e.g., alignment scores between LLM decisions and participant values or inter-rater agreement on extracted values) to corroborate the self-reported convictions, limiting the ability to distinguish perception from actual capability.
Authors: VAPT is deliberately a perception-elicitation toolkit; value alignment is treated as a subjective user judgment rather than an objective property requiring ground-truth metrics. We have expanded the Methods section to articulate this rationale and to outline how future work could combine VAPT with objective alignment measures. The current design prioritizes depth in understanding perception formation over validation of LLM accuracy. revision: yes
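The objective metrics the referee requests are standard to compute. As one hedged illustration of the future work the authors mention, the sketch below implements Cohen's kappa for inter-rater agreement on whether each extracted value statement accurately reflects a participant; the labels and data are invented for the example, not drawn from the study.

```python
# Hypothetical sketch of one objective metric the referee requests:
# Cohen's kappa for chance-corrected agreement between two raters judging
# chatbot-extracted value statements. Labels and data are illustrative.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' label sequences."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in labels) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Two raters judge six extracted value statements as accurate/inaccurate.
a = ["acc", "acc", "inacc", "acc", "inacc", "acc"]
b = ["acc", "acc", "acc",   "acc", "inacc", "acc"]
print(round(cohens_kappa(a, b), 3))  # → 0.571
```

Pairing such a score with the VAPT interview would let the authors report perceived and measured alignment side by side without changing the qualitative design.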
Circularity Check
No circularity: empirical claims grounded in participant self-reports
Full rationale
The paper reports results from a qualitative user study: 20 participants texted a chatbot for one month, then completed a 2-hour VAPT interview. The headline finding (13 participants convinced AI understands human values) is stated as a direct count of post-study self-reports. No equations, derivations, fitted parameters, or self-referential definitions appear, and no load-bearing self-citations, uniqueness theorems, or ansatz smuggling are present. The central claim reduces to the collected interview data rather than to any assumption built in by construction, satisfying the self-contained criterion.
Forward citations
Cited by 1 Pith paper
- "What Are You Really Trying to Do?": Co-Creating Life Goals from Everyday Computer Use. A co-creation process for inferring and refining personal strivings from computer activity logs yields more representative goals and higher user agency than baselines in a 14-person week-long study.