Measuring Successful Cooperation in Human-AI Teamwork: Development and Validation of the Perceived Cooperativity and Teaming Perception Scales
Pith reviewed 2026-05-08 02:17 UTC · model grok-4.3
The pith
Two new scales reliably measure how well humans perceive cooperation with AI partners across tasks.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors establish that the Perceived Cooperativity Scale (PCS) and Teaming Perception Scale (TPS) successfully differentiate cooperation partners of varying quality and exhibit construct validity in line with theoretical expectations, as shown across three studies (N = 409 total) spanning a cooperative card game, LLM interaction, and a decision-support system.
What carries the argument
The Perceived Cooperativity Scale (PCS), which rates an agent's perceived cooperative capability and practice within a single interaction sequence, and the Teaming Perception Scale (TPS), which measures the emergent sense of teaming arising from mutual contribution and support; both are grounded in established cooperation theories and were additionally adapted for human-human use to enable cross-agent comparisons.
If this is right
- Researchers gain standardized tools to study what improves or harms human-AI cooperation in varied settings.
- AI developers can test whether system features increase perceived cooperativity and teaming.
- The scales enable direct comparison of cooperation quality between human and AI partners.
- They supply a foundation for evaluating subjective teamwork in gaming, conversational, and decision-support applications.
Where Pith is reading between the lines
- Widespread adoption could steer AI design toward measurable cooperative behaviors rather than isolated performance metrics.
- The scales might reveal systematic differences in how humans experience cooperation with AI versus other humans, informing trust and agency research.
- Extending validation to multi-turn, real-world tasks like medical diagnosis or vehicle control could test whether the measures generalize beyond the three lab scenarios.
- The work opens the possibility of using the scales as outcome measures in training programs that teach humans to cooperate more effectively with AI.
Load-bearing premise
The chosen tasks and theoretical models translate into questionnaire items that capture the essential aspects of perceived cooperativity without task-specific bias or missing elements.
What would settle it
A new study in an untested domain, such as long-term collaborative planning, in which the scales failed to differentiate high- from low-cooperation AI partners, or lacked the expected correlations with objective team outcomes.
Original abstract
As human-AI cooperation becomes increasingly prevalent, reliable instruments for assessing the subjective quality of cooperative human-AI interaction are needed. We introduce two theoretically grounded scales: the Perceived Cooperativity Scale (PCS), grounded in joint activity theory, and the Teaming Perception Scale (TPS), grounded in evolutionary cooperation theory. The PCS captures an agent's perceived cooperative capability and practice within a single interaction sequence; the TPS captures the emergent sense of teaming arising from mutual contribution and support. Both scales were adapted for human-human cooperation to enable cross-agent comparisons. Across three studies (N = 409) encompassing a cooperative card game, LLM interaction, and a decision-support system, analyses of dimensionality, reliability, and validity indicated that both scales successfully differentiated between cooperation partners of varying cooperative quality and showed construct validity in line with expectations. The scales provide a basis for empirical investigation and system evaluation across a wide range of human-AI cooperation contexts.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This paper claims to introduce and validate two scales for measuring successful cooperation in human-AI teamwork. The Perceived Cooperativity Scale (PCS) is grounded in joint activity theory to capture perceived cooperative capability and practice, while the Teaming Perception Scale (TPS) draws on evolutionary cooperation theory to assess the emergent sense of teaming. Both scales were additionally adapted for human-human use to enable cross-agent comparisons. Validation across three studies (total N = 409) in contexts including a cooperative card game, LLM interaction, and a decision-support system shows that the scales have appropriate dimensionality, reliability, and validity, and can differentiate between partners of varying cooperative quality.
Significance. Should the reported psychometric properties hold upon detailed inspection, these scales would represent a significant contribution to human-AI interaction research by providing standardized, theory-based tools for assessing cooperation quality. This would facilitate empirical studies and evaluation of AI systems designed for teamwork. The inclusion of human-human adaptations and testing in diverse tasks strengthens the potential applicability across contexts.
Minor comments (2)
- [Abstract] Consider adding specific reliability and validity statistics (e.g., alpha values or correlation ranges) to the abstract for a more informative summary.
- [Methods] The description of how the scales were adapted from theory to items and for human-human comparisons could be expanded for replicability.
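Reliability statistics of the kind the first comment requests are straightforward to report. The sketch below computes Cronbach's alpha for a small item matrix; the items and responses are invented for illustration and are not the paper's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy data: 6 respondents rating 4 hypothetical PCS-style items (1-7 Likert)
scores = np.array([
    [6, 5, 6, 7],
    [2, 3, 2, 2],
    [5, 5, 4, 6],
    [7, 6, 7, 7],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))  # high internal consistency for these correlated toy ratings
```

The same matrix-in, scalar-out shape applies to any subscale, so per-dimension alphas could be reported alongside the fit statistics.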
Simulated Author's Rebuttal
We thank the referee for their positive summary, recognition of the scales' potential significance, and recommendation for minor revision. We are pleased that the contribution to standardized measurement of human-AI cooperation quality is viewed favorably.
Circularity Check
No significant circularity
Full rationale
The paper grounds the PCS in joint activity theory and the TPS in evolutionary cooperation theory (external sources), adapts them for cross-agent comparison, then validates dimensionality, reliability, and validity via three independent empirical studies (N = 409 total) against expected patterns of differentiation and construct correlations. No equations, fitted parameters, or self-citations are load-bearing; the central claim rests on standard psychometric testing in new contexts and does not reduce to any of its inputs by construction. This is a self-contained, non-circular scale-development process.
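The "expected patterns of differentiation" invoked here amount to a known-groups comparison: scale scores should separate high- and low-cooperation partner conditions with a sizable standardized mean difference. A minimal sketch, using simulated scale scores rather than the paper's data, is:

```python
import numpy as np

def cohens_d(high: np.ndarray, low: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    n1, n2 = len(high), len(low)
    s1, s2 = high.var(ddof=1), low.var(ddof=1)
    pooled = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (high.mean() - low.mean()) / pooled

rng = np.random.default_rng(0)
# Simulated mean scale scores (1-7 Likert range) for two hypothetical partner conditions
high_coop = rng.normal(5.5, 0.8, 100).clip(1, 7)
low_coop = rng.normal(3.5, 0.8, 100).clip(1, 7)
print(f"Cohen's d = {cohens_d(high_coop, low_coop):.2f}")
```

By Cohen's conventions a d above 0.8 counts as a large effect; a validated scale would be expected to land well above that when partner conditions genuinely differ in cooperative quality.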
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Joint activity theory provides a valid basis for measuring perceived cooperativity in human-AI interactions.
- Domain assumption: Evolutionary cooperation theory provides a valid basis for measuring teaming perception in human-AI interactions.