pith. machine review for the scientific record.

arxiv: 2604.05368 · v1 · submitted 2026-04-07 · 💻 cs.HC · cs.AI

Recognition: no theorem link

AI and Collective Decisions: Strengthening Legitimacy and Losers' Consent

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 19:55 UTC · model grok-4.3

classification 💻 cs.HC cs.AI
keywords AI · collective decision-making · procedural legitimacy · losers' consent · visualization · trust · human-computer interaction · policy experiences

The pith

An AI visualization displaying personal experiences alongside policy predictions can raise legitimacy and trust even for participants whose preferred outcomes are rejected.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests whether AI can strengthen procedural legitimacy in collective decisions by eliciting diverse personal experiences and making them visible to participants. It asks if seeing these experiences improves trust, understanding, and acceptance when decisions go against one's own preferences. Researchers built an AI interviewer to collect stories and paired it with an interactive display of predicted support levels and the collected experiences. A controlled experiment with 181 people found that using the visualization produced measurable gains in perceived fairness and perspective-taking despite uniform exposure to unfavorable results. This line of work addresses the risk that scaled AI decisions erode consent among those who lose out.

Core claim

We built a system that uses a semi-structured AI interviewer to elicit personal experiences on policy topics and an interactive visualization that displays predicted policy support alongside those voiced experiences. In a randomized experiment (n = 181), interacting with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even though all participants encountered decisions that went against their stated preferences.
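
The paper's core-claim text does not spell out the interviewer's internals, so the following is only a minimal sketch of how a semi-structured AI interview loop could be wired: a scripted question protocol plus one model-generated follow-up per answer. All names here (`PROTOCOL`, `ask_llm`, `get_user_reply`) are hypothetical stand-ins, not the authors' implementation.

```python
# A minimal sketch of a semi-structured AI interviewer: a fixed question
# protocol, with an LLM generating one adaptive follow-up per answer.
# `ask_llm`, `get_user_reply`, and the questions are hypothetical stand-ins.

PROTOCOL = [
    "Can you tell me about a personal experience related to this policy topic?",
    "How did that experience shape your view of the proposal?",
]

def ask_llm(messages: list[dict]) -> str:
    """Placeholder for any chat-completion call (e.g., a GPT-4-class model)."""
    raise NotImplementedError

def interview(topic: str, get_user_reply) -> list[dict]:
    """Walk the scripted questions, inserting one adaptive follow-up after each."""
    transcript = [{
        "role": "system",
        "content": f"You are a neutral interviewer on the topic: {topic}. "
                   "Ask short, open follow-up questions; never argue.",
    }]
    for question in PROTOCOL:
        transcript.append({"role": "assistant", "content": question})
        transcript.append({"role": "user", "content": get_user_reply(question)})
        follow_up = ask_llm(transcript + [{
            "role": "user",
            "content": "Ask one brief follow-up about the experience above.",
        }])
        transcript.append({"role": "assistant", "content": follow_up})
        transcript.append({"role": "user", "content": get_user_reply(follow_up)})
    return transcript
```

The semi-structured shape matters to the claim: the fixed protocol keeps elicitation comparable across participants, while the adaptive follow-up is what lets the system surface the idiosyncratic experiences the visualization later displays.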

What carries the argument

The interactive visualization that pairs predicted levels of policy support with the personal experiences collected by the semi-structured AI interviewer.
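
One way to picture the data such a display consumes: each participant becomes an avatar on a predicted-support spectrum, carrying the experience the interviewer elicited. A sketch under that assumption; every field name below is invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    """Hypothetical record behind one point on the support spectrum."""
    participant_id: str
    predicted_support: float   # model-predicted support for the proposal, in [0, 1]
    experience_summary: str    # story elicited by the AI interviewer
    featured: bool = False     # one of the highlighted cross-spectrum profiles

def spectrum(avatars: list[Avatar]) -> list[Avatar]:
    """Order avatars from strongest opposition to strongest support."""
    return sorted(avatars, key=lambda a: a.predicted_support)

# Illustrative usage: two invented participants placed on the spectrum.
panel = spectrum([
    Avatar("p01", 0.12, "My clinic closed after the policy changed."),
    Avatar("p02", 0.87, "The program paid for my retraining.", featured=True),
])
```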

If this is right

  • AI-assisted collective decision tools can raise perceived fairness by surfacing opposing personal experiences without requiring preference change.
  • Visual exposure to others' viewpoints can improve understanding and trust in the process even when the final outcome is disliked.
  • Procedural legitimacy in scaled decisions depends partly on making the diversity of participant experiences legible to everyone involved.
  • Design efforts in democratic AI should treat losers' consent as a measurable outcome alongside efficiency and accuracy.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Deploying similar visualizations in public deliberation platforms could lower post-decision conflict if the elicited experiences are representative.
  • The approach might complement existing voting or deliberation systems by adding an empathy layer before final tallies are taken.
  • Effects could weaken if the AI interviewer produces low-quality or biased experience summaries.
  • Testing the tool in high-stakes settings such as local budgeting or regulatory decisions would reveal whether lab gains persist.

Load-bearing premise

Short-term self-reported gains in legitimacy from a single controlled session with the visualization will carry over into sustained real-world acceptance of collective decisions.

What would settle it

A longitudinal field study that tracks whether participants who used the tool actually comply with or protest an unfavorable policy decision weeks or months later.

Figures

Figures reproduced from arXiv: 2604.05368 by Deb Roy, Emily Kubin, Michiel Bakker, Prerna Ravi, Shrestha Mohanty, Suyash Fulay.

Figure 1: An overview of the different elements used in our study's process (detailed version in Appendix K).
Figure 2: Procedure for curating data for the visualization.
Figure 3: Coefficients for visualization, AI interview, and …
Figure 4: Red values indicate significance at p < 0.1; solid markers indicate significance at p < 0.05.
Figure 5: STEP 0: AI interviewer screen where the user has a back-and-forth conversation with the AI agent about their personal life.
Figure 6: STEP 1: Voting screen where the user views the proposal and votes on it through a Likert scale and free response.
Figure 7: STEP 2a: Visualization showing the user where they stand relative to other study participants.
Figure 8: STEP 2b: Right-side panel that opens up when the user clicks on their own avatar from the previous screen. This …
Figure 9: STEP 3a: Visualization showing three featured profiles of participants from across the spectrum of predicted support.
Figure 10: STEP 3b: Right-side panel that opens up when the user clicks on any of the featured avatars from the previous screen.
Figure 11: STEP 3c: Visualization showing all avatars on the spectrum. The user can optionally explore these after the previous …
Figure 12: STEP 4: Decision page showing the user how the participant group voted summatively and whether the proposal passed or failed.
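
Read in order, the STEP captions trace a fixed participant protocol. A compact restatement of that flow (the step labels come from the captions; the code itself is only illustrative):

```python
# The participant flow described by Figures 5-12, as an ordered protocol.
STUDY_FLOW = [
    ("STEP 0",  "AI interview: back-and-forth conversation about personal experiences"),
    ("STEP 1",  "Vote on the proposal: Likert rating plus free-text reasoning"),
    ("STEP 2a", "See own position on the predicted-support spectrum"),
    ("STEP 2b", "Open the side panel for one's own avatar"),
    ("STEP 3a", "View three featured profiles from across the spectrum"),
    ("STEP 3b", "Open side panels for the featured avatars"),
    ("STEP 3c", "Optionally browse all avatars on the spectrum"),
    ("STEP 4",  "Decision page: group tally and whether the proposal passed"),
]

for step, description in STUDY_FLOW:
    print(f"{step:>7}: {description}")
```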
Original abstract

AI is increasingly used to scale collective decision-making, but far less attention has been paid to how such systems can support procedural legitimacy, particularly the conditions shaping losers' consent: whether participants who do not get their preferred outcome still accept it as fair. We ask: (1) how can AI help ground collective decisions in participants' different experiences and beliefs, and (2) whether exposure to these experiences can increase trust, understanding, and social cohesion even when people disagree with the outcome. We built a system that uses a semi-structured AI interviewer to elicit personal experiences on policy topics and an interactive visualization that displays predicted policy support alongside those voiced experiences. In a randomized experiment (n = 181), interacting with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even though all participants encountered decisions that went against their stated preferences. Our hope is that the design and evaluation of this tool spurs future researchers to focus on how AI can help not only achieve scale and efficiency in democratic processes, but also increase trust and connection between participants.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated authors' rebuttal, circularity audit, and an axiom ledger. Tearing a paper down is the easy half of reading it: the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper describes an AI system using a semi-structured AI interviewer to elicit participants' personal experiences on policy topics, paired with an interactive visualization showing predicted policy support alongside those experiences. It reports results from a randomized controlled experiment (n=181) claiming that interaction with the visualization increased perceived legitimacy, trust in outcomes, and understanding of others' perspectives, even among participants whose preferences were not reflected in the final decision.

Significance. If the experimental results hold under more rigorous validation, the work could meaningfully advance HCI research on AI-supported collective decision-making by providing evidence that targeted visualizations of diverse experiences can bolster procedural legitimacy and losers' consent. This addresses a key gap in scaling democratic processes while preserving social cohesion, and the empirical focus on adverse-outcome scenarios offers a falsifiable starting point for future studies.

major comments (2)
  1. [Abstract and Experiment] The central claim rests on positive effects from the n=181 randomized trial, yet no details are provided on the precise outcome measures (e.g., exact survey items or scales for legitimacy, trust, and perspective-taking), statistical tests performed, effect sizes, power analysis, or controls for demand characteristics and baseline differences. Without these, the reported increases cannot be properly evaluated for robustness or replicability.
  2. [Results and Discussion] The evaluation uses only immediate post-interaction self-reports. No behavioral measures of actual consent (such as willingness to comply with or publicly endorse an adverse outcome in a follow-on task) or delayed re-assessment are described. This is load-bearing for the paper's broader argument that the system strengthens losers' consent, as transient perceptions may not map to durable acceptance in real collective decisions.
minor comments (2)
  1. [System Description] Additional specifics on the AI model, prompt templates for the interviewer, and how predictions of policy support are computed would improve reproducibility.
  2. [Figures] The visualization examples could include clearer annotations or legends to show how individual experiences are aggregated and displayed.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful reading and constructive comments on our manuscript. We address each major comment below and outline the revisions we will make to improve transparency and contextualize the scope of our findings.

Point-by-point responses
  1. Referee: [Abstract and Experiment] The central claim rests on positive effects from the n=181 randomized trial, yet no details are provided on the precise outcome measures (e.g., exact survey items or scales for legitimacy, trust, and perspective-taking), statistical tests performed, effect sizes, power analysis, or controls for demand characteristics and baseline differences. Without these, the reported increases cannot be properly evaluated for robustness or replicability.

    Authors: We agree that the manuscript requires more explicit reporting of these methodological and statistical details to support evaluation and replicability. In the revised version we will expand the Experiment section with a table of all survey items, their exact wording, and their response scales for perceived legitimacy, trust in outcomes, and perspective-taking. We will also report the full statistical tests (including any ANOVA or regression models), effect sizes, a power analysis, and our procedures for checking baseline equivalence across conditions and minimizing demand characteristics through neutral instructions and cover stories; a generic sketch of this style of reporting appears after these responses. revision: yes

  2. Referee: [Results and Discussion] The evaluation uses only immediate post-interaction self-reports. No behavioral measures of actual consent (such as willingness to comply with or publicly endorse an adverse outcome in a follow-on task) or delayed re-assessment are described. This is load-bearing for the paper's broader argument that the system strengthens losers' consent, as transient perceptions may not map to durable acceptance in real collective decisions.

    Authors: We acknowledge that the study relies exclusively on immediate self-report measures collected after the interaction. This design was chosen to isolate the causal effect of the visualization on initial perceptions in a controlled, single-session setting. We recognize that behavioral measures and delayed assessments would provide stronger evidence for durable losers' consent. In the revised Discussion we will add an explicit limitations paragraph noting this scope and will outline concrete directions for future work that could incorporate behavioral tasks (e.g., willingness to publicly endorse or comply with an adverse decision) and longitudinal follow-ups. revision: partial
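
The first response promises regression models, effect sizes, and a power analysis. As a generic sketch of what that reporting could look like with statsmodels: the CSV file, column names, and binary coding of conditions are all assumptions for illustration, not the paper's actual data or analysis.

```python
# Generic treatment-effect reporting for a two-condition experiment:
# covariate-adjusted OLS, Cohen's d, and achieved power. Column names
# ("legitimacy", "visualization", ...) are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.power import TTestIndPower

df = pd.read_csv("experiment.csv")  # hypothetical file: one row per participant

# Treatment effect with covariate adjustment (randomization checked separately).
model = smf.ols("legitimacy ~ visualization + ai_interview + political_leaning",
                data=df).fit()
print(model.summary())

# Standardized effect size for the visualization condition.
treated = df.loc[df["visualization"] == 1, "legitimacy"]
control = df.loc[df["visualization"] == 0, "legitimacy"]
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# Achieved power for this effect size at the realized group sizes (n = 181 total).
power = TTestIndPower().power(effect_size=d, nobs1=len(treated),
                              ratio=len(control) / len(treated), alpha=0.05)
print(f"Power at alpha = 0.05: {power:.2f}")
```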

Circularity Check

0 steps flagged

No circularity: purely empirical randomized experiment with no derivations or fitted predictions

full rationale

The paper reports building an AI interviewer and visualization tool, then presents results from a randomized controlled trial (n=181) on self-reported legitimacy, trust, and perspective-taking. No equations, parameters, or theoretical derivations appear in the provided text. Claims rest directly on experimental outcomes rather than on any self-referential construction, fitted inputs renamed as predictions, or load-bearing self-citations. As a purely empirical study, the design needs no external benchmark to close a loop; the skeptic's concern about missing behavioral proxies is a question of validity, not circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the validity of survey-based measures of legitimacy and the assumption that the AI system elicits authentic experiences without introducing bias; these are domain assumptions rather than derived results.

axioms (1)
  • domain assumption Perceived legitimacy, trust in outcomes, and understanding of others' perspectives can be reliably captured via self-report surveys administered immediately after a short interaction.
    The study uses these as primary outcomes without additional validation or behavioral measures mentioned.

pith-pipeline@v0.9.0 · 5499 in / 1243 out tokens · 42567 ms · 2026-05-10T19:55:49.824177+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

100 extracted references · 28 canonical work pages · 3 internal anchors

  1. [1] Christopher J. Anderson, André Blais, Shaun Bowler, Todd Donovan, and Ola Listhaug. 2005. Losers' Consent: Elections and Democratic Legitimacy. Oxford University Press. doi:10.1093/0199276382.001.0001
  2. [2] Kenneth Joseph Arrow. 1983. Collected Papers of Kenneth J. Arrow: Social Choice and Justice. Vol. 1. Harvard University Press.
  3. [3] Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, and Ceren Budak. 2025. Plurals: A System for Guiding LLMs via Simulated Social Ensembles. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–21.
  4. [5] Fynn Bachmann, Daan van der Weijden, Lucien Heitz, Cristina Sarasua, and Abraham Bernstein. 2025. Adaptive political surveys and GPT-4: Tackling the cold start problem with simulated user interactions. PLoS One 20, 5 (2025), e0322690.
  5. [6] Christopher A. Bail, Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M. B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115, 37 (2018), 9216–9221. doi:10.1073/pnas.1804840115
  6. [7] Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, et al. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems 35 (2022), 38176–38189.
  7. [8] Stefano Balietti, Lise Getoor, Daniel G. Goldstein, and Duncan J. Watts. 2021. Reducing opinion polarization: Effects of exposure to similar people with differing political views. Proceedings of the National Academy of Sciences 118, 52 (2021), e2112552118. doi:10.1073/pnas.2112552118
  8. [9] Liz Barry and Joseph Gubbels. 2025. Digital Platforms and Democracy: Double-Edged Sword: Values in Governance Technology. https://static1.squarespace.com/static/5ea874746663b45e14a384a4/t/6824e8902170402177ae89b0/1747249297047/Conversation+Networks.pdf
  9. [10] Yoav Benjamini and Yosef Hochberg. 1995. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society, Series B (Methodological) 57, 1 (1995), 289–300. http://www.jstor.org/stable/2346101
  10. [11] Shreyan Biswas, Ji-Youn Jung, Abhishek Unnam, Kuldeep Yadav, Shreyansh Gupta, and Ujwal Gadiraju. 2024. "Hi. I'm Molly, Your Virtual Interviewer!" Exploring the impact of race and gender in AI-powered virtual interview experiences. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 12. 12–22.
  11. [12] Joel Brockner and Batia M. Wiesenfeld. 1996. An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin 120, 2 (1996), 189–208. doi:10.1037/0033-2909.120.2.189
  12. [13] Josh Burton, Joon Sung Park, Pranav Arora, John Millet, Ali Farhadi, and Yejin Choi. 2024. How Can AI Automate and Augment Collective Intelligence? arXiv preprint arXiv:2408.03356 (2024).
  13. [14] André Bächtiger, John S. Dryzek, Jane Mansbridge, and Mark D. Warren. 2018. The Oxford Handbook of Deliberative Democracy. Oxford University Press. doi:10.1093/oxfordhb/9780198747369.001.0001
  14. [15] Christopher Carman. 2010. The process is the reality: Perceptions of procedural fairness and participatory democracy. Political Studies 58, 4 (2010), 731–751.
  15. [16] Kathy Charmaz. 2008. Grounded theory as an emergent method. Handbook of Emergent Methods 155 (2008), 172.
  16. [17] Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 1–12.
  17. [18] Bernard C. K. Choi and Anita W. P. Pak. 2005. A catalog of biases in questionnaires. Preventing Chronic Disease 2, 1 (2005), A13. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1323316/ Epub 2004 Dec 15.
  18. [19] Felix Chopra and Ingar Haaland. 2023. Conducting qualitative interviews with AI. (2023).
  19. [20] Juliet M. Corbin and Anselm Strauss. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology 13, 1 (1990), 3–21.
  20. [21] Robert A. Dahl. 1989. Democracy and Its Critics. Yale University Press, New Haven.
  21. [22] James H. Davis. 2013. Group decision making and quantitative judgments: A consensus model. In Understanding Group Behavior. Psychology Press, 35–59.
  22. [23] David Easton. 1965. A Systems Analysis of Political Life. Wiley, New York.
  23. [24] Youmna Farag, Charlotte O. Brand, Jacopo Amidei, Paul Piwek, Tom Stafford, Svetlana Stoyanchev, and Andreas Vlachos. 2022. Opening up Minds with Argumentative Dialogues. In Findings of the Association for Computational Linguistics: EMNLP. 4569–4582. https://aclanthology.org/2022.findings-emnlp.335.pdf
  24. [25] Siamak Faridani, Ephrat Bitton, Kimiko Ryokai, and Ken Goldberg. 2010. Opinion space: A scalable tool for browsing online comments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). Association for Computing Machinery, New York, NY, USA, 1175–1184. doi:10.1145/1753326.1753502
  25. [26] Siamak Faridani, Ephrat Bitton, Kimiko Ryokai, and Ken Goldberg. 2010. Opinion space: A scalable tool for browsing online comments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1175–1184.
  26. [27] Sara Fish, Paul Gölz, David C. Parkes, Ariel D. Procaccia, Gili Rusak, Itai Shapira, and Manuel Wüthrich. 2025. Generative Social Choice. arXiv:2309.01291 [cs.GT]. https://arxiv.org/abs/2309.01291
  27. [28] James S. Fishkin. 2011. The Trilemma of Democratic Reform. In When the People Speak: Deliberative Democracy and Public Consultation. Oxford University Press. doi:10.1093/acprof:osobl/9780199604432.003.0002
  28. [29] George Fragiadakis, Christos Diou, George Kousiouris, and Mara Nikolaidou. 2024. Evaluating Human-AI Collaboration: A Review and Methodological Framework. arXiv preprint arXiv:2407.19098 (2024).
  29. [31] Jeremy A. Frimer, Linda J. Skitka, and Matt Motyl. 2017. Liberals and conservatives are similarly motivated to avoid exposure to one another's opinions. Journal of Experimental Social Psychology 72 (2017), 1–12.
  30. [32] Domingo García-Marzá and Patrici Calvo. 2024. Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy (1st ed.). Springer Cham. XIII, 257 pages. doi:10.1007/978-3-031-53015-9
  31. [33] John Gastil et al. 2005. The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century. Jossey-Bass.
  32. [34] Ben Green and Yiling Chen. 2019. The Principles and Limits of Algorithm-in-the-Loop Decision Making. In Proceedings of the ACM on Human-Computer Interaction, Vol. 3. ACM, 50:1–50:24.
  33. [35] Jairo F. Gudiño, Umberto Grandi, and César Hidalgo. 2024. Large Language Models (LLMs) as Agents for Augmented Democracy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 382, 2285 (Dec 2024). doi:10.1098/rsta.2024.0100
  34. [36] Marc J. Hetherington. 2005. Why Trust Matters: Declining Political Trust and the Demise of American Liberalism. Princeton University Press. http://www.jstor.org/stable/j.ctv301fkq
  35. [37] Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276 (2024).
  36. [38] Involve. 2024. How much does a citizens' assembly cost? https://www.involve.org.uk/resources/knowledge-base/how-much-do-participatory-processes-cost/how-much-does-citizens-assembly. Accessed: 2025-08-19.
  37. [39] Daniel Jarrett, Miruna Pîslar, Michiel A. Bakker, Michael Henry Tessler, Raphael Köster, Jan Balaguer, Romuald Elie, Christopher Summerfield, and Andrea Tacchetti. 2025. Language Agents as Digital Representatives in Collective Decision-Making. arXiv:2502.09369 [cs.LG]. https://arxiv.org/abs/2502.09369
  38. [40] Christopher F. Karpowitz and Chad Raphael. 2014. Deliberation, Democracy, and Civic Forums: Improving Equality and Publicity. Cambridge University Press.
  39. [41] Daniel Kessler, Dimitra Dimitrakopoulou, and Deb Roy. 2023. Hearing Personal Experiences Improves Social Evaluations Compared to Personal Opinions, Especially for Polarized Parties. SSRN Electronic Journal (2023). doi:10.2139/ssrn.4978495
  40. [42] Hyunwoo Kim, Eun-Young Ko, Donghoon Han, Sung-Chul Lee, Simon T. Perrault, Jihee Kim, and Juho Kim. 2019. Crowdsourcing Perspectives on Public Policy from Stakeholders. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19). Association for Computing Machinery, New York, NY, USA, 1–6.
  41. [43] Travis Kriplean, Jonathan Morgan, Deen Freelon, Alan Borning, and Lance Bennett. 2012. Supporting reflective public thought with ConsiderIt. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW '12). Association for Computing Machinery, New York, NY, USA, 265–274. doi:10.1145/2145204.2145249
  42. [44] Travis Kriplean, Michael Toomim, Jonathan Morgan, Alan Borning, and Amy J. Ko. 2012. Is this what you meant? Promoting listening on the web with Reflect. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1559–1568.
  43. [45] Emily Kubin, Kurt J. Gray, and Christian von Sikorski. 2023. Reducing political dehumanization by pairing facts with personal experiences. Political Psychology 44, 5 (2023), 1119–1140.
  44. [46] Emily Kubin, Curtis Puryear, Chelsea Schein, and Kurt Gray. 2021. Personal experiences bridge moral and political divides better than facts. Proceedings of the National Academy of Sciences 118, 6 (2021), e2008389118. doi:10.1073/pnas.2008389118
  45. [47] Emily Kubin, Christian von Sikorski, and Kurt Gray. 2025. Political censorship feels acceptable when ideas seem harmful and false. Political Psychology 46, 2 (2025), 279–299.
  46. [48] Helene E. Landemore. 2012. Why the many are smarter than the few and why it matters. Journal of Deliberative Democracy 8, 1 (2012).
  47. [49] Margaret Levi and Laura Stoker. 2000. Political trust and trustworthiness. Annual Review of Political Science 3 (2000), 475–507.
  48. [50] Belinda Z. Li, Alex Tamkin, Noah Goodman, and Jacob Andreas. 2023. Eliciting Human Preferences with Language Models. arXiv:2310.11589 [cs.CL]. https://arxiv.org/abs/2310.11589
  49. [51] E. Allan Lind and Tom R. Tyler. 1988. The Social Psychology of Procedural Justice. Springer Science & Business Media.
  50. [52] Christian List and Philip Pettit. 2011. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford University Press.
  51. [53] Henrietta Lyons, Eduardo Velloso, and Tim Miller. 2021. Conceptualising contestability: Perspectives on contesting algorithmic decisions. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1–25.
  52. [54] Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, and Xiaojuan Ma. 2025. Towards Human–AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM.
  53. [55] Diana C. Mutz. 2006. Hearing the Other Side: Deliberative versus Participatory Democracy. Cambridge University Press.
  54. [56] James O'Donnell. 2025. A small US city experiments with AI to find out what residents want. https://www.technologyreview.com/2025/04/15/1115125/a-small-us-city-experiments-with-ai-to-find-out-what-residents-want/ Accessed: 2025-09-10.
  55. [57] OpenAI. 2025. Text-to-Speech (TTS) models. https://platform.openai.com/docs/models/tts-1
  56. [58] Aviv Ovadya and Luke Thorburn. 2023. Bridging systems: Open problems for countering destructive divisiveness across ranking, recommenders, and governance. arXiv preprint arXiv:2301.09976 (2023).
  57. [59] Cassandra Overney, Cassandra Moe, Alvin Chang, and Nabeel Gillani. 2025. BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies. Proceedings of the ACM on Human-Computer Interaction 9, 2 (2025), 1–37.
  58. [60] Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S. Bernstein. 2024. Generative Agent Simulations of 1,000 People. arXiv:2411.10109 [cs.AI]. https://arxiv.org/abs/2411.10109
  59. [62] Thomas F. Pettigrew and Linda R. Tropp. 2006. A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology 90, 5 (2006), 751–783. doi:10.1037/0022-3514.90.5.751
  60. [63] Pew Research Center. 2024. Americans' Deepening Mistrust of Institutions. Trend Magazine (October 17, 2024). https://www.pew.org/en/trend/archive/fall-2024/americans-deepening-mistrust-of-institutions
  61. [64] Prolific. 2025. Prolific. https://www.prolific.com/
  62. [65] Curtis Puryear and Kurt Gray. 2024. Using "balanced pragmatism" in political discussions increases cross-partisan respect. Journal of Experimental Psychology: General 153, 5 (2024), 1189.
  63. [66] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning. PMLR, 28492–28518.
  64. [67] Steve Rathje, Jay J. Van Bavel, and Sander van der Linden. 2021. Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences 118, 26 (2021), e2024292118. doi:10.1073/pnas.2024292118
  65. [68] Sebastian Cushing Rodriguez. 2023. Consensus Building in Taiwan, the Poster Child of Digital Democracy. https://democracy-technologies.org/participation/consensus-building-in-taiwan/ Accessed: 2025-09-10.
  66. [69] Deb Roy, Lawrence Lessig, and Audrey Tang. 2025. Beyond Clicks and Comments: Leveraging AI for Meaningful Civic Engagement: Conversation Networks. https://static1.squarespace.com/static/5ea874746663b45e14a384a4/t/6824e8902170402177ae89b0/1747249297047/Conversation+Networks.pdf
  67. [70] Juliana Schroeder, Michael Kardas, and Nicholas Epley. 2017. The humanizing voice: Speech reveals, and text conceals, a more thoughtful mind in the midst of disagreement. Psychological Science 28, 12 (2017), 1745–1762.
  68. [71] Maija Setälä and Graham Smith. 2018. Mini-publics and deliberative democracy. In The Oxford Handbook of Deliberative Democracy, André Bächtiger, John Dryzek, Jane Mansbridge, and Mark E. Warren (Eds.). Oxford University Press, Oxford.
  69. [72] Joongi Shin, Michael A. Hedderich, Andrés Lucero, and Antti Oulasvirta. 2022. Chatbots facilitating consensus-building in asynchronous co-design. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1–13.
  70. [73] Laura Silver, Scott Keeter, Stephanie Kramer, Jordan Lippert, Sofia Hernandez Ramones, Alan Cooperman, Chris Baronavski, Bill Webster, Reem Nadeem, and Janakee Chavda. 2025. Americans' Trust in One Another. https://www.pewresearch.org/2025/05/08/americans-trust-in-one-another/ Accessed: 2025-08-18.
  71. [74] Linda J. Skitka and G. Scott Morgan. 2014. The social and political implications of moral conviction. Political Psychology 35 (2014), 95–110.
  72. [75] Christopher T. Small, Michael Bjorkegren, Timo Erkkilä, Lynette Shaw, and Colin Megill. 2021. Polis: Scaling deliberation by mapping high dimensional opinion spaces. Recerca. Revista de Pensament i Anàlisi 26, 2 (2021), 1–26. doi:10.6035/recerca.5516
  73. [76] Christopher T. Small, Ivan Vendrov, Esin Durmus, Hadjar Homaei, Elizabeth Barry, Julien Cornebise, Ted Suzman, Deep Ganguli, and Colin Megill. 2023. Opportunities and risks of LLMs for scalable deliberation with Polis. arXiv preprint arXiv:2306.11932 (2023).
  74. [79] Jennifer Stromer-Galley and Peter Muhlberger. 2009. Agreement and disagreement in group deliberation: Effects on deliberation satisfaction, future engagement, and decision legitimacy. Political Communication 26, 2 (2009), 173–192.
  75. [80] Michael Henry Tessler, Michiel A. Bakker, Daniel Jarrett, Hannah Sheahan, Martin J. Chadwick, Raphael Koster, Georgina Evans, Lucy Campbell-Gillingham, Tantum Collins, David C. Parkes, Matthew Botvinick, and Christopher Summerfield. 2024. AI can help humans find common ground in democratic deliberation. Science 386, 6719 (2024), eadq2852.
  76. [81] J. W. Thibaut and L. Walker. 1975. Procedural Justice: A Psychological Analysis. L. Erlbaum Associates. https://books.google.com/books?id=2l5_QgAACAAJ
Showing first 80 references.