pith. machine review for the scientific record.

arxiv: 2604.21864 · v1 · submitted 2026-04-23 · 💻 cs.CY · cs.HC

Recognition: unknown

FAccT-Checked: A Narrative Review of Authority Reconfigurations and Retention in AI-Mediated Journalism

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 14:01 UTC · model grok-4.3

classification 💻 cs.CY cs.HC
keywords AI-mediated journalism · editorial authority · FAccT · narrative review · authority reconfiguration · large language models · participatory AI · media power

The pith

AI adoption in journalism drives two concurrent authority migrations that make fairness hard to maintain, accountability difficult to assign, and transparency performative.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper uses a narrative review of journalism studies, HCI, and FAccT work to show how editorial authority is shifting due to AI. It describes an internal migration where newsroom judgment moves to LLMs through everyday interactions, cognitive habits, and organizational routines that make AI outputs seem legitimate while hiding who is responsible. It also describes an external migration that moves decision power from newsrooms to platforms and vendors supplying the systems. These shifts matter because journalism depends on clear lines of authority to produce trustworthy reporting and uphold ethical standards.

Core claim

Editorial authority consists of decision rights, epistemic warrant, and responsibility. The authors identify an internal migration in which editorial judgment is progressively deferred to LLMs embedded in newsroom workflows through interactional, cognitive, and organizational mechanisms that legitimize AI-generated outputs while obscuring responsibility and weakening agency. They also identify an external migration in which decision-making power shifts from news organizations toward platforms, vendors, and infrastructural providers. Unaddressed, these reconfigurations risk rendering fairness hard to maintain, accountability difficult to assign, and transparency performative. The paper then examines participatory approaches to AI design and deployment as potential mechanisms for retaining or reclaiming editorial authority, assessing both their promise and their structural limitations.

What carries the argument

Editorial authority defined as the conjunction of decision rights, epistemic warrant, and responsibility, which undergoes two concurrent reconfigurations (internal migration to LLMs and external migration to platforms and vendors) driven by AI adoption.

If this is right

  • Fairness in news outputs becomes harder to maintain because decision processes are obscured.
  • Accountability for errors or biases becomes difficult to assign across human editors, AI systems, and external providers.
  • Transparency measures risk becoming performative rather than substantive.
  • Participatory AI design and deployment can either redistribute authority meaningfully or function as tokenism that leaves power relations unchanged.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the migrations hold, journalism training programs would need to add explicit focus on retaining human oversight of AI outputs.
  • Similar authority shifts may occur in other domains that rely on expert judgment, such as legal research or medical reporting.
  • Without intervention, the described changes could accelerate concentration of influence over public information among a small number of technology firms.

Load-bearing premise

Interpretivist reading of existing literature can reliably identify and describe these authority reconfigurations as actually occurring in current practice without new empirical data.

What would settle it

Systematic observation across multiple newsrooms that finds editors retaining full control over all AI-assisted decisions and no measurable transfer of influence to AI vendors or platforms would falsify the claimed reconfigurations.

Figures

Figures reproduced from arXiv: 2604.21864 by Daniel Gatica-Perez, Matilde Barbini, Stefano Sorrentino.

Figure 1. Distribution of the 209 references by publication year and thematic category. The corpus spans seven decades (1955–2025). View at source ↗
read the original abstract

Building on recent interpretivist approaches, we conduct a critical narrative review across journalism studies, human-computer interaction, and FAccT scholarship, conceptualizing editorial authority as the conjunction of decision rights, epistemic warrant, and responsibility. We provide a comprehensive theoretical framework for addressing how concerns on fairness, accountability and transparency emerge, interact, and persist within AI mediated journalistic practice. We identify and describe two concurrent authority reconfigurations driven by AI adoption. First, an internal migration of authority, in which editorial judgment is progressively deferred to large language models (LLMs) embedded within newsroom workflows. This migration occurs not through explicit policy decisions, but through interactional, cognitive, and organizational mechanisms that legitimize AI generated outputs while obscuring responsibility and weakening individual and professional agency. Second, we analyze an external migration of authority, whereby decision making power shifts from news organizations toward platforms, vendors, and infrastructural providers that supply AI systems and distribution channels, exacerbating existing power asymmetries within the media ecosystem. Unaddressed, these reconfigurations risk rendering fairness hard to maintain, accountability difficult to assign and transparency performative. We examine participatory approaches to AI design and deployment in journalism as potential mechanisms for retaining or reclaiming editorial authority. We critically assess both their promise and their structural limitations, highlighting how participation can either meaningfully redistribute authority or function as a tokenistic practice that leaves underlying power relations intact.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. This paper conducts a critical narrative review across journalism studies, HCI, and FAccT scholarship. It conceptualizes editorial authority as the conjunction of decision rights, epistemic warrant, and responsibility. The authors identify two concurrent authority reconfigurations in AI-mediated journalism: an internal migration deferring editorial judgment to LLMs through interactional, cognitive, and organizational mechanisms that obscure responsibility, and an external migration shifting power to platforms, vendors, and infrastructural providers. These are argued to risk making fairness hard to maintain, accountability difficult to assign, and transparency performative. The paper evaluates participatory approaches to AI design and deployment as potential mechanisms for retaining editorial authority, critically assessing their promise and structural limitations.

Significance. If the described authority reconfigurations accurately capture ongoing dynamics in AI-mediated journalism, the paper's theoretical framework would be significant for advancing discussions in FAccT and related fields by linking AI adoption to shifts in power and responsibility. It provides a structured way to analyze how fairness, accountability, and transparency concerns arise and persist. The critical assessment of participatory approaches adds value by highlighting both opportunities and risks of tokenism. As a synthesis of existing literature rather than a source of new data or machine-checked proofs, its primary strength lies in conceptual integration, which could inform future empirical studies and policy in newsrooms.

major comments (2)
  1. [Description of internal authority migration] The claim that editorial judgment is 'progressively deferred to large language models (LLMs) embedded within newsroom workflows' through interactional, cognitive, and organizational mechanisms is central to the paper's argument about obscured responsibility and weakened agency. However, this is based on interpretivist synthesis of cited literature without new empirical data collection or systematic review of current practices. This leaves open whether these mechanisms are actively reconfiguring authority in practice or represent aspirational or selective accounts from the sources, directly impacting the validity of the risk assessments for accountability and transparency.
  2. [Analysis of risks to fairness, accountability, and transparency] The assertion that unaddressed reconfigurations 'risk rendering fairness hard to maintain, accountability difficult to assign and transparency performative' is load-bearing for the contribution. While logically derived from the two migrations, the absence of observable indicators or falsifiable predictions means the claims rest on the premise that the selected literature comprehensively captures real-world dynamics. Alternative interpretations, such as AI tools enhancing rather than deferring authority in some contexts, are not sufficiently addressed to strengthen the conclusions.
minor comments (2)
  1. [Conceptual framework] The definition of editorial authority could be more explicitly linked to specific examples from the reviewed literature to aid reader understanding of how decision rights, epistemic warrant, and responsibility interact in AI contexts.
  2. [Participatory approaches section] The discussion of participatory approaches would benefit from clearer distinctions between meaningful redistribution of authority and tokenistic practices, perhaps with references to concrete case studies if available in the cited works.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their insightful comments on our narrative review. We address each major comment point by point below, agreeing where revisions can strengthen the manuscript while defending the interpretivist approach appropriate to a synthesis of existing literature.

read point-by-point responses
  1. Referee: The claim that editorial judgment is 'progressively deferred to large language models (LLMs) embedded within newsroom workflows' through interactional, cognitive, and organizational mechanisms is central to the paper's argument about obscured responsibility and weakened agency. However, this is based on interpretivist synthesis of cited literature without new empirical data collection or systematic review of current practices. This leaves open whether these mechanisms are actively reconfiguring authority in practice or represent aspirational or selective accounts from the sources, directly impacting the validity of the risk assessments for accountability and transparency.

    Authors: We appreciate the referee's emphasis on the evidential foundation of our central claim. As a critical narrative review, the paper deliberately employs an interpretivist synthesis of journalism studies, HCI, and FAccT literature to develop a conceptual framework rather than presenting original empirical data or conducting a systematic review. This method allows us to identify recurring patterns across sources that describe authority shifts. To address concerns about selectivity or aspirational accounts, we will revise the manuscript by adding an explicit subsection on the narrative review methodology, including source selection criteria and limitations, and by engaging more directly with literature that describes AI as augmenting rather than deferring editorial judgment in specific contexts (e.g., data verification tools). These changes will clarify the scope of our risk assessments without requiring new data collection, which falls outside the paper's scope as a synthesis. revision: partial

  2. Referee: The assertion that unaddressed reconfigurations 'risk rendering fairness hard to maintain, accountability difficult to assign and transparency performative' is load-bearing for the contribution. While logically derived from the two migrations, the absence of observable indicators or falsifiable predictions means the claims rest on the premise that the selected literature comprehensively captures real-world dynamics. Alternative interpretations, such as AI tools enhancing rather than deferring authority in some contexts, are not sufficiently addressed to strengthen the conclusions.

    Authors: We agree that the risk claims are central and benefit from greater engagement with alternatives and potential indicators. The assertions follow logically from the authority reconfigurations identified in the synthesized literature, framed as risks rather than certainties. In revision, we will expand the discussion of risks to include examples of observable indicators drawn from the cited studies (such as shifts in decision attribution in newsroom ethnographies) and will add a balanced treatment of alternative interpretations where AI integration could enhance epistemic warrant or accountability in delimited domains. This will make the argument more robust and address the concern that alternatives are insufficiently considered, while preserving the paper's focus on conceptual integration. revision: partial

Circularity Check

0 steps flagged

Narrative review exhibits no circularity; synthesis draws from external literature without self-referential reduction.

full rationale

This is a critical narrative review paper that constructs a theoretical framework by synthesizing cited scholarship in journalism studies, HCI, and FAccT. No equations, fitted parameters, predictions, or first-principles derivations appear. The two authority reconfigurations are described via interpretivist reading of external sources rather than being defined in terms of the paper's own outputs or reduced to self-citations that bear the central load. The claims about risks to fairness, accountability, and transparency follow from the reviewed literature, not from any internal tautology or renaming of inputs as results. The paper is self-contained against external benchmarks as a synthesis exercise.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the appropriateness of interpretivist methods for analyzing authority and on the existence of the described interactional and organizational mechanisms in current newsrooms; no free parameters or new entities are introduced.

axioms (1)
  • domain assumption Interpretivist approaches provide a valid lens for conceptualizing editorial authority reconfigurations in AI-mediated journalism.
    Explicitly stated as the foundation for the critical narrative review in the abstract.

pith-pipeline@v0.9.0 · 5558 in / 1209 out tokens · 33334 ms · 2026-05-08T14:01:08.683554+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

267 extracted references · 163 canonical work pages · 3 internal anchors

  1. [1]

    Lama Ahmad, Sandhini Agarwal, Michael Lampe, and Pamela Mishkin. 2025. OpenAI’s Approach to External Red Teaming for AI Models and Systems. (January 2025). arXiv:2503.16431

  2. [2]

    Leah Hope Ajmani, Nuredin Ali Abdelkadir, and Stevie Chancellor. 2025. Secondary Stakeholders in AI: Fighting for, Brokering, and Navigating Agency. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Association for Computing Machinery, New York, NY, USA, 1095–1107. doi:10.1145/3715275.3732071

  3. [3]

    Canfer Akbulut, Laura Weidinger, Arianna Manzini, Iason Gabriel, and Verena Rieser. 2024. All too human? Mapping and mitigating the risk from anthropomorphic AI. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Vol. 7. 13–26

  4. [4]

    Iban Albizu-Rivas, Sonia Parratt-Fernández, and Montse Mera-Fernández. 2024. Artificial Intelligence in Slow Journalism: Journalists’ Uses, Perceptions, and Attitudes. Journalism and Media 5, 4 (Dec. 2024), 1836–1850. doi:10.3390/journalmedia5040111

  5. [5]

    Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Associat...

  6. [6]

    Laura Amigo and Colin Porlezza. 2025. “Journalism Will Always Need Journalists.” The Perceived Impact of AI on Journalism Authority in Switzerland. Journalism Practice 0, 0 (2025), 1–19. doi:10.1080/17512786.2025.2487534

  7. [7]

    Peter N. Amponsah and Atianashie Miracle Atianashie. 2024. Navigating the New Frontier: A Comprehensive Review of AI in Journalism. Advances in Journalism and Communication 12, 1 (2024), 1–17. doi:10.4236/ajc.2024.121001

  8. [8]

    Mike Ananny. 2018. Networked Press Freedom: Creating Infrastructures for a Public Right to Hear. The MIT Press. doi:10.7551/mitpress/9516.001.0001

  9. [9]

    Mike Ananny and Kate Crawford. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. new media & society 20, 3 (2018), 973–989

  10. [10]

    Christopher W Anderson. 2013. Towards a sociology of computational and algorithmic journalism. New media & society 15, 7 (2013), 1005–1021. doi:10.1177/1461444812465137

  11. [11]

    Aneesh Aneesh. 2006. Virtual migration: The programming of globalization. Duke University Press

  12. [12]

    Ravinithesh Annapureddy, Alessandro Fornaroli, and Daniel Gatica-Perez. 2025. Generative AI Literacy: Twelve Defining Competencies. Digit. Gov.: Res. Pract. 6, 1, Article 13 (Feb. 2025), 21 pages. doi:10.1145/3685680

  13. [13]

    Sherry R. Arnstein. 1969. A Ladder Of Citizen Participation. Journal of the American Institute of Planners 35, 4 (July 1969), 216–224. doi:10.1080/01944366908977225

  14. [14]

    Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022. Constitutional ai: Harmlessness from ai feedback. arXiv preprint (2022). doi:10.48550/arXiv.2212.08073

  15. [15]

    Agathe Balayn, Mireia Yurrita, Fanny Rancourt, Fabio Casati, and Ujwal Gadiraju. 2025. Unpacking Trust Dynamics in the LLM Supply Chain: An Empirical Exploration to Foster Trustworthy LLM Production & Use. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25). Association for Computing Machinery, New York, NY, USA, Artic...

  16. [16]

    Liam Bannon, Jeffrey Bardzell, and Susanne Bødker. 2018. Reimagining participatory design. Interactions 26, 1 (Dec. 2018), 26–32. doi:10.1145/3292015

  17. [17]

    Roland Barthes. 1997. The Death of the Author. In Twentieth-Century Literary Theory: A Reader, K. M. Newton (Ed.). Macmillan Education UK, London, 120–123. doi:10.1007/978-1-349-25934-2_25

  18. [18]

    Kim Björn Becker. 2023. New game, new rules: An investigation into editorial guidelines for dealing with artificial intelligence in the newsroom. Journalism Research 6, 2 (2023), 133–152. doi:10.1453/2569-152X-22023-13404-en

  19. [19]

    Kim Björn Becker, Felix M. Simon, and Christopher Crum. 2025. Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations. Digital Journalism (Jan. 2025), 1–21. doi:10.1080/21670811.2024.2431519

  20. [20]

    Charlie Beckett. 2019. New powers, new responsibilities. A global survey of journalism and artificial intelligence (2019). https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/

  21. [21]

    Charlie Beckett and Mira Yaseen. 2023. Generating Change: A Global Survey of What News Organisations Are Doing with Artificial Intelligence. Technical Report. JournalismAI. https://www.journalismai.info/research/2023-generating-change

  22. [22]

    Emily J. Bell, Taylor Owen, Peter D. Brown, Codi Hauka, and Nushin Rashidian. 2017. The Platform Press: How Silicon Valley Reengineered Journalism. Technical Report. Tow Center for Digital Journalism, Columbia Journalism School. doi:10.7916/D8R216ZZ

  23. [23]

    Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. doi:10.114...

  24. [24]

    Dan Bennett, Oussama Metatla, Anne Roudaut, and Elisa D Mekler. 2023. How does HCI understand human agency and autonomy?. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18

  25. [25]

    Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. 2022. Power to the People? Opportunities and Challenges for Participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Arlington, VA, USA) (EAAMO ’22). Association for Com...

  26. [26]

    Balázs Bodó. 2019. Selling News to Audiences – A Qualitative Inquiry into the Emerging Logics of Algorithmic News Personalization in European Quality News Media. Digital Journalism 7, 8 (Sept. 2019), 1054–1075. doi:10.1080/21670811.2019.1624185

  27. [27]

    Federico Bomba and Antonella De Angeli. 2025. Agency and authorship in AI art: Transformational practices for epistemic troubles. International Journal of Human-Computer Studies 205 (Nov. 2025), 103652. doi:10.1016/j.ijhcs.2025.103652

  28. [28]

    Rishi Bommasani, Drew Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney Arx, Michael Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Davis, Dora Demszky, and Percy Liang. 2021. On the Opportunities and Risks of Foundation...

  29. [29]

    Tone Bratteteig and Ina Wagner. 2012. Disentangling power and decision-making in participatory design. In Proceedings of the 12th Participatory Design Conference: Research Papers - Volume 1 (PDC ’12). Association for Computing Machinery, New York, NY, USA, 41–50. doi:10.1145/2347635.2347642

  30. [30]

    Warren Breed. 1955. Social Control in the Newsroom: A Functional Analysis. Social Forces 33, 4 (May 1955), 326–335. doi:10.2307/2573002

  31. [31]

    Matt Carlson. 2015. The Robotic Reporter: Automated journalism and the redefinition of labor, compositional forms, and journalistic authority. Digital Journalism 3, 3 (May 2015), 416–431. doi:10.1080/21670811.2014.976412

  32. [32]

    Matt Carlson. 2017. Journalistic Authority: Legitimating News in the Digital Era. Columbia University Press, New York. doi:10.7312/carl17444

  33. [33]

    Matt Carlson. 2018. Automating Judgment? Algorithmic Judgment, News Knowledge, and Journalistic Professionalism. New Media & Society 20, 5 (2018), 1755–1772. doi:10.1177/1461444817706684

  34. [34]

    Matt Carlson. 2020. Journalistic Epistemology and Digital News Circulation: Infrastructure, Circulation Practices, and Epistemic Contests. New Media & Society 22, 2 (2020), 230–247. doi:10.1177/1461444819856921

  35. [35]

    Matt Carlson and Seth C Lewis. 2015. Boundaries of journalism: professionalism, practices and participation. Routledge. doi:10.4324/9781315727684

  36. [36]

    Irina Carnat. 2024. Human, all too human: accounting for automation bias in generative large language models. International Data Privacy Law 14, 4 (2024), 299–314. doi:10.1093/idpl/ipae018

  37. [37]

    Angèle Christin. 2020. Metrics at Work: Journalism and the Contested Meaning of Algorithms. Princeton University Press. doi:10.23943/princeton/9780691175232.001.0001

  38. [38]

    Sharon Chu, Marcin Karcz, Amal Hashky, Neha Rani, Theodora Chaspari, Winfred Jr, and Eric Ragan. 2025. User judgment of an AI model is biased by its description: A study in a job interview training context. International Journal of Human-Computer Studies 208 (11 2025), 103691. doi:10.1016/j.ijhcs.2025.103691

  39. [39]

    Sherwin Chua and Oscar Westlund. 2022. Platform Configuration: A Longitudinal Study and Conceptualization of a Legacy News Publisher’s Platform-Related Innovation Practices. Online Media and Global Communication 1, 1 (2022), 1–18. doi:10.1515/omgc-2022-0003

  40. [40]

    Mark Coddington. 2015. Clarifying Journalism’s Quantitative Turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting. Digital Journalism 3, 3 (May 2015), 331–348. doi:10.1080/21670811.2014.976400

  41. [41]

    Bill Cooke and Uma Kothari (Eds.). 2001. Participation: the new tyranny? Zed Books, London; New York

  42. [42]

    Hannes Cools and Nicholas Diakopoulos. 2023. Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms. doi:10.13140/RG.2.2.29287.25768

  43. [43]

    Hannes Cools and Nicholas Diakopoulos. 2024. Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities. Journalism Practice (2024), 1–19. doi:10.1080/17512786.2024.2394558

  44. [44]

    Hannes Cools and Michael Koliska. 2024. News Automation and Algorithmic Transparency in the Newsroom: The Case of the Washington Post. Journalism Studies 25, 6 (April 2024), 662–680. doi:10.1080/1461670X.2024.2326636

  45. [45]

    A. Feder Cooper, Emanuel Moss, Benjamin Laufer, and Helen Nissenbaum. 2022. Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York,...

  46. [46]

    Ned Cooper, Tiffanie Horne, Gillian R Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, and Lauren Wilcox. 2022. A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing...

  47. [47]

    Ned Cooper and Alexandra Zafiroglu. 2024. From Fitting Participation to Forging Relationships: The Art of Participatory ML. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 746, 9 pages. doi:10.1145/3613904.3642775

  48. [48]

    Eric Corbett, Remi Denton, and Sheena Erete. 2023. Power and Public Participation in AI. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Boston, MA, USA) (EAAMO ’23). Association for Computing Machinery, New York, NY, USA, Article 8, 13 pages. doi:10.1145/3617694.3623228

  49. [49]

    Stephen Cushion, Justin Lewis, and Robert Callaghan. 2017. Data Journalism, Impartiality And Statistical Claims. Journalism Practice 11, 10 (Nov. 2017), 1198–1215. doi:10.1080/17512786.2016.1256789

  50. [50]

    Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. 2021. Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir". doi:10.48550/arXiv.2111.01122 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Sydney, Australia

  51. [51]

    Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. 2023. The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Boston, MA, USA) (EAAMO ’23). Association for Computing Machinery, New York, NY, USA, Ar...

  52. [52]

    Mark Deuze. 2008. The changing context of news work: Liquid journalism for a monitorial citizenry. International journal of Communication 2 (2008), 18

  53. [53]

    Nicholas Diakopoulos. 2014. Algorithmic accountability reporting: On the investigation of black boxes. (2014). doi:10.7916/D8ZK5TW2

  54. [54]

    Nicholas Diakopoulos. 2015. Algorithmic Accountability: Journalistic Investigation of Computational Power Structures. Digital Journalism 3, 3 (2015), 398–415. doi:10.1080/21670811.2014.976411

  55. [55]

    Nicholas Diakopoulos. 2019. Automating the news: how algorithms are rewriting the media. Harvard University Press, Cambridge, Massachusetts

  56. [56]

    Nicholas Diakopoulos. 2020. Computational news discovery: Towards design considerations for editorial orientation algorithms in journalism. Digital journalism 8, 7 (2020), 945–967. doi:10.1080/21670811.2020.1736946

  57. [57]

    Nicholas Diakopoulos, Hannes Cools, Charlotte Li, Natali Helberger, Ernest Kung, Aimee Rinehart, and Lisa Gibbs. 2024. Generative AI in Journalism: The Evolution of Newswork and Ethics in a Generative Information Ecosystem. (2024). doi:10.13140/RG.2.2.31540.05765

  58. [58]

    Laurence Dierickx. 2023. News automation, materialities, and the remix of an editorial process. Journalism 24, 3 (2023), 654–670. doi:10.1177/14648849211023872

  59. [59]

    Tomás Dodds, Astrid Vandendaele, Felix M Simon, Natali Helberger, Valeria Resendez, and Wang Ngai Yeung. 2024. The Impact of Knowledge Silos on Responsible AI Practices in Journalism. (2024). doi:10.48550/arXiv.2410.01138

  60. [60]

    Tomás Dodds, Wang Ngai Yeung, Claudia Mellado, and Mathias-Felipe de Lima-Santos. 2025. On Controlled Change: Generative AI’s Impact on Professional Authority in Journalism. doi:10.48550/arXiv.2510.19792

  61. [61]

    Konstantin Nicholas Dörr and Katharina Hollnbuchner. 2017. Ethical challenges of algorithmic journalism. Digital Journalism 5, 4 (2017), 404–419

  62. [62]

    Giovanni Dosi, Luigi Marengo, and Maria Enrica Virgillito. 2021. Hierarchies, Knowledge, and Power Inside Organizations. Strategy Science 6, 4 (Dec. 2021), 371–384. doi:10.1287/stsc.2021.0136

  63. [63]

    Max van Drunen and Denise Fechner. 2023. Safeguarding Editorial Independence in an Automated Media System: The Relationship Between Law and Journalistic Perspectives. Digital Journalism 11, 9 (2023), 1723–1750. doi:10.1080/21670811.2022.2108868

  64. [64]

    Madeleine Elish. 2019. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. 40-60 pages. doi:10.17351/ests2019.260

  65. [65]

    Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "I Always Assumed That I Wasn’t Really That Close to [Her]": Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 153–...

  66. [66]

    Ausserhofer et al. 2020. The datafication of data journalism scholarship: Focal points, methods, and research propositions for the investigation of data-intensive newswork. Journalism 21, 7 (2020), 950–973. doi:10.1177/1464884917700667

  67. [67]

    Hansen et al. 2023. Initial white paper on the social, economic, and political impact of media AI technologies (D2.2). https://www.ai4media.eu/reports/initial-white-paper-on-the-social-economic-and-political-impact-of-media-ai-technologies-2/

  68. [68]

    Henry Farrell and Marion Fourcade. 2023. The Moral Economy of High-Tech Modernism. Daedalus 152, 1 (02 2023), 225–235. doi:10.1162/daed_a_01982

  69. [69]

    Michael Feffer, Michael Skirpan, Zachary Lipton, and Hoda Heidari. 2023. From Preference Elicitation to Participatory ML: A Critical Survey & Guidelines for Future Research. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. ACM, Montréal QC Canada, 38–48. doi:10.1145/3600211.3604661

  70. [70]

    Gerhard Fischer. 2011. Understanding, fostering, and supporting cultures of participation. Interactions 18, 3 (May 2011), 42–53. doi:10.1145/1962438.1962450

  71. [71]

    Tomi Fischer. 2025. The Efficiency Paradox: How AI Affects Journalist Workflows and Organizational Dynamics. (2025). https://aaltodoc.aalto.fi/items/5172677c-102a-4ed9-925b-96f1f2fd96cb

  72. [72]

    Michel Foucault. 1977. What Is an Author? In Language, Counter-Memory, Practice: Selected Essays and Interviews, Donald F. Bouchard and Sherry Simon (Eds.). Cornell University Press, Ithaca, NY, 113–138. Lecture originally given at the Société française de philosophie, 22 February 1969

  73. [73]

    Michel Foucault. 1980. Power/knowledge: Selected interviews and other writings, 1972-1977. Pantheon (1980)

  74. [74]

    Michel Foucault. 2009. Truth and Power. In Media Studies: A Reader, Sue Thornham, Caroline Bassett, and Paul Marris (Eds.). Edinburgh University Press, 63–75. https://www.degruyterbrill.com/document/doi/10.1515/9781474473231-009/html

  75. [75]

    Batya Friedman, Peter Kahn, and Alan Borning. 2003. Value Sensitive Design: Theory and Methods. (June 2003). https://research.cs.vt.edu/ns/cs5724papers/6.theoriesofuse.cwaandvsd.friedman.vsd.pdf

  76. [77]

    Iason Gabriel. 2020. Artificial Intelligence, Values, and Alignment. Minds and Machines 30, 3 (Sept. 2020), 411–437. doi:10.1007/s11023-020-09539-2

  77. [78]

    Herbert J. Gans. 2004. Deciding what’s news: a study of CBS evening news, NBC nightly news, Newsweek, and Time (25th anniversary edition ed.). Northwestern University Press, Evanston, Ill

  78. [79]

    José Alberto García Avilés, Bienvenido León, Karen Sanders, and Jackie Harrison. 2004. Journalists at digital television newsrooms in Britain and Spain: workflow and multi-skilling in a competitive environment. Journalism Studies 5, 1 (Feb. 2004), 87–100. doi:10.1080/1461670032000174765

  79. [80]

    Tarleton Gillespie. 2014. The relevance of algorithms. Media technologies: Essays on communication, materiality, and society 167, 2014 (2014), 167. doi:10.7551/mitpress/9042.003.0013

  80. [81]

    Mitchell L. Gordon, Michelle S. Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S. Bernstein. 2022. Jury Learning: Integrating Dissenting Voices into Machine Learning Models. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans LA USA, 1–19. doi:10.1145/3491102.3502004

Showing first 80 references.