Pith · machine review for the scientific record

arxiv: 2605.05584 · v1 · submitted 2026-05-07 · 💻 cs.SE · cs.CY


Operationalizing Ethics for AI Agents: How Developers Encode Values into Repository Context Files


Pith reviewed 2026-05-08 09:18 UTC · model grok-4.3

classification: 💻 cs.SE · cs.CY
keywords: AI agents · ethics · software engineering · repository context files · governance · operationalizing values · AGENTS.md · developer practices

The pith

Developers are encoding ethical rules for AI coding agents into repository files like AGENTS.md to create actionable guidance in real workflows.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates how developers translate abstract ethical principles into concrete, natural-language instructions placed in repository-level context files for AI agents. Through a preliminary look at current practices, it identifies guidance already appearing on topics such as fairness, accessibility, sustainability, tone, and privacy. A reader would care because this approach moves ethics discussion from theory into the daily files that shape how agents complete tasks. The work positions these files as an emerging developer-led governance mechanism and sketches questions for further study on variation, negotiation, and adherence.

Core claim

Developers are already embedding behavioral rules related to ethics and values into repository context files for AI coding agents. These files act as a governance layer that converts high-level principles into situated directives written in plain language. The preliminary investigation finds examples covering fairness, accessibility, sustainability, tone, and privacy, showing that this operationalization is happening inside ordinary development workflows rather than in separate ethics documents.

What carries the argument

Repository context files such as AGENTS.md, which developers use to supply situated natural-language directives that translate abstract ethical principles into instructions intended to shape AI agent behavior during software tasks.
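To make the object of study concrete: a repository context file of this kind interleaves ordinary project instructions with value-laden directives. The fragment below is an invented illustration of that pattern; every rule in it is hypothetical, not an example drawn from the paper's investigation.

```markdown
# AGENTS.md (illustrative sketch; all rules hypothetical)

## Build
- Run the test suite before committing any generated change.

## Values
- Accessibility: generated UI components must include alt text and meet WCAG AA contrast.
- Privacy: never write personal data or credentials into logs, fixtures, or examples.
- Tone: keep generated comments and commit messages neutral and respectful.
- Sustainability: prefer the smallest dependency that does the job.
```

The point of the illustration is that ethical guidance sits in the same version-controlled file, and the same plain-language register, as build instructions.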

If this is right

  • Encoded values will differ across developer communities and project types.
  • When multiple contributors edit these files, distinct governance dynamics will arise around rule negotiation.
  • The practical impact will depend on the extent to which agents follow the specified constraints.
  • Studying this practice will ground abstract AI governance discussions in concrete software engineering activity.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Widespread adoption could let teams manage AI ethics through the same version-controlled files used for code rather than separate policy documents.
  • This method might surface conflicts between different values more quickly when contributors disagree on file content.
  • Future work could test whether agents trained on code alone perform differently from agents given explicit natural-language ethical rules in these files.

Load-bearing premise

That the preliminary investigation reflects actual developer practices, and that the instructions placed in these files meaningfully influence how AI agents act.

What would settle it

A broad scan of public repositories that finds no AGENTS.md or equivalent files containing ethical or value-based guidance, or controlled tests showing AI agents complete the same tasks without regard to the constraints written in such files.
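A minimal sketch of what such a repository scan could look like, under stated assumptions: the candidate file names and the keyword lists per value category are illustrative choices, not taken from the paper, and real studies would need qualitative coding rather than keyword matching.

```python
# Hypothetical sketch of a repository scan for value-laden guidance in
# agent context files. File names and keyword lists are assumptions.
from pathlib import Path

CONTEXT_FILE_NAMES = {"AGENTS.md", "CLAUDE.md", ".cursorrules"}  # assumed candidates

VALUE_KEYWORDS = {
    "fairness": ["fair", "bias", "discriminat"],
    "accessibility": ["accessib", "screen reader", "wcag", "alt text"],
    "sustainability": ["sustainab", "energy", "carbon"],
    "tone": ["tone", "respectful", "polite"],
    "privacy": ["privacy", "personal data", "pii", "gdpr"],
}

def classify_guidance(text: str) -> dict[str, list[str]]:
    """Map each value category to the lines of `text` that mention it."""
    hits: dict[str, list[str]] = {}
    for line in text.splitlines():
        lowered = line.lower()
        for category, keywords in VALUE_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                hits.setdefault(category, []).append(line.strip())
    return hits

def scan_repository(root: Path) -> dict[str, dict[str, list[str]]]:
    """Walk a checked-out repository and classify guidance per context file."""
    results: dict[str, dict[str, list[str]]] = {}
    for path in root.rglob("*"):
        if path.name in CONTEXT_FILE_NAMES and path.is_file():
            hits = classify_guidance(path.read_text(errors="ignore"))
            if hits:
                results[str(path.relative_to(root))] = hits
    return results
```

Run at scale over public repositories, an empty result set would count against the paper's claim; a non-trivial hit rate would motivate the closer qualitative coding the referee asks for.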

Original abstract

As AI coding agents become embedded in software development workflows, developers are beginning to operationalize ethical principles by encoding behavioral rules into repository-level context files for AI agents, such as AGENTS.md files. Rather than examining the ethics of AI agents in the abstract, this vision paper investigates how ethics and values are already being translated for AI agents into actionable instructions that shape agent behavior. Through a preliminary investigation, we find that developers are already embedding guidance related to fairness, accessibility, sustainability, tone, and privacy. These artifacts function as a developer-authored governance layer, translating abstract principles into situated, natural-language directives within development workflows. We outline a research agenda for studying this emerging practice, including how encoded values vary across communities, what governance dynamics emerge when multiple contributors negotiate these files, and whether agents reliably adhere to the constraints specified. Understanding how ethics and values are operationalized for AI agents is essential to ground AI governance in modern software engineering practice.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper claims that as AI coding agents integrate into development workflows, developers are operationalizing ethics by encoding rules into repository context files (e.g., AGENTS.md). A preliminary investigation reveals embedded guidance on fairness, accessibility, sustainability, tone, and privacy; these files are positioned as a developer-authored governance layer translating abstract principles into situated natural-language directives. The manuscript outlines a research agenda on cross-community variation, multi-contributor negotiation, and agent adherence.

Significance. If the preliminary observations hold and prove representative, the work could ground AI governance discussions in observable software-engineering artifacts rather than top-down abstractions, highlighting an emerging bottom-up mechanism for value alignment and motivating empirical studies of negotiation and compliance in open-source contexts.

major comments (1)
  1. [Abstract] The claim that 'developers are already embedding guidance related to fairness, accessibility, sustainability, tone, and privacy' rests on an unspecified preliminary investigation. No details are supplied on repository sampling criteria, search strategy, number of files examined, or the process for identifying versus inferring ethical content. This methodological opacity is load-bearing for the 'developer-authored governance layer' framing, as the findings cannot be assessed for representativeness or confirmation bias without these elements.
minor comments (1)
  1. [Abstract] The manuscript is described as a 'vision paper' yet presents empirical-sounding observations; clarifying the boundary between the preliminary findings and the agenda-setting portions would improve reader expectations.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive review and for recognizing the potential of this vision paper to ground AI governance discussions in observable software-engineering artifacts. We address the single major comment below.

Point-by-point responses
  1. Referee: [Abstract] The claim that 'developers are already embedding guidance related to fairness, accessibility, sustainability, tone, and privacy' rests on an unspecified preliminary investigation. No details are supplied on repository sampling criteria, search strategy, number of files examined, or the process for identifying versus inferring ethical content. This methodological opacity is load-bearing for the 'developer-authored governance layer' framing, as the findings cannot be assessed for representativeness or confirmation bias without these elements.

    Authors: We agree that the current description of the preliminary investigation is insufficiently detailed. As a vision paper, the investigation is intended to be illustrative rather than a systematic empirical study, and the core contribution is the framing of repository context files as an emerging developer-authored governance layer together with the proposed research agenda. Nevertheless, the lack of methodological transparency makes it difficult for readers to evaluate the observations. In the revised manuscript we will add a new subsection (likely under a revised 'Preliminary Investigation' heading) that explicitly describes: (1) the repository sampling criteria and search strategy used to locate AGENTS.md and similar files, (2) the approximate number of files examined, and (3) the process by which ethical guidance was identified versus inferred. We will also clarify that these observations are exploratory and not claimed to be representative, thereby reducing the risk of confirmation bias in the framing.

    Revision: yes

Circularity Check

0 steps flagged

No circularity: observational vision paper with no derivations or self-referential reductions

Full rationale

The paper is a vision piece reporting a preliminary investigation into existing developer practices around AGENTS.md files and outlining a future research agenda. It contains no equations, no fitted parameters, no uniqueness theorems, and no derivation chain that could reduce to its own inputs. Claims rest on direct observation of artifacts rather than any self-definitional or self-citation loop. The absence of methodological details noted by the skeptic is a transparency issue, not a circularity issue. This is the expected non-finding for an agenda-setting paper without quantitative modeling.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper is a vision paper whose central claim rests on the assumption that the preliminary investigation reflects real practices; no free parameters, invented entities, or formal axioms are introduced.

axioms (1)
  • Domain assumption: Developers are beginning to operationalize ethical principles by encoding behavioral rules into repository-level context files for AI agents.
    This premise underpins the entire investigation and vision.

pith-pipeline@v0.9.0 · 5461 in / 1181 out tokens · 40408 ms · 2026-05-08T09:18:14.786954+00:00 · methodology


Reference graph

Works this paper leans on

28 extracted references · 16 canonical work pages

  1. [1]

    Aldeida Aleti, Baishakhi Ray, Rashina Hoda, and Simin Chen. 2026. Trustworthy AI Software Engineers. CoRR abs/2602.06310 (2026). arXiv:2602.06310 doi:10.48550/ARXIV.2602.06310

  2. [2]

    Razieh Alidoosti, Patricia Lago, Maryam Razavian, and Antony Tang. 2022. Ethics in Software Engineering: A Systematic Literature Review. Technical Report. Vrije Universiteit Amsterdam. Intermediate version of later scientific publication

  3. [3]

    David Anzola, Pete Barbrook-Johnson, and Nigel Gilbert. 2022. The Ethics of Agent-Based Social Simulation. J. Artif. Soc. Soc. Simul. 25, 4 (2022). doi:10.18564/JASSS.4907

  4. [4]

    Worawalan Chatlatanagulchai, Hao Li, Yutaro Kashiwa, Brittany Reid, Kundjanasith Thonglek, Pattara Leelaprute, Arnon Rungsawang, Bundit Manaskasemsak, Bram Adams, Ahmed E. Hassan, and Hajimu Iida. 2025. Agent READMEs: An Empirical Study of Context Files for Agentic Coding. CoRR abs/2511.12884 (2025). arXiv:2511.12884 doi:10.48550/ARXIV.2511.12884

  5. [5]

    Marc Cheong and Simon Coghlan. 2025. Transition to digital ethics. Chapman & Hall/CRC, Philadelphia, PA

  6. [6]

    Batya Friedman and David G Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press

  7. [7]

    Matthias Galster, Seyedmoein Mohsenimofidi, Jai Lal Lulla, Muhammad Auwal Abubakar, Christoph Treude, and Sebastian Baltes. 2026. Configuring Agentic AI Coding Tools: An Exploratory Study. In 2026 3rd IEEE/ACM International Conference on AI-powered Software (AIware). IEEE

  8. [8]

    Haoyu Gao, Mansooreh Zahedi, Christoph Treude, Sarita Rosenstock, and Marc Cheong. 2024. Documenting Ethical Considerations in Open Source AI Models. In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ESEM 2024, Barcelona, Spain, October 24-25, 2024, Xavier Franch, Maya Daneva, Silverio Martín...

  9. [9]

    Jan Gogoll, Niina Zuber, Severin Kacianka, Timo Greger, Alexander Pretschner, and Julian Nida-Rümelin. 2021. Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation. Philosophy & Technology 34, 4 (2021), 1085–1108. doi:10.1007/s13347-021-00451-w

  10. [10]

    Donald Gotterbarn, Keith W. Miller, and Simon Rogerson. 1997. Software Engineering Code of Ethics. Commun. ACM 40, 11 (1997), 110–118. doi:10.1145/265684.265699

  11. [11]

    Thilo Hagendorff. 2020. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 30, 1 (2020), 99–120. doi:10.1007/S11023-020-09517-8

  12. [12]

    Erika Halme, Mamia Agbese, Jani Antikainen, Hanna-Kaisa Alanen, Marianna Jantunen, Arif Ali Khan, Kai-Kristian Kemell, Ville Vakkuri, and Pekka Abrahamsson. 2022. Ethical User Stories: Industrial Study. In Joint Proceedings of REFSQ-2022 Workshops, Doctoral Symposium, and Posters & Tools Track co-located with the 28th International Conference on Requir...

  13. [13]

    Rashina Hoda. 2025. Toward Agentic Software Engineering Beyond Code: Framing Vision, Values, and Vocabulary. CoRR abs/2510.19692 (2025). arXiv:2510.19692 doi:10.48550/ARXIV.2510.19692

  14. [14]

    Everaldo Silva Júnior, Lina Marsso, Ricardo Caldas, Marsha Chechik, and Genaína Nunes Rodrigues. 2026. Operationalizing Human Values in the Requirements Engineering Process of Ethics-Aware Autonomous Systems. CoRR abs/2602.09921 (2026). arXiv:2602.09921 doi:10.48550/ARXIV.2602.09921

  15. [15]

    Stefan Kapferer, Mirko Stocker, and Olaf Zimmermann. 2024. Towards responsible software engineering: combining Value-based processes, Agile practices, and green metering. In IEEE International Symposium on Technology and Society, ISTAS 2024, Puebla, Mexico, September 18-20, 2024. IEEE, 1–4. doi:10.1109/ISTAS61960.2024.10732097

  16. [16]

    Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Didar Zowghi, and Aurelie Jacquet. 2024. Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering. ACM Comput. Surv. 56, 7 (2024), 173:1–173:35. doi:10.1145/3626234

  17. [17]

    Jai Lal Lulla, Seyedmoein Mohsenimofidi, Matthias Galster, Jie M. Zhang, Sebastian Baltes, and Christoph Treude. 2026. On the Impact of AGENTS.md Files on the Efficiency of AI Coding Agents. CoRR abs/2601.20404 (2026). arXiv:2601.20404 doi:10.48550/ARXIV.2601.20404

  18. [18]

    Yotam Lurie and Shlomo Mark. 2016. Professional Ethics of Software Engineers: An Ethical Framework. Sci. Eng. Ethics 22, 2 (2016), 417–434. doi:10.1007/S11948-015-9665-X

  19. [19]

    Steven Mascaro, Kevin B Korb, Ann E Nicholson, and Owen Woodberry. 2010. Evolving ethics. Imprint Academic, Exeter, England

  20. [20]

    Seyedmoein Mohsenimofidi, Matthias Galster, Christoph Treude, and Sebastian Baltes. 2026. Context Engineering for AI Agents in Open-Source Software. In Proceedings of the 23rd IEEE/ACM International Conference on Mining Software Repositories (MSR 2026)

  21. [21]

    Mojtaba Shahin, Waqar Hussain, Arif Nurwidyantoro, Harsha Perera, Rifat Ara Shams, John C. Grundy, and Jon Whittle. 2022. Operationalizing Human Values in Software Engineering: A Survey. IEEE Access 10 (2022), 75269–75295. doi:10.1109/ACCESS.2022.3190975

  22. [22]

    Sarah Spiekermann. 2021. What to Expect From IEEE 7000: The First Standard for Building Ethical Systems. IEEE Technol. Soc. Mag. 40, 3 (2021), 99–100. doi:10.1109/MTS.2021.3104386

  23. [23]

    Sarah Spiekermann. 2023. Value-Based Engineering: A Guide to Building Ethical Technology for Humanity. De Gruyter, Berlin, Boston. doi:10.1515/9783110793383

  24. [24]

    Parastou Tourani, Bram Adams, and Alexander Serebrenik. 2017. Code of Conduct in Open Source Projects. In IEEE 24th International Conference on Software Analysis, Evolution and Reengineering, SANER 2017, Klagenfurt, Austria, February 20-24, 2017, Martin Pinzger, Gabriele Bavota, and Andrian Marcus (Eds.). IEEE Computer Society, 24–33. doi:10.1109/SANER....

  25. [25]

    Christoph Treude. 2026. Accountable Agents in Software Engineering: An Analysis of Terms of Service and a Research Roadmap. In 2026 3rd IEEE/ACM International Conference on AI-powered Software (AIware). IEEE

  26. [26]

    Christoph Treude and Marco Aurélio Gerosa. 2025. How Developers Interact with AI: A Taxonomy of Human-AI Collaboration in Software Engineering. In IEEE/ACM Second International Conference on AI Foundation Models and Software Engineering, Forge@ICSE 2025, Ottawa, ON, Canada, April 27-28, 2025. IEEE, 236–. doi:10.1109/FORGE66646.2025.00033

  28. [28]

    Olaf Zimmermann, Mirko Stocker, and Stefan Kapferer. 2024. Bringing ethical values into Agile software engineering. In The Leading Role of Smart Ethics in the Digital World. Universidad de La Rioja, 87–98