pith. machine review for the scientific record.

arxiv: 2604.21103 · v1 · submitted 2026-04-22 · 💻 cs.AI · econ.GN · q-fin.EC


AI Governance under Political Turnover: The Alignment Surface of Compliance Design


Pith reviewed 2026-05-09 23:44 UTC · model grok-4.3

classification 💻 cs.AI · econ.GN · q-fin.EC
keywords AI governance · political turnover · compliance layer · administrative automation · strategic exploitation · formal model · government AI · approval boundary

The pith

Compliance layers for government AI create stable approval boundaries that successor administrations can learn to navigate while preserving legal appearance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a formal model in which governments embed probabilistic AI into administrative decisions via a compliance layer that enforces reviewability, repeatability, and legal defensibility. This layer makes illegal departures easier to detect but simultaneously creates a fixed boundary that later governments can learn to operate along without altering the underlying rules. The model identifies conditions under which choices about automation scale, codification depth, and iterative safeguards increase vulnerability to internal strategic use, and it shows why oversight improvements can raise that vulnerability over time while making expansions in AI use hard to reverse. A sympathetic reader cares because the argument links concrete technical design choices directly to the durability of administrative constraints across political turnover.

Core claim

Embedding AI in public administration requires a compliance layer that renders decisions reviewable, repeatable, and legally defensible; this layer establishes a stable approval boundary that political successors can strategically navigate while maintaining the appearance of lawful administration. Institutions select the scale of automation, the degree of codification, and safeguards on iterative use. The resulting model demonstrates when these systems become vulnerable to exploitation from within government, why reforms that first strengthen oversight can later heighten vulnerability, and why AI expansions become difficult to unwind.

What carries the argument

The formal model of institutional choices over automation scale, codification degree, and iterative safeguards, which generates a stable approval boundary that successors can navigate without changing the rules.
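A minimal sketch of that machinery, for orientation only: the linear boundary form, the parameter names A, C, S, and all numbers below are our assumptions, not the paper's specification.

    # Minimal sketch of the model's moving parts (assumed forms, not the paper's).
    # A = automation scale, C = codification degree, S = safeguards on iterative use.
    import numpy as np

    rng = np.random.default_rng(0)

    def approval_boundary(A, C, S):
        """Stand-in for the endogenous boundary B(A, C, S): the maximum
        score the compliance layer will approve. The linear form is assumed."""
        return 1.0 - 0.5 * C + 0.2 * A - 0.3 * S

    # Because the boundary is fixed, a successor can recover it from the
    # public record of approved decisions alone.
    A, C, S = 0.8, 0.6, 0.4
    B = approval_boundary(A, C, S)
    approved = rng.uniform(0.0, B, size=500)   # observed compliant decisions
    learned = approved.max()                   # crude estimate of the boundary
    print(f"true boundary {B:.3f}, learned estimate {learned:.3f}")

The point of the sketch: nothing in the written rules has to change for the boundary to become navigable; observation of compliant behavior is enough.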

If this is right

  • Reforms that initially improve oversight can later increase vulnerability to strategic use by future administrations.
  • Expansions in AI use become difficult to unwind once the compliance boundary is established and learned.
  • In the model, vulnerability to internal strategic use rises when institutions select a larger automation scale or deeper codification.
  • The compliance layer improves short-term detection of departures but creates longer-term navigability for successors.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Designers may need to add turnover-triggered resets or randomized review mechanisms to disrupt the learned boundary.
  • The same logic could apply to non-AI automated administrative systems where review layers create stable procedural surfaces.
  • Empirical tests could track decision patterns immediately before and after elections in agencies that have adopted AI compliance structures (a minimal sketch of such a test follows this list).
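The last suggestion can be made concrete. A hedged sketch of such a test on simulated data: the 0.05 near-boundary band, the decision scores, and the post-turnover behavior are illustrative assumptions, not findings.

    # Simulated election event-study: does the share of decisions landing
    # just inside the approval boundary jump after turnover? All data and
    # thresholds here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    B = 0.7  # approval boundary, known in this simulation

    # Pre-turnover decisions sit well inside the boundary; a successor who
    # has learned B pushes decisions toward it while staying compliant.
    pre = rng.uniform(0.0, B, size=2000)
    post = np.minimum(rng.uniform(0.0, B + 0.2, size=2000), B - 1e-6)

    def near_boundary_share(scores, band=0.05):
        """Share of approved decisions within `band` of the boundary."""
        return float(np.mean(scores > B - band))

    print(f"pre-turnover:  {near_boundary_share(pre):.3f}")
    print(f"post-turnover: {near_boundary_share(post):.3f}")

A discontinuous jump in that share at turnover, with no change in the written rules, is the observable signature the model predicts.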

Load-bearing premise

Political successors will strategically navigate the stable approval boundary created by the compliance layer while preserving the appearance of lawful administration.

What would settle it

Data showing that successor governments do not increase their rate of decisions along the compliance boundary compared with predecessors, or that oversight reforms reduce long-term exploitation rates even after turnover.

Figures

Figures reproduced from arXiv: 2604.21103 by Andrew J. Peterson.

Figure 1. Modernization pressure and threshold crossing along the binding adoption path.
Figure 2. Codification and the Shift from Constraint to Exploitability.
Figure 3. Search-based threshold for alignment-surface exploitability.
original abstract

Governments are increasingly interested in using AI to make administrative decisions cheaper, more scalable, and more consistent. But for probabilistic AI to be incorporated into public administration it must be embedded in a compliance layer that makes decisions reviewable, repeatable, and legally defensible. That layer can improve oversight by making departures from law easier to detect. But it can also create a stable approval boundary that political successors learn to navigate while preserving the appearance of lawful administration. We develop a formal model in which institutions choose the scale of automation, the degree of codification, and safeguards on iterative use. The model shows when these systems become vulnerable to strategic use from within government, why reforms that initially improve oversight can later increase that vulnerability, and why expansions in AI use may be difficult to unwind. Making AI usable can thus make procedures easier for future governments to learn and exploit.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper develops a formal model of AI integration into public administration under political turnover. Governments embed probabilistic AI via a compliance layer to ensure reviewability, repeatability, and legal defensibility. Institutions choose the scale of automation, degree of codification, and safeguards on iterative use. The model identifies conditions under which these choices create a stable approval boundary that successors can navigate while preserving lawful appearance, explains why initial oversight reforms can later heighten vulnerability, and shows why expansions in AI use may prove difficult to unwind.

Significance. If the derivations hold, the work identifies a counterintuitive risk in AI governance: compliance mechanisms designed for oversight can generate learnable, exploitable boundaries across turnover. This adds a dynamic, political dimension to discussions of AI in administration and underscores challenges in designing reversible systems. The formal parameterization of the three decision variables is a constructive step, though the absence of explicit equations in the summary limits evaluation of whether the vulnerability results are emergent or definitional.

major comments (2)
  1. [Formal model] The central claim that the compliance layer produces a stable approval boundary successors can strategically navigate (while preserving lawful appearance) treats the navigation and learning process as given rather than deriving it from the three parameters (scale of automation, degree of codification, safeguards on iterative use). Without an explicit derivation or equilibrium condition showing how successors identify and traverse the boundary without detection, the vulnerability prediction risks circularity.
  2. [Results on reforms] The result that reforms initially improving oversight can subsequently increase vulnerability is stated as a model outcome. The manuscript must supply the specific equations or comparative statics (e.g., how changes in codification or safeguards shift the approval boundary across periods) to demonstrate this is an independent prediction rather than built into the definition of the boundary or the successor strategy.
minor comments (2)
  1. [Abstract] The abstract states model conclusions without equations, parameter definitions, or validation steps, which hinders immediate assessment of the formal claims.
  2. [Introduction / Model setup] Terminology: The invented constructs 'alignment surface of compliance design' and 'stable approval boundary' require precise mathematical definitions and explicit mapping to the three free parameters at first introduction (one candidate formalization is sketched below).
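For orientation, one candidate formalization of the two constructs (ours, not the paper's notation): let D be the space of administrative decisions and r the compliance layer's review rule; then

    % Hypothetical formalization; the paper's own definitions may differ.
    \[
      B(A, C, S) = \{\, d \in D : r(d \mid A, C, S) = \mathrm{approve} \,\}
    \]
    % with the "alignment surface" read as the edge of the approval set,
    % \(\partial B(A, C, S)\), along which a successor can operate while
    % every individual decision remains compliant.

Any precise version the authors supply should pin down how r varies with each of the three parameters.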

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive report. The comments correctly identify areas where the formal derivations and comparative statics require clearer exposition to avoid any appearance of circularity. We respond to each major comment below and will revise the manuscript accordingly.

point-by-point responses
  1. Referee: [Formal model] The central claim that the compliance layer produces a stable approval boundary successors can strategically navigate (while preserving lawful appearance) treats the navigation and learning process as given rather than deriving it from the three parameters (scale of automation, degree of codification, safeguards on iterative use). Without an explicit derivation or equilibrium condition showing how successors identify and traverse the boundary without detection, the vulnerability prediction risks circularity.

    Authors: We agree that the equilibrium derivation should be stated more explicitly. Section 3 defines the approval boundary B as an endogenous function B(A, C, S) of the three institutional parameters. The successor's navigation is derived as the solution to a constrained optimization problem in which the successor maximizes expected utility subject to remaining inside the boundary with probability 1-ε, where ε is the detection threshold induced by the safeguards parameter S. The learning process is modeled via Bayesian updating over repeated observations of compliant decisions. We will insert the full equilibrium condition and the associated proposition showing that, for interior values of C and S, the boundary is learnable without triggering detection. This establishes the vulnerability result as emergent from the parameterization rather than assumed. revision: yes

  2. Referee: [Results on reforms] The result that reforms initially improving oversight can subsequently increase vulnerability is stated as a model outcome. The manuscript must supply the specific equations or comparative statics (e.g., how changes in codification or safeguards shift the approval boundary across periods) to demonstrate this is an independent prediction rather than built into the definition of the boundary or the successor strategy.

    Authors: The manuscript already contains the relevant comparative statics in the proof of Proposition 2, which shows that an increase in initial codification C tightens the period-1 boundary while expanding the period-2 learnable set because lower decision variance facilitates successor inference. We will add the explicit cross-period equation ΔB_{t+1} = (∂B/∂C)ΔC + (∂B/∂S)ΔS + γ·Var(A), where γ captures the learning rate, and the associated corollary that ∂Vulnerability/∂C_0 > 0 for C_0 above a threshold. These derivations are independent of the successor strategy, which is held fixed across the comparative statics exercise. The revised version will present the full system of equations in the main text rather than the appendix. revision: yes
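The Bayesian learning step invoked in the first response can be illustrated numerically. A hedged sketch: the uniform likelihood, flat grid prior, and all numbers below are our modeling assumptions, not the manuscript's Section 3.

    # Successor learning the boundary B from approved decisions only, via
    # Bayesian updating (assumed uniform likelihood and flat grid prior).
    import numpy as np

    rng = np.random.default_rng(2)
    true_B = 0.7
    approved = rng.uniform(0.0, true_B, size=200)  # compliant history

    grid = np.linspace(0.01, 1.0, 1000)  # candidate boundary values b
    m, n = approved.max(), len(approved)
    # Each approved score has density 1/b under boundary b (valid only
    # when b >= every observation), so the log-likelihood is -n*log(b).
    loglik = np.where(grid >= m, -n * np.log(grid), -np.inf)
    posterior = np.exp(loglik - loglik.max())
    posterior /= posterior.sum()

    mean_B = float(np.sum(grid * posterior))
    print(f"true B = {true_B}, posterior mean ≈ {mean_B:.3f}")
    # The posterior concentrates just above the largest observed approval:
    # a fixed boundary is recoverable from compliant behavior alone, which
    # is what makes it navigable without triggering detection.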
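The cross-period comparative static in the second response, set in display form (the equation and corollary are the rebuttal's; only the typesetting is ours):

    \[
      \Delta B_{t+1}
        = \frac{\partial B}{\partial C}\,\Delta C
        + \frac{\partial B}{\partial S}\,\Delta S
        + \gamma\,\operatorname{Var}(A),
      \qquad
      \frac{\partial\,\mathrm{Vulnerability}}{\partial C_{0}} > 0
      \quad \text{for } C_{0} \text{ above a threshold},
    \]
    % where \(\gamma\) is the successor's learning rate.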

Circularity Check

0 steps flagged

No significant circularity in the formal model derivation

full rationale

The abstract describes a formal model with three explicit choice parameters (automation scale, codification degree, safeguards) from which the paper derives conditions under which vulnerability to strategic successor use arises, why certain reforms increase vulnerability, and why expansions are hard to unwind. These are presented as model outputs rather than inputs by definition. No equations, self-citations, or ansatzes are visible in the provided text that would reduce the central claims (stable approval boundary, learnable navigation, reform effects) to tautologies or fitted inputs renamed as predictions. The navigation of the boundary is characterized as a consequence of the compliance layer design, not an unmodeled assumption smuggled in to force the result. The derivation therefore remains self-contained against the stated parameters and does not exhibit any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

3 free parameters · 1 axiom · 2 invented entities

The model treats institutional choices over automation scale, codification degree, and iterative safeguards as decision variables and assumes strategic behavior by political actors; it introduces the conceptual entities of alignment surface and stable approval boundary without external empirical grounding.

free parameters (3)
  • scale of automation
    Decision variable chosen by institutions in the model
  • degree of codification
    Decision variable chosen by institutions in the model
  • safeguards on iterative use
    Decision variable chosen by institutions in the model
axioms (1)
  • domain assumption: Institutions and political successors act strategically to achieve their objectives within the constraints of the compliance system
    Invoked to generate the vulnerability and navigation results described in the abstract
invented entities (2)
  • alignment surface of compliance design no independent evidence
    purpose: Conceptual boundary that makes AI decisions reviewable yet potentially exploitable
    New term introduced to organize the model's findings on oversight and vulnerability
  • stable approval boundary no independent evidence
    purpose: Fixed set of rules that successors can learn to navigate while appearing lawful
    Core conceptual invention enabling the claims about strategic use and reform effects

pith-pipeline@v0.9.0 · 5440 in / 1462 out tokens · 101033 ms · 2026-05-09T23:44:09.376136+00:00 · methodology


Reference graph

Works this paper leans on

63 extracted references · 5 canonical work pages · 2 internal anchors

  1. [1]

    Keeping a watchful eye: The politics of congressional oversight

    Joel D Aberbach. Keeping a watchful eye: The politics of congressional oversight. Rowman & Littlefield, 2001

  2. [2]

    Concrete Problems in AI Safety

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete Problems in AI Safety. Technical Report arXiv:1606.06565, arXiv, 2016

  3. [3]

    Political Control Versus Expertise: Congressional Choices about Administrative Procedures

Kathleen Bawn. Political Control Versus Expertise: Congressional Choices about Administrative Procedures. American Political Science Review, 89(1): 62–73, 1995. doi:10.2307/2083075

  4. [4]

    On democratic backsliding

Nancy Bermeo. On democratic backsliding. Journal of Democracy, 27(1): 5–19, 2016

  5. [5]

What's measured is what matters: Targets and gaming in the English public health care system

    Gwyn Bevan and Christopher Hood. What's measured is what matters: Targets and gaming in the English public health care system. Public Administration, 84(3): 517–538, 2006

  6. [6]

    From street-level to system-level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control

Mark Bovens and Stavros Zouridis. From street-level to system-level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control. Public Administration Review, 62(2): 174–184, 2002

  7. [7]

    Working, Shirking, and Sabotage: Bureaucratic Response to a Democratic Public

    John Brehm and Scott Gates. Working, Shirking, and Sabotage: Bureaucratic Response to a Democratic Public. University of Michigan Press, Ann Arbor, 1997. ISBN 047210764X

  8. [8]

Generative AI is already widespread in the public sector

    Jonathan Bright, Florence E. Enock, Saba Esnaashari, John Francis, Youmna Hashem, and Deborah Morgan. Generative AI is already widespread in the public sector, 2024. arXiv:2401.01291

  9. [9]

Generative AI framework for HMG, January 2024

    Cabinet Office, Government Digital Service, and Central Digital and Data Office. Generative AI framework for HMG, January 2024. URL https://www.gov.uk/government/publications/generative-ai-framework-for-hmg. Withdrawn 10 Feb 2025; superseded by the AI Playbook for the UK Government

  10. [10]

    Technological due process

Danielle Keats Citron. Technological due process. Washington University Law Review, 85(6): 1249–1313, 2008

  11. [11]

The authoritarian resurgence: Autocratic legalism in Venezuela

    Javier Corrales. The authoritarian resurgence: Autocratic legalism in Venezuela. Journal of Democracy, 26(2): 37–51, 2015

  12. [12]

Colin S. Diver. The optimal precision of administrative rules. The Yale Law Journal, 93(1): 65–109, 1983

  13. [13]

    Delegating powers

David Epstein and Sharyn O'Halloran. Delegating powers. Cambridge University Press, 1999

  14. [14]

    Rankings and reactivity: How public measures recreate social worlds

Wendy Nelson Espeland and Michael Sauder. Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113(1): 1–40, 2007

  15. [15]

    Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor

    Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, New York, 2018

  16. [16]

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

    European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng, June 2024. Accessed 2026-02-25

  17. [17]

Sean Gailmard and John W. Patty. Slackers and zealots: Civil service, policy discretion, and bureaucratic expertise. American Journal of Political Science, 51(4): 873–889, 2007

  18. [18]

    Over Ruled: The Human Toll of Too Much Law

    Neil Gorsuch and Janie Nitze. Over Ruled: The Human Toll of Too Much Law. Harper, 2024. URL https://www.harpercollins.com/products/over-ruled-neil-gorsuchjanie-nitze?variant=42471336050722

  19. [19]

Algorithmic transparency recording standard (ATRS): Guidance for public sector bodies

    Government Digital Service and Central Digital and Data Office. Algorithmic transparency recording standard (ATRS): Guidance for public sector bodies. https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies, Ma...

  20. [20]

The algorithm register of the Dutch government

    Government of the Netherlands. The algorithm register of the Dutch government. https://algoritmes.overheid.nl/en, 2026. Accessed 2026-02-25

  21. [21]

    Inverse reward design

    Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart J Russell, and Anca Dragan. Inverse reward design. In Advances in Neural Information Processing Systems, volume 30, 2017

  22. [22]

Gaming in targetworld: The targets approach to managing British public services

    Christopher Hood. Gaming in targetworld: The targets approach to managing British public services. Public Administration Review, 66(4): 515–521, 2006

  23. [23]

    Stronger universal and transferable attacks by suppressing refusals

    David Huang, Avidan Shah, Alexandre Araujo, David Wagner, and Chawin Sitawarin. Stronger universal and transferable attacks by suppressing refusals. In Proceedings of the 2025 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2025), 2025. URL https://aclanthology.org/2025.naacl-long.302/

  24. [24]

Deliberate Discretion?: The Institutional Foundations of Bureaucratic Autonomy

    John D. Huber and Charles R. Shipan. Deliberate Discretion?: The Institutional Foundations of Bureaucratic Autonomy. Cambridge University Press, 2002

  25. [25]

    How to lose a constitutional democracy

Aziz Huq and Tom Ginsburg. How to lose a constitutional democracy. UCLA L. Rev., 65: 78, 2018

  26. [26]

Model AI governance framework for agentic AI, January 2026

    Infocomm Media Development Authority. Model AI governance framework for agentic AI, January 2026. URL https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf. Version 1.0

  27. [27]

    Rules versus standards: An economic analysis

Louis Kaplow. Rules versus standards: An economic analysis. Duke Law Journal, 42(3): 557–629, 1992

  28. [28]

    The logic of delegation

    D Roderick Kiewiet and Mathew D McCubbins. The logic of delegation. University of Chicago Press, 1991

  29. [29]

Accountable algorithms

    Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. Accountable algorithms. University of Pennsylvania Law Review, 165(3): 633–705, 2017

  30. [30]

    Abusive constitutionalism

David Landau. Abusive constitutionalism. UC Davis Law Review, 47: 189, 2013

  31. [31]

    How democracies die

    Steven Levitsky and Daniel Ziblatt. How democracies die. Crown, 2019

  32. [32]

    Street-Level Bureaucracy: Dilemmas of the Individual in Public Services

    Michael Lipsky. Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. Russell Sage Foundation, New York, 1980

  33. [33]

    Initial policy considerations for generative artificial intelligence

    Philippe Lorenz, Karine Perset, and Jamie Berryhill. Initial policy considerations for generative artificial intelligence. Technical Report 1, OECD Publishing, Paris, 2023

  34. [34]

    Learning from oversight: Fire alarms and police patrols reconstructed

Arthur Lupia and Mathew D McCubbins. Learning from oversight: Fire alarms and police patrols reconstructed. The Journal of Law, Economics, & Organization, 10(1): 96–125, 1994

  35. [35]

    Explaining Institutional Change: Ambiguity, Agency, and Power

James Mahoney and Kathleen Thelen, eds. Explaining Institutional Change: Ambiguity, Agency, and Power. Cambridge University Press, Cambridge, 2010

  36. [36]

    Cops, Teachers, Counselors: Stories from the Front Lines of Public Service

    Steven Maynard-Moody and Michael Musheno. Cops, Teachers, Counselors: Stories from the Front Lines of Public Service. University of Michigan Press, Ann Arbor, 2003. ISBN 0472068326

  37. [37]

Congressional oversight overlooked: Police patrols versus fire alarms

    Mathew D. McCubbins and Thomas Schwartz. Congressional oversight overlooked: Police patrols versus fire alarms. American Journal of Political Science, 28(1): 165–179, 1984

  38. [38]

Administrative procedures as instruments of political control

    Mathew D. McCubbins, Roger G. Noll, and Barry R. Weingast. Administrative procedures as instruments of political control. Journal of Law, Economics, and Organization, 3(2): 243–277, 1987

  39. [39]

Structure and process, politics and policy: Administrative arrangements and the political control of agencies

    Mathew D. McCubbins, Roger G. Noll, and Barry R. Weingast. Structure and process, politics and policy: Administrative arrangements and the political control of agencies. Virginia Law Review, 75: 431–482, 1989

  40. [40]

Governing with artificial intelligence: Are governments ready?

    OECD. Governing with artificial intelligence: Are governments ready? Technical Report 20, OECD Publishing, Paris, 2024

  41. [41]

    The agentic AI landscape and its conceptual foundations

OECD. The agentic AI landscape and its conceptual foundations. Technical Report 56, OECD Publishing, Paris, 2026

  42. [42]

    The effects of reward misspecification: Mapping and mitigating misaligned models

    Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=JYtwGwIL7ye

  43. [43]

    The Black Box Society: The Secret Algorithms That Control Money and Information

    Frank Pasquale. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, Cambridge, MA, 2015

  44. [44]

    Normal Accidents: Living with High-Risk Technologies

    Charles Perrow. Normal Accidents: Living with High-Risk Technologies. Basic Books, 1984

  45. [45]

    Theodore M. Porter. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press, 1995

  46. [46]

    The Audit Society: Rituals of Verification

    Michael Power. The Audit Society: Rituals of Verification. Oxford University Press, Oxford, 1997

  47. [47]

    Human Compatible: Artificial Intelligence and the Problem of Control

    Stuart Russell. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019

  48. [48]

    Playing by the Rules: A Philosophical Examination of Rule-Based Decision-Making in Law and in Life

    Frederick Schauer. Playing by the Rules: A Philosophical Examination of Rule-Based Decision-Making in Law and in Life. Oxford University Press, 1991

  49. [49]

    Autocratic legalism

Kim Lane Scheppele. Autocratic legalism. University of Chicago Law Review, 85: 545–583, 2018

  50. [50]

    Oversight structures for agentic AI in public-sector organizations

Chris Schmitz, Jonathan Rystrøm, and Jan Batzner. Oversight structures for agentic AI in public-sector organizations. In Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), pages 298–308. Association for Computational Linguistics, July 2025. URL https://aclanthology.org/2025.realm-1.21.pdf

  51. [51]

    James C. Scott. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press, 1998

  52. [52]

Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward gaming. In Advances in Neural Information Processing Systems, volume 35, pages 9460–9471, 2022

  53. [53]

'Improving ratings': Audit in the British university system

    Marilyn Strathern. 'Improving ratings': Audit in the British university system. European Review, 5(3): 305–321, 1997

  54. [54]

Problems with rules

    Cass R. Sunstein. Problems with rules. California Law Review, 83(4): 953–1026, 1995

  55. [55]

    Directive on automated decision-making

Treasury Board of Canada Secretariat. Directive on automated decision-making. https://publications.gc.ca/collections/collection_2021/sct-tbs/BT48-31-2021-eng.pdf, March 2021. Accessed 2026-02-25

  56. [56]

    Constitutional hardball

Mark Tushnet. Constitutional hardball. The John Marshall Law Review, 37(2): 523–553, 2004

  57. [57]

    Tom R. Tyler. Why People Obey the Law. Yale University Press, New Haven, CT, 1990

  58. [58]

Ozan O. Varol. Stealth authoritarianism. Iowa Law Review, 100(4): 1673–1742, 2015

  59. [59]

    Generative AI in public administration in light of the regulatory awakening in the US and EU

Sophie Weerts. Generative AI in public administration in light of the regulatory awakening in the US and EU. Cambridge Forum on AI: Law and Governance, 1: e3, 2025. doi:10.1017/cfl.2024.10

  60. [60]

Bureaucratic discretion or congressional control? Regulatory policymaking by the Federal Trade Commission

    Barry R Weingast and Mark J Moran. Bureaucratic discretion or congressional control? Regulatory policymaking by the Federal Trade Commission. Journal of Political Economy, 91(5): 765–800, 1983

  61. [61]

    James Q. Wilson. Bureaucracy: What Government Agencies Do and Why They Do It. Basic Books, New York, 1989. ISBN 0465007848

  62. [62]

    Algorithmic regulation: A critical interrogation

Karen Yeung. Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4): 505–523, 2018

  63. [63]

    Universal and Transferable Adversarial Attacks on Aligned Language Models

    Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023. URL https://arxiv.org/abs/2307.15043