pith. machine review for the scientific record.

arxiv: 2512.12109 · v4 · submitted 2025-12-13 · 💻 cs.CY · cs.AI · cs.LO

Recognition: 2 Lean theorem links

A Neuro-Symbolic Framework for Accountability in Public-Sector AI


Pith reviewed 2026-05-16 23:22 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.LO
keywords neuro-symbolic framework · AI accountability · public sector · explainability · CalFresh · eligibility rules · legal compliance · rule extraction

The pith

A neuro-symbolic framework connects AI-generated explanations for public benefits to statutory law, enabling detection of legal inconsistencies.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a framework to ensure that explanations from automated eligibility systems for public benefits align with the legal rules that authorize the decisions. It builds a structured ontology from California's Manual of Policies and Procedures for CalFresh, extracts rules into a formal representation, and uses a solver to reason about alignment with law. Case studies show it can spot explanations that violate eligibility rules and make the decision basis traceable and contestable. If true, this would allow individuals to challenge automated denials on legal grounds rather than opaque AI outputs.

Core claim

The framework links system-generated decision justifications to the statutory constraints of CalFresh by combining a structured ontology derived from the state's Manual of Policies and Procedures, a rule extraction pipeline that expresses statutory logic formally, and a solver-based reasoning layer to check alignment with governing law.

What carries the argument

Structured ontology of eligibility requirements from the Manual of Policies and Procedures, paired with a rule extraction pipeline and solver-based reasoning layer for verifying legal consistency.
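To make the "verifiable formal representation" concrete, the sketch below renders a single gross-income constraint as a traceable rule record. The MPP section number echoes the paper's Figure 4.4 example; the threshold value, class, and field names are invented here for illustration and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StatutoryRule:
    """One extracted eligibility rule, kept traceable to its MPP section."""
    section: str       # statutory citation, e.g. "MPP §63-502.32"
    concept: str       # ontology concept the rule constrains
    threshold: float   # numeric limit (value below is illustrative)

    def satisfied(self, value: float) -> bool:
        # Gross income must not exceed the threshold to remain eligible.
        return value <= self.threshold

# Threshold is illustrative, not the real CalFresh limit.
gross_income_rule = StatutoryRule("MPP §63-502.32", "GrossIncome", 1987.00)
print(gross_income_rule.satisfied(2015.13))  # False: household over the limit
```

Keeping the citation inside the rule object is what makes a solver verdict traceable back to governing law rather than to an opaque model output.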

Load-bearing premise

The ontology and rule extraction pipeline can accurately capture statutory logic from the policy manual without significant loss or misinterpretation.
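A minimal example of the translation this premise demands: a statutory clause is read and reduced to a machine-checkable constraint. The clause wording, dollar amount, and regex-style extraction below are deliberately naive illustrations, not the thesis's actual pipeline.

```python
import re

# A paraphrased statutory clause; wording and dollar amount are illustrative.
clause = "A household's gross monthly income shall not exceed $1,987.00."

# Naive extraction: identify the constrained quantity and its numeric limit.
match = re.search(r"gross monthly income shall not exceed \$([\d,]+\.\d{2})", clause)
threshold = float(match.group(1).replace(",", ""))

# Formal rendering of the clause, traceable back to the source text.
rule = ("GrossIncome", "<=", threshold)
print(rule)  # ('GrossIncome', '<=', 1987.0)
```

Even this toy step shows where meaning can leak: exceptions, cross-references, and defined terms in the clause have no counterpart in the extracted triple, which is exactly the loss the premise rules out.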

What would settle it

Finding a real CalFresh case where the AI explanation matches the law but the framework flags it as inconsistent, or vice versa.

Figures

Figures reproduced from arXiv: 2512.12109 by Allen Daniel Sunny, Ido Sivan-Sevilla.

Figure 2.1: Example of a SHAP (SHapley Additive Explanations) plot illustrating …
Figure 3.1: System architecture of the statutory explainability framework.
Figure 3.2: A section of the statutory law as visualized in a knowledge graph.
Figure 3.3: TBox Architecture diagram.
Figure 3.4: Initial Ontology Structure.
Figure 3.5: Ontology Expansion After Concept Integration.
Figure 3.6: ABox creation pipeline.
Figure 3.7: SMT reasoning architecture for legal verification.
Figure 4.1: Ontology concept clusters by eligibility domain.
Figure 4.2: Performance comparison across prompting strategies.
Figure 4.3: Solver trace visualization with red nodes identified as violated statutes.
Figure 4.4: Example case walkthrough visualization. Example case: GrossIncome = 2015.13 (above threshold); NOA claim: "Income exceeds limit"; solver result: UNSAT ⇒ explanation legally insufficient. The solver identifies precisely which statutory rule is violated (MPP §63-502.32).
Figure 5.1: Visualization of satisfied and violated explanations in a sample eligibility …
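The Figure 4.4 walkthrough can be reduced to a few lines. The sketch below is a pure-Python toy rather than the thesis's SMT layer; the section number and income figure come from the figure caption, while the threshold and the exact consistency criterion (cited sections must match the rules the facts violate) are assumptions made for illustration.

```python
# Toy stand-in for the solver layer: the thesis uses an SMT solver (Figure 3.7);
# this sketch only mirrors the UNSAT logic of the Figure 4.4 walkthrough.

def explanation_consistent(facts, rules, cited_sections):
    """An explanation aligns with the law only if the sections it cites
    are exactly the statutory rules the case facts violate."""
    violated = {sec for sec, holds in rules.items() if not holds(facts)}
    return cited_sections == violated

# Threshold is invented for illustration, not the real CalFresh limit.
rules = {"MPP §63-502.32": lambda f: f["GrossIncome"] <= 1987.00}
facts = {"GrossIncome": 2015.13}

# A vague NOA claim ("Income exceeds limit") cites no statute -> inconsistent,
# the analogue of the solver returning UNSAT in Figure 4.4.
print(explanation_consistent(facts, rules, set()))               # False
print(explanation_consistent(facts, rules, {"MPP §63-502.32"}))  # True
```

The design point is that a failed check names the violated section, giving the affected individual a statutory hook for contesting the decision.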
read the original abstract

Automated eligibility systems increasingly determine access to essential public benefits, but the explanations they generate often fail to reflect the legal rules that authorize those decisions. This thesis develops a legally grounded explainability framework that links system-generated decision justifications to the statutory constraints of CalFresh, California's Supplemental Nutrition Assistance Program. The framework combines a structured ontology of eligibility requirements derived from the state's Manual of Policies and Procedures (MPP), a rule extraction pipeline that expresses statutory logic in a verifiable formal representation, and a solver-based reasoning layer to evaluate whether the explanation aligns with governing law. Case evaluations demonstrate the framework's ability to detect legally inconsistent explanations, highlight violated eligibility rules, and support procedural accountability by making the basis of automated determinations traceable and contestable.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript develops a neuro-symbolic framework for accountability in public-sector AI, focused on CalFresh (California's SNAP) eligibility decisions. It constructs a structured ontology from the state's Manual of Policies and Procedures (MPP), applies a rule extraction pipeline to encode statutory logic in a verifiable formal representation, and uses a solver-based reasoning layer to assess whether automated explanations align with governing law. Case evaluations are presented as demonstrating the framework's capacity to detect legally inconsistent explanations, identify violated eligibility rules, and enhance procedural accountability through traceable and contestable determinations.

Significance. If the rule extraction and ontology faithfully preserve legal meaning, the framework would provide a concrete, auditable mechanism for linking AI-generated justifications to statutory constraints in high-stakes public benefits systems. This could strengthen due-process protections, facilitate legal challenges, and inform standards for explainability in administrative AI, particularly where current black-box or post-hoc methods fall short of legal requirements.

major comments (2)
  1. [Abstract] Abstract: the claim that 'case evaluations demonstrate the framework's ability to detect legally inconsistent explanations' is unsupported by any quantitative metrics, error rates, implementation details, or validation data. Without these, the central empirical assertion cannot be assessed and remains load-bearing for the paper's contribution.
  2. [Rule extraction pipeline] Rule extraction pipeline (as described in the methods and framework sections): the solver layer can only reliably detect inconsistencies if the translation from MPP statutory text to formal rules preserves legal meaning without significant omission or distortion. The manuscript provides no independent legal validation—such as expert review, inter-annotator agreement scores, or formal equivalence checks—leaving this translation step unverified and directly undermining the accountability claims.
minor comments (2)
  1. [Framework description] The formal representation language used by the solver layer should be defined with explicit syntax and semantics in a dedicated subsection to support reproducibility and external verification.
  2. [Case evaluations] The manuscript would benefit from a table summarizing the case evaluations, including the number of cases, types of inconsistencies detected, and any baseline comparisons.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for these focused comments, which highlight important gaps in empirical support and validation. We agree that the abstract claim requires tempering and that the rule extraction step needs more transparent discussion of its limitations. We will make revisions to address both points directly while preserving the manuscript's focus on the framework design.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that 'case evaluations demonstrate the framework's ability to detect legally inconsistent explanations' is unsupported by any quantitative metrics, error rates, implementation details, or validation data. Without these, the central empirical assertion cannot be assessed and remains load-bearing for the paper's contribution.

    Authors: We accept this critique. The case evaluations in the manuscript are illustrative demonstrations using a small set of concrete CalFresh scenarios drawn from public examples; they show the solver identifying specific inconsistencies but do not include quantitative metrics, error rates, or large-scale validation. We will revise the abstract to replace the claim with 'illustrative case evaluations demonstrate the framework's capacity to detect...' and will expand the methods and evaluation sections with implementation details (number of extracted rules, solver runtime on the cases, and the exact formalization approach). A new limitations subsection will explicitly note the absence of quantitative benchmarking and the qualitative nature of the current evidence. revision: yes

  2. Referee: [Rule extraction pipeline] Rule extraction pipeline (as described in the methods and framework sections): the solver layer can only reliably detect inconsistencies if the translation from MPP statutory text to formal rules preserves legal meaning without significant omission or distortion. The manuscript provides no independent legal validation—such as expert review, inter-annotator agreement scores, or formal equivalence checks—leaving this translation step unverified and directly undermining the accountability claims.

    Authors: We agree that independent legal validation would strengthen the claims. The extraction was performed manually by the authors through direct mapping from the MPP text into a logic-based representation, with traceability maintained via the ontology. No external legal expert review or inter-annotator agreement was conducted, as this was a single-author thesis project. We will revise the methods section to include explicit examples of text-to-rule translations, document the extraction criteria used, and add a dedicated limitations paragraph acknowledging the lack of formal equivalence checks or expert validation. We will also note that the framework is designed to support such validation in future collaborative work with legal experts. revision: partial

Circularity Check

0 steps flagged

No circularity in derivation chain

full rationale

The paper constructs its framework by deriving a structured ontology directly from the external CalFresh Manual of Policies and Procedures (MPP), applying a rule extraction pipeline to translate statutory logic into formal representations, and using a solver layer for consistency checks. Case evaluations then demonstrate detection of inconsistencies against these externally sourced rules. No self-definitional loops, fitted parameters renamed as predictions, or load-bearing self-citations appear in the derivation; the central claims rest on independent statutory documents and empirical case checks rather than reducing to the paper's own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim depends on the premise that legal rules can be accurately captured and checked formally; no free parameters or invented entities are introduced beyond the framework itself.

axioms (1)
  • domain assumption Eligibility rules in the Manual of Policies and Procedures can be structured into an ontology and expressed in verifiable formal logic without material loss of legal meaning.
    Invoked as the foundation for the rule extraction pipeline and solver-based alignment check.
invented entities (1)
  • Neuro-symbolic accountability framework (no independent evidence)
    purpose: To connect AI-generated explanations to statutory constraints for public benefit decisions
    The framework is the novel construct proposed; no independent falsifiable evidence outside the described case evaluations is provided.

pith-pipeline@v0.9.0 · 5418 in / 1368 out tokens · 82133 ms · 2026-05-16T23:22:24.710350+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

109 extracted references · 109 canonical work pages · 5 internal anchors

  1. [1]

    Controlling welfare bureaucracy: A dynamic approach.Notre Dame Law Review, 50:457–, 1975

    John Denvir. Controlling welfare bureaucracy: A dynamic approach.Notre Dame Law Review, 50:457–, 1975

  2. [2]

    Understanding artificial intelligence ethics and safety

    David Leslie. Understanding artificial intelligence ethics and safety. Technical report, The Alan Turing Institute, June 2019. arXiv:1906.05684 [cs]

  3. [3]

    Technological due process.Washington University Law Review, 85(6):1249–1313, 2008

    Danielle Keats Citron. Technological due process.Washington University Law Review, 85(6):1249–1313, 2008

  4. [4]

    The Automated Welfare State: Challenges for Socioeconomic Rights of the Marginalised

    Terry Carney. The Automated Welfare State: Challenges for Socioeconomic Rights of the Marginalised. In Zofia Bednarz and Monika Zalnieriute, editors,Money, Power, and AI, pages 95–115. Cambridge University Press, 1 edition, November 2023

  5. [5]

    Oversight design for ai-enabled decision making in government services

    Liming Zhu, Qinghua Lu, Sung Une Lee, and Ding Ming. Oversight design for ai-enabled decision making in government services. SSRN, 2025. Posted September 29, 2025; SSRN: 5543018

  6. [6]

    Informal agency rulemaking under the law

    Administrative Law Center Justia. Informal agency rulemaking under the law. Justia Legal Portal, 2025. Accessed on December 18, 2025

  7. [7]

    RULEMAKING AND INSCRUTABLE AUTOMATED DECI- SION TOOLS

    Columbia Law Review. RULEMAKING AND INSCRUTABLE AUTOMATED DECI- SION TOOLS

  8. [8]

    Department of Agriculture, Food and Nutrition Service

    U.S. Department of Agriculture, Food and Nutrition Service. Supplemental nutrition assistance program (snap), 2025. Accessed: 2025-12-01

  9. [9]

    Calfresh, 2025

    California Department of Social Services. Calfresh, 2025. Accessed: 2025-12-01

  10. [10]

    Calfresh regulations, 2025

    California Department of Social Services. Calfresh regulations, 2025. Accessed: 2025-12- 01

  11. [11]

    Notice of action documents, 2025

    California Department of Social Services. Notice of action documents, 2025. Accessed: 2025-12-01

  12. [12]

    Administration by algorithm? public management meets public sector machine learning.Public Administration Review, 79(6):845–856, 2019

    Michael Veale and Irina Brass. Administration by algorithm? public management meets public sector machine learning.Public Administration Review, 79(6):845–856, 2019

  13. [13]

    Digitization or equality: When government automation covers some, but not all citizens.Government Information Quarterly, 38(1):101547, January 2021

    Karl Kristian Larsson. Digitization or equality: When government automation covers some, but not all citizens.Government Information Quarterly, 38(1):101547, January 2021

  14. [14]

    Automated Government Benefits and Welfare Surveillance.Surveillance & Society, 21(3):246–258, September 2023

    Mike Zajko. Automated Government Benefits and Welfare Surveillance.Surveillance & Society, 21(3):246–258, September 2023. 76

  15. [15]

    Lael R. Keiser. Understanding Street-Level Bureaucrats’ Decision Making: Determin- ing Eligibility in the Social Security Disability Program.Public Administration Review, 70(2):247–257, March 2010

  16. [16]

    The ideational robustness of bureaucracy.Policy and Society, 43(2):141–158, July 2024

    Eva Sørensen and Jacob Torfing. The ideational robustness of bureaucracy.Policy and Society, 43(2):141–158, July 2024

  17. [17]

    Virginia Eubanks.Automating Inequality: How High-Tech Tools Profile, Police, and Pun- ish the Poor. St. Martin’s Press, New York, 2018

  18. [18]

    The hidden costs of digital self-service: administrative burden, vulnerability and the role of interpersonal aid in Norwegian and Brazilian welfare services

    Hanne Hoglund Ryden and Luiz De Andrade. The hidden costs of digital self-service: administrative burden, vulnerability and the role of interpersonal aid in Norwegian and Brazilian welfare services. InProceedings of the 16th International Conference on Theory and Practice of Electronic Governance, pages 473–478, Belo Horizonte Brazil, September

  19. [19]

    The automated administrative state: A crisis of legitimacy.Michigan Law Review, 119(5):891–948, 2021

    Ryan Calo and Danielle Keats Citron. The automated administrative state: A crisis of legitimacy.Michigan Law Review, 119(5):891–948, 2021

  20. [20]

    The changing governance of welfare: revisiting Jessop’s framework in the context of healthcare.Social Theory & Health, 20(1):21–36, 2022

    Ian Greener. The changing governance of welfare: revisiting Jessop’s framework in the context of healthcare.Social Theory & Health, 20(1):21–36, 2022

  21. [21]

    Michigan Office of the Auditor General. Performance audit report: Michigan integrated data automated system (midas), unemployment insurance agency; department of talent and economic development; department of technology, management, and budget. Techni- cal Report 641-0593-15, Michigan Office of the Auditor General, Lansing, MI, February 2016

  22. [22]

    Heinrich and Deanna Malatesta

    Carolyn J. Heinrich and Deanna Malatesta. Postmortem on a public sector contract collapse: The state of indiana’s welfare modernization failure. Technical report, Vanderbilt University, March 2022. Working paper

  23. [23]

    Federal compliance audit report for the fiscal year ended june 30, 2018

    California State Auditor. Federal compliance audit report for the fiscal year ended june 30, 2018. Technical report, California State Auditor’s Office, 2019. Accessed: 2025-12-01

  24. [24]

    Robert Brauneis and Ellen P. Goodman. Algorithmic transparency for the smart city. Yale Journal of Law & Technology, 20:103–176, 2018

  25. [25]

    Derek Wu and Bruce D. Meyer. Administer, automate, activate, and adjudicate: How should states implement the one-stop-shop vision for benefit delivery? IZA Discussion Paper 16294, IZA Institute of Labor Economics, 2024. IZA Discussion Paper No. 16294

  26. [26]

    Introduction: Administrative Burden as a Mechanism of Inequality in Policy Implementation.RSF: The Russell Sage Foundation Journal of the Social Sciences, 9(4):1–30, September 2023

    Pamela Herd, Hilary Hoynes, Jamila Michener, and Donald Moynihan. Introduction: Administrative Burden as a Mechanism of Inequality in Policy Implementation.RSF: The Russell Sage Foundation Journal of the Social Sciences, 9(4):1–30, September 2023

  27. [27]

    When The Algorithm Says No: AI Denies Vital Benefits, May 2025

    RoX818. When The Algorithm Says No: AI Denies Vital Benefits, May 2025. Section: Bias & Mitigation. 77

  28. [28]

    Public procurement of artificial intelligence systems: new risks and future proofing.Ai & Society, pages 1–15, October 2022

    Merve Hickok. Public procurement of artificial intelligence systems: new risks and future proofing.Ai & Society, pages 1–15, October 2022

  29. [29]

    Mark Bovens and Stavros Zouridis. From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Dis- cretion and Constitutional Control.Public Administration Review, 62(2):174–184, 2002. Publisher: [American Society for Public Administration, Wiley]

  30. [30]

    Russell Sage Foundation, New York, 1980

    Michael Lipsky.Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. Russell Sage Foundation, New York, 1980

  31. [31]

    22 gdpr – automated individual decision-making, including profiling.https:// gdpr-info.eu/art-22-gdpr/, 2016

    Art. 22 gdpr – automated individual decision-making, including profiling.https:// gdpr-info.eu/art-22-gdpr/, 2016. Accessed: 2025-11-20

  32. [32]

    Articles 13(2)(f), 14(2)(g), 15(1)(h)

    Regulation (eu) 2016/679 of the european parliament and of the council, 2016. Articles 13(2)(f), 14(2)(g), 15(1)(h)

  33. [34]

    Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation

    Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99, May 2017

  34. [35]

    Promulgated 8 October 2016

    Loi n°2016-1321 du 7 octobre 2016 pour une r´ epublique num´ erique [law for a digital republic], 2016. Promulgated 8 October 2016. Accessed from WIPO Lex

  35. [36]

    Directive on automated decision-making

    Treasury Board of Canada Secretariat. Directive on automated decision-making. Technical report, Treasury Board of Canada Secretariat, Ottawa, ON, 2020. Effective for systems developed or procured after April 1, 2020

  36. [37]

    Lei geral de prote¸ c˜ ao de dados pessoais (lgpd) — gen- eral data protection law.https://iapp.org/resources/article/ brazilian-data-protection-law-lgpd-english-translation/, 2018

    Brazil. Lei geral de prote¸ c˜ ao de dados pessoais (lgpd) — gen- eral data protection law.https://iapp.org/resources/article/ brazilian-data-protection-law-lgpd-english-translation/, 2018. English translation via IAPP; accessed 2025-11-20

  37. [38]

    Understanding Viewport- and World- based Pointing with Everyday Smart Devices in Immersive Augmented Reality

    Yuan Chen, Keiko Katsuragawa, and Edward Lank. Understanding Viewport- and World- based Pointing with Everyday Smart Devices in Immersive Augmented Reality. InPro- ceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, pages 1–13, New York, NY, USA, April 2020. Association for Computing Machinery

  38. [39]

    kelly, 397 u.s

    Goldberg v. kelly, 397 u.s. 254, 1970. Procedural due process requires an evidentiary hearing before termination of welfare benefits

  39. [40]

    Supreme Court

    U.S. Supreme Court. Califano v. yamasaki, 442 u.s. 682.https://supreme.justia.com/ cases/federal/us/442/682/, 1979. Accessed 2025-11-20

  40. [41]

    Power, process, and automated decision-making.Fordham Law Re- view, 88(2):613–632, 2019

    Ari Ezra Waldman. Power, process, and automated decision-making.Fordham Law Re- view, 88(2):613–632, 2019. 78

  41. [42]

    Regulation (eu) 2024/1689 on artificial intelligence (ai act).https://eur-lex.europa.eu/eli/reg/2024/1689/oj,

    European Parliament and Council of the European Union. Regulation (eu) 2024/1689 on artificial intelligence (ai act).https://eur-lex.europa.eu/eli/reg/2024/1689/oj,

  42. [43]

    Entered into force 1 August 2024; accessed 2025-11-20

  43. [44]

    Oecd AI principles

    Organisation for Economic Co-operation and Development (OECD). Oecd AI principles. https://www.oecd.org/en/topics/ai-principles.html, 2019. Accessed: 2025-11-20

  44. [45]

    Recommendation on the ethics of artificial intelligence.https://www.unesco

    UNESCO. Recommendation on the ethics of artificial intelligence.https://www.unesco. org/en/artificial-intelligence/recommendation-ethics, 2021. Adopted November 2021; accessed 2025-12-01

  45. [46]

    Ron [D-OR Sen. Wyden. S.2892 - 118th Congress (2023-2024): Algorithmic Accountability Act of 2023, September 2023. Archive Location: 2023-09-21

  46. [47]

    Blueprint for an ai bill of rights: Making automated systems work for the american people.https://www.whitehouse.gov/ostp/ ai-bill-of-rights/, 2022

    Office of Science and Technology Policy (OSTP). Blueprint for an ai bill of rights: Making automated systems work for the american people.https://www.whitehouse.gov/ostp/ ai-bill-of-rights/, 2022. Accessed: 2025-11-20

  47. [48]

    Is Administrative Law at War with Itself?SSRN Electronic Journal, 2020

    Jerry Louis Mashaw. Is Administrative Law at War with Itself?SSRN Electronic Journal, 2020

  48. [49]

    Fuller.The Morality of Law

    Lon L. Fuller.The Morality of Law. Yale University Press, New Haven, CT, revised edition edition, 1969

  49. [50]

    An Administrative Jurisprudence: The Rule of Law in the Ad- ministrative State

    Columbia Law Review. An Administrative Jurisprudence: The Rule of Law in the Ad- ministrative State

  50. [51]

    Explain- ability for experts: A design framework for making algorithms supporting expert decisions more explainable.Journal of Responsible Technology, 7-8:100017, October 2021

    Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, and Rhianne Jones. Explain- ability for experts: A design framework for making algorithms supporting expert decisions more explainable.Journal of Responsible Technology, 7-8:100017, October 2021

  51. [52]

    John J. Nay. Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, May 2023. arXiv:2209.13020 [cs]

  52. [53]

    Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives, October 2025

    Andrada Iulia Prajescu and Roberto Confalonieri. Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives, October 2025. arXiv:2510.11079 [cs]

  53. [54]

    Legally-informed explainable ai.arXiv preprint arXiv:2504.10708, 2025

    Gennie Mansi, Naveena Karusala, and Mark Riedl. Legally-informed explainable ai.arXiv preprint arXiv:2504.10708, 2025

  54. [55]

    ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier

    Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. InProceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, San Francisco California USA, August 2016. ACM

  55. [56]

    shapiq: Shapley Interactions for Machine Learning, 2024

    Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, and Eyke H¨ ullermeier. shapiq: Shapley Interactions for Machine Learning, 2024. Version Number: 1. 79

  56. [57]

    Dickerson, and Keegan Hines

    Sahil Verma, John P. Dickerson, and Keegan Hines. Counterfactual explanations for machine learning: A review.CoRR, abs/2010.10596, 2020

  57. [58]

    Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real- World Phenomena.Minds and Machines, 34(3):32, July 2024

    Timo Freiesleben, Gunnar K¨ onig, Christoph Molnar, and´Alvaro Tejero-Cantero. Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real- World Phenomena.Minds and Machines, 34(3):32, July 2024

  58. [59]

    Stop explaining black box machine learning models for high stakes de- cisions and use interpretable models instead.Nature Machine Intelligence, 1(5):206–215, May 2019

    Cynthia Rudin. Stop explaining black box machine learning models for high stakes de- cisions and use interpretable models instead.Nature Machine Intelligence, 1(5):206–215, May 2019

  59. [60]

    Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI

    Shane T. Mueller, Robert R. Hoffman, William Clancey, Abigail Emrey, and Gary Klein. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, February 2019. arXiv:1902.01876 [cs]

  60. [61]

    Vera Liao and Kush R

    Q. Vera Liao and Kush R. Varshney. Human-centered explainable ai (xai): From algo- rithms to user experiences.arXiv preprint arXiv:2110.10790, 2021

  61. [62]

    Vera Liao, Michael Muller, Mark O

    Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. Expand- ing Explainability: Towards Social Transparency in AI systems. InProceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–19, May 2021. arXiv:2101.04719 [cs]

  62. [63]

    McGuinness, Daniele Nardi, and Peter F

    Franz Baader, Deborah L. McGuinness, Daniele Nardi, and Peter F. Patel-Schneider. The description logic handbook: Theory, implementation, and applications. InThe Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2 edition, 2010

  63. [64]

    Thomas R. Gruber. A translation approach to portable ontology specifications.Knowledge Acquisition, 5(2):199–220, June 1993

  64. [65]

    Lassila and R

    O. Lassila and R. Swick. Resource description framework (rdf) model and syntax speci- fication. Technical Report REC-rdf-syntax-19990222, W3C, Feb 1999. W3C Recommen- dation

  65. [66]

    Owl 2 web ontology language document overview (second edition).https://www.w3.org/TR/2012/REC-owl2-overview-20121211/, 2012

    World Wide Web Consortium (W3C). Owl 2 web ontology language document overview (second edition).https://www.w3.org/TR/2012/REC-owl2-overview-20121211/, 2012. W3C Recommendation; accessed 2025-11-20

  66. [67] The Gene Ontology Consortium. Gene Ontology: Overview and documentation. http://geneontology.org/docs/ontology-documentation/, 2025. Accessed 2025-11-20.

  67. [68] Ontology Portal. Suggested Upper Merged Ontology (SUMO). https://www.ontologyportal.org/. Accessed 2025-11-20.

  68. [69] World Wide Web Consortium (W3C). PROV-O: The PROV Ontology. https://www.w3.org/TR/prov-o/, 2013. W3C Recommendation; accessed 2025-11-20.

  69. [70] L. Thorne McCarty. Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning (Original Version). Harvard Law Review, 5:305–373, January 1976.

  70. [71] James Popple. SHYSTER: A Pragmatic Legal Expert System. SSRN Electronic Journal, 1993. ISSN: 1556-5068.

  71. [72] Rinke Hoekstra, Joost Breuker, Marcello Di Bello, and Alexander Boer. The LKIF Core ontology of basic legal concepts. In Proceedings of the 10th International Conference on Artificial Intelligence and Law (ICAIL 2007), pages 43–44. ACM, 2007.

  72. [73] Xuran Wang, Xinguang Zhang, Vanessa Hoo, Zhouhang Shao, and Xuguang Zhang. LegalReasoner: A Multi-Stage Framework for Legal Judgment Prediction via Large Language Models and Knowledge Integration. IEEE Access, 12:166843–166854, 2024.

  73. [74] Leonardo De Moura and Nikolaj Bjørner. Satisfiability Modulo Theories: An Appetizer. In David Hutchison, Takeo Kanade, Josef Kittler, Jon M. Kleinberg, Friedemann Mattern, John C. Mitchell, Moni Naor, Oscar Nierstrasz, C. Pandu Rangan, Bernhard Steffen, Madhu Sudan, Demetri Terzopoulos, Doug Tygar, Moshe Y. Vardi, Gerhard Weikum, Marcel Vinícius Medeiro...

  74. [75] Guido Governatori. The Regorous approach to process compliance. In Proceedings of the 2015 IEEE 19th International Enterprise Distributed Object Computing Conference Workshops and Demonstrations (EDOCW 2015), pages 33–40. IEEE, 2015.

  75. [76] How Khang Lim, Avishkar Mahajan, Martin Strecker, and Meng Weng Wong. Automating defeasible reasoning in law. arXiv preprint, 2022.

  76. [77] Samuel Judson, Matthew Elacqua, Filip Cano, Timos Antonopoulos, Bettina Könighofer, Scott J. Shapiro, and Ruzica Piskac. 'Put the Car on the Stand': SMT-based Oracles for Investigating Decisions, January 2024. arXiv:2305.05731 [cs].

  77. [78] How Khang Lim, Avishkar Mahajan, Martin Strecker, and Meng Weng Wong. Automating Defeasible Reasoning in Law, May 2022. arXiv:2205.07335 [cs].

  78. [79] Tomer Libal. Legal linguistic templates and the tension between legal knowledge representation and reasoning. Frontiers in Artificial Intelligence, 6:113626, 2023.

  79. [80] Zishen Wan, Che-Kai Liu, Hanchen Yang, Chaojian Li, Haoran You, Yonggan Fu, Cheng Wan, Tushar Krishna, Yingyan Lin, and Arijit Raychowdhury. Towards Cognitive AI Systems: A Survey and Prospective on Neuro-Symbolic AI, January 2024. arXiv:2401.01040 [cs].

  80. [81] Wandemberg Gibaut, Leonardo Pereira, Fabio Grassiotto, Alexandre Osorio, Eder Gadioli, Amparo Munoz, Sildolfo Gomes, and Claudio dos Santos. Neurosymbolic AI and its taxonomy: A survey. arXiv preprint, 2023. arXiv:2305.08876 [cs.AI].

Showing first 80 references.