Recognition: 2 theorem links
A Neuro-Symbolic Framework for Accountability in Public-Sector AI
Pith reviewed 2026-05-16 23:22 UTC · model grok-4.3
The pith
A neuro-symbolic framework connects AI-generated explanations for public benefits to statutory law, enabling detection of legal inconsistencies.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The framework links system-generated decision justifications to the statutory constraints of CalFresh by combining a structured ontology derived from the state's Manual of Policies and Procedures, a rule extraction pipeline that expresses statutory logic formally, and a solver-based reasoning layer to check alignment with governing law.
What carries the argument
Structured ontology of eligibility requirements from the Manual of Policies and Procedures, paired with a rule extraction pipeline and solver-based reasoning layer for verifying legal consistency.
Load-bearing premise
The ontology and rule extraction pipeline can accurately capture statutory logic from the policy manual without significant loss or misinterpretation.
What would settle it
Finding a real CalFresh case where the AI explanation matches the law but the framework flags it as inconsistent, or vice versa.
Original abstract
Automated eligibility systems increasingly determine access to essential public benefits, but the explanations they generate often fail to reflect the legal rules that authorize those decisions. This thesis develops a legally grounded explainability framework that links system-generated decision justifications to the statutory constraints of CalFresh, California's Supplemental Nutrition Assistance Program. The framework combines a structured ontology of eligibility requirements derived from the state's Manual of Policies and Procedures (MPP), a rule extraction pipeline that expresses statutory logic in a verifiable formal representation, and a solver-based reasoning layer to evaluate whether the explanation aligns with governing law. Case evaluations demonstrate the framework's ability to detect legally inconsistent explanations, highlight violated eligibility rules, and support procedural accountability by making the basis of automated determinations traceable and contestable.
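The pipeline the abstract describes (structured rules extracted from the MPP, plus a check of whether an explanation aligns with them) can be sketched in miniature. Everything below is hypothetical: the income limits, field names, and the single eligibility rule are illustrative stand-ins, not the MPP's actual provisions or the thesis's implementation.

```python
from dataclasses import dataclass

# Hypothetical income limits by household size (illustrative dollar
# figures, not the real CalFresh thresholds).
INCOME_LIMITS = {1: 2510, 2: 3408, 3: 4304}

@dataclass
class Case:
    household_size: int
    gross_income: float

def eligible_by_rule(case: Case) -> bool:
    """Formal encoding of the (hypothetical) income rule."""
    return case.gross_income <= INCOME_LIMITS[case.household_size]

@dataclass
class Explanation:
    decision: str           # "approve" or "deny"
    cited_rule_holds: bool  # what the explanation claims about the rule

def check_explanation(case: Case, expl: Explanation) -> bool:
    """Flag the explanation as inconsistent if its claim about the rule
    disagrees with the formal encoding, or if the decision does not
    follow from the rule outcome."""
    rule_outcome = eligible_by_rule(case)
    claim_ok = expl.cited_rule_holds == rule_outcome
    decision_ok = (expl.decision == "approve") == rule_outcome
    return claim_ok and decision_ok

case = Case(household_size=2, gross_income=3000)
bad = Explanation(decision="deny", cited_rule_holds=False)
print(check_explanation(case, bad))  # → False: the denial contradicts the formal rule
```

In the actual framework the rule set comes from an ontology over the MPP and the check is delegated to a solver; the sketch only shows the shape of the traceability claim, namely that each explanation is tested against an independently encoded legal rule.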
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript develops a neuro-symbolic framework for accountability in public-sector AI, focused on CalFresh (California's SNAP) eligibility decisions. It constructs a structured ontology from the state's Manual of Policies and Procedures (MPP), applies a rule extraction pipeline to encode statutory logic in a verifiable formal representation, and uses a solver-based reasoning layer to assess whether automated explanations align with governing law. Case evaluations are presented as demonstrating the framework's capacity to detect legally inconsistent explanations, identify violated eligibility rules, and enhance procedural accountability through traceable and contestable determinations.
Significance. If the rule extraction and ontology faithfully preserve legal meaning, the framework would provide a concrete, auditable mechanism for linking AI-generated justifications to statutory constraints in high-stakes public benefits systems. This could strengthen due-process protections, facilitate legal challenges, and inform standards for explainability in administrative AI, particularly where current black-box or post-hoc methods fall short of legal requirements.
major comments (2)
- [Abstract] The claim that 'case evaluations demonstrate the framework's ability to detect legally inconsistent explanations' is unsupported by quantitative metrics, error rates, implementation details, or validation data. Without these, the central empirical assertion cannot be assessed, yet it remains load-bearing for the paper's contribution.
- [Rule extraction pipeline] As described in the methods and framework sections, the solver layer can reliably detect inconsistencies only if the translation from MPP statutory text to formal rules preserves legal meaning without significant omission or distortion. The manuscript provides no independent legal validation (such as expert review, inter-annotator agreement scores, or formal equivalence checks), leaving this translation step unverified and directly undermining the accountability claims.
minor comments (2)
- [Framework description] The formal representation language used by the solver layer should be defined with explicit syntax and semantics in a dedicated subsection to support reproducibility and external verification.
- [Case evaluations] The manuscript would benefit from a table summarizing the case evaluations, including the number of cases, types of inconsistencies detected, and any baseline comparisons.
Simulated Author's Rebuttal
We thank the referee for these focused comments, which highlight important gaps in empirical support and validation. We agree that the abstract claim requires tempering and that the rule extraction step needs more transparent discussion of its limitations. We will make revisions to address both points directly while preserving the manuscript's focus on the framework design.
Point-by-point responses
- Referee: [Abstract] The claim that 'case evaluations demonstrate the framework's ability to detect legally inconsistent explanations' is unsupported by quantitative metrics, error rates, implementation details, or validation data. Without these, the central empirical assertion cannot be assessed and remains load-bearing for the paper's contribution.
Authors: We accept this critique. The case evaluations in the manuscript are illustrative demonstrations using a small set of concrete CalFresh scenarios drawn from public examples; they show the solver identifying specific inconsistencies but do not include quantitative metrics, error rates, or large-scale validation. We will revise the abstract to replace the claim with 'illustrative case evaluations demonstrate the framework's capacity to detect...' and will expand the methods and evaluation sections with implementation details (number of extracted rules, solver runtime on the cases, and the exact formalization approach). A new limitations subsection will explicitly note the absence of quantitative benchmarking and the qualitative nature of the current evidence. revision: yes
- Referee: [Rule extraction pipeline] The solver layer can reliably detect inconsistencies only if the translation from MPP statutory text to formal rules preserves legal meaning without significant omission or distortion. The manuscript provides no independent legal validation (such as expert review, inter-annotator agreement scores, or formal equivalence checks), leaving this translation step unverified and directly undermining the accountability claims.
Authors: We agree that independent legal validation would strengthen the claims. The extraction was performed manually by the authors through direct mapping from the MPP text into a logic-based representation, with traceability maintained via the ontology. No external legal expert review or inter-annotator agreement was conducted, as this was a single-author thesis project. We will revise the methods section to include explicit examples of text-to-rule translations, document the extraction criteria used, and add a dedicated limitations paragraph acknowledging the lack of formal equivalence checks or expert validation. We will also note that the framework is designed to support such validation in future collaborative work with legal experts. revision: partial
Circularity Check
No circularity in derivation chain
full rationale
The paper constructs its framework by deriving a structured ontology directly from the external CalFresh Manual of Policies and Procedures (MPP), applying a rule extraction pipeline to translate statutory logic into formal representations, and using a solver layer for consistency checks. Case evaluations then demonstrate detection of inconsistencies against these externally sourced rules. No self-definitional loops, fitted parameters renamed as predictions, or load-bearing self-citations appear in the derivation; the central claims rest on independent statutory documents and empirical case checks rather than reducing to the paper's own inputs by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Eligibility rules in the Manual of Policies and Procedures can be structured into an ontology and expressed in verifiable formal logic without material loss of legal meaning.
invented entities (1)
- Neuro-symbolic accountability framework (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean; IndisputableMonolith/Cost/FunctionalEquation.lean: reality_from_one_distinction; washburn_uniqueness_aczel (tag: unclear)
The relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: "The framework combines a structured ontology of eligibility requirements derived from the state's Manual of Policies and Procedures (MPP), a rule extraction pipeline that expresses statutory logic in a verifiable formal representation, and a solver-based reasoning layer to evaluate whether the explanation aligns with governing law."
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean: LogicNat recovery; embed_injective (tag: unclear)
The relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: "SMT Solver for Legal Consistency Checking... Z3 SMT solver determines whether there exists a logically coherent assignment of values under which all statements can be true simultaneously."
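The quoted role of the Z3 solver, checking whether all statements in an explanation can be true simultaneously, can be illustrated without Z3 itself. The sketch below substitutes a brute-force satisfiability check over boolean assignments; the three clauses encode a hypothetical denied-benefits explanation and are not taken from the thesis.

```python
from itertools import product

def satisfiable(clauses, variables):
    """Brute-force check: return an assignment making every clause true,
    or None if the clauses are jointly unsatisfiable. A toy stand-in for
    the SMT solver's role in the framework."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(clause(model) for clause in clauses):
            return model
    return None

# Hypothetical statements extracted from a denial explanation:
#   1. If income is within the limit, the household is eligible (the cited rule).
#   2. The household is not eligible (the decision).
#   3. Income is within the limit (the explanation's own factual claim).
clauses = [
    lambda m: (not m["income_ok"]) or m["eligible"],
    lambda m: not m["eligible"],
    lambda m: m["income_ok"],
]
print(satisfiable(clauses, ["income_ok", "eligible"]))  # → None: the explanation is inconsistent
```

Dropping any one clause makes the remaining set satisfiable, which is how such a checker can also point at the specific violated rule rather than just report a contradiction.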
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] John Denvir. Controlling welfare bureaucracy: A dynamic approach. Notre Dame Law Review, 50:457, 1975.
- [2] David Leslie. Understanding artificial intelligence ethics and safety. Technical report, The Alan Turing Institute, June 2019. arXiv:1906.05684 [cs].
- [3] Danielle Keats Citron. Technological due process. Washington University Law Review, 85(6):1249–1313, 2008.
- [4] Terry Carney. The Automated Welfare State: Challenges for Socioeconomic Rights of the Marginalised. In Zofia Bednarz and Monika Zalnieriute, editors, Money, Power, and AI, pages 95–115. Cambridge University Press, 1st edition, November 2023.
- [5] Liming Zhu, Qinghua Lu, Sung Une Lee, and Ding Ming. Oversight design for AI-enabled decision making in government services. SSRN, 2025. Posted September 29, 2025; SSRN: 5543018.
- [6] Justia Administrative Law Center. Informal agency rulemaking under the law. Justia Legal Portal, 2025. Accessed December 18, 2025.
- [7] Columbia Law Review. Rulemaking and inscrutable automated decision tools.
- [8] U.S. Department of Agriculture, Food and Nutrition Service. Supplemental Nutrition Assistance Program (SNAP), 2025. Accessed 2025-12-01.
- [9] California Department of Social Services. CalFresh, 2025. Accessed 2025-12-01.
- [10] California Department of Social Services. CalFresh regulations, 2025. Accessed 2025-12-01.
- [11] California Department of Social Services. Notice of action documents, 2025. Accessed 2025-12-01.
- [12] Michael Veale and Irina Brass. Administration by algorithm? Public management meets public sector machine learning. Public Administration Review, 79(6):845–856, 2019.
- [13] Karl Kristian Larsson. Digitization or equality: When government automation covers some, but not all citizens. Government Information Quarterly, 38(1):101547, January 2021.
- [14] Mike Zajko. Automated Government Benefits and Welfare Surveillance. Surveillance & Society, 21(3):246–258, September 2023.
- [15] Lael R. Keiser. Understanding Street-Level Bureaucrats' Decision Making: Determining Eligibility in the Social Security Disability Program. Public Administration Review, 70(2):247–257, March 2010.
- [16] Eva Sørensen and Jacob Torfing. The ideational robustness of bureaucracy. Policy and Society, 43(2):141–158, July 2024.
- [17] Virginia Eubanks. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, New York, 2018.
- [18] Hanne Hoglund Ryden and Luiz De Andrade. The hidden costs of digital self-service: administrative burden, vulnerability and the role of interpersonal aid in Norwegian and Brazilian welfare services. In Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance, pages 473–478, Belo Horizonte, Brazil, September.
- [19] Ryan Calo and Danielle Keats Citron. The automated administrative state: A crisis of legitimacy. Michigan Law Review, 119(5):891–948, 2021.
- [20] Ian Greener. The changing governance of welfare: revisiting Jessop's framework in the context of healthcare. Social Theory & Health, 20(1):21–36, 2022.
- [21] Michigan Office of the Auditor General. Performance audit report: Michigan Integrated Data Automated System (MiDAS), Unemployment Insurance Agency; Department of Talent and Economic Development; Department of Technology, Management, and Budget. Technical Report 641-0593-15, Lansing, MI, February 2016.
- [22] Carolyn J. Heinrich and Deanna Malatesta. Postmortem on a public sector contract collapse: The State of Indiana's welfare modernization failure. Technical report, Vanderbilt University, March 2022. Working paper.
- [23] California State Auditor. Federal compliance audit report for the fiscal year ended June 30, 2018. Technical report, California State Auditor's Office, 2019. Accessed 2025-12-01.
- [24] Robert Brauneis and Ellen P. Goodman. Algorithmic transparency for the smart city. Yale Journal of Law & Technology, 20:103–176, 2018.
- [25] Derek Wu and Bruce D. Meyer. Administer, automate, activate, and adjudicate: How should states implement the one-stop-shop vision for benefit delivery? IZA Discussion Paper No. 16294, IZA Institute of Labor Economics, 2024.
- [26] Pamela Herd, Hilary Hoynes, Jamila Michener, and Donald Moynihan. Introduction: Administrative Burden as a Mechanism of Inequality in Policy Implementation. RSF: The Russell Sage Foundation Journal of the Social Sciences, 9(4):1–30, September 2023.
- [27] RoX818. When The Algorithm Says No: AI Denies Vital Benefits, May 2025. Section: Bias & Mitigation.
- [28] Merve Hickok. Public procurement of artificial intelligence systems: new risks and future proofing. AI & Society, pages 1–15, October 2022.
- [29] Mark Bovens and Stavros Zouridis. From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control. Public Administration Review, 62(2):174–184, 2002.
- [30] Michael Lipsky. Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. Russell Sage Foundation, New York, 1980.
- [31] Art. 22 GDPR: Automated individual decision-making, including profiling. https://gdpr-info.eu/art-22-gdpr/, 2016. Accessed 2025-11-20.
- [32] Regulation (EU) 2016/679 of the European Parliament and of the Council, 2016. Articles 13(2)(f), 14(2)(g), 15(1)(h).
- [34] Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2):76–99, May 2017.
- [35] Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique [Law for a Digital Republic], 2016. Promulgated 8 October 2016. Accessed from WIPO Lex.
- [36] Treasury Board of Canada Secretariat. Directive on automated decision-making. Technical report, Ottawa, ON, 2020. Effective for systems developed or procured after April 1, 2020.
- [37] Brazil. Lei Geral de Proteção de Dados Pessoais (LGPD). https://iapp.org/resources/article/brazilian-data-protection-law-lgpd-english-translation/, 2018. English translation via IAPP; accessed 2025-11-20.
- [38] Yuan Chen, Keiko Katsuragawa, and Edward Lank. Understanding Viewport- and World-based Pointing with Everyday Smart Devices in Immersive Augmented Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), pages 1–13, New York, NY, USA, April 2020. ACM.
- [39] Goldberg v. Kelly, 397 U.S. 254, 1970. Procedural due process requires an evidentiary hearing before termination of welfare benefits.
- [40] U.S. Supreme Court. Califano v. Yamasaki, 442 U.S. 682. https://supreme.justia.com/cases/federal/us/442/682/, 1979. Accessed 2025-11-20.
- [41] Ari Ezra Waldman. Power, process, and automated decision-making. Fordham Law Review, 88(2):613–632, 2019.
- [42] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 on artificial intelligence (AI Act). https://eur-lex.europa.eu/eli/reg/2024/1689/oj, 2024. Entered into force 1 August 2024; accessed 2025-11-20.
- [44] OECD. OECD AI Principles. https://www.oecd.org/en/topics/ai-principles.html, 2019. Accessed 2025-11-20.
- [45] UNESCO. Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics, 2021. Adopted November 2021; accessed 2025-12-01.
- [46] Sen. Ron Wyden (D-OR). S.2892, 118th Congress (2023–2024): Algorithmic Accountability Act of 2023, September 2023.
- [47] Office of Science and Technology Policy (OSTP). Blueprint for an AI Bill of Rights: Making automated systems work for the American people. https://www.whitehouse.gov/ostp/ai-bill-of-rights/, 2022. Accessed 2025-11-20.
- [48] Jerry Louis Mashaw. Is Administrative Law at War with Itself? SSRN Electronic Journal, 2020.
- [49] Lon L. Fuller. The Morality of Law. Yale University Press, New Haven, CT, revised edition, 1969.
- [50] Columbia Law Review. An Administrative Jurisprudence: The Rule of Law in the Administrative State.
- [51] Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, and Rhianne Jones. Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable. Journal of Responsible Technology, 7–8:100017, October 2021.
- [52]
- [53] Andrada Iulia Prajescu and Roberto Confalonieri. Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives, October 2025. arXiv:2510.11079 [cs].
- [54] Gennie Mansi, Naveena Karusala, and Mark Riedl. Legally-informed explainable AI. arXiv preprint arXiv:2504.10708, 2025.
- [55] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144, San Francisco, CA, USA, August 2016. ACM.
- [56] Maximilian Muschalik, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, and Eyke Hüllermeier. shapiq: Shapley Interactions for Machine Learning, 2024. Version 1.
- [57] Sahil Verma, John P. Dickerson, and Keegan Hines. Counterfactual explanations for machine learning: A review. CoRR, abs/2010.10596, 2020.
- [58] Timo Freiesleben, Gunnar König, Christoph Molnar, and Álvaro Tejero-Cantero. Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena. Minds and Machines, 34(3):32, July 2024.
- [59] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, May 2019.
- [60] Shane T. Mueller, Robert R. Hoffman, William Clancey, Abigail Emrey, and Gary Klein. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, February 2019. arXiv:1902.01876 [cs].
- [61] Q. Vera Liao and Kush R. Varshney. Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv preprint arXiv:2110.10790, 2021.
- [62] Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. Expanding Explainability: Towards Social Transparency in AI Systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–19, May 2021. arXiv:2101.04719 [cs].
- [63] Franz Baader, Deborah L. McGuinness, Daniele Nardi, and Peter F. Patel-Schneider. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2nd edition, 2010.
- [64] Thomas R. Gruber. A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2):199–220, June 1993.
- [65] O. Lassila and R. Swick. Resource Description Framework (RDF) model and syntax specification. Technical Report REC-rdf-syntax-19990222, W3C, February 1999. W3C Recommendation.
- [66] World Wide Web Consortium (W3C). OWL 2 Web Ontology Language Document Overview (Second Edition). https://www.w3.org/TR/2012/REC-owl2-overview-20121211/, 2012. W3C Recommendation; accessed 2025-11-20.
- [67] The Gene Ontology Consortium. Gene Ontology: Overview and documentation. http://geneontology.org/docs/ontology-documentation/, 2025. Accessed 2025-11-20.
- [68] Ontology Portal. Suggested Upper Merged Ontology (SUMO). https://www.ontologyportal.org/. Accessed 2025-11-20.
- [69] World Wide Web Consortium (W3C). PROV-O: The PROV Ontology. https://www.w3.org/TR/prov-o/, 2013. W3C Recommendation; accessed 2025-11-20.
- [70] L. Thorne McCarty. Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning (Original Version). Harvard Law Review, 5:305–373, January 1976.
- [71] James Popple. SHYSTER: A Pragmatic Legal Expert System. SSRN Electronic Journal, 1993.
- [72] Rinke Hoekstra, Joost Breuker, Marcello Di Bello, and Alexander Boer. The LKIF core ontology of basic legal concepts. In Proceedings of the 10th International Conference on Artificial Intelligence and Law (ICAIL 2007), pages 43–44. ACM, 2007.
- [73] Xuran Wang, Xinguang Zhang, Vanessa Hoo, Zhouhang Shao, and Xuguang Zhang. LegalReasoner: A Multi-Stage Framework for Legal Judgment Prediction via Large Language Models and Knowledge Integration. IEEE Access, 12:166843–166854, 2024.
- [74] Leonardo De Moura and Nikolaj Bjørner. Satisfiability Modulo Theories: An Appetizer. In David Hutchison, Takeo Kanade, Josef Kittler, Jon M. Kleinberg, Friedemann Mattern, John C. Mitchell, Moni Naor, Oscar Nierstrasz, C. Pandu Rangan, Bernhard Steffen, Madhu Sudan, Demetri Terzopoulos, Doug Tygar, Moshe Y. Vardi, Gerhard Weikum, Marcel Vinícius Medeiro..., 2009.
- [75] Guido Governatori. The Regorous approach to process compliance. In Proceedings of the 2015 IEEE 19th International Enterprise Distributed Object Computing Conference Workshops and Demonstrations (EDOCW 2015), pages 33–40. IEEE, 2015.
- [76] How Khang Lim, Avishkar Mahajan, Martin Strecker, and Meng Weng Wong. Automating defeasible reasoning in law. arXiv preprint, 2022.
- [77] Samuel Judson, Matthew Elacqua, Filip Cano, Timos Antonopoulos, Bettina Könighofer, Scott J. Shapiro, and Ruzica Piskac. 'Put the Car on the Stand': SMT-based Oracles for Investigating Decisions, January 2024. arXiv:2305.05731 [cs].
- [78] How Khang Lim, Avishkar Mahajan, Martin Strecker, and Meng Weng Wong. Automating Defeasible Reasoning in Law, May 2022. arXiv:2205.07335 [cs].
- [79] Tomer Libal. Legal linguistic templates and the tension between legal knowledge representation and reasoning. Frontiers in Artificial Intelligence, 6:113626, 2023.
- [80] Zishen Wan, Che-Kai Liu, Hanchen Yang, Chaojian Li, Haoran You, Yonggan Fu, Cheng Wan, Tushar Krishna, Yingyan Lin, and Arijit Raychowdhury. Towards Cognitive AI Systems: A Survey and Prospective on Neuro-Symbolic AI, January 2024. arXiv:2401.01040 [cs].
- [81] Wandemberg Gibaut, Leonardo Pereira, Fabio Grassiotto, Alexandre Osorio, Eder Gadioli, Amparo Munoz, Sildolfo Gomes, and Claudio dos Santos. Neurosymbolic AI and its taxonomy: A survey. arXiv preprint arXiv:2305.08876 [cs.AI], 2023.