pith. machine review for the scientific record.

arxiv: 2604.15898 · v1 · submitted 2026-04-17 · 💻 cs.AI

Recognition: unknown

Towards Rigorous Explainability by Feature Attribution

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 09:12 UTC · model grok-4.3

classification 💻 cs.AI
keywords explainable AI · feature attribution · Shapley values · symbolic methods · XAI · rigorous explanations · machine learning interpretability

The pith

Symbolic methods can provide rigorous feature importance assignments in explainable AI, unlike non-symbolic approaches such as Shapley values.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that non-symbolic methods have served as the default option for explaining complex machine learning models for about a decade, yet they lack rigor and can mislead human decision-makers, a problem that is especially acute in high-stakes applications. It identifies the use of Shapley values in tools such as SHAP as a clear case of provable lack of rigor. The work surveys ongoing efforts to develop and apply rigorous symbolic methods as a concrete alternative for assigning relative feature importance.

Core claim

Non-symbolic methods such as Shapley values lack rigor and can mislead, while symbolic methods provide rigorous alternatives for feature importance assignment.

What carries the argument

Rigorous symbolic methods of XAI for assigning relative feature importance
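
As a hypothetical illustration of what this family of methods computes, and not an example taken from the paper: one basic object in symbolic XAI is an abductive explanation, a subset-minimal set of features whose current values provably force the model's prediction. For a tiny model it can be found by an exhaustive sufficiency check plus deletion-based minimization; the toy classifier, instance, and minimization loop below are all assumptions made for illustration.

```python
from itertools import product

# Toy boolean classifier over three features (invented for illustration,
# not taken from the paper): predicts 1 iff (x1 and x2) or x3.
def model(x):
    x1, x2, x3 = x
    return int((x1 and x2) or x3)

instance = (1, 1, 0)
prediction = model(instance)  # 1

def is_sufficient(fixed):
    """True if fixing the features in `fixed` to their values in `instance`
    forces the prediction for every completion of the remaining features."""
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if model(tuple(x)) != prediction:
            return False
    return True

# Deletion-based minimization: drop any feature whose removal
# keeps the remaining set sufficient for the prediction.
explanation = set(range(len(instance)))
for i in sorted(explanation):
    if is_sufficient(explanation - {i}):
        explanation.remove(i)

print(sorted(explanation))  # [0, 1]: the first two feature values alone force the prediction
```

The exhaustive check enumerates every completion of the free features, which is exactly the exponential cost that makes scalability the load-bearing question below; the symbolic methods this literature surveys typically replace the brute force with logic-based reasoning (for example SAT solving or knowledge compilation) on restricted model classes.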

If this is right

  • High-stakes ML applications would gain explanations that carry formal guarantees rather than approximations.
  • Decision-makers could act on feature importance scores with reduced risk of being misled.
  • Symbolic XAI tools would displace current non-rigorous practices in domains that require accountability.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Symbolic methods could be paired with formal verification to strengthen overall AI safety pipelines.
  • Hybrid systems might address scalability limits and allow symbolic rigor on larger models.
  • Regulatory requirements for AI explainability could shift toward demanding provable rather than heuristic attributions.

Load-bearing premise

That symbolic methods can be made practical and scalable for the complex, high-dimensional models used in real-world machine learning.

What would settle it

A concrete case in a high-stakes domain where a symbolic feature attribution method yields an incorrect or incomplete assignment, or credible evidence that a non-symbolic method such as SHAP does not in practice mislead users in such settings.

Figures

Figures reproduced from arXiv: 2604.15898 by João Marques-Silva, Olivier Létoffé, Xuanxiang Huang.

Figure 1: Classification model M1 represented as a decision tree.
Figure 2: Regression model M2 represented as a regression tree. Tabular representation: π2(0,0) = −1/2, π2(0,1) = 3/2, π2(1,0) = 1, π2(1,1) = 1. Expected values: υe(∅) = 3/4, υe({1}) = 1, υe({2}) = 5/4, υe({1,2}) = 1.
Figure 3: Computation of SHAP scores for E1. Resulting scores: Sv(1) = 0, Sv(2) = 0.25.
Figure 4: Computation of SHAP scores for E2.
Figure 5: Example of a regression model that is Lipschitz continuous.
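
To make the reported numbers concrete, here is a minimal sketch, not taken from the paper, that recomputes the exact Shapley scores from the expected values shown with Figure 2, taking υe as the value function of the explanation problem E1. It reproduces the scores reported in Figure 3, Sv(1) = 0 and Sv(2) = 1/4 (0.25).

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

# Value function recovered from the figure: expected model output after
# fixing the features in S to their values in the explained instance.
v = {
    frozenset():       Fraction(3, 4),
    frozenset({1}):    Fraction(1),
    frozenset({2}):    Fraction(5, 4),
    frozenset({1, 2}): Fraction(1),
}
features = [1, 2]

def shapley(i):
    """Exact Shapley score of feature i under the value function v."""
    n = len(features)
    others = [f for f in features if f != i]
    score = Fraction(0)
    for k in range(n):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = Fraction(factorial(k) * factorial(n - k - 1), factorial(n))
            score += weight * (v[s | {i}] - v[s])
    return score

for i in features:
    print(f"Sv({i}) = {shapley(i)}")
# Prints Sv(1) = 0 and Sv(2) = 1/4, matching the scores reported in Figure 3.
```

The zero score for feature 1 is the kind of outcome the paper flags as misleading: if the explained instance is (x1, x2) = (1, 1), as the table suggests, then flipping x1 alone changes the model's output from 1 to 3/2, yet feature 1 receives no Shapley credit.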
read the original abstract

For around a decade, non-symbolic methods have been the option of choice when explaining complex machine learning (ML) models. Unfortunately, such methods lack rigor and can mislead human decision-makers. In high-stakes uses of ML, the lack of rigor is especially problematic. One prime example of provable lack of rigor is the adoption of Shapley values in explainable artificial intelligence (XAI), with the tool SHAP being a ubiquitous example. This paper overviews the ongoing efforts towards using rigorous symbolic methods of XAI as an alternative to non-rigorous non-symbolic approaches, concretely for assigning relative feature importance.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 0 minor

Summary. The paper claims that non-symbolic feature attribution methods such as Shapley values and the SHAP tool lack rigor and can mislead human decision-makers, especially in high-stakes ML applications. It positions rigorous symbolic methods as preferable alternatives and overviews ongoing efforts to apply them for assigning relative feature importance.

Significance. If the overview accurately catalogs viable symbolic techniques and their advantages, the paper could help redirect XAI research toward more trustworthy methods. Because it is a survey with no new derivations, proofs, or experiments, its significance rests on the quality of its synthesis and on whether it substantiates the practicality of symbolic approaches for real-world models.

major comments (1)
  1. Abstract and introduction: The central positioning of symbolic methods as practical alternatives to non-symbolic ones (e.g., SHAP) for complex ML models is load-bearing, yet the manuscript provides no concrete scaling arguments, complexity bounds, or references to benchmarks demonstrating that cited symbolic techniques avoid exponential blow-up in high-dimensional settings. This leaves the 'alternative' claim unsubstantiated.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our survey. We address the single major comment below and will revise the manuscript to strengthen the relevant sections.

read point-by-point responses
  1. Referee: Abstract and introduction: The central positioning of symbolic methods as practical alternatives to non-symbolic ones (e.g., SHAP) for complex ML models is load-bearing, yet the manuscript provides no concrete scaling arguments, complexity bounds, or references to benchmarks demonstrating that cited symbolic techniques avoid exponential blow-up in high-dimensional settings. This leaves the 'alternative' claim unsubstantiated.

    Authors: We agree that the survey would benefit from more explicit discussion of practicality and scalability. As an overview paper, the manuscript synthesizes existing literature rather than deriving new bounds; however, the referee is correct that the current text does not sufficiently reference or summarize complexity results or benchmarks for the cited symbolic techniques. In revision we will expand the introduction and add a dedicated subsection on computational considerations. This will include citations to relevant analyses of symbolic feature attribution methods (e.g., work on decision diagrams and satisfiability-based approaches that demonstrate polynomial scaling under restricted model classes or via approximations) and note both successful high-dimensional applications and remaining limitations regarding exponential blow-up. The revised text will also qualify the 'alternative' claim to distinguish rigor advantages from universal practicality. revision: yes

Circularity Check

0 steps flagged

No circularity: survey paper with no derivations or self-referential reductions

full rationale

The paper is an overview surveying symbolic XAI methods as rigorous alternatives to non-symbolic approaches like Shapley values/SHAP. It contains no equations, fitted parameters, predictions, or derivation chains that could reduce to the paper's own inputs by construction. Claims rest on external references rather than internal self-definition, fitted-input renaming, or load-bearing self-citations. This matches the default expectation for non-circular survey papers.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a survey paper. No free parameters, axioms, or invented entities are introduced because no new technical claims or derivations are made.

pith-pipeline@v0.9.0 · 5398 in / 862 out tokens · 24548 ms · 2026-05-10T09:12:01.066977+00:00 · methodology

discussion (0)

