pith. machine review for the scientific record. sign in

arxiv: 2604.19684 · v1 · submitted 2026-04-21 · 💻 cs.LG

Recognition: unknown

PREF-XAI: Preference-Based Personalized Rule Explanations of Black-Box Machine Learning Models

Authors on Pith no claims yet

Pith reviewed 2026-05-10 03:25 UTC · model grok-4.3

classification 💻 cs.LG
keywords explainable AIpersonalized explanationspreference learningrule-based explanationsrobust ordinal regressionblack-box modelsuser-centric XAI
0
0 comments X

The pith

PREF-XAI reframes explanations as user-ranked alternatives and infers additive utility functions from small rankings to personalize rule-based interpretations of black-box models.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes PREF-XAI as a framework that treats explanations not as fixed outputs approximating a model but as alternatives evaluated according to individual user preferences. Users rank a small set of candidate rule explanations, and these rankings are used to infer an additive utility function via robust ordinal regression. The resulting model selects highly relevant explanations for that user and can generate novel rules the user had not considered. A sympathetic reader would care because most existing XAI methods produce generic, model-centric outputs that ignore differences in user goals, cognitive constraints, and what counts as useful.

Core claim

Within the PREF-XAI perspective, explanations are evaluated and selected according to user-specific criteria modeled by an additive utility function inferred using robust ordinal regression from rankings of candidate explanations. This methodology combines rule-based explanations with formal preference learning. Experimental results on real-world datasets show that PREF-XAI can accurately reconstruct user preferences from limited feedback, identify highly relevant explanations, and discover novel explanatory rules not initially considered by the user.

What carries the argument

The additive utility function inferred via robust ordinal regression from a user's ranking of a small set of candidate rule explanations, which scores explanations by aggregating preferences over their features and enables selection of personalized alternatives.

If this is right

  • Explanations can be selected to match each user's specific goals and cognitive style rather than providing the same output to everyone.
  • Novel explanatory rules beyond those initially supplied by the user can be discovered and presented.
  • Accurate preference reconstruction is feasible from limited feedback on real-world datasets.
  • A formal connection is established between XAI and preference learning that supports future interactive and adaptive explanation systems.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Interactive interfaces could let users iteratively refine the preference model across multiple sessions as their understanding evolves.
  • The ranking-plus-utility approach could extend to non-rule explanation types such as counterfactuals or feature attributions by defining appropriate preference criteria over them.
  • In domains like medicine or finance, personalized selection might increase user trust by aligning explanations with individual decision needs.
  • Active learning could be layered on top to choose which candidate explanations to present for ranking, further reducing user effort.

Load-bearing premise

An additive utility function inferred via robust ordinal regression from rankings of a small set of candidate explanations can accurately capture and reconstruct individual user preferences over explanations.

What would settle it

A user study in which participants rank a set of rule explanations, the inferred utility model predicts their preference order on a held-out set of explanations, and accuracy is measured by direct comparison to the participants' actual rankings of those held-out items.

Figures

Figures reproduced from arXiv: 2604.19684 by Jacek Karolczak, Jerzy Stefanowski, Roman S{\l}owi\'nski, Salvatore Greco.

Figure 1
Figure 1. Figure 1: Number of Covering Rules: The number of decision rules covering an individual instance, stratified by the predicted class for each dataset. This illustrates the variable size of the initial rule set Rx, highlighting the need to generate personalized explanations. a uniform distribution, we simulate a broad spectrum of potential preference profiles. This approach ensures that the methodology is evaluated fo… view at source ↗
Figure 2
Figure 2. Figure 2: Ranking Fidelity: Distribution of Kendall’s τ correlation between the algo￾rithmically generated full rankings and the true user preferences. Higher values indicate that the algorithm better preserves the user’s overall preference order. To ensure these algorithmically discovered rules are of genuine interest to the user, we calculate the Jaccard index for the top-5 positions of both rankings; this yields … view at source ↗
Figure 3
Figure 3. Figure 3: Parameter Recovery: Distribution of Kendall’s τ correlation between the recovered PRUS weights and the user’s ground-truth Utrue weights. Higher values demon￾strate a more accurate approximation of the underlying utility function. H&RC Max 0 10 20 30 40 Churn (Banking) H&RC Max Churn (Telecom) H&RC Max HELOC Number of discovered rules [PITH_FULL_IMAGE:figures/full_fig_p021_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Rule Discovery: The total number of newly discovered rules in the final ranking that appear strictly before the user’s highest-ranked reference rule. Higher counts indicate a stronger capacity to reveal previously unseen rules. our analysis concentrates exclusively on the two most prominent strategies: H&RC and Max ϵ. Importantly, this high overlap in the top-k sets is achieved while actively introducing n… view at source ↗
Figure 5
Figure 5. Figure 5: Rule Discovery within Top Positions: The number of newly discovered rules occupying the top-5 positions of the final ranking. Values range from 0 to 5, showing how many top rules in the algorithmic solution are new to the user. Dataset H&RC Max ϵ Churn (Banking) 1.28 1.66 Churn (Telecom) 4.60 5.37 HELOC 3.01 3.54 [PITH_FULL_IMAGE:figures/full_fig_p022_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Top-5 Rule Set Similarity: Jaccard index measuring the overlap between the top-5 rules selected by the algorithm and the user’s true top-5 rules. A score closer to 1.0 indicates near-perfect alignment with the most preferred rules. indicating a positive correlation with the true ranking. Parameter recovery ( [PITH_FULL_IMAGE:figures/full_fig_p023_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Top-10 Rule Set Similarity: Jaccard index measuring the overlap between the top-10 rules selected by the algorithm and the user’s true top-10 rules. A score closer to 1.0 indicates better identification of the user’s preferred explanatory patterns. This suggests that expected weights from hit-and-run sampling provide a more robust PRUS approximation than a single discrimination-maximizing vector. Conversel… view at source ↗
read the original abstract

Explainable artificial intelligence (XAI) has predominantly focused on generating model-centric explanations that approximate the behavior of black-box models. However, such explanations often overlook a fundamental aspect of interpretability: different users require different explanations depending on their goals, preferences, and cognitive constraints. Although recent work has explored user-centric and personalized explanations, most existing approaches rely on heuristic adaptations or implicit user modeling, lacking a principled framework for representing and learning individual preferences. In this paper, we consider Preference-Based Explainable Artificial Intelligence (PREF-XAI), a novel perspective that reframes explanation as a preference-driven decision problem. Within PREF-XAI, explanations are not treated as fixed outputs, but as alternatives to be evaluated and selected according to user-specific criteria. In the PREF-XAI perspective, here we propose a methodology that combines rule-based explanations with formal preference learning. User preferences are elicited through a ranking of a small set of candidate explanations and modeled via an additive utility function inferred using robust ordinal regression. Experimental results on real-world datasets show that PREF-XAI can accurately reconstruct user preferences from limited feedback, identify highly relevant explanations, and discover novel explanatory rules not initially considered by the user. Beyond the proposed methodology, this work establishes a connection between XAI and preference learning, opening new directions for interactive and adaptive explanation systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces PREF-XAI, a framework reframing XAI as a preference-driven decision problem. It elicits user preferences over a small set of candidate rule-based explanations via ranking, models them with an additive utility function inferred by robust ordinal regression, and uses the model both to select relevant explanations and to generate novel rules. The central claim is that this approach accurately reconstructs individual user preferences from limited feedback and yields personalized, high-quality explanations, supported by experimental results on real-world datasets.

Significance. If the experimental claims hold, the work establishes a formal bridge between XAI and preference learning, moving beyond heuristic personalization to a principled, interactive framework grounded in robust ordinal regression. This could enable adaptive explanation systems that better respect user-specific criteria and cognitive constraints. The explicit use of established preference-learning machinery is a methodological strength that provides reproducibility and falsifiability.

major comments (2)
  1. [Section 5] Section 5 (Experimental Evaluation): The manuscript asserts that PREF-XAI 'accurately reconstructs user preferences from limited feedback' and 'identifies highly relevant explanations' on real-world datasets, yet supplies no concrete information on the datasets employed, the quantitative metrics used to measure reconstruction accuracy or relevance, the baseline methods, the number of users or feedback instances, or any statistical tests. Without these details the experimental evidence cannot be assessed and does not yet substantiate the central methodological claims.
  2. [Section 3.2] Section 3.2 (Preference Modeling): The additive utility function is presented as sufficient to capture user preferences over explanations, but the paper provides neither a justification for the absence of interaction terms nor an empirical check (e.g., comparison with a non-additive model or residual analysis) on whether the additivity assumption holds for the elicited rankings. This modeling choice is load-bearing for the reconstruction and novel-rule-generation results.
minor comments (2)
  1. [Abstract and Section 1] The abstract and introduction would benefit from explicit citations to the robust ordinal regression literature (e.g., the specific formulation of the inference procedure) so that readers can immediately locate the technical foundation.
  2. [Section 3] Notation for the utility function and the set of candidate explanations is introduced without a consolidated table of symbols; adding one would improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which help clarify the presentation of our experimental results and modeling assumptions. We address each point below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Section 5] Section 5 (Experimental Evaluation): The manuscript asserts that PREF-XAI 'accurately reconstructs user preferences from limited feedback' and 'identifies highly relevant explanations' on real-world datasets, yet supplies no concrete information on the datasets employed, the quantitative metrics used to measure reconstruction accuracy or relevance, the baseline methods, the number of users or feedback instances, or any statistical tests. Without these details the experimental evidence cannot be assessed and does not yet substantiate the central methodological claims.

    Authors: We agree that the current experimental section lacks sufficient detail for independent assessment. In the revised manuscript we will expand Section 5 with the following concrete information: the specific real-world datasets (names, sources, sizes, and feature characteristics), the quantitative metrics (Kendall tau for preference reconstruction accuracy and precision-at-k for explanation relevance), the baseline methods (non-personalized rule selection, random ranking, and a simple heuristic), the number of users and feedback instances collected per user, and the statistical tests performed (paired t-tests with reported p-values). These additions will directly support the claims regarding preference reconstruction and relevance. revision: yes

  2. Referee: [Section 3.2] Section 3.2 (Preference Modeling): The additive utility function is presented as sufficient to capture user preferences over explanations, but the paper provides neither a justification for the absence of interaction terms nor an empirical check (e.g., comparison with a non-additive model or residual analysis) on whether the additivity assumption holds for the elicited rankings. This modeling choice is load-bearing for the reconstruction and novel-rule-generation results.

    Authors: We acknowledge that an explicit justification and empirical validation of the additivity assumption are missing. The additive form is chosen for its interpretability of per-criterion contributions and its robustness under the limited feedback regime of robust ordinal regression, consistent with established UTA-style methods. In the revision we will add a dedicated paragraph in Section 3.2 providing this rationale and include an empirical check: a comparison of reconstruction error between the additive model and a version allowing pairwise interactions on the collected rankings, together with a residual analysis to assess whether systematic deviations from additivity are present. revision: yes

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper applies the established robust ordinal regression technique from preference learning literature to model additive utility functions over ranked explanations in XAI. No claimed derivation, prediction, or first-principles result reduces by construction to its own inputs or fitted parameters. The central methodology is an application of an externally developed method, with experimental results on real-world datasets serving as independent validation rather than a self-referential loop. Minor self-citations to prior preference-learning work exist but are not load-bearing for any internal derivation, as the technique is externally falsifiable and not redefined here.

Axiom & Free-Parameter Ledger

1 free parameters · 2 axioms · 0 invented entities

The framework rests on standard decision-theoretic assumptions about preference representation applied to the new domain of XAI explanations; no invented entities are introduced.

free parameters (1)
  • weights of the additive utility function
    Inferred from user rankings using robust ordinal regression; exact values depend on individual feedback.
axioms (2)
  • domain assumption User preferences over explanations can be represented by an additive utility function
    Standard assumption drawn from multi-criteria decision analysis and preference learning.
  • ad hoc to paper A small set of candidate explanations suffices to elicit and reconstruct user preferences
    Practical assumption required for the elicitation procedure described.

pith-pipeline@v0.9.0 · 5546 in / 1332 out tokens · 82193 ms · 2026-05-10T03:25:33.904619+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

27 extracted references · 26 canonical work pages · 1 internal anchor

  1. [1]

    doi:10.1145/3236009

    R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pe- dreschi, A survey of methods for explaining black box models, ACM Comput. Surv. 51 (5) (2018).doi:10.1145/3236009

  2. [2]

    Saeed, C

    W. Saeed, C. Omlin, Explainable ai (xai): A systematic meta-survey of current challenges and future opportunities, Knowledge-Based Systems 263 (2023) 110273.doi:10.1016/j.knosys.2023.110273

  3. [3]

    R. M. Byrne, Good explanations in explainable artificial intelligence (XAI): evidence from human explanatory reasoning, in: Proc. of IJ- CAI’23, 2023, pp. 6536–6544.doi:10.24963/ijcai.2023/733

  4. [4]

    Fürnkranz, D

    J. Fürnkranz, D. Gamberger, N. Lavrac, Foundations of Rule Learning, Cognitive Technologies, Springer, 2012.doi:10.1007/978-3-540-751 97-7

  5. [5]

    E. T. Mekonnen, L. Longo, P. Dondio, A global model-agnostic rule- based XAI method based on parameterized event primitives for time series classifiers, Frontiers in Artificial Intelligence Volume 7 - 1381921 (2024).doi:10.3389/frai.2024.1381921

  6. [6]

    Letham, C

    B. Letham, C. Rudin, T. H. McCormick, D. Madigan, Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, The Annals of Applied Statistics (2015) 1350–1371do i:10.1214/15-AOAS848

  7. [7]

    Bodria, F

    F. Bodria, F. Giannotti, R. Guidotti, F. Naretto, D. Pedreschi, S.Rinzivillo, Benchmarkingandsurveyofexplanationmethodsforblack box models, Data Mining and Knowledge Discovery 37 (5) (2023) 1719– 1778.doi:10.1007/s10618-023-00933-9

  8. [8]

    2022 , issn =

    D. Macha, M. Kozielski, Ł. Wróbel, M. Sikora, Rulexai—a package for rule-based explanations of machine learning model, SoftwareX 20 (2022) 101209.doi:10.1016/j.softx.2022.101209

  9. [9]

    Kozielski, M

    M. Kozielski, M. Sikora, Ł. Wawrowski, Towards consistency of rule- based explainer and black box model – fusion of rule induction and XAI-based feature importance, Knowledge-Based Systems 311 (2025) 113092.doi:10.1016/j.knosys.2025.113092. 26

  10. [10]

    Kaplan, H

    S. Kaplan, H. Uusitalo, L. Lensu, A unified and practical user-centric framework for explainable artificial intelligence, Knowledge-Based Sys- tems 283 (2024) 111107.doi:10.1016/j.knosys.2023.111107

  11. [11]

    S. F. Nimmy, O. K. Hussain, R. K. Chakrabortty, F. K. Hussain, M. Saberi, An optimized belief-rule-based (BRB) approach to ensure the trustworthiness of interpreted time-series decisions, Knowledge-Based Systems 271 (2023) 110552.doi:10.1016/j.knosys.2023.110552

  12. [12]

    Lubos, T

    S. Lubos, T. N. T. Tran, A. Felfernig, S. Polat Erdeniz, V.-M. Le, LLM- generated explanations for recommender systems, in: Proc. of ACM UMAP Adjunct’24, 2024, pp. 276–285.doi:10.1145/3631700.366518 5

  13. [13]

    DR-TTA: Dynamic and Robust Test-Time Adaptation Under Low-Quality Mri Conditions for Brain Tumor Segmentation

    S. Song, Y. Chen, Y. Zhang, X. Yang, X. Wang, W. Guo, “explaining ai medical models in my way”: An LLM-enhanced personalized report for clinicians, in: Proc. of IEEE BIBM’25, 2025, pp. 7385–7392.doi: 10.1109/BIBM66473.2025.11356627

  14. [14]

    Mayne, R

    H. Mayne, R. O. Kearns, Y. Yang, A. M. Bean, E. Delaney, C. Rus- sell, A. Mahdi, LLMs don’t know their own decision boundaries: The unreliability of self-generated counterfactual explanations, in: Proc. of EMLNLP, 2025, pp. 24161–24186.doi:10.18653/v1/2025.emnlp-mai n.1231

  15. [15]

    Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting

    M. Turpin, J. Michael, E. Perez, S. R. Bowman, Language models don’t always say what they think: unfaithful explanations in chain-of-thought prompting, in: Proc. of NIPS’23, 2023, pp. 74952–74965.doi:10.485 50/arXiv.2305.04388

  16. [16]

    Local Rule-Based Explanations of Black Box Decision Systems

    R. Guidotti, A. Monreale, S. Ruggieri, D. Pedreschi, F. Turini, F. Gi- annotti, Local rule-based explanations of black box decision systems (2018).doi:10.48550/arXiv.1805.10820

  17. [17]

    M. T. Ribeiro, S. Singh, C. Guestrin, Anchors: High-precision model- agnostic explanations, Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) (Apr. 2018).doi:10.1609/aaai.v32i1.11491

  18. [18]

    Greco, R

    S. Greco, R. Słowiński, I. Szcz¸ ech, Measures of rule interestingness in various perspectives of confirmation, Information Sciences 346 (2016) 216–235.doi:10.1016/j.ins.2016.01.056. 27

  19. [19]

    Hüllermeier, R

    E. Hüllermeier, R. Słowiński, Preference learning and multiple criteria decision aiding: differences, commonalities, and synergies - Part I and II, 4OR - Quarterly Journal of Operations Research 22 (1-2) (2024) 179–209, 313–349.doi:10.1007/s10288-023-00560-6

  20. [20]

    Greco, R

    S. Greco, R. Słowiński, J. Wallenius, Fifty years of multiple criteria decision analysis: From classical methods to robust ordinal regression, European Journal of Operational Research 323 (2025) 351–377.doi: 10.1016/j.ejor.2024.07.038

  21. [21]

    Corrente, S

    S. Corrente, S. Greco, M. Kadziński, R. Słowiński, Robust ordinal re- gression in preference learning and ranking, Machine Learning 93 (2013) 381–422.doi:10.1007/s10994-013-5365-4

  22. [22]

    Greco, V

    S. Greco, V. Mousseau, R. Słowiński, Ordinal regression revisited: mul- tiple criteria ranking using a set of additive value functions, Euro- pean Journal of Operational Research 191 (2) (2008) 416–436.doi: 10.1016/j.ejor.2007.08.013

  23. [23]

    C. J. Belisle, H. E. Romeijn, R. L. Smith, Hit-and-run algorithms for generating multivariate distributions, Mathematics of Operations Re- search 18 (2) (1993) 255–266.doi:10.1287/moor.18.2.255

  24. [24]

    B. Liu, W. Hsu, Y. Ma, Integrating classification and association rule mining, in: Proceedings of the 4th International Conference on Knowl- edge Discovery and Data Mining, 1998, pp. 80–86. URLhttps://dl.acm.org/doi/10.5555/3000292.3000305

  25. [25]

    Agrawal, R

    R. Agrawal, R. Srikant, Fast algorithms for mining association rules in large databases, in: Proc. of VLDB ’94, Morgan Kaufmann Publishers Inc., 1994, p. 487–499. URLhttps://dl.acm.org/doi/10.5555/645920.672836

  26. [26]

    Goodfellow, Y

    I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016. URLhttps://www.deeplearningbook.org

  27. [27]

    D. Rey, M. Neuhäuser, International Encyclopedia of Statistical Science, Springer Berlin Heidelberg, 2011.doi:10.1007/978-3-642-04898-2 _616. 28 Appendix A. Impact of the Reference Ranking Length This appendix investigates how the size of the initial reference ranking affects the performance of the proposed methodology. The reference ranking length define...