pith. machine review for the scientific record.

arxiv: 1907.09615 · v1 · submitted 2019-07-22 · 💻 cs.LG · stat.ML

Recognition: unknown

Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: decision making systems · individual recourse · outcome · actionable · algorithm
Original abstract

Machine learning-based decision making systems are increasingly affecting humans. An individual can suffer an undesirable outcome under such decision making systems (e.g., denied credit) irrespective of whether the decision is fair or accurate. Individual recourse pertains to the problem of providing an actionable set of changes a person can undertake in order to improve their outcome. We propose a recourse algorithm that models the underlying data distribution or manifold. We then provide a mechanism to generate the smallest set of changes that will improve an individual's outcome. This mechanism can be easily used to provide recourse for any differentiable machine learning-based decision making system. Further, the resulting algorithm is shown to be applicable to both supervised classification and causal decision making systems. Our work attempts to fill gaps in existing fairness literature that has primarily focused on discovering and/or algorithmically enforcing fairness constraints on decision making systems. This work also provides an alternative approach to generating counterfactual explanations.
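The abstract describes generating the smallest set of changes that flips a differentiable classifier's decision. A minimal sketch of that general idea, assuming a toy logistic-regression model and a hinge-plus-proximity objective (both are illustrative assumptions, not the paper's method, which additionally models the underlying data manifold):

```python
import numpy as np

# Hypothetical fixed classifier parameters, for illustration only.
w = np.array([1.5, -2.0])   # assumed logistic-regression weights
b = -0.5                    # assumed bias

def predict_proba(x):
    """P(favorable outcome | x) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def recourse(x0, target=0.6, lam=0.1, lr=0.05, steps=2000):
    """Gradient search for a small change to x0 that improves the outcome.

    Descends on  max(0, target - p(x)) + lam * ||x - x0||^2:
    the first term pushes the predicted probability past `target`,
    the second keeps the suggested changes small.
    """
    x = x0.astype(float)
    for _ in range(steps):
        p = predict_proba(x)
        # d/dx of (target - p(x)) is -p(1-p)w; active only while p < target
        grad_validity = -p * (1.0 - p) * w if p < target else np.zeros_like(w)
        grad_proximity = 2.0 * lam * (x - x0)
        x = x - lr * (grad_validity + grad_proximity)
    return x

x0 = np.array([-1.0, 1.0])   # an individual with an unfavorable outcome
x_cf = recourse(x0)          # nearby point with an improved outcome
delta = x_cf - x0            # the "actionable set of changes"
```

Because this sketch penalizes only Euclidean distance, it can suggest off-manifold (unrealistic) changes; constraining the search to the data distribution is precisely the gap the paper targets.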

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Causal Algorithmic Recourse: Foundations and Methods

    cs.AI 2026-05 conditional novelty 8.0

    A causal process model for algorithmic recourse introduces post-recourse stability conditions and copula-based methods to infer intervention effects from observational or paired data, with a distribution-free fallback...

  2. Interpretability Can Be Actionable

    cs.LG 2026-05 conditional novelty 6.0

    Interpretability research should be judged by actionability—the degree to which its insights support concrete decisions and interventions—rather than explanatory power alone.

  3. From Universal to Individualized Actionability: Revisiting Personalization in Algorithmic Recourse

    cs.LG 2026-04 unverdicted novelty 6.0

    Formalizing personalization as individual actionability in causal recourse shows hard constraints degrade validity and plausibility while revealing socio-demographic disparities in costs.