pith. machine review for the scientific record.

arxiv: 1903.04361 · v2 · submitted 2019-03-03 · 💻 cs.GL

Recognition: unknown

Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence

Authors on Pith no claims yet
classification 💻 cs.GL
keywords: framework · computing · normative · opaque · techniques · artificial · black box
read the original abstract

Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. The Explainable Artificial Intelligence research program aims to develop analytic techniques with which to render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques' explanatory success. The aim of the present discussion is to develop such a framework, while paying particular attention to different stakeholders' distinct explanatory requirements. Building on an analysis of 'opacity' from philosophy of science, this framework is modeled after David Marr's influential account of explanation in cognitive science. Thus, the framework distinguishes between the different questions that might be asked about an opaque computing system, and specifies the general way in which these questions should be answered. By applying this normative framework to current techniques such as input heatmapping, feature-detector identification, and diagnostic classification, it will be possible to determine whether and to what extent the Black Box Problem can be solved.
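The abstract names input heatmapping as one of the techniques the framework is applied to. As a minimal illustration only (not taken from the paper), the sketch below shows the gradient-style idea behind such heatmaps for a hypothetical linear classifier: the "explanation" is a per-feature score indicating how strongly each input dimension contributed to the model's output.

```python
import numpy as np

def saliency_map(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Gradient-style input heatmap for a linear score f(x) = w . x.

    For a linear model the gradient w.r.t. the input is just w, so the
    heatmap is the magnitude of each feature's contribution w_i * x_i.
    """
    contributions = weights * x      # per-feature contribution to the score
    return np.abs(contributions)    # magnitude serves as the "importance" heatmap

# Hypothetical 4-feature classifier (weights and input are made up)
w = np.array([0.5, -2.0, 0.1, 0.0])
x = np.array([1.0, 1.0, 3.0, 10.0])
heat = saliency_map(w, x)
# Feature 1 dominates the heatmap despite feature 3's large raw value,
# because feature 3 carries zero weight in the model.
```

For deep networks the same idea is applied via backpropagated gradients rather than a closed-form weight vector, which is where the paper's question of whether such maps count as genuine explanations becomes pressing.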

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Seeing What Shouldn't Be There: Counterfactual GANs for Medical Image Attribution

    cs.CV · 2026-05 · unverdicted · novelty 5.0

    A cycle-consistent GAN generates counterfactual medical images to attribute classification decisions more comprehensively than standard saliency methods.