pith. machine review for the scientific record.

arxiv: 1802.06309 · v3 · submitted 2018-02-17 · 💻 cs.LG · stat.ML

Recognition: unknown

Learning Adversarially Fair and Transferable Representations

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: fair, learning, representation, representations, adversarial, experimental, learned, objectives
abstract

In this paper, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream. Motivated by a scenario where learned representations are used by third parties with unknown objectives, we propose and explore adversarial representation learning as a natural method of ensuring those parties act fairly. We connect group fairness (demographic parity, equalized odds, and equal opportunity) to different adversarial objectives. Through worst-case theoretical guarantees and experimental validation, we show that the choice of this objective is crucial to fair prediction. Furthermore, we present the first in-depth experimental demonstration of fair transfer learning and demonstrate empirically that our learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
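The abstract's core idea — an encoder trained jointly with a task classifier and against an adversary that tries to recover the sensitive attribute — can be sketched as two competing loss terms. The following is a toy illustration only, not the paper's exact formulation: the function names and the particular group-normalized adversary score are assumptions for the sake of the example.

```python
import numpy as np

def fair_adversarial_losses(y_true, y_pred, a, adv_pred):
    """Illustrative losses for adversarial fair representation learning.

    The encoder and classifier would minimize
        clf_loss + gamma * adv_score,
    while the adversary maximizes adv_score by predicting the binary
    sensitive attribute `a` from the learned representation.
    (Hypothetical simplification; see the paper for the actual objectives.)
    """
    eps = 1e-8
    # Standard binary cross-entropy for the downstream task classifier.
    clf_loss = -np.mean(
        y_true * np.log(y_pred + eps)
        + (1 - y_true) * np.log(1 - y_pred + eps)
    )
    # Group-normalized adversary score: average per-group accuracy at
    # recovering the sensitive attribute (a demographic-parity-style
    # objective, in the spirit of the adversarial objectives the paper
    # connects to group fairness criteria).
    accs = [np.mean(1 - np.abs(a[a == g] - adv_pred[a == g])) for g in (0, 1)]
    adv_score = float(np.mean(accs))
    return float(clf_loss), adv_score
```

A perfectly predictive adversary drives `adv_score` to 1, penalizing the encoder; a representation that makes the adversary no better than chance scores near 0.5, which is what the encoder is pushed toward.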

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Optimized Deferral for Imbalanced Settings

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    MILD reformulates two-stage learning to defer as cost-sensitive learning over the input-expert domain and derives new margin-based losses with guarantees, yielding better performance than baselines on image classifica...