pith. machine review for the scientific record.

arXiv: 1802.05214 · v3 · submitted 2018-02-14 · 💻 cs.LG · cs.CR · cs.CV · stat.ML

Recognition: unknown

Learning Privacy Preserving Encodings through Adversarial Training

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CR · cs.CV · stat.ML
keywords private · adversarial · attributes · encodings · fixed · approach · classifiers · encoder
read the original abstract

We present a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes, while allowing recovery of other desirable information. Rather than simply inhibiting a given fixed pre-trained estimator, our goal is that an estimator be unable to learn to accurately predict the private attributes even with knowledge of the encoding function. We use a natural adversarial optimization-based formulation for this---training the encoding function against a classifier for the private attribute, with both modeled as deep neural networks. The key contribution of our work is a stable and convergent optimization approach that is successful at learning an encoder with our desired properties---maintaining utility while inhibiting inference of private attributes, not just within the adversarial optimization, but also by classifiers that are trained after the encoder is fixed. We adopt a rigorous experimental protocol for verification wherein classifiers are trained exhaustively till saturation on the fixed encoders. We evaluate our approach on tasks of real-world complexity---learning high-dimensional encodings that inhibit detection of different scene categories---and find that it yields encoders that are resilient at maintaining privacy.
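The adversarial formulation the abstract describes can be sketched on a toy problem. The sketch below is an illustrative reconstruction under loud assumptions, not the paper's method: the paper trains deep-network encoders on images, while here the encoder is a 1-d linear map, both heads are logistic classifiers, the two synthetic "attributes" are independent bits, and the encoder's privacy term pushes the adversary toward chance-level output (one of several stabilizations one might choose). It does, however, follow the abstract's verification protocol: after training, the encoder is frozen and a fresh private-attribute classifier is trained to saturation on the fixed encodings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting (which uses scene images):
# feature 0 carries a "private" bit, feature 1 a "useful" bit.
X = rng.normal(size=(512, 2))
priv = (X[:, 0] > 0).astype(float)   # private attribute to hide
util = (X[:, 1] > 0).astype(float)   # desirable attribute to keep

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def logreg_grad(w, z, y):
    """Gradient of mean binary cross-entropy for a 1-d logistic head."""
    return z @ (sigmoid(z * w) - y) / len(y)

e = rng.normal(scale=0.1, size=2)  # encoder: 1-d linear encoding z = X @ e
w_priv, w_util = 0.0, 0.0          # adversary head / utility head

lr, lam = 0.1, 2.0
for _ in range(3000):
    z = X @ e
    # (1) both heads take a gradient step against the current encoder
    w_priv -= lr * logreg_grad(w_priv, z, priv)
    w_util -= lr * logreg_grad(w_util, z, util)
    # (2) the encoder keeps utility while pushing the adversary toward
    #     chance-level predictions (p = 0.5), rather than merely
    #     maximizing the adversary's current loss
    p_u, p_p = sigmoid(z * w_util), sigmoid(z * w_priv)
    g_util = X.T @ ((p_u - util) * w_util) / len(X)
    s = 2.0 * (p_p - 0.5) * p_p * (1.0 - p_p)   # grad of (p_p - 0.5)^2
    e -= lr * (g_util + lam * X.T @ (s * w_priv) / len(X))

# Verification protocol from the abstract: freeze the encoder, then train
# a *fresh* private-attribute classifier to saturation on the encodings.
z = X @ e
w_fresh = 0.0
for _ in range(5000):
    w_fresh -= 0.5 * logreg_grad(w_fresh, z, priv)

priv_acc = np.mean((sigmoid(z * w_fresh) > 0.5) == (priv > 0.5))
util_acc = np.mean((sigmoid(z * w_util) > 0.5) == (util > 0.5))
print(f"private accuracy: {priv_acc:.2f} (chance = 0.50)")
print(f"utility accuracy: {util_acc:.2f}")
```

The point of the final loop mirrors the abstract's key claim: it is not enough for the encoder to beat the adversary it was trained against; a classifier trained exhaustively on the *fixed* encoder should still sit near chance on the private attribute while the utility attribute remains recoverable.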

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.