pith. machine review for the scientific record.

arxiv: 1812.06210 · v2 · submitted 2018-12-15 · 💻 cs.LG · stat.ML


A General Approach to Adding Differential Privacy to Iterative Training Procedures

classification: 💻 cs.LG, stat.ML
keywords: privacy, training, approach, different, algorithms, challenges, learning, mechanism
original abstract

In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism, and then isolates and simplifies the critical logic that computes the final privacy guarantees. A key challenge is that training algorithms often require estimating many different quantities (vectors) from the same set of examples --- for example, gradients of different layers in a deep learning architecture, as well as metrics and batch normalization parameters. Each of these may have different properties like dimensionality, magnitude, and tolerance to noise. By extending previous work on the Moments Accountant for the subsampled Gaussian mechanism, we can provide privacy for such heterogeneous sets of vectors, while also structuring the approach to minimize software engineering challenges.
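The per-vector sanitization the abstract describes (clipping each quantity to its own norm bound, then adding Gaussian noise scaled to that bound) can be sketched as follows. This is a minimal illustration, not the paper's actual library code; the function and parameter names (`clip_and_noise`, `clip_norms`, `noise_multiplier`) are assumptions chosen for clarity.

```python
import numpy as np

def clip_and_noise(vectors, clip_norms, noise_multiplier, rng=None):
    """Sanitize a heterogeneous set of vectors (illustrative sketch).

    Each vector is clipped to its own L2 bound, then perturbed with
    Gaussian noise whose standard deviation scales with that bound,
    so quantities of different dimensionality and magnitude can each
    get an appropriate privacy configuration.
    """
    rng = rng or np.random.default_rng()
    sanitized = []
    for v, c in zip(vectors, clip_norms):
        norm = np.linalg.norm(v)
        # Scale down only if the vector exceeds its clipping bound c.
        clipped = v * min(1.0, c / norm) if norm > 0 else v
        # Noise is calibrated to the clipping bound, not the raw norm.
        noised = clipped + rng.normal(0.0, noise_multiplier * c, size=v.shape)
        sanitized.append(noised)
    return sanitized
```

In a training loop, the list of vectors might hold gradients of different layers alongside batch-normalization statistics, each paired with its own clipping norm; the final privacy guarantee would then be computed separately by an accountant for the subsampled Gaussian mechanism, as the abstract notes.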

This paper has not been read by Pith yet.
