pith. machine review for the scientific record.

arxiv: 1903.11112 · v1 · submitted 2019-03-26 · cs.LG · cs.CL · stat.ML


Privacy-preserving Active Learning on Sensitive Data for User Intent Classification

classification: cs.LG · cs.CL · stat.ML
keywords: active, data, learning, privacy, annotation, approach, classification, sensitive
abstract

Active learning holds promise of significantly reducing data annotation costs while maintaining reasonable model performance. However, it requires sending data to annotators for labeling, which presents a possible privacy leak when the training set includes sensitive user data. In this paper, we describe an approach for carrying out privacy-preserving active learning with quantifiable guarantees. We evaluate our approach by showing the trade-off between privacy, utility, and annotation budget on a binary classification task in an active learning setting.
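The listing does not spell out the paper's privacy mechanism, but the active-learning side it describes can be sketched generically. The following is a minimal illustration, not the paper's method: a logistic-regression model trained on a small labeled seed repeatedly queries an annotator for the most uncertain unlabeled point (uncertainty sampling) until the annotation budget is spent. All names (`train`, `budget`, the synthetic data) are illustrative assumptions; a privacy-preserving variant would transform or perturb the queried points before sending them to the annotator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data, standing in for (sensitive) user data.
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=200, lr=0.1):
    # Plain logistic regression fitted by gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Active-learning loop: start with a small labeled seed set, then spend the
# annotation budget on the points the current model is least sure about.
labeled = list(range(10))
unlabeled = list(range(10, 500))
budget = 50  # number of annotator queries (illustrative)

for _ in range(budget):
    w = train(X[labeled], y[labeled])
    probs = sigmoid(X[unlabeled] @ w)
    # Uncertainty sampling: query the point closest to the decision boundary.
    i = int(np.argmin(np.abs(probs - 0.5)))
    labeled.append(unlabeled.pop(i))

w = train(X[labeled], y[labeled])
acc = float(np.mean((sigmoid(X @ w) > 0.5).astype(int) == y))
print(f"labels used: {len(labeled)}, accuracy: {acc:.2f}")
```

The trade-off the abstract mentions shows up here as the `budget` parameter: fewer queries mean fewer labels leave the data owner's hands, at the cost of model utility.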

