pith. machine review for the scientific record.

arxiv: 1905.00180 · v1 · submitted 2019-05-01 · 💻 cs.LG · stat.ML

Recognition: unknown

Dropping Pixels for Adversarial Robustness

classification: 💻 cs.LG · stat.ML
keywords: adversarial, robustness, examples, improves, networks, pixels, accuracy, approach
read the original abstract

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test the networks with randomly subsampled images with high drop rates. We show that this approach significantly improves robustness against adversarial examples in all cases of bounded L0, L2, and L_inf perturbations, while reducing standard accuracy only slightly. We argue that subsampling pixels can be thought of as providing a set of robust features for the input image and thus improves robustness without requiring adversarial training.
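The core transform described in the abstract can be sketched as follows. This is a minimal NumPy illustration of randomly dropping pixels at a high rate; the function name, the shared-across-channels mask, and the specific drop rate are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def drop_pixels(image, drop_rate=0.9, seed=None):
    """Zero out each spatial location independently with probability `drop_rate`.

    `image` is an (H, W, C) array. A single mask is drawn per pixel and
    shared across channels, so a dropped pixel loses all its channels.
    This is a sketch of the subsampling idea, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    keep_mask = rng.random(image.shape[:2]) >= drop_rate  # True where pixel survives
    return image * keep_mask[..., None]

# The abstract applies the same subsampling at both train and test time,
# e.g. as a preprocessing step before the network sees the image.
img = np.ones((32, 32, 3), dtype=np.float32)
subsampled = drop_pixels(img, drop_rate=0.9, seed=0)
```

With a drop rate of 0.9, roughly 90% of pixels are zeroed on each draw, so the network only ever sees sparse views of the input at train and test time.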

This paper has not been read by Pith yet.

discussion (0)
