pith · machine review for the scientific record

arxiv: 1712.09491 · v1 · submitted 2017-12-27 · 💻 cs.LG · cs.CR · cs.CV

Recognition: unknown

Exploring the Space of Black-box Attacks on Deep Neural Networks

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CR · cs.CV
keywords: attacks · black-box · adversarial · estimation · gradient · transferability · attack · deep
read the original abstract

Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world Content Moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses.
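The core idea of the Gradient Estimation attacks is to approximate the loss gradient from the target model's returned class probabilities using finite differences, then take FGSM-style steps with the estimate. Below is a minimal sketch of that idea, assuming a hypothetical `query_probs` helper that wraps the black-box classifier; the random coordinate grouping here is only a stand-in for the paper's query-reduction strategies, not a reproduction of them.

```python
import numpy as np

def query_probs(x):
    """Hypothetical black-box oracle returning class probabilities for input x."""
    raise NotImplementedError("Wire this to the target classifier's API.")

def cross_entropy_loss(x, label):
    # Loss computed only from the returned class probabilities (query access).
    p = query_probs(x)
    return -np.log(p[label] + 1e-12)

def estimate_gradient(x, label, delta=1e-3, group_size=None):
    """Two-sided finite-difference estimate of the loss gradient.

    With group_size set, coordinates are perturbed in random groups, trading
    gradient accuracy for far fewer queries (decoupling query count from the
    input dimensionality, in the spirit of the paper's query-reduction ideas).
    """
    x_flat = x.ravel()
    grad = np.zeros_like(x_flat)
    dims = np.arange(x_flat.size)
    if group_size is None:
        groups = [np.array([i]) for i in dims]  # full per-coordinate differences
    else:
        np.random.shuffle(dims)
        groups = np.array_split(dims, max(1, x_flat.size // group_size))
    for idx in groups:
        e = np.zeros_like(x_flat)
        e[idx] = 1.0
        loss_plus = cross_entropy_loss((x_flat + delta * e).reshape(x.shape), label)
        loss_minus = cross_entropy_loss((x_flat - delta * e).reshape(x.shape), label)
        grad[idx] = (loss_plus - loss_minus) / (2 * delta)
    return grad.reshape(x.shape)

def untargeted_attack(x, true_label, eps=0.3, steps=10, group_size=128):
    """Iterative FGSM-style update driven only by estimated gradients."""
    x_adv = x.copy()
    alpha = eps / steps
    for _ in range(steps):
        g = estimate_gradient(x_adv, true_label, group_size=group_size)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)  # ascend the loss within the L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                               # keep a valid image
    return x_adv
```

A targeted variant would instead descend the loss of the desired target class; the number of queries per step is roughly twice the number of coordinate groups, which is the quantity the paper's strategies aim to keep small.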

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. SoK: A Comprehensive Analysis of the Current Status of Neural Tangent Generalization Attacks with Research Directions

    cs.LG · 2026-05 · accept · novelty 3.0

    NTGA is the first clean-label generalization attack under black-box settings but is vulnerable to adversarial training and image transformations, with newer attacks outperforming it.