pith. machine review for the scientific record

arxiv: 1710.11342 · v2 · submitted 2017-10-31 · 💻 cs.LG · cs.AI · cs.CL · cs.CV

Recognition: unknown

Generating Natural Adversarial Examples

Authors on Pith: no claims yet
classification 💻 cs.LG cs.AI cs.CL cs.CV
keywords adversarial examples, natural adversaries, black-box classifiers, data, evaluating
0 comments
original abstract

Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e., inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in the semantic space of a dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers in a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments showing that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.
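To make the abstract's core idea concrete, here is a minimal, self-contained sketch of searching a generator's latent space for the nearest sample that flips a black-box prediction. This is an illustration of the general technique, not the paper's exact algorithm; the toy generator `G`, inverter `invert`, classifier `f`, and all parameters are hypothetical stand-ins for the trained GAN, inverter network, and target model the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-ins (hypothetical; the paper trains a GAN generator, a
# --- learned inverter, and attacks a real black-box model) ---------------
W = rng.normal(size=(2, 2))          # toy "generator" weights

def G(z):
    """Toy generator: latent vector -> data sample on the manifold."""
    return np.tanh(z @ W)

def invert(x):
    """Toy inverter: data sample -> approximate latent vector."""
    return np.arctanh(np.clip(x, -0.999, 0.999)) @ np.linalg.inv(W)

def f(x):
    """Toy black-box classifier: only its predictions are observable."""
    return int(x[0] + x[1] > 0)

# --- Latent-space search (a sketch of the idea) --------------------------
def natural_adversary(x, n_samples=200, step=0.05, max_radius=3.0):
    """Sample perturbations of growing norm around z = invert(x) and
    return the decoded sample, closest in latent space, that changes
    the black-box prediction."""
    z, y = invert(x), f(x)
    radius = step
    while radius <= max_radius:
        # sample candidate latents on the shell of the current radius
        deltas = rng.normal(size=(n_samples, z.shape[-1]))
        deltas *= radius / np.linalg.norm(deltas, axis=-1, keepdims=True)
        candidates = [(np.linalg.norm(d), G(z + d)) for d in deltas]
        hits = [(n, xc) for n, xc in candidates if f(xc) != y]
        if hits:
            return min(hits, key=lambda t: t[0])[1]  # closest label flip
        radius += step   # widen the search shell and query again
    return None          # no adversary found within max_radius

x = G(np.array([0.5, -0.2]))         # a sample that lies on the manifold
x_adv = natural_adversary(x)
if x_adv is not None:
    print(f(x), f(x_adv))            # prediction flipped, e.g. 1 -> 0
```

Because every candidate is decoded by the generator, the search only ever proposes samples on the learned data manifold, which is what makes the resulting adversaries "natural" compared with pixel-level perturbations.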

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Uncovering and Understanding FPR Manipulation Attack in Industrial IoT Networks

    cs.CR · 2026-01 · unverdicted · novelty 8.0

    The FPR manipulation attack perturbs benign MQTT packets so that a NIDS flips their labels to "attack" with 80-100% success, increasing SOC delays without relying on gradient-based methods.

  2. Insider Attacks in Multi-Agent LLM Consensus Systems

    cs.MA · 2026-05 · unverdicted · novelty 5.0

    A malicious agent in multi-agent LLM consensus systems can be trained via a surrogate world model and RL to reduce consensus rates and prolong disagreement more effectively than direct prompt attacks.