pith. machine review for the scientific record.

arXiv: 1801.02613 · v3 · submitted 2018-01-08 · cs.LG · cs.CR · cs.CV

Recognition: unknown

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

Authors on Pith: no claims yet
classification: cs.LG · cs.CR · cs.CV
keywords: adversarial examples · regions · attacks · DNNs · better · characteristic · characterizing
abstract

Deep Neural Networks (DNNs) have recently been shown to be vulnerable to adversarial examples, which are carefully crafted instances that can mislead DNNs into making errors during prediction. To better understand such attacks, a characterization is needed of the properties of the regions (the so-called 'adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions via Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distribution of distances from the example to its neighbors. We first explain how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can help distinguish adversarial examples generated using state-of-the-art attacks. As a proof of concept, we show that LID can be used to detect adversarial examples, with preliminary results indicating that it outperforms several state-of-the-art detection measures by large margins across the five attack strategies and three benchmark datasets considered in this paper. Our analysis of the LID characteristic of adversarial regions not only motivates new directions for effective adversarial defense, but also opens up further challenges in developing new attacks that probe the vulnerabilities of DNNs.
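The neighbor-distance idea in the abstract can be made concrete with the standard maximum-likelihood LID estimator (Levina–Bickel style, as commonly used in this line of work): given the distances from a reference point to its k nearest neighbors, LID is estimated from the log-ratios of each distance to the largest one. This is a minimal sketch, not the paper's implementation; the function name `lid_mle` and the parameter choices are illustrative assumptions.

```python
import numpy as np

def lid_mle(x, data, k=20):
    """Maximum-likelihood estimate of Local Intrinsic Dimensionality at x.

    Uses the distances from x to its k nearest neighbors in `data`:
        LID_hat = -1 / mean_i( log(r_i / r_k) ),
    where r_1 <= ... <= r_k are the neighbor distances and r_k is the
    largest. A high LID suggests the neighborhood of x fills a
    higher-dimensional space -- the property the paper uses to
    separate adversarial from clean examples.
    (Illustrative sketch; names and defaults are assumptions.)
    """
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    dists = dists[dists > 0][:k]          # drop x itself if it is in data
    r_max = dists[-1]
    return -1.0 / np.mean(np.log(dists / r_max))
```

For intuition: sampling points from a 3-dimensional Gaussian embedded in a 10-dimensional ambient space and estimating LID at the origin yields a value near 3, the intrinsic rather than the ambient dimension.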

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. INTARG: Informed Real-Time Adversarial Attack Generation for Time-Series Regression

    cs.LG 2026-04 unverdicted novelty 7.0

    INTARG generates effective real-time adversarial attacks on time-series regression models by selectively targeting high-confidence high-error steps in a bounded-buffer online setting, increasing prediction error up to...

  2. Intermediate Representations are Strong AI-Generated Image Detectors

    cs.CV 2026-05 unverdicted novelty 6.0

    Intermediate layer embedding sensitivity to perturbations distinguishes AI-generated images from real ones, yielding higher AUROC on GenImage and Forensics Small benchmarks than prior methods.

  3. Insider Attacks in Multi-Agent LLM Consensus Systems

    cs.MA 2026-05 unverdicted novelty 5.0

    A malicious agent in multi-agent LLM consensus systems can be trained via a surrogate world model and RL to reduce consensus rates and prolong disagreement more effectively than direct prompt attacks.

  4. NeuroTrace: Inference Provenance-Based Detection of Adversarial Examples

    cs.CR 2026-04 unverdicted novelty 5.0

The NeuroTrace framework builds heterogeneous graphs of inference provenance to detect adversarial examples in DNNs, showing strong transferable performance across attack families in the vision and malware domains.