pith. machine review for the scientific record.

arxiv: 1712.09665 · v2 · submitted 2017-12-27 · 💻 cs.CV

Recognition: unknown

Adversarial Patch

Authors on Pith: no claims yet
classification 💻 cs.CV
keywords adversarial, patches, they, because, scene, cause, class, classifiers
Original abstract

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class. To reproduce the results from the paper, our code is available at https://github.com/tensorflow/cleverhans/tree/master/examples/adversarial_patch
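The core idea in the abstract (optimize one patch so that, averaged over random scenes and placements, it drives a classifier toward a chosen target class) can be sketched in a few lines. This is not the paper's implementation (their TensorFlow code against real Inception models is at the linked repository); below, a toy fixed linear "classifier" on small grayscale images stands in, and all names (`train_patch`, `apply_patch`, the sizes and learning rate) are hypothetical, chosen only to make the expectation-over-transformations loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "classifier": flattened 8x8 image -> 3 class scores
# via a fixed random linear map. (The paper attacks real CNNs.)
H = W = 8
NUM_CLASSES = 3
weights = rng.normal(size=(NUM_CLASSES, H * W))

def scores(img):
    return weights @ img.ravel()

def apply_patch(img, patch, top, left):
    """Paste the patch onto a scene at the given location."""
    out = img.copy()
    ph, pw = patch.shape
    out[top:top + ph, left:left + pw] = patch
    return out

def train_patch(target=2, patch_size=3, steps=200, lr=0.1, batch=16):
    """Gradient-ascend the target-class score, averaged over random
    placements -- a minimal expectation-over-transformations loop."""
    patch = rng.uniform(0, 1, size=(patch_size, patch_size))
    wmap = weights[target].reshape(H, W)
    for _ in range(steps):
        grad = np.zeros_like(patch)
        for _ in range(batch):
            top = rng.integers(0, H - patch_size + 1)
            left = rng.integers(0, W - patch_size + 1)
            # For a linear model the gradient of the target score w.r.t.
            # the patch pixels is just the weight sub-block at that
            # placement (and does not depend on the scene).
            grad += wmap[top:top + patch_size, left:left + patch_size]
        patch = np.clip(patch + lr * grad / batch, 0, 1)  # keep pixels valid
    return patch

patch = train_patch()
scene = rng.uniform(0, 1, size=(H, W))
attacked = apply_patch(scene, patch, top=2, left=2)
print(scores(attacked)[2] - scores(scene)[2])  # change in target-class score
```

Because placement is randomized during training, the resulting patch raises the target score wherever it is pasted, which is the "universal" and "robust" property the abstract describes; the real method additionally randomizes scale, rotation, and lighting and prints the patch physically.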

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TRAP: Tail-aware Ranking Attack for World-Model Planning

    cs.LG 2026-05 unverdicted novelty 6.0

    TRAP is a tail-aware ranking attack that plants a backdoor in world models so that a trigger causes the model to reorder a few critical imagined trajectories and redirect planning while preserving normal behavior on c...

  2. Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving

    cs.CV 2026-04 unverdicted novelty 6.0

    AdvAD produces physical-world adversarial patches with improved transferability to unseen object detectors by multi-model optimization, adaptive balancing, and physical variation robustness.

  3. Transferable Physical-World Adversarial Patches Against Pedestrian Detection Models

    cs.CV 2026-04 unverdicted novelty 6.0

    TriPatch generates transferable physical adversarial patches via multi-stage triplet loss, appearance consistency, and data augmentation to achieve higher attack success rates on pedestrian detectors than prior methods.

  4. Street-Legal Physical-World Adversarial Rim for License Plates

    cs.CV 2026-04 conditional novelty 6.0

    SPAR is a street-legal physical rim that cuts modern ALPR accuracy by 60% and reaches 18% targeted impersonation while costing under $100 and requiring no plate modification.

  5. Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis

    cs.CV 2026-04 unverdicted novelty 5.0

    Adversarial patches transfer across three VLM architectures in autonomous driving scenarios with 73-91% success rates and affect 65-79% of critical decision frames even without target-specific optimization.

  6. RACF: A Resilient Autonomous Car Framework with Object Distance Correction

    cs.RO 2026-04 unverdicted novelty 4.0

    RACF corrects inconsistent depth camera distance estimates in autonomous vehicles using LiDAR and kinematic redundancy, achieving up to 35% RMSE reduction and better braking in tests on a Quanser QCar 2 platform.

  7. Physical Adversarial Attacks on AI Surveillance Systems: Detection, Tracking, and Visible-Infrared Evasion

    cs.CV 2026-04 unverdicted novelty 3.0

    The paper organizes existing physical adversarial attack literature into a surveillance-oriented taxonomy emphasizing temporal persistence, multi-modal sensing, carrier realism, and system-level objectives, concluding...