pith. machine review for the scientific record.

arxiv: 1608.00853 · v1 · submitted 2016-08-02 · 💻 cs.CV · cs.LG

Recognition: unknown

A study of the effect of JPG compression on adversarial images

Authors on Pith: no claims yet
classification 💻 cs.CV cs.LG
keywords images, adversarial, classification, compression, effect, data, humans, image
abstract

Neural network image classifiers are known to be vulnerable to adversarial images, i.e., natural images which have been modified by an adversarial perturbation specifically designed to be imperceptible to humans yet fool the classifier. Not only can adversarial images be generated easily, but these images will often be adversarial for networks trained on disjoint subsets of data or with different architectures. Adversarial images represent a potential security risk as well as a serious machine learning challenge---it is clear that vulnerable neural networks perceive images very differently from humans. Noting that virtually every image classification data set is composed of JPG images, we evaluate the effect of JPG compression on the classification of adversarial images. For Fast-Gradient-Sign perturbations of small magnitude, we found that JPG compression often reverses the drop in classification accuracy to a large extent, but not always. As the magnitude of the perturbations increases, JPG recompression alone is insufficient to reverse the effect.
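The abstract's two ingredients, a Fast-Gradient-Sign perturbation and JPG recompression as a candidate defense, can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's pipeline: the "classifier" is a linear scorer (so its input gradient is just its weight matrix, whereas the paper attacks deep convolutional networks), and the "compression" is a single 8x8 DCT block with a uniform scalar quantizer, a crude stand-in for a real JPG codec with its per-frequency quantization tables.

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n):
    # Orthonormal DCT-II basis, the transform underlying JPEG's 8x8 blocks.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    d[0] /= np.sqrt(2.0)
    return d

def jpeg_like(x, q=0.05):
    # Crude JPG stand-in: transform, uniformly quantize coefficients, invert.
    # Real JPG uses per-frequency quantization tables and chroma subsampling.
    d = dct_matrix(x.shape[0])
    c = d @ x @ d.T
    c = np.round(c / q) * q            # quantization discards fine detail,
    return np.clip(d.T @ c @ d, 0, 1)  # including part of the perturbation

def fgsm(x, grad, eps):
    # Fast-Gradient-Sign perturbation: one step of size eps along sign(grad).
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy 8x8 "image" and a stand-in loss gradient: for a linear scorer w.x,
# d(loss)/dx is just w. (Hypothetical setup, not the paper's networks.)
x = rng.uniform(0.2, 0.8, size=(8, 8))
w = rng.normal(size=(8, 8))

eps = 0.02                       # small-magnitude regime from the abstract
x_adv = fgsm(x, w, eps)
x_rec = jpeg_like(x_adv)         # recompression as the candidate defense

print(np.abs(x_adv - x).max())   # L_inf size of the perturbation: at most eps
```

The L-infinity bound is the key property of FGSM: every pixel moves by exactly `eps`, which is why small `eps` is imperceptible to humans yet can flip the classifier. The paper's finding is that round-tripping `x_adv` through an actual JPG encoder often (but not always) restores the original label at small `eps`, and stops working as `eps` grows.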

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Systematic Discovery of Semantic Attacks in Online Map Construction through Conditional Diffusion

    cs.CV 2026-05 unverdicted novelty 8.0

    MIRAGE discovers semantic attacks on online HD map construction via conditional diffusion, enabling boundary removal and injection that degrade AV performance while passing as realistic environmental changes.

  2. Physically-Induced Atmospheric Adversarial Perturbations: Enhancing Transferability and Robustness in Remote Sensing Image Classification

    cs.CV 2026-04 unverdicted novelty 7.0

    FogFool creates fog-based adversarial perturbations using Perlin noise optimization to achieve high black-box transferability (83.74% TASR) and robustness to defenses in remote sensing classification.

  3. Adversarial Attacks Against MLLMs via Progressive Resolution Processing and Adaptive Feature Alignment

    cs.CV 2026-05 unverdicted novelty 6.0

    PRAF-Attack improves targeted attack transferability on black-box MLLMs by using multi-scale progressive resolution and adaptive intermediate feature alignment instead of final-layer global features.