pith. machine review for the scientific record.

arxiv: 1802.04034 · v3 · submitted 2018-02-12 · 💻 cs.CV · cs.LG · stat.ML

Recognition: unknown

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.LG · stat.ML
keywords networks · neural · perturbations · certification · efficient · network · provably · training
0 comments
Original abstract

High sensitivity of neural networks against malicious perturbations on inputs causes security concerns. To take a steady step towards robust classifiers, we aim to create neural network models provably defended from perturbations. Prior certification work requires strong assumptions on network structures and massive computational costs, and thus the range of their applications was limited. From the relationship between the Lipschitz constants and prediction margins, we present a computationally efficient calculation technique to lower-bound the size of adversarial perturbations that can deceive networks, and that is widely applicable to various complicated networks. Moreover, we propose an efficient training procedure that robustifies networks and significantly improves the provably guarded areas around data points. In experimental evaluations, our method showed its ability to provide a non-trivial guarantee and enhance robustness for even large networks.
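To make the margin-to-Lipschitz relationship concrete, here is a minimal sketch (not the authors' implementation) of turning a prediction margin and an upper bound on the network's Lipschitz constant into a certified L2 radius. The margin / (√2 · L) form, the function name `certified_radius`, and the example numbers are assumptions for illustration.

```python
import numpy as np

def certified_radius(logits, lipschitz_constant):
    """Lower bound on the L2 perturbation size needed to change the
    top-1 prediction, computed from the prediction margin and an upper
    bound L on the network's Lipschitz constant.

    Assumption: any perturbation with L2 norm below
    margin / (sqrt(2) * L) cannot flip the prediction.
    """
    logits = np.asarray(logits, dtype=float)
    second, best = np.sort(logits)[-2:]   # second-largest and largest logit
    margin = best - second                 # prediction margin
    return margin / (np.sqrt(2.0) * lipschitz_constant)

# Illustrative numbers only: logits for one input and a hypothetical
# Lipschitz bound (e.g. a product of per-layer spectral norms).
print(certified_radius([4.1, 1.3, 0.2], lipschitz_constant=8.5))
```

Under these assumptions, a larger margin or a smaller Lipschitz bound directly enlarges the guarded area, which is the quantity the proposed training procedure aims to improve.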

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Selective Prediction from Agreement: A Lipschitz-Consistent Version Space Approach

    cs.LG · 2026-05 · unverdicted · novelty 5.0

    Selective prediction abstains unless all Lipschitz-consistent heads in the version space agree on a certified label for each pool point (see the sketch below).
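The agreement rule in that summary can be sketched as follows; the `heads` interface, the function name, and the stand-in classifiers are hypothetical and not taken from the cited paper.

```python
def agreement_prediction(heads, x):
    """Predict only under unanimity: return the single label that every
    head assigns to x, or None to abstain when the (hypothetical)
    Lipschitz-consistent heads disagree."""
    labels = {head(x) for head in heads}
    return labels.pop() if len(labels) == 1 else None

# Usage with stand-in heads that each map an input to a label.
heads = [lambda x: int(x > 0.5), lambda x: int(x > 0.4)]
print(agreement_prediction(heads, 0.9))   # both agree -> 1
print(agreement_prediction(heads, 0.45))  # disagreement -> None (abstain)
```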