Logit Pairing Methods Can Fool Gradient-Based Attacks
Abstract
Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversarial robustness of classifiers. We show that the computationally fast methods they propose, Clean Logit Pairing (CLP) and Logit Squeezing (LSQ), merely make the gradient-based optimization problem of crafting adversarial examples harder, without providing actual robustness. We find that Adversarial Logit Pairing (ALP) may indeed provide robustness against adversarial examples, especially when combined with adversarial training, and we examine it in a variety of settings. However, the increase in adversarial accuracy is much smaller than previously claimed. Finally, our results suggest that evaluation against an iterative PGD attack depends heavily on the attack's parameters and may lead to false conclusions about a model's robustness.
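The abstract names three logit regularizers (CLP, LSQ, ALP) and a PGD-based evaluation. As a rough, non-authoritative sketch of what those terms mean in practice, the PyTorch snippet below implements the pairing/squeezing losses as defined in Kannan et al. [2018], plus a basic L-infinity PGD attack exposing the parameters the abstract warns about (step size, iterations, random restarts). All function names, default values, and the `model(x)` interface are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the paper's code): the three logit regularizers
# discussed above, plus a basic L-infinity PGD attack with random restarts.
# Assumes `model(x)` returns raw logits; `lam` and all defaults are arbitrary.
import torch
import torch.nn.functional as F

def logit_squeezing(logits, lam=0.5):
    # LSQ: L2 penalty pushing clean-input logits toward zero.
    return lam * logits.pow(2).sum(dim=1).mean()

def clean_logit_pairing(logits_a, logits_b, lam=0.5):
    # CLP: match logits of two randomly paired *clean* examples.
    return lam * (logits_a - logits_b).pow(2).sum(dim=1).mean()

def adversarial_logit_pairing(logits_clean, logits_adv, lam=0.5):
    # ALP: match logits of a clean example and its adversarial counterpart;
    # usually added on top of an adversarial-training cross-entropy loss.
    return lam * (logits_clean - logits_adv).pow(2).sum(dim=1).mean()

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40, restarts=1):
    # L-infinity PGD. The abstract's caveat: reported robustness can hinge
    # on eps, alpha, steps, and restarts, so a weakly parameterized attack
    # overstates adversarial accuracy.
    x_adv = x.clone()
    fooled = torch.zeros(len(x), dtype=torch.bool, device=x.device)
    for _ in range(restarts):
        # Random start inside the eps-ball, then iterated signed-gradient steps.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        cand = (x + delta).clamp(0, 1)
        with torch.no_grad():
            newly = (model(cand).argmax(dim=1) != y) & ~fooled
        x_adv[newly] = cand[newly]  # keep the first restart that fools the model
        fooled |= newly
    return x_adv
```

Under this framing, the paper's claim is that CLP and LSQ only make the inner optimization in `pgd_attack` harder to solve by gradient descent, rather than making the model genuinely robust.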
Forward citations
Cited by 4 Pith papers
- Quantitative Linear Logic: pQLL calculi make proof validity and sequent provability real-valued quantities, generalizing hypersequent calculi and deep inference while proving cut elimination and completeness for soft residuated lattices.
- Quantitative Linear Logic: pQLL calculi assign real-valued strength to proofs, generalize hypersequent and deep inference systems, prove cut elimination, and achieve completeness for soft residuated lattices, recovering MALL as p goes to infinity.
- Quantitative Linear Logic for Neuro-Symbolic Learning and Verification: QLL is a novel logic for neuro-symbolic learning that uses ML-native operations (sum, log-sum-exp) on logits to embed constraints, satisfying most linear logic properties and showing stronger correlation between empirical…
- Quantitative Linear Logic for Neuro-Symbolic Learning and Verification: Quantitative Linear Logic interprets logical connectives via natural ML operations on logits to embed constraints in neural training while satisfying most linear logic laws and correlating performance with independent…