pith. machine review for the scientific record.

arxiv: 1206.6389 · v3 · submitted 2012-06-27 · 💻 cs.LG · cs.CR · stat.ML

Recognition: unknown

Poisoning Attacks against Support Vector Machines

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CR · stat.ML
keywords attacks · data · error · gradient ascent · attack · demonstrate · increases
read the original abstract

We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
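The attack the abstract describes — gradient ascent on a validation loss, with the gradient derived from properties of the SVM's optimal solution — can be illustrated with a deliberately simplified sketch. Everything below is an assumption made for illustration, not the paper's method: a linear SVM trained by Pegasos-style subgradient descent, a hinge-loss surrogate in place of validation error, and a finite-difference gradient through retraining in place of the paper's analytic gradient.

```python
import numpy as np

def train_svm(X, y, lam=0.1, lr=0.5, epochs=40, seed=0):
    """Linear SVM via Pegasos-style subgradient descent on the hinge loss."""
    rng = np.random.default_rng(seed)           # fixed seed: deterministic retraining
    w, b = np.zeros(X.shape[1]), 0.0
    for t in range(1, epochs + 1):
        for i in rng.permutation(len(y)):
            step = lr / t
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: hinge is active
                w += step * (y[i] * X[i] - lam * w)
                b += step * y[i]
            else:
                w -= step * lam * w             # regularizer only
    return w, b

def val_hinge(w, b, Xv, yv):
    """Hinge loss on held-out data: a smooth surrogate for validation error."""
    return float(np.mean(np.maximum(0.0, 1.0 - yv * (Xv @ w + b))))

def poison(Xtr, ytr, Xv, yv, xp, yp, steps=10, eta=0.5, eps=1e-3):
    """Move one mislabeled point xp by gradient ascent on the validation
    hinge loss; the gradient is estimated by finite differences, retraining
    the SVM for each probe. Steps that do not increase the loss are rejected."""
    def loss_with(x):
        w, b = train_svm(np.vstack([Xtr, x]), np.append(ytr, yp))
        return val_hinge(w, b, Xv, yv)
    cur = loss_with(xp)
    for _ in range(steps):
        g = np.zeros_like(xp)
        for j in range(len(xp)):                # central finite differences
            e = np.zeros_like(xp); e[j] = eps
            g[j] = (loss_with(xp + e) - loss_with(xp - e)) / (2 * eps)
        cand = xp + eta * g
        new = loss_with(cand)
        if new <= cur:                          # no improvement: stop ascending
            break
        xp, cur = cand, new
    return xp, cur

# Toy data: two Gaussian blobs, split into train and validation halves.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([-1.5, 0.0], 0.6, (40, 2)),
               rng.normal([1.5, 0.0], 0.6, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
Xtr, ytr, Xv, yv = X[::2], y[::2], X[1::2], y[1::2]

# Attack point: starts inside the -1 class but carries a flipped +1 label.
xp0, yp = np.array([-1.0, 0.0]), 1.0
w, b = train_svm(np.vstack([Xtr, xp0]), np.append(ytr, yp))
before = val_hinge(w, b, Xv, yv)
xp, after = poison(Xtr, ytr, Xv, yv, xp0.copy(), yp)
print(f"validation hinge loss: {before:.3f} -> {after:.3f}")
```

The greedy accept/reject step is a crude stand-in for the paper's line search on the non-convex validation surface; the paper additionally shows how to kernelize the gradient so the attack point can be optimized in input space for non-linear kernels.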

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Laundering AI Authority with Adversarial Examples

    cs.CR 2026-05 unverdicted novelty 5.0

    Adversarial examples enable AI authority laundering by causing production VLMs to give authoritative but wrong responses on subtly perturbed images, with success rates of 22-100% using decade-old attack methods.

  2. Robustness Analysis of Machine Learning Models for IoT Intrusion Detection Under Data Poisoning Attacks

    cs.CR 2026-04 unverdicted novelty 3.0

    Ensemble models like Random Forest and Gradient Boosting maintain more stable performance than Logistic Regression and Deep Neural Networks under label manipulation and outlier-based poisoning attacks on IoT intrusion...