Pith: machine review for the scientific record

arXiv: 1905.13399 · v2 · submitted 2019-05-31 · 💻 cs.CR · cs.LG · cs.SD · eess.AS

Recognition: unknown

Real-Time Adversarial Attacks

Authors on Pith: no claims yet
classification 💻 cs.CR · cs.LG · cs.SD · eess.AS
keywords: input · adversarial · attack · attacker · attacks · data · learning · machine
Original abstract

In recent years, many efforts have demonstrated that modern machine learning algorithms are vulnerable to adversarial attacks, where small, but carefully crafted, perturbations on the input can make them fail. While these attack methods are very effective, they only focus on scenarios where the target model takes static input, i.e., an attacker can observe the entire original sample and then add a perturbation at any point of the sample. These attack approaches are not applicable to situations where the target model takes streaming input, i.e., an attacker is only able to observe past data points and add perturbations to the remaining (unobserved) data points of the input. In this paper, we propose a real-time adversarial attack scheme for machine learning models with streaming inputs.
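The key constraint the abstract describes is causality: in the streaming setting, the attacker can only act on samples that have not yet been consumed by the model, and its decision may depend only on the prefix it has observed. A minimal sketch of this constraint (not the authors' actual attack; the perturbation rule here is a hypothetical placeholder) might look like:

```python
import numpy as np

def streaming_attack(x, observe_len, budget):
    """Perturb a streaming input under the real-time causality constraint.

    By the time the attacker has observed the first `observe_len` samples,
    those samples have already been streamed to the target model and
    cannot be changed. Only the remaining (unobserved) samples may be
    perturbed, using a decision based solely on the observed prefix.
    """
    observed = x[:observe_len]
    # Hypothetical decision rule for illustration: push the future
    # samples away from the mean of the observed prefix.
    mean = np.mean(observed)
    direction = -np.sign(mean) if mean != 0 else 1.0
    perturbed = x.copy()
    perturbed[observe_len:] += budget * direction
    return perturbed

x = np.linspace(0.0, 1.0, 8)
adv = streaming_attack(x, observe_len=4, budget=0.1)
# The already-streamed prefix is untouched; only future samples change.
assert np.array_equal(adv[:4], x[:4])
assert not np.array_equal(adv[4:], x[4:])
```

A static attack, by contrast, would be free to perturb any index of `x`, since the whole sample is visible before the model sees it.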

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LLM-as-Judge Framework for Evaluating Tone-Induced Hallucination in Vision-Language Models

cs.CV · 2026-04 · unverdicted · novelty 7.0

    Ghost-100 benchmark shows prompt tone drives hallucination rates and intensities in VLMs, with non-monotonic peaks at intermediate pressure and task-specific differences that aggregate metrics hide.

  2. Toward Accountable AI-Generated Content on Social Platforms: Steganographic Attribution and Multimodal Harm Detection

cs.CV · 2026-04 · unverdicted · novelty 4.0

    The proposed steganography-based attribution system with CLIP multimodal fusion achieves robust watermarking under distortions and 0.99 AUC-ROC for harm detection, enabling traceable AI content accountability.