Extracting training data from diffusion models
2 Pith papers cite this work. Polarity classification is still indexing.
Citing papers

- FERMI: Exploiting Relations for Membership Inference Against Tabular Diffusion Models
  FERMI improves membership inference on tabular diffusion models by mapping relational auxiliary information into attack features, raising TPR at 0.1 FPR by up to 53% in the white-box setting and 22% in the black-box setting over single-table baselines.

- Filtering Memorization from Parameter-Space in Diffusion Models
  BAF reduces memorization in diffusion LoRAs by filtering spectral channels of the adaptation weights that show weak alignment with the base model's principal subspace.
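The BAF summary above can be made concrete with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the function name, the alignment threshold, and the choice of measuring alignment via projection of the LoRA update's left singular vectors onto the base weight's top left-singular subspace are all assumptions inferred from the one-sentence summary.

```python
import numpy as np

def filter_lora_channels(base_W, lora_A, lora_B, rank_base=8, align_thresh=0.3):
    """Hypothetical BAF-style filter (names and threshold are assumptions).

    The LoRA update is delta_W = B @ A. Each spectral channel of delta_W is a
    singular triplet (u_i, s_i, v_i). Channels whose left singular vector
    projects weakly onto the base weight's top-`rank_base` left-singular
    subspace are zeroed out before the update is applied.
    """
    delta_W = lora_B @ lora_A
    U, S, Vt = np.linalg.svd(delta_W, full_matrices=False)
    # Principal (top-rank_base) left-singular subspace of the base weight.
    Ub, _, _ = np.linalg.svd(base_W, full_matrices=False)
    P = Ub[:, :rank_base]
    # Alignment of each LoRA channel = norm of its projection onto that subspace.
    align = np.linalg.norm(P.T @ U, axis=0)
    keep = align >= align_thresh
    S_filtered = np.where(keep, S, 0.0)
    return (U * S_filtered) @ Vt

# Toy usage: a random 16x16 base weight and a rank-4 LoRA update.
rng = np.random.default_rng(0)
base_W = rng.standard_normal((16, 16))
lora_B = rng.standard_normal((16, 4))
lora_A = rng.standard_normal((4, 16))
filtered = filter_lora_channels(base_W, lora_A, lora_B)
print(filtered.shape)  # (16, 16)
```

The key design point suggested by the summary is that filtering happens in the spectral (SVD) basis of the adaptation weights, so memorization-prone directions can be removed without retraining the LoRA.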