BAGAN: Data Augmentation with Balancing GAN
Original abstract
Image classification datasets are often imbalanced, a characteristic that negatively affects the accuracy of deep-learning classifiers. In this work we propose balancing GAN (BAGAN) as an augmentation tool to restore balance in imbalanced datasets. This is challenging because the few minority-class images may not be enough to train a GAN. We overcome this issue by including all available images of majority and minority classes during the adversarial training. The generative model learns useful features from majority classes and uses these to generate images for minority classes. We apply class conditioning in the latent space to drive the generation process towards a target class. The generator in the GAN is initialized with the encoder module of an autoencoder that enables us to learn an accurate class-conditioning in the latent space. We compare the proposed methodology with state-of-the-art GANs and demonstrate that BAGAN generates images of superior quality when trained with an imbalanced dataset.
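The class conditioning described in the abstract can be sketched as follows: after the autoencoder is trained, each class is modeled as a multivariate Gaussian over the latent codes of its images, and target-class latents are drawn from that distribution to drive generation. This is a minimal toy sketch in NumPy, assuming encoded latents are already available; the array shapes, class labels, and function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "latent codes": pretend an autoencoder's encoder has already mapped
# images to 8-dim latent vectors. 200 majority-class codes, 20 minority.
latents = {
    0: rng.normal(loc=0.0, scale=1.0, size=(200, 8)),  # majority class
    1: rng.normal(loc=2.0, scale=0.5, size=(20, 8)),   # minority class
}

def fit_class_conditionals(latents_by_class):
    """Fit a multivariate Gaussian (mean, covariance) per class."""
    params = {}
    for c, z in latents_by_class.items():
        params[c] = (z.mean(axis=0), np.cov(z, rowvar=False))
    return params

def sample_latents(params, target_class, n, rng):
    """Draw latent vectors conditioned on the target class; in BAGAN these
    would be fed to the generator to produce minority-class images."""
    mu, cov = params[target_class]
    return rng.multivariate_normal(mu, cov, size=n)

params = fit_class_conditionals(latents)
z_minority = sample_latents(params, target_class=1, n=100, rng=rng)
print(z_minority.shape)  # (100, 8)
```

Sampling from a per-class distribution fitted on real latents (rather than unconditioned noise) is what lets the generator target a specific minority class even when that class has few images.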
This paper has not been read by Pith yet.
Forward citations
Cited by 2 Pith papers
- VAE-Inf: A statistically interpretable generative paradigm for imbalanced classification
  VAE-Inf trains a VAE on majority data to build a reference distribution, then uses limited minority samples and a projection score to produce classifiers with guaranteed control of false-positive rates in imbalanced settings.
- Synthesizing real-world distributions from high-dimensional Gaussian Noise with Fully Connected Neural Network
  A fully connected neural network with a randomized loss synthesizes real-world tabular data distributions from Gaussian noise faster than state-of-the-art deep generative models.
Discussion (0)