pith. machine review for the scientific record.

arxiv: 1706.08224 · v2 · submitted 2017-06-26 · 💻 cs.LG

Recognition: unknown

Do GANs actually learn the distribution? An empirical study

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: distribution, gans, learn, size, support, actually, empirical, generated
read the original abstract

Do GANs (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, provided they are given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis by Arora et al. (to appear at ICML 2017) raised doubts about whether the same holds when the discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support; in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GAN approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. The main technical contribution is a new proposed test, based on the famous birthday paradox, for estimating the support size of the generated distribution.
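
The birthday-paradox test works roughly as follows: if a distribution has support size N, a batch of about √N independent samples contains a near-duplicate with probability close to 1/2, so the smallest batch size s at which most batches contain a near-duplicate puts the support size near s². Below is a minimal sketch of that estimate, assuming a hypothetical `generate(n)` that returns n GAN samples as flat numeric vectors; a fixed Euclidean threshold stands in for the paper's visual inspection of the closest pair in each batch.

```python
# Birthday-paradox support-size estimate: a sketch, not the paper's exact
# procedure. `generate(n)` is a hypothetical sampler returning an (n, d)
# float array of GAN outputs; `threshold` is an assumed near-duplicate
# distance standing in for manual inspection of the closest pairs.
import numpy as np

def has_near_duplicate(batch: np.ndarray, threshold: float) -> bool:
    """True if some pair of rows in `batch` lies within `threshold`
    Euclidean distance of each other."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (batch ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (batch @ batch.T)
    np.fill_diagonal(d2, np.inf)  # ignore self-distances
    return bool(d2.min() <= threshold ** 2)

def collision_rate(generate, batch_size: int, trials: int, threshold: float) -> float:
    """Fraction of sampled batches that contain a near-duplicate pair."""
    hits = sum(has_near_duplicate(generate(batch_size), threshold)
               for _ in range(trials))
    return hits / trials

def estimate_support_size(generate, threshold: float, trials: int = 20) -> int:
    """Double the batch size until at least half the batches collide;
    by the birthday paradox the support size is then roughly s**2."""
    s = 2
    while collision_rate(generate, s, trials, threshold) < 0.5:
        s *= 2
    return s * s

# Sanity check on a known distribution: uniform over 10,000 "images".
# rng = np.random.default_rng(0)
# generate = lambda n: rng.integers(10_000, size=(n, 1)).astype(float)
# estimate_support_size(generate, threshold=0.5)  # ~10,000, up to the doubling grid
```

On real image data the paper measures similarity in a heuristic feature space and confirms candidate collisions by eye, so the fixed threshold above is the loosest assumption in this sketch.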

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Progressive Growing of GANs for Improved Quality, Stability, and Variation

    cs.NE · 2017-10 · accept · novelty 7.0

    Progressive growing stabilizes GAN training to produce high-resolution images of unprecedented quality and achieves a record unsupervised inception score of 8.80 on CIFAR-10.

  2. Demystifying MMD GANs

    stat.ML · 2018-01 · accept · novelty 6.0

    MMD GANs have unbiased critic gradients but biased generator gradients from sample-based learning, and the Kernel Inception Distance provides a practical new measure for GAN convergence and dynamic learning rate adaptation.
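
For reference, the Kernel Inception Distance (KID) mentioned above is the unbiased estimator of squared MMD between two sets of Inception features under a cubic polynomial kernel. A minimal sketch follows, assuming `feats_real` and `feats_fake` are precomputed (n, d) arrays of Inception activations; the feature extraction itself is omitted here.

```python
# Kernel Inception Distance sketch: unbiased MMD^2 with the polynomial
# kernel k(x, y) = (x.y / d + 1)^3 over Inception features. Inputs are
# assumed to be precomputed activation arrays of shape (n, d).
import numpy as np

def poly_kernel(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    d = x.shape[1]
    return (x @ y.T / d + 1.0) ** 3

def kid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Unbiased MMD^2 estimate: diagonal (self-similarity) terms are
    excluded from the within-set sums, which removes the bias."""
    m, n = len(feats_real), len(feats_fake)
    k_rr = poly_kernel(feats_real, feats_real)
    k_ff = poly_kernel(feats_fake, feats_fake)
    k_rf = poly_kernel(feats_real, feats_fake)
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return float(term_rr + term_ff - 2.0 * k_rf.mean())
```

In practice KID is typically reported as an average over repeated subsamples of the feature sets, which also yields a variance estimate alongside the score.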