pith. machine review for the scientific record.

arxiv: 1807.04720 · v3 · submitted 2018-07-12 · 💻 cs.LG · stat.ML


A Large-Scale Study on Regularization and Normalization in GANs

keywords: gans · generative · many · models · neural · normalization · practical · regularization
abstract

Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they have been successfully applied to many problems, training a GAN is a notoriously challenging task that requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial number of "tricks". The success in many practical applications, coupled with the lack of a measure to quantify the failure modes of GANs, has resulted in a plethora of proposed losses, regularization and normalization schemes, as well as neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We discuss and evaluate common pitfalls and reproducibility issues, open-source our code on GitHub, and provide pre-trained models on TensorFlow Hub.
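The adversarial objective the abstract alludes to can be made concrete with a small sketch. The following numpy snippet computes the standard discriminator loss and the non-saturating generator loss from discriminator probabilities; it is an illustrative toy, not the paper's TensorFlow code, and the function names are this sketch's own.

```python
import numpy as np

# Toy sketch of the standard GAN objectives (illustrative only).
# D(x) is the discriminator's probability that its input is real.

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy the discriminator minimizes:
    -E[log D(x)] - E[log(1 - D(G(z)))]."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return -(np.mean(np.log(d_real + eps)) +
             np.mean(np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator loss: -E[log D(G(z))]."""
    d_fake = np.asarray(d_fake, dtype=float)
    return -np.mean(np.log(d_fake + eps))

# A discriminator that is confident and correct (D near 1 on real data,
# near 0 on generated data) has a small loss, while the generator's
# loss grows: the tension that makes GAN training delicate.
print(discriminator_loss([0.9, 0.8], [0.1, 0.2]))
print(generator_loss([0.1, 0.2]))
```

The `eps` term guards the logarithm against exact-zero probabilities; in practice the many regularization and normalization schemes the paper surveys exist largely to keep this minimax game numerically and dynamically stable.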

This paper has not been read by Pith yet.

discussion (0)
