Pith · machine review for the scientific record

arXiv: 1810.01365 · v2 · submitted 2018-10-02 · cs.LG · cs.CV · stat.ML


On Self Modulation for Generative Adversarial Networks

keywords: self-modulation, adversarial, architectural, change, data, generative, generator, modification
original abstract

Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of $5\%-35\%$ in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in $124/144$ ($86\%$) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN.
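The abstract's core idea — letting the generator's intermediate feature maps change as a function of the input noise vector — amounts to predicting a per-channel scale and shift from the noise and applying them after normalization. A minimal NumPy sketch of that mechanism (layer sizes, parameter names, and the single-layer placement are illustrative assumptions; the paper applies this at the generator's normalization layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(z, w1, b1, w2, b2):
    # small one-hidden-layer MLP with ReLU; maps noise z to per-channel values
    h = np.maximum(z @ w1 + b1, 0.0)
    return h @ w2 + b2

def self_modulate(h, z, params):
    """Modulate a generator feature map h of shape (N, C, H, W) by the
    input noise z of shape (N, D): normalize h per channel, then apply a
    scale gamma(z) and shift beta(z) predicted from z."""
    # batch-style normalization without learned affine terms (illustrative)
    mu = h.mean(axis=(0, 2, 3), keepdims=True)
    var = h.var(axis=(0, 2, 3), keepdims=True)
    h_norm = (h - mu) / np.sqrt(var + 1e-5)
    gamma = mlp(z, *params["gamma"])[:, :, None, None]  # (N, C, 1, 1)
    beta = mlp(z, *params["beta"])[:, :, None, None]    # (N, C, 1, 1)
    return gamma * h_norm + beta

# hypothetical sizes: noise dim 8, hidden width 16, 4 channels, 5x5 maps
D, H_DIM, C = 8, 16, 4
params = {
    name: (rng.normal(size=(D, H_DIM)) * 0.1, np.zeros(H_DIM),
           rng.normal(size=(H_DIM, C)) * 0.1, np.zeros(C))
    for name in ("gamma", "beta")
}
z = rng.normal(size=(2, D))        # input noise vectors
h = rng.normal(size=(2, C, 5, 5))  # an intermediate feature map
out = self_modulate(h, z, params)
print(out.shape)  # (2, 4, 5, 5)
```

Because gamma and beta depend on z, the same feature map is transformed differently for different noise inputs, which is what distinguishes this from a plain learned affine in batch normalization; unlike class-conditional modulation, no labels are involved.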

This paper has not been read by Pith yet.

discussion (0)
