pith · machine review for the scientific record

arXiv: 1704.00028 · v3 · submitted 2017-03-31 · 💻 cs.LG · stat.ML

Recognition: unknown

Improved Training of Wasserstein GANs

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: training, GANs, WGAN, clipping, critic, generative models, proposed
original abstract

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
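The penalty the abstract describes — replacing weight clipping with a penalty on the norm of the critic's gradient at points interpolated between real and generated samples — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a toy linear critic f(x) = w·x whose input-gradient is w in closed form (real training would obtain the gradient by automatic differentiation), and uses the penalty coefficient λ = 10 reported in the paper.

```python
import numpy as np

LAMBDA = 10.0  # penalty coefficient lambda, as reported in the paper


def critic(x, w):
    """Toy linear critic f(x) = w . x; its gradient w.r.t. x is just w."""
    return x @ w


def gradient_penalty(real, fake, w, rng):
    """Penalize (||grad_xhat f(xhat)||_2 - 1)^2 at random interpolates xhat."""
    eps = rng.uniform(size=(real.shape[0], 1))   # one epsilon per sample
    xhat = eps * real + (1.0 - eps) * fake       # interpolate real and fake
    grad = np.broadcast_to(w, xhat.shape)        # closed-form input-gradient of the linear critic
    norms = np.linalg.norm(grad, axis=1)         # per-sample gradient norm
    return LAMBDA * np.mean((norms - 1.0) ** 2)  # two-sided penalty toward norm 1


rng = np.random.default_rng(0)
real = rng.normal(size=(4, 3))
fake = rng.normal(size=(4, 3))

w_unit = np.array([1.0, 0.0, 0.0])  # unit-norm gradient: no penalty
w_big = np.array([3.0, 0.0, 0.0])   # gradient norm 3: penalty 10 * (3 - 1)^2

print(gradient_penalty(real, fake, w_unit, rng))  # → 0.0
print(gradient_penalty(real, fake, w_big, rng))   # → 40.0
```

In an actual WGAN-GP training loop this term is added to the critic's loss each iteration, with the gradient at `xhat` computed by the framework's autodiff rather than in closed form.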

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Progressive Growing of GANs for Improved Quality, Stability, and Variation

cs.NE · 2017-10 · accept · novelty 7.0

    Progressive growing stabilizes GAN training to produce high-resolution images of unprecedented quality and achieves a record unsupervised inception score of 8.80 on CIFAR10.

  2. Separate Universe Super-Resolution Emulator

astro-ph.CO · 2026-05 · unverdicted · novelty 6.0

    A generative adversarial network emulator upscales low-resolution N-body simulations with non-zero curvature to high resolution, recovering most large-scale power but with up to 10% small-scale suppression and altered...

  3. Ensemble Distributionally Robust Bayesian Optimisation

cs.LG · 2026-05 · unverdicted · novelty 6.0

    A tractable ensemble distributionally robust Bayesian optimization method achieves improved sublinear regret bounds under context uncertainty.

  4. Fast Voxelization and Level of Detail for Microgeometry Rendering

cs.GR · 2026-04 · unverdicted · novelty 6.0

    A CUDA-parallel voxelizer and hierarchical SGGX clustering representation enable faster, more accurate level-of-detail rendering of sparse microgeometry volumes.

  5. Demystifying MMD GANs

stat.ML · 2018-01 · accept · novelty 6.0

    MMD GANs have unbiased critic gradients but biased generator gradients from sample-based learning, and the Kernel Inception Distance provides a practical new measure for GAN convergence and dynamic learning rate adaptation.

  6. On the Tradeoffs of On-Device Generative Models in Federated Predictive Maintenance Systems

cs.LG · 2026-05 · unverdicted · novelty 5.0

Experiments on real industrial time series show that partial model sharing improves diffusion model performance in bandwidth-limited non-IID settings, while full sharing stabilizes GAN training but offers less robustness...

  7. Conditional Wasserstein GAN for Simulating Neutrino Event Summaries using Incident Energy of Electron Neutrinos

hep-ph · 2026-03 · unverdicted · novelty 4.0

    A conditional Wasserstein GAN generates complete kinematic event summaries for IBD-CC, NC, and NuEElastic electron neutrino interactions that match GENIE distributions in 1D marginals and correlations.