pith. machine review for the scientific record.

arxiv: 1611.02163 · v4 · submitted 2016-11-07 · 💻 cs.LG · stat.ML

Recognition: unknown

Unrolled Generative Adversarial Networks

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: discriminator, generator, adversarial, gans, generative, networks, objective, training
0 comments
read the original abstract

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
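The core idea — differentiating the generator objective through several optimization steps of the discriminator — can be illustrated on a toy min-max game. The sketch below is not the paper's implementation; it uses a hypothetical bilinear game f(g, d) = g·d (generator minimizes, discriminator maximizes), where ordinary simultaneous gradient play spirals away from the equilibrium, while backpropagating the generator's gradient through even one unrolled discriminator step damps the rotation:

```python
def step(g, d, K, eta=0.1):
    """One simultaneous update on f(g, d) = g * d; the generator's
    gradient is taken through K unrolled discriminator ascent steps."""
    # Unroll the discriminator: each ascent step adds eta * df/dd = eta * g.
    d_k, sens = d, 0.0  # sens accumulates d(d_k)/d(g) through the unroll
    for _ in range(K):
        d_k += eta * g
        sens += eta
    # Generator loss L(g) = g * d_K(g); its total derivative includes the
    # dependence of the unrolled discriminator d_K on g.
    grad_g = d_k + g * sens
    # The discriminator itself takes one ordinary ascent step.
    grad_d = g
    return g - eta * grad_g, d + eta * grad_d

def run(K, steps=200, g=1.0, d=1.0):
    for _ in range(steps):
        g, d = step(g, d, K)
    return (g**2 + d**2) ** 0.5  # distance from the equilibrium (0, 0)

# K=0 (the "current value of the discriminator") diverges;
# K=1 already converges toward the equilibrium.
print(run(0), run(1))
```

With K = 0 the update map has eigenvalues of magnitude √(1 + η²) > 1, so the iterates grow; with K = 1 the extra unrolled term shrinks them by √(1 − η²) per step, mirroring the stabilization the abstract describes.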

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Curated Synthetic Data Doesn't Have to Collapse: A Theoretical Study of Generative Retraining with Pluralistic Preferences

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Recursive generative retraining with pluralistic preferences converges to a stable diverse distribution that satisfies a weighted Nash bargaining solution.

  2. Decision-Focused Learning via Tangent-Space Projection of Prediction Error

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    Regret gradients in DFL are the tangent-space projection of prediction error scaled by curvature, enabling efficient direct computation without differentiating through solvers.

  3. Progressive Growing of GANs for Improved Quality, Stability, and Variation

    cs.NE · 2017-10 · accept · novelty 7.0

    Progressive growing stabilizes GAN training to produce high-resolution images of unprecedented quality and achieves a record unsupervised inception score of 8.80 on CIFAR10.

  4. Intermediate Representations are Strong AI-Generated Image Detectors

    cs.CV · 2026-05 · unverdicted · novelty 6.0

    Intermediate layer embedding sensitivity to perturbations distinguishes AI-generated images from real ones, yielding higher AUROC on GenImage and Forensics Small benchmarks than prior methods.

  5. SubFlow: Sub-mode Conditioned Flow Matching for Diverse One-Step Generation

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    SubFlow restores full mode coverage in one-step flow matching by conditioning on sub-modes from semantic clustering, yielding higher diversity on ImageNet-256 while preserving FID.