pith. machine review for the scientific record.

arxiv: 1701.04862 · v1 · submitted 2017-01-17 · 📊 stat.ML · cs.LG

Recognition: unknown

Towards Principled Methods for Training Generative Adversarial Networks

Authors on Pith: no claims yet
classification 📊 stat.ML cs.LG
keywords adversarial · generative · networks · section · towards · training · problems · theoretical
read the original abstract

The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to studying and rigorously proving the problems, including instability and saturation, that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.
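
The saturation problem the abstract refers to can be illustrated with a minimal sketch (not taken from the paper): for the original minimax generator loss log(1 - D(G(z))), the gradient with respect to the discriminator logit vanishes exactly when the discriminator confidently rejects fakes, whereas the common -log D(G(z)) heuristic does not.

```python
import math

def sigmoid(s):
    """Logistic function mapping a discriminator logit s to D(G(z))."""
    return 1.0 / (1.0 + math.exp(-s))

def grad_saturating(s):
    # d/ds log(1 - sigmoid(s)) = -sigmoid(s)
    # -> vanishes when D(G(z)) ~ 0 (discriminator is confident)
    return -sigmoid(s)

def grad_non_saturating(s):
    # d/ds -log(sigmoid(s)) = sigmoid(s) - 1
    # -> magnitude stays near 1 when D(G(z)) ~ 0
    return sigmoid(s) - 1.0

# When the logit s is very negative, D(G(z)) is near 0: the minimax loss
# gives almost no gradient signal, while the heuristic loss still does.
for s in (-8.0, -4.0, 0.0):
    print(f"D(G(z))={sigmoid(s):.4f}  "
          f"saturating grad={grad_saturating(s):+.4f}  "
          f"non-saturating grad={grad_non_saturating(s):+.4f}")
```

This is only a one-dimensional caricature of the dynamics the paper analyzes in full generality, but it shows why the second section treats saturation as a structural problem of the loss rather than an optimization accident.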

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 8 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. VLTI/PIONIER imaging of post-AGB binaries. An INSPIRING hunt for inner rim substructures in circumbinary discs

    astro-ph.SR 2026-05 unverdicted novelty 7.0

    High-resolution interferometric imaging of eight post-AGB circumbinary discs reveals diverse inner-rim substructures including azimuthal brightness enhancements and arc-like features not explained by inclination alone.

  2. Causal Stability Selection

    stat.ME 2026-05 unverdicted novelty 6.0

    Causal stability selection identifies treatment effect modifiers with a non-asymptotic bound on expected false positives by integrating cross-fitted CATE estimation and stability selection.

  3. A Dual Perspective on Synthetic Trajectory Generators: Utility Framework and Privacy Vulnerabilities

    cs.AI 2026-04 unverdicted novelty 6.0

    A new framework evaluates utility of synthetic mobility trajectories while a membership inference attack reveals privacy vulnerabilities in generative models thought to be safe.

  4. Continuous Adversarial Flow Models

    cs.LG 2026-04 unverdicted novelty 6.0

    Continuous adversarial flow models replace MSE in flow matching with adversarial training via a discriminator, improving guidance-free FID on ImageNet from 8.26 to 3.63 for SiT and similar gains for JiT and text-to-im...

  5. Demystifying MMD GANs

    stat.ML 2018-01 accept novelty 6.0

    MMD GANs have unbiased critic gradients but biased generator gradients from sample-based learning, and the Kernel Inception Distance provides a practical new measure for GAN convergence and dynamic learning rate adaptation.

  6. Finite-Time Analysis of MCTS in Continuous POMDP Planning

    cs.AI 2026-05 unverdicted novelty 5.0

    The paper proves finite-time probabilistic bounds on value estimates for MCTS in both discrete and continuous POMDPs and introduces Voro-POMCPOW with adaptive partitioning for guarantees.

  7. A Unified Measure-Theoretic View of Diffusion, Score-Based, and Flow Matching Generative Models

    cs.LG 2026-05 unverdicted novelty 4.0

    Diffusion, score-based, and flow matching models are unified as instances of learning time-dependent vector fields inducing marginal distributions governed by continuity and Fokker-Planck equations.

  8. Cross-Machine Anomaly Detection Leveraging Pre-trained Time-series Model

    cs.LG 2026-04 unverdicted novelty 4.0

    A cross-machine anomaly detection framework disentangles MOMENT embeddings using random forests to create machine-invariant condition features that improve generalization to unseen machines on industrial data.