pith. machine review for the scientific record.

arxiv: 1705.10743 · v1 · submitted 2017-05-30 · 💻 cs.LG · stat.ML

Recognition: unknown

The Cramer Distance as a Solution to Biased Wasserstein Gradients

Authors on Pith no claims yet
classification 💻 cs.LG stat.ML
keywords wasserstein · cramér · distance · metric · kullback-leibler · probability · properties · divergence
original abstract

The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling. In this paper we describe three natural properties of probability divergences that reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting that this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. To illustrate the relevance of the Cramér distance in practice we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it performs significantly better than the related Wasserstein GAN.
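For context on the metric the abstract discusses: in one dimension the squared Cramér distance between P and Q is the integral of (F_P(x) - F_Q(x))^2 over x, which can be rewritten in the energy-distance form E|X - Y| - ½E|X - X'| - ½E|Y - Y'|. Below is a minimal NumPy sketch of a plug-in sample estimate; the function name is ours, and it illustrates the distance itself, not the paper's Cramér GAN training procedure.

```python
import numpy as np

def cramer_distance_sq(x, y):
    """Plug-in sample estimate of the squared 1-D Cramer distance
    via its energy-distance form:
        l2^2(P, Q) = E|X - Y| - 0.5 E|X - X'| - 0.5 E|Y - Y'|
    which equals the integral of (F_P - F_Q)^2 over the real line.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cross = np.abs(x[:, None] - y[None, :]).mean()     # E|X - Y|
    within_x = np.abs(x[:, None] - x[None, :]).mean()  # E|X - X'|
    within_y = np.abs(y[:, None] - y[None, :]).mean()  # E|Y - Y'|
    return cross - 0.5 * (within_x + within_y)

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 500)    # samples from P
q = rng.normal(0.5, 1.0, 500)    # samples from Q, shifted mean
print(cramer_distance_sq(p, q))  # positive; near 0 when P = Q
```

Note this all-pairs (V-statistic) estimator is itself slightly biased; the paper's claim concerns unbiasedness of the sample gradients used for stochastic optimization, which is the property the Wasserstein metric lacks.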

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Harmonized Feature Conditioning and Frequency-Prompt Personalization for Multi-Rater Medical Segmentation

    cs.CV 2026-05 unverdicted novelty 7.0

    A harmonized probabilistic model with adaptive feature conditioning and high-frequency prompt modules disentangles acquisition artifacts from rater variability to produce personalized yet consistent multi-rater segmen...

  2. Large Scale GAN Training for High Fidelity Natural Image Synthesis

    cs.LG 2018-09 accept novelty 7.0

    BigGANs achieve state-of-the-art class-conditional synthesis on ImageNet 128x128 with Inception Score 166.5 and FID 7.4 by scaling GANs and applying orthogonal regularization plus truncation.

  3. Demystifying MMD GANs

    stat.ML 2018-01 accept novelty 6.0

    MMD GANs have unbiased critic gradients but biased generator gradients from sample-based learning, and the Kernel Inception Distance provides a practical new measure for GAN convergence and dynamic learning rate adaptation.

  4. Fast Text-to-Audio Generation with One-Step Sampling via Energy-Scoring and Auxiliary Contextual Representation Distillation

    cs.SD 2026-05 unverdicted novelty 5.0

    A one-step text-to-audio model using energy-distance training and contextual distillation outperforms prior fast baselines on AudioCaps and achieves up to 8.5x faster inference than the multi-step IMPACT system with c...