Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders
We study a variant of the variational autoencoder (VAE) with a Gaussian mixture as the prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known over-regularisation problem of regular VAEs also manifests itself in our model, where it leads to cluster degeneracy. We show that a heuristic called the minimum information constraint, previously shown to mitigate this effect in VAEs, can also be applied to improve unsupervised clustering performance with our model. Furthermore, we analyse the effect of this heuristic and provide an intuition for the underlying processes with the help of visualisations. Finally, we demonstrate our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct and interpretable, and that the model achieves unsupervised clustering performance competitive with the state of the art.
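The minimum information constraint mentioned above bounds the KL penalty in the ELBO from below, so the optimiser cannot drive the latent code toward the prior (and collapse the clusters) to gain objective value. A minimal NumPy sketch of that idea, using a free-bits-style clamp on a diagonal-Gaussian KL term (the function names, the `kl_min` parameter, and the scalar `recon_loglik` placeholder are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def elbo_with_min_info(recon_loglik, mu, logvar, kl_min=0.5):
    # Minimum information constraint: clamp the KL penalty from below at kl_min,
    # so shrinking the KL below that threshold yields no gradient or objective
    # gain, discouraging the degenerate (over-regularised) solution.
    kl = kl_diag_gaussian(mu, logvar)
    return recon_loglik - np.maximum(kl, kl_min)
```

With `mu = logvar = 0` the KL is exactly zero, so the clamped penalty stays at `kl_min` and the objective no longer rewards collapsing the posterior onto the prior.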
Forward citations
Cited by 4 Pith papers
- A Testable Certificate for Constant Collapse in Teacher-Guided VAEs
  For any fixed nonconstant teacher T, the best constant student has alignment cost exactly equal to the teacher mutual information I_T(X;T); a latent-only witness below this threshold with margin cannot be constant.
- From Unsupervised to Guided Clustering: A Variational Implementation
  GCVAE is a variational autoencoder that structures its latent space as a Gaussian mixture and optimizes a variational objective to make the representation maximally informative about a user-chosen guiding variable, en...
- PDGMM-VAE: A Variational Autoencoder with Adaptive Per-Dimension Gaussian Mixture Model Priors for Nonlinear ICA
  PDGMM-VAE recovers latent sources in nonlinear ICA by using jointly learned per-dimension GMM priors that fit source-specific marginals and reduce permutation symmetry.
- Prototype Guided Post-pretraining for Single-Cell Representation Learning
  CellRefine adds a marker-gene-guided post-pretraining stage to single-cell models that refines the cell embedding manifold and improves downstream task performance by up to 15%.