Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering
Abstract
We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. In general, our superstructure is a tree structure of multiple super latent variables, and it is automatically learned from data. When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each given by one super latent variable. This is desirable because high-dimensional data usually have many natural facets and can be meaningfully partitioned in multiple ways.
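The single-super-variable special case is concrete enough to sketch: a VAE whose prior over the latent features z is a learned Gaussian mixture, trained by maximizing the ELBO. The PyTorch sketch below illustrates only that special case; the class name, layer sizes, and parameterization are assumptions, and it omits what makes LTVAE distinctive (learning the tree of super latent variables and inferring them jointly).

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMPriorVAE(nn.Module):
    """Sketch of the single-super-variable special case: a VAE whose
    latent features z follow p(z) = sum_c pi_c N(z; mu_c, sigma_c^2)."""

    def __init__(self, x_dim=784, z_dim=10, n_components=10, h_dim=256):
        super().__init__()
        # Encoder q(z|x) and decoder p(x|z); architecture is an assumption.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))
        # GMM prior parameters, learned jointly with the networks.
        self.pi_logits = nn.Parameter(torch.zeros(n_components))
        self.mu_c = nn.Parameter(0.5 * torch.randn(n_components, z_dim))
        self.logvar_c = nn.Parameter(torch.zeros(n_components, z_dim))

    def log_p_z(self, z):
        # log p(z) = logsumexp_c [ log pi_c + log N(z; mu_c, sigma_c^2) ]
        z = z.unsqueeze(1)                                  # (B, 1, Z)
        log_pi = F.log_softmax(self.pi_logits, dim=0)       # (C,)
        log_n = -0.5 * (self.logvar_c
                        + (z - self.mu_c) ** 2 / self.logvar_c.exp()
                        + math.log(2 * math.pi)).sum(-1)    # (B, C)
        return torch.logsumexp(log_pi + log_n, dim=1)       # (B,)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        logits = self.dec(z)
        # negative ELBO = -E_q[log p(x|z)] - E_q[log p(z) - log q(z|x)]
        rec = -F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(-1)
        log_q = (-0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                         + math.log(2 * math.pi))).sum(-1)
        return -(rec + self.log_p_z(z) - log_q).mean()
```

Clustering then falls out of the learned prior: the component responsibilities softmax over log_pi + log N(z; mu_c, sigma_c^2) give a soft partition of the data. LTVAE generalizes this to several partitions, one per super latent variable in the learned tree.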
Forward citations
Cited by 1 Pith paper
- PDGMM-VAE: A Variational Autoencoder with Adaptive Per-Dimension Gaussian Mixture Model Priors for Nonlinear ICA
  PDGMM-VAE recovers latent sources in nonlinear ICA by using jointly learned per-dimension GMM priors that fit source-specific marginals and reduce permutation symmetry.
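The per-dimension priors in this citing paper can be read as a factorized mixture: each latent dimension d gets its own one-dimensional GMM, so log p(z) = sum_d log sum_k pi_{d,k} N(z_d; mu_{d,k}, sigma_{d,k}^2). Below is a hedged sketch of that log-density; the function name, tensor shapes, and parameterization are assumptions, not taken from the paper.

```python
import math
import torch
import torch.nn.functional as F

def per_dim_gmm_log_prior(z, pi_logits, mu, logvar):
    """log p(z) under independent per-dimension 1-D Gaussian mixtures.

    z:          (batch, D)  latent features
    pi_logits:  (D, K)      unnormalized mixture weights per dimension
    mu, logvar: (D, K)      component means / log-variances per dimension
    """
    z = z.unsqueeze(-1)                                  # (batch, D, 1)
    log_pi = F.log_softmax(pi_logits, dim=-1)            # (D, K)
    # log N(z_d; mu_dk, exp(logvar_dk)) for every dimension and component
    log_n = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                    + math.log(2 * math.pi))             # (batch, D, K)
    # logsumexp over components, then sum over independent dimensions
    return torch.logsumexp(log_pi + log_n, dim=-1).sum(-1)  # (batch,)
```

In a VAE this term would replace the standard-normal log p(z) in the ELBO; distinct, non-Gaussian marginals per dimension are presumably what reduces the permutation symmetry the summary mentions.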