pith. machine review for the scientific record.

arxiv: 1611.07492 · v1 · submitted 2016-11-22 · 📊 stat.ML · cs.CV · cs.LG

Recognition: unknown

Inducing Interpretable Representations with Variational Autoencoders

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.CV · cs.LG
keywords: allows · autoencoders · framework · graphical · interpretable · model · models · variational
original abstract

We develop a framework for incorporating structured graphical models in the \emph{encoders} of variational autoencoders (VAEs) that allows us to induce interpretable representations through approximate variational inference. This allows us to both perform reasoning (e.g. classification) under the structural constraints of a given graphical model, and use deep generative models to deal with messy, high-dimensional domains where it is often difficult to model all the variation. Learning in this framework is carried out end-to-end with a variational objective, applying to both unsupervised and semi-supervised schemes.
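To make the abstract's objective concrete, here is a minimal NumPy sketch of one variant of this idea: a VAE whose encoder contains a structured discrete latent (a class variable y, as in a mixture-style graphical model), with y marginalized exactly in the variational objective. All dimensions, weight matrices, and the uniform class prior are illustrative assumptions, not the paper's actual architecture; random matrices stand in for trained encoder/decoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper does not fix these).
D, K, Z = 8, 3, 2  # data dim, number of discrete classes, continuous latent dim

# Random weights standing in for trained encoder/decoder networks.
W_y  = rng.normal(size=(D, K))            # encoder head for q(y|x)
W_mu = rng.normal(size=(D + K, Z))        # encoder head for the mean of q(z|x,y)
W_ls = rng.normal(size=(D + K, Z)) * 0.1  # encoder head for the log-std of q(z|x,y)
W_px = rng.normal(size=(Z + K, D))        # decoder mean for p(x|z,y)

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def elbo(x):
    """Unsupervised ELBO with the discrete class marginalized exactly:
    sum_y q(y|x) [ E_{q(z|x,y)} log p(x|z,y) - KL(q(z|x,y)||p(z)) + log p(y) ] + H(q(y|x))."""
    q_y = softmax(x @ W_y)  # structured discrete posterior over classes
    total = 0.0
    for y in range(K):
        onehot = np.eye(K)[y]
        h = np.concatenate([x, onehot])
        mu, log_std = h @ W_mu, h @ W_ls
        eps = rng.normal(size=Z)               # reparameterized sample from q(z|x,y)
        z = mu + np.exp(log_std) * eps
        x_hat = np.concatenate([z, onehot]) @ W_px
        log_px = -0.5 * np.sum((x - x_hat) ** 2)  # Gaussian likelihood, unit variance, up to a constant
        kl = 0.5 * np.sum(np.exp(2 * log_std) + mu**2 - 1 - 2 * log_std)  # KL to N(0, I)
        log_py = -np.log(K)                       # uniform prior over classes (assumption)
        total += q_y[y] * (log_px - kl + log_py)
    entropy = -np.sum(q_y * np.log(q_y + 1e-12))
    return total + entropy

x = rng.normal(size=D)
print(elbo(x))  # a single scalar objective value
```

Because the discrete variable is summed out analytically rather than sampled, the gradient of this objective with respect to the weights is low-variance, which is one reason structured encoders remain trainable end-to-end with a single variational objective; the semi-supervised case simply fixes y to its observed value instead of marginalizing.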

This paper has not been read by Pith yet.
