pith. machine review for the scientific record.

arxiv: 1811.00995 · v3 · submitted 2018-11-02 · cs.LG · cs.AI · cs.CV · stat.ML

Recognition: unknown

Invertible Residual Networks

Authors on Pith: no claims yet
classification: cs.LG · cs.AI · cs.CV · stat.ML
keywords: invertible · architectures · generative · model · requires · residual · resnets · standard
read the original abstract

We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
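The abstract's two technical claims can be illustrated in a small numpy sketch: a residual block y = x + g(x) is invertible whenever g is a contraction (Lipschitz constant below 1, which the paper enforces with spectral normalization), and its Jacobian log-determinant admits a convergent power series. This is an illustrative toy, not the paper's implementation: dense weights stand in for normalized convolutional layers, and the trace is computed exactly rather than with the paper's Hutchinson stochastic estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy residual block g(x) = W2 @ tanh(W1 @ x). Rescaling each weight matrix to
# spectral norm 0.9 gives Lip(g) <= 0.81 < 1 (tanh is 1-Lipschitz), the
# contractivity condition the paper enforces via spectral normalization.
W1 = rng.standard_normal((dim, dim))
W2 = rng.standard_normal((dim, dim))
W1 *= 0.9 / np.linalg.norm(W1, 2)
W2 *= 0.9 / np.linalg.norm(W2, 2)

def g(x):
    return W2 @ np.tanh(W1 @ x)

def forward(x):
    # Invertible residual block: y = x + g(x).
    return x + g(x)

def inverse(y, n_iter=200):
    # Banach fixed-point iteration x <- y - g(x); converges because Lip(g) < 1.
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.standard_normal(dim)
y = forward(x)
x_rec = inverse(y)

# Log-determinant of the block's Jacobian I + J_g via the power series
#   log det(I + J_g) = sum_{k>=1} (-1)^{k+1} tr(J_g^k) / k,
# valid because ||J_g||_2 < 1. Here the trace is exact; the paper replaces
# tr(J_g^k) with a Hutchinson estimate to keep it tractable at scale.
u = W1 @ x
J_g = W2 @ np.diag(1.0 - np.tanh(u) ** 2) @ W1
series = sum((-1) ** (k + 1) * np.trace(np.linalg.matrix_power(J_g, k)) / k
             for k in range(1, 51))
exact = np.linalg.slogdet(np.eye(dim) + J_g)[1]

print(np.max(np.abs(x - x_rec)))   # reconstruction error of the inverse
print(abs(series - exact))         # truncation error of the log-det series
```

The fixed-point inverse is what lets one ResNet act as both a classifier (forward pass) and a normalizing-flow generator (inverse pass), while the log-det series is what makes maximum-likelihood training possible.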

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read papers on Pith without signing in.