pith. machine review for the scientific record.

arxiv: 1804.02541 · v1 · submitted 2018-04-07 · 💻 cs.CV · cs.LG


Statistical Transformer Networks: learning shape and appearance models via self-supervision

keywords: model, shape, statistical, appearance, learnt, network, statn, supervision
read the original abstract

We generalise Spatial Transformer Networks (STN) by replacing the parametric transformation of a fixed, regular sampling grid with a deformable, statistical shape model which is itself learnt. We call this a Statistical Transformer Network (StaTN). By training a network containing a StaTN end-to-end for a particular task, the network learns the optimal nonrigid alignment of the input data for the task. Moreover, the statistical shape model is learnt with no direct supervision (such as landmarks) and can be reused for other tasks. Besides training for a specific task, we also show that a StaTN can learn a shape model using generic loss functions. This includes a loss inspired by the minimum description length principle in which an appearance model is also learnt from scratch. In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, even identity labels.
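The core idea in the abstract is to replace the STN's fixed parametric sampling grid with a learnt linear statistical shape model: the sampling grid becomes a mean shape plus a weighted sum of shape basis vectors, with the weights predicted per input. A minimal sketch of that grid construction, assuming a simple linear (PCA-style) shape model; the function name, shapes, and toy numbers are illustrative assumptions, not the paper's code:

```python
import numpy as np

def statn_grid(mean_shape, basis, coeffs):
    """Build a deformable sampling grid from a linear statistical shape model.

    mean_shape: (N, 2) mean grid coordinates, e.g. in [-1, 1]
    basis:      (N*2, K) learnt shape basis (deformation modes)
    coeffs:     (K,) shape parameters, predicted by a localisation network
    """
    # Linear shape model: deformation is a weighted sum of basis modes.
    deformation = (basis @ coeffs).reshape(-1, 2)
    return mean_shape + deformation

# Toy example: a 2x2 grid with a single mode that translates every point.
mean = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
basis = np.full((8, 1), 0.1)          # one mode, uniform shift
grid = statn_grid(mean, basis, np.array([2.0]))
# every coordinate is shifted by 0.1 * 2.0 = 0.2
```

In a full StaTN the resulting grid would drive a differentiable (e.g. bilinear) sampler, so both the basis and the coefficient predictor can be learnt end-to-end from the task loss, with no landmark supervision.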

This paper has not been read by Pith yet.

discussion (0)
