Pith · machine review for the scientific record

arXiv: 1511.06038 · v4 · submitted 2015-11-19 · 💻 cs.CL · cs.LG · stat.ML

Recognition: unknown

Neural Variational Inference for Text Processing

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.LG · stat.ML
keywords variational · inference · model · neural · text · document · generative · question
Original abstract

Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previously published results.
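The abstract's core idea — an inference network conditioned on a bag-of-words input that produces the variational distribution, trained against a generative softmax decoder — can be sketched numerically. This is a minimal illustration with made-up dimensions and randomly initialised weights, not the paper's architecture or hyperparameters: it computes a single-sample ELBO estimate using the reparameterisation trick and the closed-form Gaussian KL term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only, not from the paper).
V, H, K = 50, 16, 8  # vocabulary, hidden units, latent dimension

# Inference network q(z|x): MLP mapping bag-of-words counts to the
# mean and log-variance of a diagonal Gaussian over the latent z.
W_h = rng.normal(0, 0.1, (H, V)); b_h = np.zeros(H)
W_mu = rng.normal(0, 0.1, (K, H)); b_mu = np.zeros(K)
W_ls = rng.normal(0, 0.1, (K, H)); b_ls = np.zeros(K)

# Generative model p(x|z): one softmax over the vocabulary, applied
# independently to every word token in the bag-of-words document.
R = rng.normal(0, 0.1, (V, K)); b_x = np.zeros(V)

def elbo(x_bow):
    """Single-sample ELBO estimate for one document (word-count vector)."""
    h = np.tanh(W_h @ x_bow + b_h)
    mu = W_mu @ h + b_mu
    log_sig2 = W_ls @ h + b_ls
    # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I),
    # so gradients can flow through the sampling step.
    z = mu + np.exp(0.5 * log_sig2) * rng.normal(size=K)
    logits = R @ z + b_x
    log_p = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    log_lik = x_bow @ log_p                          # sum over word counts
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = 0.5 * np.sum(np.exp(log_sig2) + mu**2 - 1.0 - log_sig2)
    return log_lik - kl

doc = rng.integers(0, 3, size=V).astype(float)  # a random toy document
print(elbo(doc))
```

In a real implementation the ELBO would be maximised by gradient ascent on both networks jointly; here the point is only the shape of the objective: a reconstruction term from the bag-of-words decoder minus a KL penalty keeping q(z|x) near the prior.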

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Language Models as Knowledge Bases?

    cs.CL · 2019-09 · accept · novelty 7.0

    BERT stores relational knowledge extractable via cloze queries without fine-tuning and matches supervised baselines on open-domain QA tasks.