A Fixed-Size Encoding Method for Variable-Length Sequences with its Application to Neural Network Language Models
classification
cs.NE, cs.CL, cs.LG
keywords
fofe, fixed-size, fnn-lms, encoding, language, method, models, network
abstract
In this paper, we propose a new fixed-size ordinally-forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE models the word order in a sequence using a simple ordinally-forgetting mechanism based on the positions of words. In this work, we apply FOFE to feedforward neural network language models (FNN-LMs). Experimental results show that, without using any recurrent feedback, FOFE-based FNN-LMs can significantly outperform not only standard fixed-input FNN-LMs but also the popular RNN-LMs.
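For concreteness, the ordinally-forgetting mechanism described in the abstract reduces to a simple recursion: with e_t the one-hot vector of the word at position t and a forgetting factor 0 < α < 1, the code is z_t = α·z_{t-1} + e_t with z_0 = 0, so each earlier word is discounted by a power of α that records its position. The NumPy sketch below illustrates this recursion; the function name `fofe_encode` and the default α = 0.7 are illustrative choices for this sketch, not prescribed by the abstract.

```python
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally-forgetting encoding of a word sequence.

    Implements z_t = alpha * z_{t-1} + e_t with z_0 = 0, where e_t is
    the one-hot vector of the word at position t. Because 0 < alpha < 1,
    earlier words are exponentially discounted, so the final vector z_T
    is a fixed-size code that (almost uniquely) reflects word order.
    """
    z = np.zeros(vocab_size)
    for w in word_ids:
        z *= alpha   # forget: discount every earlier word by alpha
        z[w] += 1.0  # inject the current word's one-hot vector
    return z

# Toy vocabulary of 4 words; encode the variable-length sequence [1, 2, 3].
print(fofe_encode([1, 2, 3], vocab_size=4))
# -> [0.   0.49 0.7  1.  ]  (each position is recorded as a power of alpha)
```

In the FNN-LM application described above, fixed-size codes of this kind for the word history would stand in for the fixed window of input words in a standard feedforward language model.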