pith. machine review for the scientific record.

arxiv: 1808.10627 · v1 · submitted 2018-08-31 · 💻 cs.CL


Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: negative polarity items · language model · ability · context · formal
original abstract

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
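The evaluation setup the abstract describes is commonly framed as a minimal-pair comparison: score a sentence where an NPI like "ever" sits inside a licensing context (e.g., under "nobody") against an otherwise identical sentence where it does not (e.g., under "somebody"). Below is a minimal sketch of that setup; the `lm_logprob` scorer here is a hypothetical stand-in (a smoothed bigram model over a toy corpus), not the paper's LSTM. Notably, a bigram stand-in assigns near-identical scores to both members of the pair, since the subject and the NPI are not adjacent — which is exactly why the paper probes a neural LM that can, in principle, track the nonlocal licensing dependency.

```python
import math
from collections import Counter

# Toy stand-in for a trained language model: a bigram LM with add-one
# smoothing over a tiny hand-written corpus. Purely illustrative.
corpus = [
    "nobody has ever seen it",
    "nobody will ever know",
    "somebody has seen it",
    "somebody will know",
]

bigrams, unigrams, vocab = Counter(), Counter(), set()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    vocab.update(toks)
    for a, b in zip(toks, toks[1:]):
        bigrams[(a, b)] += 1
        unigrams[a] += 1

def lm_logprob(sentence):
    """Log-probability of a sentence under the smoothed bigram model."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab)))
    return lp

# Minimal pair: "ever" is licensed by "nobody" but not by "somebody".
licensed = lm_logprob("nobody has ever seen it")
unlicensed = lm_logprob("somebody has ever seen it")

# A licensing-sensitive model should prefer the licensed variant; this
# local bigram stand-in cannot see the subject from the NPI's position,
# so the two scores come out (near-)equal here.
print(licensed, unlicensed)
```

The interesting quantity in such studies is the score gap `licensed - unlicensed` across many minimal pairs; a model that has learned the licensing relation shows a consistently positive gap.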

This paper has not been read by Pith yet.

discussion (0)
