pith. machine review for the scientific record.

arxiv: 1709.04696 · v3 · submitted 2017-09-14 · 💻 cs.CL · cs.AI

Recognition: unknown

DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding

Authors on Pith: no claims yet
classification 💻 cs.CL cs.AI
keywords: attention, directional, disan, language, neural, self-attention, accuracy, best
Original abstract

Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, "Directional Self-Attention Network (DiSAN)", is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02% on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets.
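To make the abstract's two key ideas concrete, here is a minimal NumPy sketch of directional, multi-dimensional self-attention in the spirit of DiSAN: alignment scores are computed per feature (a vector per token pair rather than a scalar), and a forward or backward mask encodes temporal order. This is a simplified illustration, not the paper's exact parameterization; the weight matrices, the tanh scoring form, and the function name are assumptions for the example.

```python
import numpy as np

def directional_self_attention(x, direction="forward", seed=0):
    """Sketch of DiSAN-style directional, feature-wise self-attention.

    x: (n, d) sequence of token features.
    Returns (n, d): each position is a feature-wise weighted mix of the
    positions it is allowed to attend to (forward = strictly earlier).
    """
    n, d = x.shape
    rng = np.random.default_rng(seed)
    # Illustrative random projections (the paper learns these).
    W1 = rng.standard_normal((d, d)) / np.sqrt(d)
    W2 = rng.standard_normal((d, d)) / np.sqrt(d)

    # Multi-dimensional alignment: a d-vector of scores per (i, j) pair
    # instead of a single scalar, built by broadcasting.
    scores = np.tanh(x @ W1)[:, None, :] + np.tanh(x @ W2)[None, :, :]  # (n, n, d)

    # Directional mask encoding temporal order: forward lets position i
    # attend only to j < i; backward only to j > i (diagonal masked).
    idx = np.arange(n)
    if direction == "forward":
        mask = idx[None, :] < idx[:, None]
    else:
        mask = idx[None, :] > idx[:, None]
    scores = np.where(mask[:, :, None], scores, -1e9)

    # Softmax over the attended axis j, computed separately per feature.
    # Note: a fully masked row (e.g. i = 0 in forward mode) degrades to a
    # uniform distribution in this simplified sketch.
    scores = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(scores)
    probs = probs / probs.sum(axis=1, keepdims=True)

    # Feature-wise weighted sum of the input sequence.
    return np.einsum("ijd,jd->id", probs, x)
```

Because the forward mask is strict, position 1 can only attend to position 0, so its output reproduces `x[0]` exactly; this is a quick way to check that the direction encoding works.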

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Attention U-Net: Learning Where to Look for the Pancreas

    cs.CV 2018-04 unverdicted novelty 6.0

    Attention gates added to U-Net automatically focus on target organs in CT images and improve segmentation performance on abdominal datasets.