pith. machine review for the scientific record.

arxiv: 1506.00278 · v1 · submitted 2015-05-31 · 💻 cs.CV · cs.CL

Recognition: unknown

Visual Madlibs: Fill in the blank Image Generation and Question Answering

Authors on Pith: no claims yet
classification: 💻 cs.CV · cs.CL
keywords: dataset, generation, madlibs, visual, description, descriptions, focused, images
Original abstract

In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. VisualBERT: A Simple and Performant Baseline for Vision and Language

    cs.CV · 2019-08 · conditional novelty 6.0

    VisualBERT is a Transformer model that implicitly aligns text with image regions through self-attention; after pre-training on image-caption data, it achieves competitive or superior results on VQA, VCR, NLVR2, and Flickr30K.