pith. machine review for the scientific record.

arxiv: 1704.06001 · v1 · submitted 2017-04-20 · 💻 cs.LG · cs.CV · stat.ML

Recognition: unknown

Fast Generation for Convolutional Autoregressive Models

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CV · stat.ML
keywords generation · models · autoregressive · convolutional · fast · method · redundant · times
abstract

Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naïve fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21× and 183× speedups respectively.
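The abstract's key idea is easy to sketch: each convolutional layer keeps a queue of its last `dilation` hidden states, so generating one sample reads from the cache instead of re-convolving the entire history. Below is a minimal NumPy sketch of that queue mechanism, assuming kernel size 2, a tanh nonlinearity, and a toy channel count — all illustrative choices, not the authors' implementation or the paper's exact WaveNet configuration:

```python
from collections import deque
import numpy as np

class CachedCausalConv:
    """One dilated causal conv layer with a rolling cache of past states.

    Computes h[t] = tanh(W_prev @ x[t - dilation] + W_curr @ x[t]).
    Naive generation re-runs the convolution over the whole history at
    every step; keeping a FIFO of the last `dilation` inputs makes each
    generation step O(1) per layer.
    """

    def __init__(self, w_prev, w_curr, dilation, channels):
        self.w_prev, self.w_curr = w_prev, w_curr
        # Zero-filled queue mirrors the left zero-padding a full
        # convolution over the sequence would see at the start.
        self.queue = deque(np.zeros(channels) for _ in range(dilation))

    def step(self, x):
        x_old = self.queue.popleft()   # x[t - dilation], read from cache
        self.queue.append(x)           # cache x[t] for use at t + dilation
        return np.tanh(self.w_prev @ x_old + self.w_curr @ x)


# Usage: a stack of dilated layers, one cheap pass per generated sample.
rng = np.random.default_rng(0)
C = 8
layers = [CachedCausalConv(0.1 * rng.standard_normal((C, C)),
                           0.1 * rng.standard_normal((C, C)),
                           dilation=d, channels=C)
          for d in (1, 2, 4, 8)]

x = np.zeros(C)            # seed input
for t in range(16):
    h = x
    for layer in layers:
        h = layer.step(h)
    x = h                  # stand-in for sampling x[t+1] from the model head
```

With dilations 1, 2, 4, 8 the stack covers a receptive field of 16 samples, yet each generation step does a constant amount of work per layer rather than work proportional to the sequence length.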

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Efficiently Modeling Long Sequences with Structured State Spaces

cs.LG · 2021-10 · unverdicted · novelty 8.0

    S4 is an efficient state space sequence model that captures long-range dependencies via structured parameterization of the SSM, achieving state-of-the-art results on the Long Range Arena and other benchmarks while bei...

  2. Progressive Distillation for Fast Sampling of Diffusion Models

cs.LG · 2022-02 · unverdicted · novelty 7.0

    Progressive distillation halves sampling steps repeatedly in diffusion models, reaching 4 steps with FID 3.0 on CIFAR-10 from 8192-step samplers.