Pith · machine review for the scientific record

arxiv: 1704.00648 · v2 · submitted 2017-04-03 · 💻 cs.LG · cs.CV


Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations

Authors on Pith: no claims yet
Classification: cs.LG · cs.CV
Keywords: quantization, approach, compressible, compression, end-to-end, method, representations, soft-to-hard
Original abstract

We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.
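The soft-to-hard idea described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the one-dimensional scalar setup, the center values, and the use of a single inverse-temperature parameter `sigma` are all assumptions. Soft assignments to quantization centers are computed with a softmax over (scaled, negated) squared distances, which stays differentiable; as `sigma` grows during training, the assignments approach the hard nearest-center quantizer.

```python
# Illustrative sketch of soft-to-hard quantization (assumed 1-D setup,
# hypothetical center values and sigma schedule; not the paper's code).
import numpy as np

def soft_quantize(z, centers, sigma):
    """Soft assignment: softmax-weighted average of centers, differentiable in z."""
    d = -sigma * (z[:, None] - centers[None, :]) ** 2   # (N, K) scaled negative squared distances
    w = np.exp(d - d.max(axis=1, keepdims=True))        # numerically stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ centers

def hard_quantize(z, centers):
    """Hard assignment: nearest center, the discrete limit as sigma -> infinity."""
    idx = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
    return centers[idx]

z = np.array([0.1, 0.9, 2.4])
centers = np.array([0.0, 1.0, 2.0])
for sigma in (1.0, 10.0, 1000.0):        # annealing: soft output approaches the hard one
    print(sigma, soft_quantize(z, centers, sigma))
print(hard_quantize(z, centers))          # [0. 1. 2.]
```

With small `sigma` each value is a smooth blend of nearby centers (so gradients flow to the encoder); with large `sigma` the soft output matches the hard quantizer, which is what gets used at test time.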

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

cs.CV · 2022-08 · unverdicted · novelty 8.0

    Textual Inversion learns a single embedding vector from a few images to represent personal concepts inside the text embedding space of a frozen text-to-image model, enabling their composition in natural language prompts.