
arXiv: 1311.2540 · v2 · submitted 2013-11-11 · cs.IT · math.IT

Recognition: unknown

Asymmetric numeral systems: entropy coding combining speed of Huffman coding with compression rate of arithmetic coding

Authors on Pith: no claims yet
classification: cs.IT · math.IT
keywords: coding, entropy, compression, rate, allows, alphabet, arithmetic, cost
Original abstract

Modern data compression is mainly based on two approaches to entropy coding: Huffman coding (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities, easily approaching the theoretical compression limit (Shannon entropy), but at a much higher computational cost. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding that ends this trade-off between speed and rate: a recent implementation [1] decodes about 50% faster than HC for a 256-symbol alphabet, with a compression rate similar to that of AC. This advantage comes from ANS being simpler than AC: it uses a single natural number as the state, instead of two numbers representing a range. Besides simplifying renormalization, this allows the entire behavior for a given probability distribution to be placed in a relatively small table, defining an entropy coding automaton; the memory cost of such a table for a 256-symbol alphabet is a few kilobytes. There is large freedom in choosing a specific table; using a pseudorandom number generator initialized with a cryptographic key for this purpose allows the data to be encrypted simultaneously. This article also introduces and discusses many other variants of this entropy coding approach, which can provide direct alternatives to standard AC, to large-alphabet range coding, or to approximated quasi-arithmetic coding.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Fast and Exact: Asymptotically Linear KL-Optimal Frequency Normalization

    cs.IT 2026-05 unverdicted novelty 7.0

    Three new provably KL-optimal frequency normalization algorithms are presented, one running in linear time in the number of symbols.

  2. Decoupling Vector Data and Index Storage for Space Efficiency

    cs.DB 2026-04 unverdicted novelty 7.0

    DecoupleVS decouples vector data and index storage in ANNS systems to cut storage space by up to 58.7% with competitive search and update performance.

  3. ENEC: A Lossless AI Model Compression Method Enabling Fast Inference on Ascend NPUs

    cs.AR 2026-03 unverdicted novelty 7.0

    ENEC delivers 3.43X higher throughput than DietGPU and 1.12X better compression ratio than nvCOMP for lossless model weight compression on Ascend NPUs, yielding up to 6.3X end-to-end inference speedup.

  4. OpenZL: Using Graphs to Compress Smaller and Faster

    cs.IR 2026-05 unverdicted novelty 6.0

    OpenZL uses a directed acyclic graph of modular codecs to enable rapid creation of application-specific compressors that deliver better ratios and speeds than general-purpose tools while remaining competitive with dee...

  5. TStore: Rethinking AI Model Hub with Tensor-Centric Compression

    cs.DC 2026-04 unverdicted novelty 5.0

    TStore reduces AI model storage via tensor-level fingerprinting, clustering, and compression without annotations while claiming to preserve usability.

  6. LEAN-3D: Low-latency Hierarchical Point Cloud Codec for Mobile 3D Streaming

    eess.SP 2026-04 unverdicted novelty 5.0

    LEAN-3D delivers 3-5x lower latency and up to 5.1x lower edge energy for learned point cloud compression on mobile hardware by restricting learned components to shallow hierarchy levels and using deterministic coding ...

  7. TStore: Rethinking AI Model Hub with Tensor-Centric Compression

    cs.DC 2026-04 unverdicted novelty 4.0

    TensorHub reduces storage in AI model hubs via tensor-centric deduplication and compression while keeping model performance intact.