RECON: Resource-Efficient CORDIC-Based Neuron Architecture
2 Pith papers cite this work. Polarity classification is still indexing.
Field: cs.AR · Year: 2026
Citing papers
-
CARMEN: CORDIC-Accelerated Resource-Efficient Multi-Precision Inference Engine for Deep Learning
CARMEN is a CORDIC-based multi-precision vector engine that achieves up to 33% fewer computation cycles and 21% power savings per MAC in a 28 nm ASIC, while supporting flexible 8/16-bit precision for deep learning inference.
-
TREA: Low-precision Time-Multiplexed, Resource-Efficient Edge Accelerator for Object Detection and Classification
TREA is a low-precision, time-multiplexed edge accelerator that uses dual-precision SIMD MAC units, structured pruning, and reconfigurable activation cores to deliver up to a 9x kernel-level latency reduction for object detection and classification.
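The CARMEN entry above centers on CORDIC, an iterative shift-and-add scheme that replaces multipliers in rotation-mode trigonometric evaluation. As a rough illustration only (this is the textbook CORDIC rotation iteration, not CARMEN's actual multi-precision datapath), a floating-point sketch:

```python
import math

def cordic_rotate(angle, n_iters=16):
    """Rotation-mode CORDIC: approximate (cos, sin) of `angle` (radians)
    using only adds and scalings by 2^-i per iteration."""
    # Precomputed micro-rotation angles: atan(2^-i).
    atans = [math.atan(2.0 ** -i) for i in range(n_iters)]
    # CORDIC gain K = prod(sqrt(1 + 2^-2i)); seed x with 1/K so the
    # final vector comes out with unit length.
    k = 1.0
    for i in range(n_iters):
        k *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0 / k, 0.0, angle
    for i in range(n_iters):
        # Rotate by +/- atan(2^-i), steering the residual angle z to 0.
        # In hardware the 2^-i factors are pure bit-shifts.
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y  # ~ (cos(angle), sin(angle))
```

With 16 iterations the result is accurate to roughly atan(2^-15), which is why CORDIC trades multiplier area for a cycle count proportional to the target precision.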
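TREA's abstract mentions dual-precision SIMD MAC units: one datapath that performs either a single 16-bit MAC or two packed 8-bit MACs per step. A behavioral sketch of that idea (the packing scheme and function below are illustrative assumptions, not TREA's documented design):

```python
def simd_mac(acc, a, b, mode):
    """Dual-precision MAC sketch: in 16-bit mode, one MAC; in 8-bit
    mode, `a` and `b` each pack two signed bytes (hi, lo) and the same
    step accumulates two products."""
    if mode == 16:
        return acc + a * b

    def unpack(w):
        # Split a 16-bit word into two sign-extended 8-bit lanes.
        lo = (w & 0xFF) - 256 if (w & 0x80) else (w & 0xFF)
        hi = ((w >> 8) & 0xFF) - 256 if (w & 0x8000) else ((w >> 8) & 0xFF)
        return hi, lo

    a_hi, a_lo = unpack(a)
    b_hi, b_lo = unpack(b)
    # Two 8-bit MACs reusing the one accumulator.
    return acc + a_hi * b_hi + a_lo * b_lo
```

For example, packing (3, -2) as 0x03FE and (5, 4) as 0x0504 yields 3*5 + (-2)*4 = 7 in one 8-bit-mode step, which is the throughput-doubling effect such dual-precision units exploit at low precision.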