3 Pith papers cite this work.
Tempus: A Temporally Scalable Resource-Invariant GEMM Streaming Framework for Versal AI Edge
Tempus delivers 607 GOPS at 10.677 W using a fixed set of 16 AIE cores on Versal AI Edge, achieving 211.2x better platform-aware utility than the spatial state-of-the-art ARIES with zero URAM/DSP utilization.
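As a sanity check on the headline numbers above, the implied energy efficiency follows directly from the stated throughput and power. This is a back-of-envelope calculation on the quoted figures; the GOPS/W value is derived here, not reported in the summary:

```python
# Back-of-envelope energy efficiency from the quoted Tempus figures.
# 607 GOPS and 10.677 W come from the summary above; the GOPS/W
# result is an inference, not a number quoted from the paper.
throughput_gops = 607.0   # giga-operations per second
power_w = 10.677          # reported power in watts

efficiency_gops_per_w = throughput_gops / power_w
print(f"{efficiency_gops_per_w:.1f} GOPS/W")  # → 56.9 GOPS/W
```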
-
Design Rules for Extreme-Edge Scientific Computing on AI Engines
On Versal FPGAs, AI Engines enable larger low-latency neural networks for extreme-edge scientific computing than programmable logic alone, via a new latency-adjusted resource-equivalence metric and tailored optimizations.
-
Reconfigurable Computing Challenge: Real-Time Graph Neural Networks for Online Event Selection in Big Science
Hybrid FPGA-AI Engine deployment of a dynamic GNN for the Belle II trigger achieves 2.94M events/s throughput at 7.15 µs latency, with 53% higher throughput and DSP usage reduced from 99% to 19%.
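The throughput and latency figures above are mutually consistent only if the design is deeply pipelined. A quick derivation from the quoted numbers (the pipeline-depth figure is an inference, not something the paper reports) shows the per-event budget and roughly how many events must be in flight at once:

```python
# Derive the per-event initiation interval and implied pipeline depth
# from the Belle II trigger figures quoted above: 2.94M events/s at
# 7.15 us latency. "Events in flight" is inferred via Little's law.
throughput_events_per_s = 2.94e6
latency_s = 7.15e-6

period_ns = 1e9 / throughput_events_per_s           # time budget per event
events_in_flight = latency_s * throughput_events_per_s

print(f"per-event budget: {period_ns:.0f} ns")      # → ~340 ns
print(f"events in flight: {events_in_flight:.0f}")  # → ~21
```

In other words, a new event must be accepted roughly every 340 ns even though each one takes 7.15 µs to traverse the hybrid FPGA/AIE pipeline, so about 21 events overlap in flight.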