Spiking deep residual networks
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: cs.NE · Years: 2026 · Verdicts: UNVERDICTED
2 representative citing papers:
- Direct-to-Event Spiking Neural Network Transfer: provides the first systematic study of transferring direct-coded spiking neural networks to event-based representations while aiming to preserve accuracy and reduce energy use (see the input-coding sketch after this list).
- Breaking Global Self-Attention Bottlenecks in Transformer-based Spiking Neural Networks with Local Structure-Aware Self-Attention: LSFormer uses local structure-aware spiking self-attention and spiking response pooling to remove the global attention bottleneck, delivering accuracy gains of 4.3% on Tiny-ImageNet and 8.6% on N-CALTECH101 over prior transformer-based SNNs (see the attention sketch after this list).
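To make the distinction in the first entry concrete, here is a minimal sketch, assuming nothing about that paper's actual pipeline: direct coding feeds the same analog image to the SNN at every timestep, while an event-based representation bins asynchronous (x, y, t, polarity) events into per-timestep frames. The function names, shapes, and the uniform time-binning scheme are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch (not the cited paper's code) of the two input representations.
import numpy as np

T, H, W = 4, 34, 34  # timesteps and a small DVS-like resolution (assumed)

def direct_code(image: np.ndarray, timesteps: int) -> np.ndarray:
    """Direct coding: feed the same normalized image at every timestep."""
    return np.repeat(image[None, ...], timesteps, axis=0)       # (T, H, W)

def events_to_frames(events: np.ndarray, timesteps: int, h: int, w: int) -> np.ndarray:
    """Event representation: bin (x, y, t, polarity) events into T frames."""
    frames = np.zeros((timesteps, h, w), dtype=np.float32)
    t = events[:, 2]
    span = max(float(t.max() - t.min()), 1e-9)
    # Map each event's timestamp onto one of T uniform time bins.
    bins = np.clip(((t - t.min()) / span * timesteps).astype(int), 0, timesteps - 1)
    for (x, y, _, p), b in zip(events, bins):
        frames[b, int(y), int(x)] += p                          # signed accumulation
    return frames

image = np.random.rand(H, W).astype(np.float32)
events = np.stack([np.random.randint(0, W, 500),                # x
                   np.random.randint(0, H, 500),                # y
                   np.sort(np.random.rand(500)),                # t
                   np.random.choice([-1.0, 1.0], 500)], axis=1) # polarity
print(direct_code(image, T).shape)              # (4, 34, 34)
print(events_to_frames(events, T, H, W).shape)  # (4, 34, 34)
```

Both paths yield a (T, H, W) tensor, which is what makes weight transfer between the two input regimes plausible in the first place.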
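For the second entry, LSFormer's exact attention formulation is not given in the summary above, so the following is only a generic sketch of windowed spiking self-attention: binary spike Q/K/V with softmax-free integer score accumulation (a common choice in spiking transformers). Restricting attention to fixed local windows is what removes the global O(N²) cost; the window size, threshold, and scaling below are all assumptions.

```python
# Generic windowed spiking self-attention sketch; the window size, threshold,
# and score scaling are assumptions, not LSFormer's published design.
import numpy as np

def local_spiking_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                            window: int) -> np.ndarray:
    """q, k, v: (N, d) binary spike maps; attention restricted to local windows."""
    n, d = q.shape
    out = np.zeros((n, d), dtype=np.float32)
    for start in range(0, n, window):
        sl = slice(start, min(start + window, n))
        # Integer dot products of binary vectors stand in for softmax scores,
        # as in common multiplication-free spiking attention designs.
        scores = q[sl] @ k[sl].T          # (w, w), cheap for 0/1 inputs
        out[sl] = (scores @ v[sl]) / d    # aggregate values, scale by dim
    return (out >= 0.5).astype(np.float32)  # re-binarize into output spikes

rng = np.random.default_rng(0)
q, k, v = (rng.integers(0, 2, (16, 8)).astype(np.float32) for _ in range(3))
print(local_spiking_attention(q, k, v, window=4).shape)  # (16, 8)
```

With window size w, each token attends to at most w neighbors, so the score computation costs O(N·w·d) rather than the O(N²·d) of global self-attention.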