Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition
Describes an audio dataset of spoken words designed to help train and evaluate keyword spotting systems. Discusses why this task is an interesting challenge and why it requires a specialized dataset, distinct from the conventional datasets used for automatic speech recognition of full sentences. Suggests a methodology for reproducible and comparable accuracy metrics for this task. Describes how the data was collected and verified, what it contains, and its previous versions and properties. Concludes by reporting baseline results of models trained on this dataset.
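The reproducibility methodology the paper suggests rests on a deterministic partition of utterances into training, validation, and testing sets, computed from a hash of each file name so that clips from the same speaker never straddle split boundaries. Below is a minimal Python sketch of that hash-based assignment; the helper name `which_set` and the stripping of the `_nohash_` suffix follow the function the paper describes, while the plain-`hashlib` encoding here is a simplification of the original.

```python
import hashlib
import os
import re

# Upper bound on files per class, used to scale the hash into a percentage,
# as in the partitioning scheme described in the Speech Commands paper.
MAX_NUM_WAVS_PER_CLASS = 2 ** 27 - 1

def which_set(filename: str, validation_percentage: float,
              testing_percentage: float) -> str:
    """Deterministically assign a WAV file to a split.

    The '_nohash_' suffix is removed before hashing so that all clips
    from the same speaker land in the same split.
    """
    base_name = os.path.basename(filename)
    hash_name = re.sub(r"_nohash_.*$", "", base_name)
    hash_hex = hashlib.sha1(hash_name.encode("utf-8")).hexdigest()
    percentage_hash = (int(hash_hex, 16) % (MAX_NUM_WAVS_PER_CLASS + 1)) * (
        100.0 / MAX_NUM_WAVS_PER_CLASS)
    if percentage_hash < validation_percentage:
        return "validation"
    if percentage_hash < validation_percentage + testing_percentage:
        return "testing"
    return "training"

# Example: the canonical splits reserve 10% for validation and 10% for testing.
print(which_set("bed/0a7c2a8d_nohash_0.wav", 10.0, 10.0))
```

Because the assignment depends only on the file name, anyone can regenerate exactly the same splits, which is what makes accuracy numbers reported on this dataset directly comparable across papers.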
Forward citations
Cited by 15 Pith papers
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Mamba is a linear-time sequence model using input-dependent selective SSMs that achieves SOTA results across modalities and matches twice-larger Transformers on language modeling with 5x higher inference throughput.
- Efficiently Modeling Long Sequences with Structured State Spaces
S4 is an efficient state space sequence model that captures long-range dependencies via structured parameterization of the SSM, achieving state-of-the-art results on the Long Range Arena and other benchmarks while bei...
- FiTS: Interpretable Spiking Neurons via Frequency Selectivity and Temporal Shaping
FiTS spiking neurons improve auditory task performance over LIF baselines by factorizing computation into frequency selectivity and group-delay-based temporal shaping, yielding interpretable per-neuron parameters.
- End-to-End Keyword Spotting on FPGA Using Graph Neural Networks with a Neuromorphic Auditory Sensor
An FPGA implementation of a neuromorphic auditory sensor plus graph neural network achieves 87.43% accuracy on Google Speech Commands v2 with sub-35 µs latency and 1.12 W power.
- MMEB-V3: Measuring the Performance Gaps of Omni-Modality Embedding Models
MMEB-V3 benchmark shows omni-modality embedding models fail to enforce instruction-specified modality constraints and exhibit asymmetric, query-biased retrieval.
- AudioMosaic: Contrastive Masked Audio Representation Learning
AudioMosaic learns general-purpose audio representations through contrastive pre-training with structured spectrogram masking, reaching state-of-the-art results on standard benchmarks and improving audio-language tasks.
- EdgeSpike: Spiking Neural Networks for Low-Power Autonomous Sensing in Edge IoT Architectures
EdgeSpike delivers 91.4% mean accuracy on five sensing tasks with 31x lower energy on neuromorphic hardware and 6.3x longer battery life in a seven-month field deployment compared to conventional CNNs.
- ShiftLIF: Efficient Multi-Level Spiking Neurons with Power-of-Two Quantization
ShiftLIF maps membrane potentials to logarithmically spaced power-of-two spike levels, improving representational capacity in SNNs while keeping synaptic operations multiplier-free.
- From Cortical Synchronous Rhythm to Brain Inspired Learning Mechanism: An Oscillatory Spiking Neural Network with Time-Delayed Coordination
S2-Net is an oscillatory spiking neural network that uses time-delayed synchronization for bottom-up and top-down coordination to enable efficient, brain-inspired information processing across tasks like decoding and ...
- ULTRAS -- Unified Learning of Transformer Representations for Audio and Speech Signals
ULTRAS unifies audio and speech representation learning in a single transformer by applying patch masking to log-mel spectrograms and using a joint spectral-temporal prediction loss.
- minAction.net: Energy-First Neural Architecture Design -- From Biological Principles to Systematic Validation
Large-scale experiments show architecture performance depends on task type, not universality, and a single-parameter energy penalty reduces computational energy by ~1000x with negligible accuracy cost.
- Whisper-AuT: Domain-Adapted Audio Encoder for Efficient Audio-LLM Training
Whisper-AuT is a domain-adapted audio encoder obtained by fine-tuning Whisper-large-v3 on mixed speech, environmental, and music data, yielding gains of +23% on ESC-50, +5% on GTZAN, and +0.7% on Speech Commands.
- Practical Bayesian Inference for Speech SNNs: Uncertainty and Loss-Landscape Smoothing
Bayesian weight learning in surrogate-gradient SNNs smooths the loss landscape and improves negative log-likelihood plus Brier score on Heidelberg Digits and Speech Commands datasets.
- Attention Is not Everything: Efficient Alternatives for Vision
A survey that taxonomizes non-Transformer vision models and evaluates their practical trade-offs across efficiency, scalability, and robustness.
- Keyword spotting using convolutional neural network for speech recognition in Hindi
CNNs using MFCC features achieve 91.79% accuracy for keyword spotting in Hindi speech on a 40,000-sample dataset.