HyperSpace: A Generalized Framework for Spatial Encoding in Hyperdimensional Representations
Pith reviewed 2026-05-12 00:54 UTC · model grok-4.3
The pith
HyperSpace shows HRR and FHRR deliver comparable end-to-end performance in spatial tasks because similarity and cleanup dominate runtime.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The HyperSpace framework reveals that in spatial domains, the runtime of VSA systems is dominated by similarity and cleanup operations rather than the encoding or binding steps. Consequently, HRR and FHRR exhibit comparable end-to-end performance despite FHRR's lower theoretical complexity per operation, while HRR offers a significant memory advantage requiring approximately half the storage of FHRR vectors.
What carries the argument
HyperSpace: a modular decomposition of VSA systems into operators for encoding, binding, bundling, similarity, cleanup, and regression, which enables system-level benchmarking beyond per-operation analysis.
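The operator decomposition can be sketched with a minimal NumPy example. This is an illustrative sketch only, not HyperSpace's API: the function names, dimensionality, and implementations below are assumptions, chosen to match the standard definitions of the two backends (HRR binds by circular convolution over real vectors; FHRR binds by elementwise multiplication of complex phasors).

```python
import numpy as np

D = 1024  # vector dimensionality (illustrative)

# --- HRR backend: real-valued vectors, binding = circular convolution ---
def hrr_bind(a, b):
    # Circular convolution computed via FFT, O(D log D) per call
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# --- FHRR backend: complex unit-phasor vectors, binding = elementwise product ---
def fhrr_bind(a, b):
    # Hadamard product of phasors, O(D) per call
    return a * b

def bundle(vectors):
    # Superposition: elementwise sum of a collection of vectors
    return np.sum(vectors, axis=0)

def similarity(a, b):
    # Normalized dot product (cosine); real part covers the complex backend
    return float(np.real(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cleanup(query, codebook):
    # Nearest codebook item by similarity -- O(N * D) for N stored items
    sims = [similarity(query, c) for c in codebook]
    return int(np.argmax(sims))
```

Note that binding is cheap per call in both backends, while cleanup scales with the codebook size, which is consistent with the review's claim that similarity and cleanup dominate end-to-end runtime.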
If this is right
- VSA system performance in spatial encoding depends primarily on similarity and cleanup efficiency rather than binding or encoding complexity.
- Memory requirements differ substantially between real-valued HRR and complex-valued FHRR, influencing hardware choices.
- Theoretical complexity analysis alone is insufficient for predicting practical VSA pipeline performance.
- Modular frameworks allow identification of bottlenecks that are invisible in isolated operator evaluations.
Where Pith is reading between the lines
- Similar modular analysis could uncover performance characteristics for other hyperdimensional representation methods in non-spatial tasks.
- The framework might support the design of adaptive VSA systems that switch between representations based on task demands.
- Integration with machine learning pipelines could benefit from these trade-off insights when using VSAs for compositional reasoning.
Load-bearing premise
That the chosen spatial-domain benchmarks and the modular operator breakdown accurately represent typical VSA system behavior in practice without significant hidden effects from specific implementations.
What would settle it
Measuring runtime breakdowns on a wider set of spatial tasks or with alternative implementations of the similarity and cleanup operators to verify if they consistently dominate and equalize the end-to-end times.
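The kind of measurement that would settle it can be sketched as a per-operator timing breakdown. The harness below is an assumption, not the paper's benchmark: the operator implementations, dimensionality, and codebook size are illustrative stand-ins.

```python
import time
import numpy as np

def profile(fn, repeats=50):
    """Median wall-clock seconds for one call of fn."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

rng = np.random.default_rng(0)
D, N = 4096, 1000  # vector dimension and cleanup-codebook size (illustrative)
a = rng.standard_normal(D)
b = rng.standard_normal(D)
codebook = rng.standard_normal((N, D))

breakdown = {
    # HRR binding: circular convolution via FFT, O(D log D) per call
    "bind": profile(lambda: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))),
    # Cleanup: one similarity score per codebook row, O(N * D) per query
    "cleanup": profile(lambda: int(np.argmax(codebook @ a))),
}
total = sum(breakdown.values())
for op, t in breakdown.items():
    print(f"{op:8s} {t * 1e6:9.1f} us  ({100 * t / total:.0f}% of measured time)")
```

Running such a breakdown across tasks, dimensionalities, and alternative cleanup implementations would show whether similarity/cleanup consistently dominate and equalize the HRR/FHRR end-to-end times.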
Original abstract
Vector Symbolic Architectures (VSAs) provide a well-defined algebraic framework for compositional representations in hyperdimensional spaces. We introduce HyperSpace, an open-source framework that decomposes VSA systems into modular operators for encoding, binding, bundling, similarity, cleanup, and regression. Using HyperSpace, we analyze and benchmark two representative VSA backends: Holographic Reduced Representations (HRR) and Fourier Holographic Reduced Representations (FHRR). Although FHRR provides lower theoretical complexity for individual operations, HyperSpaces modularity reveals that similarity and cleanup dominate runtime in spatial domains. As a result, HRR and FHRR exhibit comparable end-to-end performance. Differences in memory footprint introduce additional deployment trade-offs where HRR requires approximately half the memory of FHRR vectors. By enabling modular, system-level evaluation, HyperSpace reveals practical trade-offs in VSA pipelines that are not apparent from theoretical or operator-level comparisons alone.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces HyperSpace, an open-source framework that decomposes Vector Symbolic Architectures (VSAs) into modular operators for encoding, binding, bundling, similarity, cleanup, and regression. It applies this framework to benchmark Holographic Reduced Representations (HRR) and Fourier Holographic Reduced Representations (FHRR) in spatial domains. The key findings are that, although FHRR has lower theoretical complexity for individual operations, similarity and cleanup operations dominate the runtime, resulting in comparable end-to-end performance between HRR and FHRR. Additionally, HRR vectors require approximately half the memory of FHRR vectors, highlighting deployment trade-offs.
Significance. If the modular decomposition is shown to be accurate and free of significant hidden costs, the paper offers important practical guidance for VSA system design by shifting focus from per-operation complexity to full-pipeline performance and memory usage. The provision of an open-source framework is a strength that promotes transparency and further experimentation in the field of hyperdimensional computing.
major comments (2)
- [§5.1] §5.1 (Runtime Breakdown): The central claim that similarity and cleanup dominate runtime (leading to comparable HRR/FHRR end-to-end performance) depends on the framework's operator timings accurately partitioning total costs. The section provides no ablation study, overhead measurement, or direct comparison against standalone HRR/FHRR implementations to rule out framework-induced costs, data movement, or backend-specific effects (e.g., FFT vs. dot-product). This is load-bearing for the conclusion.
- [Table 2] Table 2 (Memory Footprint): The quantitative claim that HRR requires approximately half the memory of FHRR vectors is presented without specifying vector dimensionality, data types, or storage formats in the table or adjacent text, limiting assessment of the trade-off's generality and reproducibility.
minor comments (3)
- [Abstract] Abstract: 'HyperSpaces modularity' contains a typo and should read 'HyperSpace's modularity'.
- [§3] §3 (Framework Description): A diagram showing how the modular operators compose into an end-to-end spatial encoding pipeline would improve clarity.
- [Figures 4-6] Figures 4-6: Performance plots lack error bars, number of runs, or details on benchmark datasets and exclusion criteria, which would strengthen the empirical claims.
Simulated Author's Rebuttal
We thank the referee for their constructive comments on our manuscript introducing HyperSpace. We address the major concerns point by point below and outline the revisions we will make to improve the paper's rigor and clarity.
Point-by-point responses
Referee: [§5.1] §5.1 (Runtime Breakdown): The central claim that similarity and cleanup dominate runtime (leading to comparable HRR/FHRR end-to-end performance) depends on the framework's operator timings accurately partitioning total costs. The section provides no ablation study, overhead measurement, or direct comparison against standalone HRR/FHRR implementations to rule out framework-induced costs, data movement, or backend-specific effects (e.g., FFT vs. dot-product). This is load-bearing for the conclusion.
Authors: We recognize the importance of validating that the observed runtime dominance of similarity and cleanup operations is not an artifact of the HyperSpace framework. While the framework consists of lightweight Python wrappers around highly optimized numerical libraries, we agree that explicit measurements would bolster confidence in the results. In the revised version, we will add an ablation study in §5.1 that includes direct timing comparisons against standalone NumPy/SciPy implementations for key operations, as well as measurements of framework overhead and data movement costs. This will allow readers to verify that the comparable end-to-end performance between HRR and FHRR stems from the computational demands of similarity and cleanup rather than framework-specific effects. revision: yes
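The promised overhead measurement can be sketched as an A/B timing of the same kernel with and without a thin wrapper layer. This is a hypothetical harness: `SimilarityOperator` stands in for a framework operator class and is not HyperSpace's actual API.

```python
import time
import numpy as np

class SimilarityOperator:
    """Stand-in for a framework operator: a thin class wrapper around np.dot.
    (Hypothetical -- HyperSpace's actual operator classes are not shown here.)"""
    def __call__(self, a, b):
        return float(np.dot(a, b))

def time_loop(fn, a, b, n=10_000):
    # Total wall-clock time for n calls of fn(a, b)
    t0 = time.perf_counter()
    for _ in range(n):
        fn(a, b)
    return time.perf_counter() - t0

rng = np.random.default_rng(1)
a = rng.standard_normal(4096)
b = rng.standard_normal(4096)

t_raw = time_loop(lambda x, y: float(np.dot(x, y)), a, b)  # direct kernel call
t_wrapped = time_loop(SimilarityOperator(), a, b)          # same kernel behind a wrapper
print(f"relative wrapper overhead: {(t_wrapped - t_raw) / t_raw:+.1%}")
```

If the reported overhead stays within noise, the runtime-dominance conclusion cannot be attributed to the wrapper layer.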
Referee: [Table 2] Table 2 (Memory Footprint): The quantitative claim that HRR requires approximately half the memory of FHRR vectors is presented without specifying vector dimensionality, data types, or storage formats in the table or adjacent text, limiting assessment of the trade-off's generality and reproducibility.
Authors: We agree that the lack of specification limits the assessment of the memory trade-off. In the revised manuscript, we will update Table 2 and the text in §5.2 to specify that experiments used 10,000-dimensional vectors, with HRR employing 64-bit floating-point storage and FHRR using 128-bit complex floating-point storage. This results in HRR vectors requiring approximately half the memory of FHRR vectors. We will also include a discussion on the generality across dimensionalities and the use of standard array formats. revision: yes
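The factor-of-two claim follows directly from the element types stated in the response (64-bit real vs. 128-bit complex storage); a quick NumPy check at the stated dimensionality:

```python
import numpy as np

D = 10_000  # dimensionality stated in the rebuttal

hrr = np.zeros(D, dtype=np.float64)      # real HRR vector: 8 bytes per element
fhrr = np.zeros(D, dtype=np.complex128)  # complex FHRR vector: 16 bytes per element

print(hrr.nbytes, fhrr.nbytes)  # 80000 160000
assert fhrr.nbytes == 2 * hrr.nbytes
```

Because the ratio is fixed by the element types, it holds at any dimensionality for these storage formats.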
Circularity Check
No circularity; the performance claims come from new empirical benchmarks.
Full rationale
The paper introduces HyperSpace as a new modular framework for decomposing VSA operators and then runs fresh benchmarks on HRR and FHRR to measure the runtime dominance of similarity/cleanup, end-to-end comparability, and memory footprints. These results come from executing the introduced code on spatial-domain tasks; they are not produced by fitting parameters to a subset of the data and relabeling the fit as a prediction, by defining one quantity in terms of another, or by load-bearing self-citations whose prior results are themselves unverified. No equation or derivation in the provided text reduces the reported outcomes to the framework's own inputs by construction, and the claims remain externally falsifiable via independent re-implementation of the benchmarks.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: standard algebraic properties of vector symbolic architectures hold for the decomposed operators of encoding, binding, bundling, similarity, cleanup, and regression.
invented entities (1)
- HyperSpace framework (no independent evidence)
Reference graph
Works this paper leans on
- [1] Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C. Stewart, Daniel Rasmussen, Xuan Choo, Aaron Russell Voelker, and Chris Eliasmith. 2014. Nengo: a Python tool for building large-scale functional brain models. Frontiers in Neuroinformatics 7 (2014), 48.
- [2] Nicole Sandra-Yaffa Dumont. 2025. Symbols, Dynamics, and Maps: A Neurosymbolic Approach to Spatial Cognition. Ph.D. Dissertation. University of Waterloo.
- [3] E. Paxon Frady, Spencer J. Kent, Bruno A. Olshausen, and Friedrich T. Sommer. 2020. Resonator networks, 1: An efficient solution for factoring high-dimensional, distributed representations of data structures. Neural Computation 32, 12 (2020), 2311–2331.
- [4] E. Paxon Frady, Denis Kleyko, Christopher J. Kymn, Bruno A. Olshausen, and Friedrich T. Sommer. 2022. Computing on functions using randomized vector representations (in brief). In Proceedings of the 2022 Annual Neuro-Inspired Computational Elements Conference. 115–122.
- [5] P. Michael Furlong and Chris Eliasmith. 2024. Modelling neural probabilistic computation using vector symbolic architectures. Cognitive Neurodynamics 18, 6 (2024), 1–24.
- [6] Mike Heddes, Igor Nunes, Pere Vergés, Denis Kleyko, Danny Abraham, Tony Givargis, Alexandru Nicolau, and Alex Veidenbaum. 2023. Torchhd: An Open Source Python Library to Support Research on Hyperdimensional Computing and Vector Symbolic Architectures. Journal of Machine Learning Research 24, 255 (2023), 1–10. http://jmlr.org/papers/v24/23-0300.html
- [7] Mohsen Imani, Samuel Bosch, Sohum Datta, Sharadhi Ramakrishna, Sahand Salamat, Jan M. Rabaey, and Tajana Rosing. 2019. QuantHD: A quantization framework for hyperdimensional computing. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 39, 10 (2019), 2268–2278.
- [8] Mohsen Imani, Abbas Rahimi, Deqian Kong, Tajana Rosing, and Jan M. Rabaey. 2017. Exploring hyperdimensional associative memory. In 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 445–456.
- [9] Pentti Kanerva. 1994. The spatter code for encoding concepts at many levels. In ICANN'94: Proceedings of the International Conference on Artificial Neural Networks, Sorrento, Italy, 26–29 May 1994, Volume 1. Springer, 226–229.
- [10] Denis Kleyko, Dmitri A. Rachkovskij, Evgeny Osipov, and Abbas Rahimi. 2022. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. Comput. Surveys 55, 6 (2022). doi:10.1145/3538531
- [11] Brent Komer. 2020. Biologically Inspired Spatial Representation. Ph.D. Dissertation. University of Waterloo. http://hdl.handle.net/10012/16430
- [12] Yann LeCun, Corinna Cortes, and CJ Burges. 2010. MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2 (2010).
- [13] Yang Ni, Zhuowen Zou, Wenjun Huang, Hanning Chen, William Youngwoo Chung, Samuel Cho, Ranganath Krishnan, Pietro Mercati, and Mohsen Imani. 2025. HEAL: Brain-inspired hyperdimensional efficient active learning. IEEE Transactions on Artificial Intelligence (2025).
- [14] Igor Nunes, Mike Heddes, Tony Givargis, Alexandru Nicolau, and Alex Veidenbaum. 2022. GraphHD: Efficient graph classification using hyperdimensional computing. In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 1485–1490.
- [15] Garrick Orchard, Ajinkya Jayawant, Gregory K. Cohen, and Nitish Thakor. 2015. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades. Frontiers in Neuroscience 9 (2015). doi:10.3389/fnins.2015.00437
- [16] Tony Plate et al. 1991. Holographic Reduced Representations: Convolution Algebra for Compositional Distributed Representations. In IJCAI. 30–35.
- [17] Tony A. Plate. 2003. Holographic Reduced Representation: Distributed Representation for Cognitive Structures. Vol. 150. CSLI Publications, Stanford.
- [18] Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas Adler, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, et al. 2020. Hopfield networks is all you need. arXiv preprint arXiv:2008.02217 (2020).
- [19] Mohamed Reda, Ahmed Onsy, Amira Y. Haikal, and Ali Ghanbari. 2024. Path planning algorithms in the autonomous driving system: A comprehensive review. Robotics and Autonomous Systems 174 (2024), 104630.
- [20] Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, Bruno A. Olshausen, Yulia Sandamirskaya, Friedrich T. Sommer, and E. Paxon Frady. 2024. Neuromorphic visual scene understanding with resonator networks. Nature Machine Intelligence 6, 6 (2024), 641–652. doi:10.1038/s42256-024-00848-0
- [21] Shay Snyder, Andrew Capodieci, David Gorsich, and Maryam Parsa. 2026. Brain Inspired Probabilistic Occupancy Grid Mapping with Vector Symbolic Architectures. npj Unconventional Computing 3, 1 (2026), 13.
- [22]
- [23] Ye Tian, Rishikanth Chandrasekaran, Kazim Ergun, Xiaofan Yu, and Tajana Rosing. 2025. Federated Hyperdimensional Computing: Comprehensive Analysis and Robust Communication. ACM Trans. Internet Things 6, 3, Article 14 (May 2025), 30 pages. doi:10.1145/3724129
- [24] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Mo... 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17 (2020), 261–272. doi:10.1038/s41592-019-0686-2