Impact of leaky dynamics on predictive path integration accuracy in recurrent neural networks
Pith reviewed 2026-05-10 08:03 UTC · model grok-4.3
The pith
Adding a leak term to recurrent neural networks strengthens the emergence of regular hexagonal firing patterns and improves path integration accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Recurrent neural networks discretized from continuous attractor firing rate models and equipped with a leak term develop well-defined and highly regular hexagonal firing patterns. These patterns enable more accurate position estimates and reliable grid-cell-like representations compared with vanilla RNNs. Under identical noise conditions the leaky networks exhibit more stable dynamics and better-defined grid structures. The learned dynamics produce stable torus attractors with a clear central hole that supports robust and regular grid-like activity.
What carries the argument
The leak term added to the RNN update rule, which introduces multi-timescale dynamics and acts as a low-pass filter on network activity.
Load-bearing premise
The chosen leak term and discretization from continuous attractors correctly capture the relevant biological time scales without creating artifacts that appear only in the training simulations.
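One common route from continuous attractor dynamics to a leaky RNN, written out here under the assumption of a standard forward-Euler discretization (the paper's exact update rule is not reproduced in this review):

```latex
% Continuous firing-rate dynamics with time constant \tau:
\tau \frac{d\mathbf{r}}{dt} = -\mathbf{r} + f\left(W_{\mathrm{rec}}\,\mathbf{r} + W_{\mathrm{in}}\,\mathbf{v}\right)
% Forward-Euler step of size \Delta t, with leak \alpha = \Delta t / \tau:
\mathbf{r}_t = (1-\alpha)\,\mathbf{r}_{t-1} + \alpha\, f\left(W_{\mathrm{rec}}\,\mathbf{r}_{t-1} + W_{\mathrm{in}}\,\mathbf{v}_t\right)
```

Setting α = 1 (Δt = τ) recovers the vanilla RNN, so the premise is in principle testable by sweeping Δt/τ against the biological time scales it is meant to capture.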
What would settle it
Training identical networks with the leak term removed but every other condition held fixed, then checking whether hexagonal pattern regularity and position accuracy remain unchanged, would test whether the leak drives the reported improvements.
Original abstract
Experimental evidence indicates that intrinsic temporal dynamics operating across multiple time scales are closely associated with the emergence of periodic spatial activity of increasing complexity. However, how information encoded in grid-like firing patterns for path integration is processed across these intrinsic time scales remains unclear. To address this question, we introduce adaptive time scales through a leak term in recurrent neural networks (RNNs), forming leaky RNNs discretized from the continuous attractors of firing rate models. Our results demonstrate that leaky RNNs substantially enhance the emergence of well-defined and highly regular hexagonal firing patterns. Compared with vanilla RNNs lacking a leak term, the trained leaky RNNs produce more accurate position estimates while generating reliable grid-cell-like representations. Furthermore, under identical noise conditions, leaky RNNs consistently exhibit more stable dynamics and better-defined grid structures. The learned dynamics also give rise to stable torus attractors with a clear central hole, supporting robust and regular grid-like activity. Overall, the dynamic leak acts as a low-pass filtering mechanism that protects recurrent neural circuitry from noise, stabilizes network dynamics, and improves path-integration accuracy in recurrent neural networks.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces a leak term into recurrent neural networks (RNNs) for path integration, discretizing it from continuous attractor firing-rate models to create leaky RNNs with adaptive timescales. It claims that these leaky RNNs produce substantially more regular and well-defined hexagonal grid-like firing patterns, yield more accurate position estimates, exhibit greater stability under identical noise conditions, and form stable torus attractors with a clear central hole, with the leak functioning as a low-pass filter that protects dynamics from noise.
Significance. If the simulation results hold under standard controls, the work demonstrates that incorporating leaky dynamics can stabilize RNN attractors and improve path-integration performance while promoting grid-cell-like representations. This provides a concrete computational mechanism linking multi-timescale intrinsic dynamics to spatial coding and could inform both the design of robust sequential models and interpretations of biological grid-cell emergence.
minor comments (3)
- [Abstract] The abstract asserts clear performance gains (more accurate position estimates, better-defined grid structures) without reporting any quantitative metrics, error bars, or statistical comparisons; these should be added to the abstract and results sections for clarity.
- [Methods] The exact discretization of the leak term from the continuous attractor equations is not shown; providing the update rule and any associated parameters (e.g., leak coefficient value) would improve reproducibility.
- [Results] Figures illustrating firing patterns and torus attractors would benefit from explicit quantitative measures (gridness scores, attractor stability metrics) with direct comparisons to the vanilla RNN baseline.
Simulated Author's Rebuttal
We thank the referee for their positive summary and significance assessment of our work, as well as the recommendation for minor revision. The referee's description accurately captures our main claims regarding the benefits of leaky dynamics in RNNs for path integration, grid-cell-like representations, and noise robustness. Since no specific major comments were provided in the report, we will incorporate minor revisions to improve clarity, figures, and any minor presentation issues in the next version of the manuscript.
Circularity Check
No significant circularity in derivation chain
full rationale
The paper is an empirical simulation study that introduces a leak term (discretized from continuous attractor firing-rate models) into RNNs and trains both leaky and vanilla variants on path-integration tasks. Performance differences in grid-pattern regularity, position-estimate accuracy, and attractor stability are reported as direct outcomes of training under matched noise conditions. No equations or claims reduce a prediction to a fitted parameter by construction, no load-bearing self-citations appear, and the leak's low-pass filtering effect follows immediately from its explicit addition rather than from any tautological redefinition. The results remain falsifiable by altering the leak coefficient or training objective.
Axiom & Free-Parameter Ledger
free parameters (1)
- leak coefficient
axioms (1)
- domain assumption: Discretization of continuous attractor firing-rate models yields RNN dynamics whose stability and grid-forming properties are preserved under the leak term.
Reference graph
Works this paper leans on
- [1] Leaky Recurrent Neural Network: The recurrent network consists of N_r = 4096 units, each receiving a two-dimensional body velocity input V_t = (v_x^t, v_y^t) at time t. Each neuron's firing rate r_i^t in conventional recurrent neural networks is computed using an activation function. Continuous-time recurrent neural networks consist of model neurons governed by…
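The update sketched in this snippet can be illustrated in a few lines; a minimal sketch assuming the leaky form r_t = (1 − α) r_{t−1} + α f(·) discussed elsewhere in the review (α = 1 recovers the vanilla RNN). The tanh nonlinearity, weight scales, and the small unit count are placeholders, not the authors' exact choices.

```python
import numpy as np

def leaky_rnn_step(r_prev, v_t, W_rec, W_in, alpha=0.95, f=np.tanh):
    """One leaky-RNN update: a convex mix of the previous firing rate and
    the freshly computed recurrent drive. alpha = 1 is a vanilla RNN step."""
    drive = f(W_rec @ r_prev + W_in @ v_t)
    return (1.0 - alpha) * r_prev + alpha * drive

rng = np.random.default_rng(0)
n_units = 64  # toy size; the paper's network uses N_r = 4096
W_rec = rng.normal(0, 1 / np.sqrt(n_units), (n_units, n_units))
W_in = rng.normal(0, 0.1, (n_units, 2))  # two-dimensional velocity input
r = np.zeros(n_units)
for _ in range(10):
    r = leaky_rnn_step(r, np.array([0.3, -0.1]), W_rec, W_in)
print(r.shape)  # (64,)
```

Because the update is a convex combination and tanh is bounded, the rates stay in [−1, 1] regardless of α, which is one intuition for the stabilizing role of the leak.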
- [2] Read-out neurons predicting the corresponding position sequence: W^out_ij is the strength of the connection from the j-th to the i-th readout neuron. The readout matrix W^out was initialized using the Xavier uniform initialization scheme [42], with each element sampled from W^out_ij ~ U(−√(6/(N_r + N_p)), √(6/(N_r + N_p))), where N_p = 512 denotes the number of place cells in the p…
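The initialization quoted in this snippet is the standard Xavier/Glorot uniform scheme; a sketch using the snippet's N_r = 4096 and N_p = 512 (the matrix orientation is an assumption for illustration):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng):
    """Sample W_ij ~ U(-b, b) with b = sqrt(6 / (fan_in + fan_out))."""
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-bound, bound, size=(fan_out, fan_in))

rng = np.random.default_rng(0)
N_r, N_p = 4096, 512  # recurrent units -> place-cell readouts, as in the snippet
W_out = xavier_uniform(N_r, N_p, rng)
print(W_out.shape)  # (512, 4096)
```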
- [3] Trajectory generation process of the path integration task: We trained the leaky RNN on a path integration task within a supervised learning framework [17, 24, 43]. The network was provided with encoded initial positions X_0 and a sequence of velocity inputs V_T, where T represents the length of the trajectory sequences. The leaky RNN was trained to predict the corre…
- [4] Noisy input: To simulate imperfect sensory perception, we also introduce stochastic noise to corrupt the linear speed ‖V_t^b‖, and then recompute the corresponding noisy velocity vectors as Ṽ_t^b = [ṽ_x,b^t, ṽ_y,b^t] = [‖Ṽ_t^b‖ cos(ω_t), ‖Ṽ_t^b‖ sin(ω_t)] (Eq. 7), where ‖Ṽ_t^b‖ represents the noisy linear speed after perturbation. These noisy velocity sequences are then used as inputs to the leaky RNN, enabling the network to learn how to infer the true trajectories from the corrupted motion signals.
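Eq. (7) in this snippet can be sketched directly: perturb the linear speed, keep the heading ω_t, and rebuild the velocity components. The Gaussian speed noise used here is an assumption for illustration; the paper's exact noise model may differ.

```python
import numpy as np

def noisy_velocity(speed, heading, sigma, rng):
    """Corrupt the linear speed, then recompute the velocity vector
    as in Eq. (7): v_x = ||V|| cos(omega), v_y = ||V|| sin(omega)."""
    noisy_speed = np.maximum(speed + sigma * rng.normal(size=speed.shape), 0.0)
    return np.stack([noisy_speed * np.cos(heading),
                     noisy_speed * np.sin(heading)], axis=-1)

rng = np.random.default_rng(1)
speed = np.full(100, 0.25)                # constant true linear speed
heading = rng.uniform(0, 2 * np.pi, 100)  # headings omega_t
V = noisy_velocity(speed, heading, sigma=0.05, rng=rng)
print(V.shape)  # (100, 2)
```

Note that only the speed is corrupted; the heading is preserved, so the noise scales the velocity vector without rotating it.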
- [5] Learning rule of the loss function utilizing the cross-entropy function: The learning rules in this work consist of the cross-entropy function [47] and a regularization term with a small penalty L_w [29]. The regularization term is defined as L_w = λ‖W_rec‖², which helps prevent the model from overfitting to specific features and improves its generalization ca…
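The composite objective in this snippet (cross-entropy plus a weight penalty) is a standard pattern; a minimal sketch, assuming a softmax distribution over place-cell readouts as the target encoding, which the snippet does not spell out:

```python
import numpy as np

def loss_with_penalty(logits, target_probs, W_rec, lam=1e-4):
    """Cross-entropy over place-cell outputs plus L_w = lam * ||W_rec||^2."""
    log_p = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
    ce = -np.mean(np.sum(target_probs * log_p, axis=-1))
    L_w = lam * np.sum(W_rec ** 2)
    return ce + L_w

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 512))  # batch of 8, N_p = 512 readouts
targets = np.exp(rng.normal(size=(8, 512)))
targets /= targets.sum(axis=-1, keepdims=True)  # valid probability targets
W_rec = rng.normal(0, 0.01, (64, 64))
print(round(loss_with_penalty(logits, targets, W_rec), 3))
```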
- [6] Spatial Autocorrelogram (SAC) and Grid score (GS): Grid cell spatial firing rate map: rate maps were constructed for arena and track recordings by sorting the position data into partitioned bins. The grid cell rate maps (e.g., in Fig. 2(a)) were computed as follows. Simulated rats performed path integration tasks with arena size (AZ) 220 cm × 220 cm. To e…
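The rate-map construction in this snippet (bin the positions, then average each unit's activity per bin) can be sketched with numpy; the arena size follows the snippet's 220 cm × 220 cm description, while the bin count and the toy periodic rate are illustrative assumptions.

```python
import numpy as np

def rate_map(xs, ys, rates, arena=220.0, n_bins=20):
    """Average a unit's firing rate within each spatial bin of the arena."""
    edges = np.linspace(0.0, arena, n_bins + 1)
    ix = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(ys, edges) - 1, 0, n_bins - 1)
    total = np.zeros((n_bins, n_bins))
    count = np.zeros((n_bins, n_bins))
    np.add.at(total, (ix, iy), rates)  # unbuffered accumulation per bin
    np.add.at(count, (ix, iy), 1.0)
    with np.errstate(invalid="ignore"):
        return total / count           # NaN where a bin was never visited

rng = np.random.default_rng(0)
xs, ys = rng.uniform(0, 220, 5000), rng.uniform(0, 220, 5000)
rates = np.cos(2 * np.pi * xs / 55) + 1.0  # toy spatially periodic rate
rm = rate_map(xs, ys, rates)
print(rm.shape)  # (20, 20)
```

The spatial autocorrelogram and grid score are then computed from such a map; those steps are omitted here.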
- [7] Mean Squared Error (MSE): For a trajectory of length T, let X_t = (x_t, y_t) and X̂_t = (x̂_t, ŷ_t) for t = 1, …, T denote the ground-truth 2D position and the predicted position at time t, respectively. The step-wise position error is therefore defined as e_t = √((x̂_t − x_t)² + (ŷ_t − y_t)²) (Eq. 15). FIG. 2: Three representative firing patterns (top panels) are quan…
- [8] Persistent Homology: To better reveal stable toroidal attractors characterized by a clear central hole and robust, regular grid-like activity, we employed persistent homology, a topological data analysis framework that extracts geometric features across multiple spatial scales. Persistent homology characterizes the multiscale structure of data by tra…
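A self-contained way to convey the barcode idea in this snippet: 0-dimensional persistence of a point cloud via single-linkage merging. (Detecting the torus's central hole as in the paper requires H1 and a library such as ripser; this sketch only illustrates the birth/death bookkeeping.)

```python
import numpy as np
from itertools import combinations

def h0_barcode(points):
    """0-dim persistent homology of a Vietoris-Rips filtration: every point
    is born at scale 0; a connected component dies at the edge length that
    merges it into another. Returns the finite death times in merge order."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((float(np.linalg.norm(points[i] - points[j])), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            deaths.append(d)  # one component dies at this filtration scale
    return deaths             # n-1 finite bars; one component lives forever

# Two tight clusters: short bars for the intra-cluster merges,
# one long bar for the merge between clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print(h0_barcode(pts))  # [0.1, 0.1, 4.9]
```

The same birth/death logic, lifted to 1-cycles, is what distinguishes a torus (two persistent H1 classes) from a featureless cloud.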
- [9] The effect of introducing the leak term α on the emergence of grid-cell firing patterns in RNNs: To assess the impact of the leak parameter α on the emergence of representative firing patterns, we compare the activity patterns generated by leaky RNNs with those produced by vanilla RNNs (i.e., α = 1). The degree to which grid-like firing patterns emerge is quantitatively evaluated using the grid score and the mean squared error (MSE; Eq. …).
- [10] Comparison of integrated navigation paths generated by the vanilla RNN and the leaky RNN. FIG. 5: Illustration of the path integration task and training process, compared to the ground truth in the simulated environment, for the RNN and leaky RNN models with α = 0.95. (a) Testing results for a sequence length of T = 20. (b) Testing results for an extended …
- [11] Impact of noise on grid cell firing pattern formation in the leaky RNN: To better understand the impact of noise on grid cell firing pattern formation in the leaky RNN, we analyze how the mean grid score varies as a function of both the leak parameter α and the noise intensity. Specifically, we investigate the effects of two types of noise: Gaussian white noise …
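The two noise types named in this snippet differ in their temporal correlation; an Euler–Maruyama sketch of the Ornstein–Uhlenbeck process makes the contrast concrete (parameter values here are illustrative, not the paper's):

```python
import numpy as np

def ou_noise(n_steps, theta=0.5, sigma=0.2, dt=0.01, rng=None):
    """Ornstein-Uhlenbeck process: temporally correlated noise, unlike
    Gaussian white noise, which is independent at every step.
    dx = -theta * x * dt + sigma * sqrt(dt) * dW (Euler-Maruyama)."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = x[t-1] - theta * x[t-1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

rng = np.random.default_rng(0)
ou = ou_noise(10000, rng=rng)
white = 0.2 * np.sqrt(0.01) * rng.normal(size=10000)
# Lag-1 autocorrelation: near 1 for OU, near 0 for white noise.
print(np.corrcoef(ou[:-1], ou[1:])[0, 1],
      np.corrcoef(white[:-1], white[1:])[0, 1])
```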
- [12] Two-dimensional attractor dynamics underlies path integration in the vanilla RNN and leaky RNN. FIG. 10: Projections of the toroidal manifold onto the three principal axes, k_1 (blue rings), k_2 (orange rings) and k_3 (green rings), reveal three distinct rings in both the RNN (top plots) and leaky RNN (bottom plots), corresponding to positions along the 0°, 6…
- [13] The low-pass filtering effect of the leak term enhances spatial representations: The discrete-time implementation of a simple RC low-pass filter, y_t = (1 − α) y_{t−1} + α x_t, represents the simplest form of exponential smoothing, also known as the exponentially weighted moving average [62]. In this equation, α is the smoothing factor within the range 0 ≤ α ≤ 1. Whe…
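The exponential-smoothing identity in this snippet can be checked directly: the leak attenuates high-frequency noise while passing the slow component of the input. A minimal sketch:

```python
import numpy as np

def ewma(x, alpha):
    """y_t = (1 - alpha) * y_{t-1} + alpha * x_t, with y_0 = x_0."""
    y = np.empty_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = (1 - alpha) * y[t-1] + alpha * x[t]
    return y

rng = np.random.default_rng(0)
signal = np.ones(2000)                     # constant "true" drive
noisy = signal + rng.normal(0, 0.5, 2000)  # white noise on top
smoothed = ewma(noisy, alpha=0.1)
print(np.var(noisy - signal) > np.var(smoothed - signal))  # True: noise suppressed
```

For white noise, the steady-state residual variance is σ²·α/(2 − α), so a smaller α (a stronger leak) suppresses more noise at the cost of slower tracking, which is the trade-off the review's core claim turns on.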
- [14] A mechanistic interpretation of optimized leak constants linking internal dynamics and environmental structure: The use of RNNs without a leak term represents a simplified formulation that may limit both optimal performance and mechanistic insight into the underlying dynamical processes [35, 58, 59]. Previous studies have shown that three main approa…
- [15] Daniel Bush, Caswell Barry, and Neil Burgess. What do grid cells contribute to place cell firing? Trends in Neurosciences, 37(3):136–145, 2014.
- [16] Genela Morris and Dori Derdikman. The chicken and egg problem of grid cells and place cells. Trends in Cognitive Sciences, 27(2):125–138, 2023.
- [17] Colin PD Birch, Sander P Oom, and Jonathan A Beecham. Rectangular and hexagonal grids used for observation, experiment and simulation in ecology. Ecological Modelling, 206(3-4):347–359, 2007.
- [18] RG Rebecca, Giorgio A Ascoli, Nate M Sutton, and Holger Dannenberg. Spatial periodicity in grid cell firing is explained by a neural sequence code of 2-D trajectories. eLife, 13:RP96627, 2025.
- [19] Ben Sorscher, Gabriel Mel, Surya Ganguli, and Samuel Ocko. A unified theory for the origin of grid cells through the lens of pattern formation. Advances in Neural Information Processing Systems, 32, 2019.
- [20] Ling L Dong and Ila R Fiete. Grid cells in cognition: mechanisms and function. Annual Review of Neuroscience, 47, 2024.
- [21] Christian F Doeller, Caswell Barry, and Neil Burgess. Evidence for grid cells in a human memory network. Nature, 463(7281):657–661, 2010.
- [22] John O'Keefe. A review of the hippocampal place cells. Progress in Neurobiology, 13(4):419–439, 1979.
- [23] Jeffrey S Taube. The head direction signal: origins and sensory-motor integration. Annu. Rev. Neurosci., 30(1):181–207, 2007.
- [24] Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801–806, 2005.
- [25] Emilio Kropff, James E Carmichael, May-Britt Moser, and Edvard I Moser. Speed cells in the medial entorhinal cortex. Nature, 523(7561):419–424, 2015.
- [26] Trygve Solstad, Charlotte N Boccara, Emilio Kropff, May-Britt Moser, and Edvard I Moser. Representation of geometric borders in the entorhinal cortex. Science, 322(5909):1865–1868, 2008.
- [27] Steven Poulter, Tom Hartley, and Colin Lever. The neurobiology of mammalian navigation. Current Biology, 28(17):R1023–R1042, 2018.
- [28] Edvard I Moser, Emilio Kropff, and May-Britt Moser. Place cells, grid cells, and the brain's spatial representation system. Annu. Rev. Neurosci., 31(1):69–89, 2008.
- [29] David C Rowland, Yasser Roudi, May-Britt Moser, and Edvard I Moser. Ten years of grid cells. Annual Review of Neuroscience, 39(1):19–40, 2016.
- [30] Edvard I Moser, Yasser Roudi, Menno P Witter, Clifford Kentros, Tobias Bonhoeffer, and May-Britt Moser. Grid cells and cortical representation. Nature Reviews Neuroscience, 15(7):466–481, 2014.
- [31] Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, et al. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429–433, 2018.
- [32] Lisa M Giocomo, May-Britt Moser, and Edvard I Moser. Computational models of grid cells. Neuron, 71(4):589–603, 2011.
- [33] Yoram Burak and Ila R Fiete. Accurate path integration in continuous attractor network models of grid cells. PLoS Computational Biology, 5(2):e1000291, 2009.
- [34] Mark C Fuhs and David S Touretzky. A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience, 26(16):4266–4276, 2006.
- [35] Ben Sorscher, Gabriel Mel, Surya Ganguli, and Samuel Ocko. A unified theory for the origin of grid cells through the lens of pattern formation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
- [36] John O'Keefe and Neil Burgess. Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus, 15(7):853–866, 2005.
- [37] Bruce L McNaughton, Francesco P Battaglia, Ole Jensen, Edvard I Moser, and May-Britt Moser. Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience, 7(8):663–678, 2006.
- [38] Ruilan Gao, Changjian Jiang, and Yu Zhang. Recurrent spiking neural networks as models of the entorhinal–hippocampal system for path integration: Grid cells and beyond. Neurocomputing, 651:130814, 2025.
- [39] John Widloski and Ila R Fiete. A model of grid cell development through spatial exploration and spike time-dependent plasticity. Neuron, 83(2):481–495, 2014.
- [40] Samuel A Ocko, Kiah Hardcastle, Lisa M Giocomo, and Surya Ganguli. Emergent elasticity in the neural code for space. Proceedings of the National Academy of Sciences, 115(50):E11798–E11806, 2018.
- [41] Malcolm G Campbell, Samuel A Ocko, Caitlin S Mallory, Isabel IC Low, Surya Ganguli, and Lisa M Giocomo. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, 21(8):1096–1106, 2018.
- [42] Ingmar Kanitscheider and Ila Fiete. Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems. Advances in Neural Information Processing Systems, 30, 2017.
- [43] Ben Sorscher, Gabriel C Mel, Samuel A Ocko, Lisa M Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron, 111(1):121–137, 2023.
- [44] Rahul Dey and Fathi M Salem. Gate-variants of gated recurrent unit (GRU) neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pages 1597–1600. IEEE, 2017.
- [45] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
- [46] Fathi M Salem. Gated RNN: the gated recurrent unit (GRU) RNN. In Recurrent Neural Networks: From Simple to Gated Architectures, pages 85–100. Springer, 2021.
- [47] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
- [48] Zahra Monfared and Daniel Durstewitz. Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time. In International Conference on Machine Learning, pages 6999–7009. PMLR, 2020.
- [49] Silvan C Quax, Michele D'Asaro, and Marcel AJ van Gerven. Adaptive time scales in recurrent neural networks. Scientific Reports, 10(1):11360, 2020.
- [50] Xiao-Xiong Lin, Yuk-Hoi Yiu, and Christian Leibold. Emergence of spatial representation in an actor-critic agent with hippocampus-inspired sequence generator. In The Fourteenth International Conference on Learning Representations.
- [51] Herbert Jaeger. The "echo state" approach to analysing and training recurrent neural networks, with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13, 2001.
- [52] Herbert Jaeger, Mantas Lukoševičius, Dan Popovici, and Udo Siewert. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 20(3):335–352, 2007.
- [53] Jose A Fernandez-Leon and Luca Sarramone. The grid-cell normative model: Unifying 'principles'. BioSystems, 235:105091, 2024.
- [54] Yedidyah Dordek, Daniel Soudry, Ron Meir, and Dori Derdikman. Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis. eLife, 5:e10094, 2016.
- [55] Laura N Driscoll, Krishna Shenoy, and David Sussillo. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nature Neuroscience, 27(7):1349–1363, 2024.
- [56] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010.
- [57] Devendra Singh Chaplot, Emilio Parisotto, and Ruslan Salakhutdinov. Active neural localization. arXiv preprint arXiv:1801.08214, 2018.
- [58] Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu… 2018.
- [59] Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa M. Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron, 111(1):121–137.e13, January 2023.
- [60] Florian Raudies and Michael E Hasselmo. Modeling boundary vector cell firing given optic flow as a cue. PLoS Computational Biology, 8(6):e1002553, 2012.
- [61] Richard Connor, Alan Dearle, Ben Claydon, and Lucia Vadicamo. Correlations of cross-entropy loss in machine learning. Entropy, 26(6):491, 2024.
- [62] Heechul Jun, Allen Bramian, Shogo Soma, Takashi Saito, Takaomi C. Saido, and Kei M. Igarashi. Disrupted place cell remapping and impaired grid cells in a knockin model of Alzheimer's disease. Neuron, 107(6):1095–1112.e6, 2020.
- [63] Rosamund F Langston, James A Ainge, Jonathan J Couey, Cathrin B Canto, Tale L Bjerknes, Menno P Witter, Edvard I Moser, and May-Britt Moser. Development of the spatial representation system in the rat. Science, 328(5985):1576–1580, 2010.
- [64] José Antonio Pérez-Escobar, Olga Kornienko, Patrick Latuske, Laura Kohler, and Kevin Allen. Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex. eLife, 5:e16937, 2016.
- [65] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. 33(2):249–274.
- [66] Robert Ghrist. Barcodes: The persistent topology of data. 45(1):61–75.
- [67] H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 454–463.
- [68] Ippei Obayashi. Stable volumes for persistent homology. Journal of Applied and Computational Topology, 7(4):671–706, 2023.
- [69] Eric Berry, Yen-Chi Chen, Jessi Cisewski-Kehe, and Brittany Terese Fasy. Functional summaries of persistence diagrams. Journal of Applied and Computational Topology, 4(2):211–262, 2020.
- [70] Christopher Tralie, Nathaniel Saul, and Rann Bar-On. Ripser.py: A lean persistent homology library for Python. The Journal of Open Source Software, 3(29):925, September 2018.
- [71] Ulrich Bauer. Ripser: efficient computation of Vietoris–Rips persistence barcodes. J. Appl. Comput. Topol., 5(3):391–423, 2021.
- [72] Jun Tani. Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena. Oxford University Press, 2016.
- [73] Francesco Regazzoni, Stefano Pagani, Matteo Salvador, Luca Dede', and Alfio Quarteroni. Learning the intrinsic dynamics of spatio-temporal processes through latent dynamics networks. Nature Communications, 15(1):1834, 2024.
- [74] John Conklin and Chris Eliasmith. A controlled attractor network model of path integration in the rat. Journal of Computational Neuroscience, 18(2):183–203, 2005.
- [75] Samuel Ocko, Jack Lindsey, Surya Ganguli, and Stephane Deny. The emergence of multiple retinal cell types through efficient coding of natural movies. Advances in Neural Information Processing Systems, 31, 2018.
- [76] Amir Momeni, Matthew Pincus, Jenny Libien, et al. Introduction to Statistical Methods in Pathology. Springer, 2018.
- [77] Udaya B Rongala, Jonas MD Enander, Matthias Kohler, Gerald E Loeb, and Henrik Jörntell. A non-spiking neuron model with dynamic leak to avoid instability in recurrent networks. Frontiers in Computational Neuroscience, 15:656401, 2021.
- [78] Barak A Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5):1212–1228, 1995.
- [79] Jun Tani. Self-organization and compositionality in cognitive brains: A neurorobotics study. Proceedings of the IEEE, 102(4):586–605, 2014.
- [80] Federico Stella and Alessandro Treves. The self-organization of grid cells in 3D. eLife, 4:e05913, 2015.