pith. machine review for the scientific record.

arxiv: 2604.16547 · v1 · submitted 2026-04-17 · 💻 cs.NE · physics.bio-ph

Recognition: unknown

Impact of leaky dynamics on predictive path integration accuracy in recurrent neural networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 08:03 UTC · model grok-4.3

classification 💻 cs.NE physics.bio-ph
keywords leaky recurrent neural networks · path integration · grid cells · hexagonal firing patterns · attractor dynamics · noise robustness · low-pass filtering

The pith

Adding a leak term to recurrent neural networks improves the emergence of regular hexagonal firing patterns and path integration accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper adds a leak term to recurrent neural networks, creating adaptive time scales drawn from continuous attractor models. This produces more regular, better-defined hexagonal firing patterns that resemble grid-cell activity. The leaky networks also generate more accurate position estimates during path integration than networks without the leak, and they remain more stable under the same level of noise. The leak functions as a low-pass filter that shields the network from noise and supports consistent dynamics.

Core claim

Recurrent neural networks discretized from continuous attractor firing rate models and equipped with a leak term develop well-defined and highly regular hexagonal firing patterns. These patterns enable more accurate position estimates and reliable grid-cell-like representations compared with vanilla RNNs. Under identical noise conditions the leaky networks exhibit more stable dynamics and better-defined grid structures. The learned dynamics produce stable torus attractors with a clear central hole that supports robust and regular grid-like activity.

What carries the argument

The leak term added to the RNN update rule, which introduces multi-timescale dynamics and acts as a low-pass filter on network activity.
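
To make the carrier concrete, here is a minimal sketch of what such an update could look like, assuming the common exponential-smoothing form; `alpha`, the ReLU nonlinearity, and the weight shapes are illustrative assumptions, not the authors' confirmed settings.

```python
import numpy as np

def leaky_rnn_step(h, v, W_rec, W_in, b, alpha=0.95):
    """One update of a hypothetical leaky RNN.

    alpha = 1 recovers a vanilla RNN step; alpha < 1 blends the previous
    state into the new one, so the state is an exponentially weighted
    moving average of the recurrent drive: a low-pass filter.
    """
    drive = np.maximum(W_rec @ h + W_in @ v + b, 0.0)  # ReLU rate (assumed)
    return (1.0 - alpha) * h + alpha * drive

# Toy usage: integrate a short random velocity sequence.
rng = np.random.default_rng(0)
n = 64
h = np.zeros(n)
W_rec = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W_in = rng.normal(scale=0.1, size=(n, 2))
b = np.zeros(n)
for _ in range(20):
    v = rng.normal(scale=0.1, size=2)  # 2-D body velocity input
    h = leaky_rnn_step(h, v, W_rec, W_in, b, alpha=0.95)
```

Because high-frequency fluctuations in the drive are averaged out by the $(1-\alpha)$ blending, injected noise perturbs the state less per step, which is the low-pass intuition the review leans on.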

Load-bearing premise

The chosen leak term and discretization from continuous attractors correctly capture the relevant biological time scales without creating artifacts that appear only in the training simulations.
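
For context, the standard route from a continuous attractor rate model to such a leaky update is a forward-Euler discretization. The sketch below is that generic derivation, not the paper's verbatim equations; the nonlinearity $\phi$ and weight names are placeholders.

```latex
% Generic continuous-time firing-rate dynamics:
%   tau dh/dt = -h + phi(W_rec h + W_in v + b)
% Forward-Euler step of size Delta t, with alpha := Delta t / tau:
\[
  h_{t+1} = (1-\alpha)\,h_t
          + \alpha\,\phi\!\left(W_{\mathrm{rec}}\,h_t + W_{\mathrm{in}}\,v_t + b\right),
  \qquad \alpha = \frac{\Delta t}{\tau},\quad 0 \le \alpha \le 1 .
\]
```

Setting $\alpha = 1$ (i.e., $\Delta t = \tau$) collapses this to the vanilla RNN update, and the Euler step is only faithful to the continuous dynamics for $\Delta t \ll \tau$, which is precisely where the discretization artifacts named above could enter.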

What would settle it

Training identical networks with the leak term removed but every other condition held fixed, then checking whether hexagonal pattern regularity and position accuracy remain unchanged, would test whether the leak drives the reported improvements.
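
A minimal sketch of that ablation protocol; `train_network` and `evaluate` are hypothetical stand-ins for the paper's training pipeline and metrics, and only `alpha` differs between arms.

```python
# Hypothetical ablation: identical seeds, data, and hyperparameters,
# differing only in the leak coefficient (alpha = 1.0 removes the leak).
def run_ablation(train_network, evaluate, seeds=(0, 1, 2, 3, 4)):
    results = {"leaky": [], "vanilla": []}
    for seed in seeds:
        for arm, alpha in (("leaky", 0.95), ("vanilla", 1.0)):
            net = train_network(alpha=alpha, seed=seed)  # all else fixed
            grid_score, mse = evaluate(net)              # regularity, accuracy
            results[arm].append((grid_score, mse))
    return results  # e.g. paired comparison across seeds
```

If grid regularity and position error are statistically indistinguishable between arms, the leak is not the driver; if the leaky arm dominates across seeds, the core claim survives this test.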

Figures

Figures reproduced from arXiv: 2604.16547 by Kesheng Xu, Muhua Zheng, Yanlin Zhang, Yan Zhang.

Figure 1. A schematic diagram of the path integration task and …
Figure 2. Three representative firing patterns (top panels) are …
Figure 3. Statistical analysis of the grid score (a) and mean …
Figure 4. Comparison of the hexagonal firing patterns for grid …
Figure 5. Illustration of the path integration task and training …
Figure 7. Combined effects of the leak parameter …
Figure 10. Projections of the toroidal manifold onto the three …
Figure 12. Comparison of Betti barcodes of neural activity …
Figure 11. Persistence diagrams of the leaky RNN. Gray and …
Figure 13. The grid scores and mean squared errors (MSE) as …
read the original abstract

Experimental evidence indicates that intrinsic temporal dynamics operating across multiple time scales are closely associated with the emergence of periodic spatial activity of increasing complexity. However, how information encoded in grid-like firing patterns for path integration is processed across these intrinsic time scales remains unclear. To address this question, we introduce adaptive time scales through a leak term in recurrent neural networks (RNNs), forming leaky RNNs discretized from the continuous attractors of firing rate models. Our results demonstrate that leaky RNNs substantially enhance the emergence of well-defined and highly regular hexagonal firing patterns. Compared with vanilla RNNs lacking a leak term, the trained leaky RNNs produce more accurate position estimates while generating reliable grid-cell-like representations. Furthermore, under identical noise conditions, leaky RNNs consistently exhibit more stable dynamics and better-defined grid structures. The learned dynamics also give rise to stable torus attractors with a clear central hole, supporting robust and regular grid-like activity. Overall, the dynamic leak acts as a low-pass filtering mechanism that protects recurrent neural circuitry from noise, stabilizes network dynamics, and improves path-integration accuracy in recurrent neural networks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper introduces a leak term into recurrent neural networks (RNNs) for path integration, discretizing it from continuous attractor firing-rate models to create leaky RNNs with adaptive timescales. It claims that these leaky RNNs produce substantially more regular and well-defined hexagonal grid-like firing patterns, yield more accurate position estimates, exhibit greater stability under identical noise conditions, and form stable torus attractors with a clear central hole, with the leak functioning as a low-pass filter that protects dynamics from noise.

Significance. If the simulation results hold under standard controls, the work demonstrates that incorporating leaky dynamics can stabilize RNN attractors and improve path-integration performance while promoting grid-cell-like representations. This provides a concrete computational mechanism linking multi-timescale intrinsic dynamics to spatial coding and could inform both the design of robust sequential models and interpretations of biological grid-cell emergence.

minor comments (3)
  1. [Abstract] The abstract asserts clear performance gains (more accurate position estimates, better-defined grid structures) without reporting any quantitative metrics, error bars, or statistical comparisons; these should be added to the abstract and results sections for clarity.
  2. [Methods] The exact discretization of the leak term from the continuous attractor equations is not shown; providing the update rule and any associated parameters (e.g., leak coefficient value) would improve reproducibility.
  3. [Results] Figures illustrating firing patterns and torus attractors would benefit from explicit quantitative measures (gridness scores, attractor stability metrics) with direct comparisons to the vanilla RNN baseline; a sketch of one standard gridness measure follows this list.
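
On comment 3, one common gridness measure is the rotational grid score of the spatial autocorrelogram, sketched below under standard conventions; whether this matches the authors' exact scoring procedure is an assumption.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def grid_score(rate_map):
    """Rotational grid score of a 2-D firing-rate map.

    Hexagonal patterns make the spatial autocorrelogram (SAC) correlate
    highly with itself under 60/120-degree rotations and poorly under
    30/90/150-degree rotations; the score is the gap between the two.
    """
    sac = correlate2d(rate_map, rate_map, mode="full")

    def corr_at(angle):
        rot = rotate(sac, angle, reshape=False)
        return np.corrcoef(sac.ravel(), rot.ravel())[0, 1]

    on_peak = min(corr_at(60), corr_at(120))
    off_peak = max(corr_at(30), corr_at(90), corr_at(150))
    return on_peak - off_peak
```

Published implementations usually also mask the SAC to an annulus around the central peak before correlating; the sketch omits that step for brevity.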

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive summary and significance assessment of our work, as well as the recommendation for minor revision. The referee's description accurately captures our main claims regarding the benefits of leaky dynamics in RNNs for path integration, grid-cell-like representations, and noise robustness. Since no specific major comments were provided in the report, we will incorporate minor revisions to improve clarity, figures, and any minor presentation issues in the next version of the manuscript.

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper is an empirical simulation study that introduces a leak term (discretized from continuous attractor firing-rate models) into RNNs and trains both leaky and vanilla variants on path-integration tasks. Performance differences in grid-pattern regularity, position-estimate accuracy, and attractor stability are reported as direct outcomes of training under matched noise conditions. No equations or claims reduce a prediction to a fitted parameter by construction, no load-bearing self-citations appear, and the leak's low-pass filtering effect follows immediately from its explicit addition rather than from any tautological redefinition. The results remain falsifiable by altering the leak coefficient or training objective.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on the assumption that a single leak parameter can be chosen to represent biological multi-timescale dynamics and that the resulting discrete dynamics preserve the continuous attractor properties without additional fitting artifacts.

free parameters (1)
  • leak coefficient
    The leak term is introduced as an adaptive time-scale parameter whose specific value is presumably fitted or chosen during training to produce the reported grid patterns and accuracy gains (the mapping from the leak coefficient to an effective time constant is sketched after this ledger).
axioms (1)
  • domain assumption Discretization of continuous attractor firing-rate models yields RNN dynamics whose stability and grid-forming properties are preserved under the leak term.
    Invoked when the authors state that leaky RNNs are formed by discretizing continuous attractors.
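
One way to read the single leak coefficient as a time scale, assuming the standard exponential-smoothing update $y_t = (1-\alpha)y_{t-1} + \alpha x_t$ quoted in the reference graph below: the per-step decay factor $(1-\alpha)$ fixes an effective integration time constant.

```latex
% If the per-step decay is (1 - alpha) = exp(-Delta t / tau), then
\[
  \tau \;=\; -\,\frac{\Delta t}{\ln(1-\alpha)} \;\approx\; \frac{\Delta t}{\alpha}
  \quad \text{for small } \alpha,
\]
% so alpha -> 1 (the vanilla RNN) gives tau -> 0: no intrinsic memory
% beyond what the recurrent weights learn, while smaller alpha lengthens
% the window over which inputs are averaged.
```

Under this reading, a single $\alpha$ is a one-parameter stand-in for the multi-timescale intrinsic dynamics the ledger flags, which is exactly why the discretization axiom is load-bearing.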

pith-pipeline@v0.9.0 · 5493 in / 1134 out tokens · 26556 ms · 2026-05-10T08:03:28.773146+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

98 extracted references · 3 canonical work pages · 2 internal anchors

  1. [1]

    Leaky Recurrent Neural Network: The recurrent network consists of $N_r = 4096$ units, each receiving a two-dimensional body velocity input $V_t = (v^t_x, v^t_y)$ at time $t$. Each neuron's firing rate $r^t_i$ in conventional recurrent neural networks is computed using an activation function. Continuous-time recurrent neural networks consist of model neurons governed by…

  2. [2]

    Read-out neurons: Read-out neurons predict the corresponding position sequence; $W^{\mathrm{out}}_{ij}$ is the strength of the connection from the $j$-th to the $i$-th readout neuron. The readout matrix $W^{\mathrm{out}}$ was initialized using the Xavier uniform initialization scheme [42], with each element sampled from $W^{\mathrm{out}}_{ij} \sim U\!\left(-\sqrt{6/(N_r+N_p)},\ \sqrt{6/(N_r+N_p)}\right)$, where $N_p = 512$ denotes the number of place cells in the p…

  3. [3]

    Trajectory Generation Process of the Path Integration Task: We trained the leaky RNN on a path integration task within a supervised learning framework [17, 24, 43]. The network was provided with encoded initial positions $X_0$ and a sequence of velocity inputs $V_T$, where $T$ represents the length of the trajectory sequences. The leaky RNN was trained to predict the corre…

  4. [4]

    Noisy input: To simulate imperfect sensory perception, we also introduce stochastic noise to corrupt the linear speed $\|V^b_t\|$ and then recompute the corresponding noisy velocity vectors as $\tilde{V}^b_t = [\tilde{v}^t_{x,b}, \tilde{v}^t_{y,b}] = [\|\tilde{V}^b_t\|\cos(\omega_t), \|\tilde{V}^b_t\|\sin(\omega_t)]$ (Eq. 7), where $\|\tilde{V}^b_t\|$ represents the noisy linear speed after perturbation. These noisy velocity sequences are then used as inputs to the leaky RNN, enabling the network to learn how to infer the true trajectories from the corrupted motion signals.

  5. [5]

    Learning rule of the loss function utilizing the cross-entropy function: The learning rules in this work consist of the cross-entropy function [47] and a regularization term with a small penalty $L_w$ [29]. The regularization term is defined as $L_w = \lambda\|W^{\mathrm{rec}}\|^2$, which helps prevent the model from overfitting to specific features and improves its generalization capability.

  6. [6]

    Spatial Autocorrelogram (SAC) and Grid Score (GS): Rate maps were constructed for arena and track recordings by sorting the position data into partitioned bins. The grid cell rate maps (e.g., in Fig. 2(a)) were computed as follows. Simulated rats performed path integration tasks with arena size (AZ) 220 cm × 220 cm. To e…

  7. [7]

    Mean Squared Error (MSE): For a trajectory of length $T$, let $X_t = (x_t, y_t)$ and $\hat{X}_t = (\hat{x}_t, \hat{y}_t)$ for $t = 1, \ldots, T$ denote the ground-truth 2D position and the predicted position at time $t$, respectively. The step-wise position error is therefore defined as $e_t = \sqrt{(\hat{x}_t - x_t)^2 + (\hat{y}_t - y_t)^2}$ (Eq. 15).

  8. [8]

    Persistent Homology: To better reveal stable toroidal attractors characterized by a clear central hole and robust, regular grid-like activity, we employed persistent homology, a topological data analysis framework that extracts geometric features across multiple spatial scales. Persistent homology characterizes the multiscale structure of data by tra…

  9. [9]

    The effect of introducing the leak term $\alpha$ on the emergence of grid-cell firing patterns in RNNs: To assess the impact of the leak parameter $\alpha$ on the emergence of representative firing patterns, we compare the activity patterns generated by leaky RNNs with those produced by vanilla RNNs (i.e., $\alpha = 1$). The degree to which grid-like firing patterns emerge is quantitatively evaluated using the grid score and the mean squared error (MSE; Eq…

  10. [10]

    Comparison of Integrated Navigation Paths Generated by the Vanilla RNN and the Leaky RNN: Fig. 5 illustrates the path integration task and training process, compared to the ground truth in the simulated environment, for the RNN and leaky RNN models with $\alpha = 0.95$. (a) Testing results for a sequence length of $T = 20$. (b) Testing results for an extended …

  11. [11]

    Impact of noise on grid cell firing pattern formation in the leaky RNN: To better understand the impact of noise on grid cell firing pattern formation in the leaky RNN, we analyze how the mean grid score varies as a function of both the leak parameter $\alpha$ and noise intensity. Specifically, we investigate the effects of two types of noise: Gaussian white noise and OU noise.

  12. [12]

    Two-dimensional attractor dynamics underlies path integration in the vanilla RNN and leaky RNN: Fig. 10 shows projections of the toroidal manifold onto the three principal axes, $k_1$ (blue rings), $k_2$ (origin rings) and $k_3$ (green rings), revealing three distinct rings in both the RNN (top plots) and leaky RNN (bottom plots), corresponding to positions along the 0°, 6…

  13. [13]

    The low-pass filtering effect of the leak term enhances spatial representations: The discrete-time implementation of a simple RC low-pass filter, $y_t = (1-\alpha)y_{t-1} + \alpha x_t$, represents the simplest form of exponential smoothing, also known as the exponentially weighted moving average [62]. In this equation, $\alpha$ is the smoothing factor within the range $0 \le \alpha \le 1$. Whe…

  14. [14]

    A mechanistic interpretation of optimized leak constants linking internal dynamics and environmental structure: The use of RNNs without a leak term represents a simplified formulation that may limit both optimal performance and mechanistic insight into the underlying dynamical processes [35, 58, 59]. Previous studies have shown that three main approa…

  15. [15]

    Daniel Bush, Caswell Barry, and Neil Burgess. What do grid cells contribute to place cell firing? Trends in Neurosciences, 37(3):136–145, 2014.

  16. [16]

    Genela Morris and Dori Derdikman. The chicken and egg problem of grid cells and place cells. Trends in Cognitive Sciences, 27(2):125–138, 2023.

  17. [17]

    Colin PD Birch, Sander P Oom, and Jonathan A Beecham. Rectangular and hexagonal grids used for observation, experiment and simulation in ecology. Ecological Modelling, 206(3-4):347–359, 2007.

  18. [18]

    RG Rebecca, Giorgio A Ascoli, Nate M Sutton, and Holger Dannenberg. Spatial periodicity in grid cell firing is explained by a neural sequence code of 2-d trajectories. eLife, 13:RP96627, 2025.

  19. [19]

    Ben Sorscher, Gabriel Mel, Surya Ganguli, and Samuel Ocko. A unified theory for the origin of grid cells through the lens of pattern formation. Advances in Neural Information Processing Systems, 32, 2019.

  20. [20]

    Ling L Dong and Ila R Fiete. Grid cells in cognition: mechanisms and function. Annual Review of Neuroscience, 47, 2024.

  21. [21]

    Christian F Doeller, Caswell Barry, and Neil Burgess. Evidence for grid cells in a human memory network. Nature, 463(7281):657–661, 2010.

  22. [22]

    John O'Keefe. A review of the hippocampal place cells. Progress in Neurobiology, 13(4):419–439, 1979.

  23. [23]

    Jeffrey S Taube. The head direction signal: origins and sensory-motor integration. Annu. Rev. Neurosci., 30(1):181–207, 2007.

  24. [24]

    Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801–806, 2005.

  25. [25]

    Emilio Kropff, James E Carmichael, May-Britt Moser, and Edvard I Moser. Speed cells in the medial entorhinal cortex. Nature, 523(7561):419–424, 2015.

  26. [26]

    Trygve Solstad, Charlotte N Boccara, Emilio Kropff, May-Britt Moser, and Edvard I Moser. Representation of geometric borders in the entorhinal cortex. Science, 322(5909):1865–1868, 2008.

  27. [27]

    Steven Poulter, Tom Hartley, and Colin Lever. The neurobiology of mammalian navigation. Current Biology, 28(17):R1023–R1042, 2018.

  28. [28]

    Edvard I Moser, Emilio Kropff, and May-Britt Moser. Place cells, grid cells, and the brain's spatial representation system. Annu. Rev. Neurosci., 31(1):69–89, 2008.

  29. [29]

    David C Rowland, Yasser Roudi, May-Britt Moser, and Edvard I Moser. Ten years of grid cells. Annual Review of Neuroscience, 39(1):19–40, 2016.

  30. [30]

    Edvard I Moser, Yasser Roudi, Menno P Witter, Clifford Kentros, Tobias Bonhoeffer, and May-Britt Moser. Grid cells and cortical representation. Nature Reviews Neuroscience, 15(7):466–481, 2014.

  31. [31]

    Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, et al. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429–433, 2018.

  32. [32]

    Lisa M Giocomo, May-Britt Moser, and Edvard I Moser. Computational models of grid cells. Neuron, 71(4):589–603, 2011.

  33. [33]

    Yoram Burak and Ila R Fiete. Accurate path integration in continuous attractor network models of grid cells. PLoS Computational Biology, 5(2):e1000291, 2009.

  34. [34]

    Mark C Fuhs and David S Touretzky. A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience, 26(16):4266–4276, 2006.

  35. [35]

    Ben Sorscher, Gabriel Mel, Surya Ganguli, and Samuel Ocko. A unified theory for the origin of grid cells through the lens of pattern formation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

  36. [36]

    John O'Keefe and Neil Burgess. Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus, 15(7):853–866, 2005.

  37. [37]

    Bruce L McNaughton, Francesco P Battaglia, Ole Jensen, Edvard I Moser, and May-Britt Moser. Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience, 7(8):663–678, 2006.

  38. [38]

    Ruilan Gao, Changjian Jiang, and Yu Zhang. Recurrent spiking neural networks as models of the entorhinal–hippocampal system for path integration: Grid cells and beyond. Neurocomputing, 651:130814, 2025.

  39. [39]

    John Widloski and Ila R Fiete. A model of grid cell development through spatial exploration and spike time-dependent plasticity. Neuron, 83(2):481–495, 2014.

  40. [40]

    Samuel A Ocko, Kiah Hardcastle, Lisa M Giocomo, and Surya Ganguli. Emergent elasticity in the neural code for space. Proceedings of the National Academy of Sciences, 115(50):E11798–E11806, 2018.

  41. [41]

    Malcolm G Campbell, Samuel A Ocko, Caitlin S Mallory, Isabel IC Low, Surya Ganguli, and Lisa M Giocomo. Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nature Neuroscience, 21(8):1096–1106, 2018.

  42. [42]

    Ingmar Kanitscheider and Ila Fiete. Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems. Advances in Neural Information Processing Systems, 30, 2017.

  43. [43]

    Ben Sorscher, Gabriel C Mel, Samuel A Ocko, Lisa M Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron, 111(1):121–137, 2023.

  44. [44]

    Rahul Dey and Fathi M Salem. Gate-variants of gated recurrent unit (GRU) neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pages 1597–1600. IEEE, 2017.

  45. [45]

    Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

  46. [46]

    Fathi M Salem. Gated RNN: the gated recurrent unit (GRU) RNN. In Recurrent Neural Networks: From Simple to Gated Architectures, pages 85–100. Springer, 2021.

  47. [47]

    Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

  48. [48]

    Zahra Monfared and Daniel Durstewitz. Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time. In International Conference on Machine Learning, pages 6999–7009. PMLR, 2020.

  49. [49]

    Silvan C Quax, Michele D'Asaro, and Marcel AJ Van Gerven. Adaptive time scales in recurrent neural networks. Scientific Reports, 10(1):11360, 2020.

  50. [50]

    Xiao-Xiong Lin, Yuk-Hoi Yiu, and Christian Leibold. Emergence of spatial representation in an actor-critic agent with hippocampus-inspired sequence generator. In The Fourteenth International Conference on Learning Representations.

  51. [51]

    Herbert Jaeger. The "echo state" approach to analysing and training recurrent neural networks, with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13, 2001.

  52. [52]

    Herbert Jaeger, Mantas Lukoševičius, Dan Popovici, and Udo Siewert. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 20(3):335–352, 2007.

  53. [53]

    Jose A Fernandez-Leon and Luca Sarramone. The grid-cell normative model: Unifying 'principles'. BioSystems, 235:105091, 2024.

  54. [54]

    Yedidyah Dordek, Daniel Soudry, Ron Meir, and Dori Derdikman. Extracting grid cell characteristics from place cell inputs using non-negative principal component analysis. eLife, 5:e10094, 2016.

  55. [55]

    Laura N Driscoll, Krishna Shenoy, and David Sussillo. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nature Neuroscience, 27(7):1349–1363, 2024.

  56. [56]

    Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010.

  57. [57]

    Devendra Singh Chaplot, Emilio Parisotto, and Ruslan Salakhutdinov. Active neural localization. arXiv preprint arXiv:1801.08214, 2018.

  58. [58]

    Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu…

  59. [59]

    Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa M. Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. Neuron, 111(1):121–137.e13, January 2023.

  60. [60]

    Florian Raudies and Michael E Hasselmo. Modeling boundary vector cell firing given optic flow as a cue. PLoS Computational Biology, 8(6):e1002553, 2012.

  61. [61]

    Richard Connor, Alan Dearle, Ben Claydon, and Lucia Vadicamo. Correlations of cross-entropy loss in machine learning. Entropy, 26(6):491, 2024.

  62. [62]

    Heechul Jun, Allen Bramian, Shogo Soma, Takashi Saito, Takaomi C. Saido, and Kei M. Igarashi. Disrupted place cell remapping and impaired grid cells in a knockin model of Alzheimer's disease. Neuron, 107(6):1095–1112.e6, 2020.

  63. [63]

    Rosamund F Langston, James A Ainge, Jonathan J Couey, Cathrin B Canto, Tale L Bjerknes, Menno P Witter, Edvard I Moser, and May-Britt Moser. Development of the spatial representation system in the rat. Science, 328(5985):1576–1580, 2010.

  64. [64]

    José Antonio Pérez-Escobar, Olga Kornienko, Patrick Latuske, Laura Kohler, and Kevin Allen. Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex. eLife, 5:e16937, 2016.

  65. [65]

    Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. 33(2):249–274.

  66. [66]

    Robert Ghrist. Barcodes: The persistent topology of data. 45(1):61–75.

  67. [67]

    H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 454–463.

  68. [68]

    Ippei Obayashi. Stable volumes for persistent homology. Journal of Applied and Computational Topology, 7(4):671–706, 2023.

  69. [69]

    Eric Berry, Yen-Chi Chen, Jessi Cisewski-Kehe, and Brittany Terese Fasy. Functional summaries of persistence diagrams. Journal of Applied and Computational Topology, 4(2):211–262, 2020.

  70. [70]

    Christopher Tralie, Nathaniel Saul, and Rann Bar-On. Ripser.py: A lean persistent homology library for Python. The Journal of Open Source Software, 3(29):925, Sep 2018.

  71. [71]

    Ulrich Bauer. Ripser: efficient computation of Vietoris-Rips persistence barcodes. J. Appl. Comput. Topol., 5(3):391–423, 2021.

  72. [72]

    Jun Tani. Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena. Oxford University Press, 2016.

  73. [73]

    Francesco Regazzoni, Stefano Pagani, Matteo Salvador, Luca Dede', and Alfio Quarteroni. Learning the intrinsic dynamics of spatio-temporal processes through latent dynamics networks. Nature Communications, 15(1):1834, 2024.

  74. [74]

    John Conklin and Chris Eliasmith. A controlled attractor network model of path integration in the rat. Journal of Computational Neuroscience, 18(2):183–203, 2005.

  75. [75]

    Samuel Ocko, Jack Lindsey, Surya Ganguli, and Stephane Deny. The emergence of multiple retinal cell types through efficient coding of natural movies. Advances in Neural Information Processing Systems, 31, 2018.

  76. [76]

    Amir Momeni, Matthew Pincus, Jenny Libien, et al. Introduction to Statistical Methods in Pathology. Springer, 2018.

  77. [77]

    Udaya B Rongala, Jonas MD Enander, Matthias Kohler, Gerald E Loeb, and Henrik Jörntell. A non-spiking neuron model with dynamic leak to avoid instability in recurrent networks. Frontiers in Computational Neuroscience, 15:656401, 2021.

  78. [78]

    Barak A Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5):1212–1228, 1995.

  79. [79]

    Jun Tani. Self-organization and compositionality in cognitive brains: A neurorobotics study. Proceedings of the IEEE, 102(4):586–605, 2014.

  80. [80]

    Federico Stella and Alessandro Treves. The self-organization of grid cells in 3d. eLife, 4:e05913, 2015.

Showing first 80 references.