Consistent Geometric Deep Learning via Hilbert Bundles and Cellular Sheaves
Pith reviewed 2026-05-08 12:48 UTC · model grok-4.3
The pith
Sampling a manifold equipped with a Hilbert bundle induces a cellular sheaf whose Laplacian converges in probability to the connection Laplacian, enabling consistent discrete networks for infinite-dimensional signals.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We prove that the sheaf Laplacian of the Hilbert cellular sheaf induced by sampling converges in probability to the connection Laplacian of the Hilbert bundle as sampling density increases. This result generalizes the Belkin-Niyogi convergence of graph Laplacians to the infinite-dimensional bundle setting. We further prove that the discretized HilbNets converge to the continuous architectures and remain transferable across different samplings of the same bundle, yielding a consistent learning procedure for infinite-dimensional signals supported on manifolds.
What carries the argument
The Hilbert cellular sheaf: a graph whose vertices carry Hilbert spaces and whose edges carry coupling rules derived from the bundle, so that the discrete sheaf Laplacian approximates the continuous connection Laplacian.
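To make the object concrete, here is a minimal sketch, in the standard Hansen-Ghrist coboundary form L = δᵀδ, of how such a Laplacian is assembled once the fibers are truncated to a finite dimension d, as any implementation must do. The toy graph, the function names, and the random orthogonal couplings are our illustration, not the paper's code.

```python
import numpy as np

def sheaf_laplacian(n_nodes, d, edges, restrictions):
    """Assemble the sheaf Laplacian L = delta^T delta.

    edges: list of (u, v) pairs.
    restrictions: dict mapping (edge_index, node) -> (d x d) map sending
    the stalk at `node` into the edge stalk.
    """
    L = np.zeros((n_nodes * d, n_nodes * d))
    for e, (u, v) in enumerate(edges):
        Fu, Fv = restrictions[(e, u)], restrictions[(e, v)]
        # Each edge contributes a positive semidefinite block so that
        # x^T L x = sum_e || F_u x_u - F_v x_v ||^2.
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# Toy example: a triangle graph with 3-dimensional stalks and random
# orthogonal (transport-like) restriction maps.
rng = np.random.default_rng(0)
d, edges = 3, [(0, 1), (1, 2), (2, 0)]
restrictions = {}
for e, (u, v) in enumerate(edges):
    for node in (u, v):
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
        restrictions[(e, node)] = Q  # orthogonal, as parallel transport would be
L = sheaf_laplacian(3, d, edges, restrictions)
assert np.all(np.linalg.eigvalsh(L) > -1e-10)  # positive semidefinite
```

Orthogonal (in the complex case, unitary) couplings are the discrete stand-in for parallel transport; in the paper's setting d is infinite, so the truncation itself is part of what the consistency results must control.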
If this is right
- The discretized convolutional filters and neural networks converge to their continuous counterparts as sampling density increases (see the filter sketch after this list).
- The learned networks remain consistent when applied to new samplings of the same underlying bundle.
- Convolution becomes implementable on irregular domains for signals whose values at each point live in a Hilbert space.
- Geometric learning frameworks that rely on Laplacian operators can be lifted to the infinite-dimensional bundle case without losing consistency guarantees.
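A concrete reading of the first item: a discretized convolutional filter in this setting is, as in graph signal processing, a polynomial in the Laplacian, and a network layer composes such a filter with a pointwise nonlinearity. A minimal sketch, reusing the matrix L from the previous snippet; the coefficients and the choice of nonlinearity are placeholders, not the paper's architecture.

```python
import numpy as np

def sheaf_filter(L, x, coeffs):
    """Apply the polynomial spectral filter h(L) x = sum_k a_k L^k x."""
    out = np.zeros_like(x)
    p = x.copy()                 # p holds L^k x, starting at k = 0
    for a in coeffs:
        out += a * p
        p = L @ p
    return out

def hilbnet_layer(L, x, coeffs):
    """One discretized layer: polynomial filter, then a pointwise nonlinearity."""
    return np.tanh(sheaf_filter(L, x, coeffs))

# x stacks the (truncated) fiber value of the signal at every vertex;
# L is the sheaf Laplacian assembled in the previous sketch.
x = np.random.default_rng(1).standard_normal(L.shape[0])
y = hilbnet_layer(L, x, coeffs=[1.0, -0.5, 0.1])
```

The transferability claim then amounts to: the same coefficients, applied to the Laplacian of a different sampling of the same bundle, compute approximately the same operator.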
Where Pith is reading between the lines
- The same sampling construction might allow consistent discretizations for other differential operators on Hilbert bundles beyond the connection Laplacian.
- The transferability result suggests that models trained at one sampling density could be deployed directly on data collected at a different density without retraining.
- The framework could be tested on concrete infinite-dimensional data such as functional time series or distributions defined over point clouds.
Load-bearing premise
The sampling procedure on the manifold must produce edge-wise coupling rules in the induced Hilbert cellular sheaf that preserve the structure needed for Laplacian convergence in the infinite-dimensional setting.
What would settle it
A specific Hilbert bundle together with a sequence of increasingly dense samplings where the empirical sheaf Laplacian does not converge in probability to the connection Laplacian in the appropriate operator norm.
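The corresponding positive experiment is cheap to run in the scalar special case, where the bundle is trivial and the connection Laplacian reduces to the Laplace-Beltrami operator. Below is a minimal sanity check of Belkin-Niyogi-style convergence on the circle; the bandwidth schedule and the ratio test are our choices, and the paper's infinite-dimensional setting is not exercised here.

```python
import numpy as np

def circle_spectral_ratio(n, seed=0):
    """Ratio test for Laplacian convergence on the unit circle.

    Laplace-Beltrami eigenvalues on S^1 are 0, 1, 1, 4, 4, ..., so the
    ratio of the 4th to the 2nd smallest graph-Laplacian eigenvalue
    should drift toward 4, independently of kernel normalization.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    t = n ** (-1.0 / 3.0)              # one admissible shrinking bandwidth
    W = np.exp(-sq / (4.0 * t))        # Gaussian (heat-kernel style) weights
    L = np.diag(W.sum(axis=1)) - W     # unnormalized graph Laplacian
    eigs = np.linalg.eigvalsh(L)       # ascending
    return eigs[3] / eigs[1]

for n in (100, 400, 1600):
    print(n, circle_spectral_ratio(n))  # approaches 4.0 as n grows
```

A refutation of the paper's claim would need the bundle analogue of this test to fail for some Hilbert bundle under every admissible bandwidth schedule.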
Original abstract
Modern deep learning architectures increasingly contend with sophisticated signals that are natively infinite-dimensional, such as time series, probability distributions, or operators, and are defined over irregular domains. Yet, a unified learning theory for these settings has been lacking. To start addressing this gap, we introduce a novel convolutional learning framework for possibly infinite-dimensional signals supported on a manifold. Namely, we use the connection Laplacian associated with a Hilbert bundle as a convolutional operator, and we derive filters and neural networks, dubbed as HilbNets. We make HilbNets and, more generally, the convolution operation, implementable via a two-stage sampling procedure. First, we show that sampling the manifold induces a Hilbert Cellular Sheaf, a generalized graph structure with Hilbert feature spaces and edge-wise coupling rules, and we prove that its sheaf Laplacian converges in probability to the underlying connection Laplacian as the sampling density increases. Notably, this result is a generalization to the infinite-dimensional bundle setting of the Belkin & Niyogi [BELKIN20081289] convergence result for the graph Laplacian to the manifold Laplacian, a theoretical cornerstone of geometric learning methods. Second, we discretize the signals and prove that the discretized (implementable) HilbNets converge to the underlying continuous architectures and are transferable across different samplings of the same bundle, providing consistency for learning. Finally, we validate our framework on synthetic and real-world tasks. Overall, our results broaden the scope of geometric learning as a whole by lifting classical Laplacian-based frameworks to settings where the signal at each point lives in its own Hilbert space.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces HilbNets, a convolutional framework for infinite-dimensional signals supported on manifolds, based on the connection Laplacian of a Hilbert bundle. It claims that sampling the manifold induces a Hilbert cellular sheaf whose sheaf Laplacian converges in probability to the underlying connection Laplacian (generalizing the Belkin-Niyogi result to infinite-dimensional bundles), that discretized HilbNets converge to their continuous counterparts and are transferable across samplings, and that the framework is validated on synthetic and real-world tasks.
Significance. If the convergence and consistency results hold rigorously, the work would meaningfully extend geometric deep learning beyond finite-dimensional features to settings with signals in Hilbert spaces (e.g., distributions, operators, or time series), providing a unified theoretical foundation via bundles and sheaves. The explicit generalization of the Belkin-Niyogi theorem and the two-stage sampling/discretization procedure are strengths that could support consistency guarantees for learning on irregular domains.
major comments (3)
- The main convergence result (sheaf Laplacian to connection Laplacian in probability) is load-bearing for all subsequent claims on HilbNet consistency and transferability. The proof must establish convergence in the strong operator topology (or norm topology) uniformly over fibers; weak or pointwise convergence alone does not suffice to control the quadratic forms or ensure the discretized operators approximate the continuous bundle Laplacian when fibers are infinite-dimensional Hilbert spaces. The manuscript should explicitly state the topology used and any fiberwise compactness or trace-class assumptions on the connection forms that upgrade the convergence. The energies this must control are sketched after this list.
- The edge-wise coupling rules in the induced Hilbert cellular sheaf (parallel transport or connection operators between fibers) are central to preserving the structure needed for the Laplacian approximation. The paper must verify that these operators produce a kernel whose quadratic form converges to the bundle connection Laplacian without implicit finite-rank reductions; otherwise the generalization from the finite-dimensional Belkin-Niyogi setting fails for general Hilbert bundles.
- The discretization step and transferability proof rely on the sampling density increasing and the sheaf Laplacian convergence. Any gap in controlling the approximation error uniformly (e.g., via explicit rates or bounds that hold in infinite dimensions) would undermine the claim that discretized HilbNets are consistent and transferable across different samplings of the same bundle.
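To pin down what the first comment is asking for, compare the two Dirichlet energies involved; the notation is ours, a sketch rather than the paper's definitions:

```latex
% Discrete energy of the sampled sheaf, with coupling maps F_{u,e}
% sending vertex stalks into the edge stalk, versus the continuum
% energy of the bundle connection:
\[
  \mathcal{E}_n(x) = \sum_{e=(u,v)} w_e
    \left\| F_{u,e}\, x_u - F_{v,e}\, x_v \right\|_{\mathcal{H}}^2 ,
  \qquad
  \mathcal{E}(f) = \int_M \|\nabla f\|^2 \, d\mathrm{vol}
    = \langle f, \Delta_{\nabla} f \rangle .
\]
% In finite dimensions, pointwise convergence of L_n controls E_n; with
% infinite-dimensional fibers H it need not, which is why the topology
% (strong vs. weak) and trace-class-type assumptions must be explicit.
```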
minor comments (2)
- Notation for the Hilbert bundle, connection form, and sheaf Laplacian should be introduced with explicit definitions and compared to standard references (e.g., the precise relation to the Belkin-Niyogi graph Laplacian) to improve readability for readers outside bundle theory.
- The empirical validation section would benefit from clearer statements of the data exclusion rules, hyperparameter choices, and how the infinite-dimensional signals are approximated in practice (e.g., truncation of basis expansions).
Simulated Author's Rebuttal
We thank the referee for the careful and constructive review. The emphasis on rigorous convergence in infinite dimensions is appreciated, and we address each major comment below with clarifications and planned revisions.
Point-by-point responses
-
Referee: The main convergence result (sheaf Laplacian to connection Laplacian in probability) is load-bearing for all subsequent claims on HilbNet consistency and transferability. The proof must establish convergence in the strong operator topology (or norm topology) uniformly over fibers; weak or pointwise convergence alone does not suffice to control the quadratic forms or ensure the discretized operators approximate the continuous bundle Laplacian when fibers are infinite-dimensional Hilbert spaces. The manuscript should explicitly state the topology used and any fiberwise compactness or trace-class assumptions on the connection forms that upgrade the convergence.
Authors: We agree that the topology must be stated explicitly. Theorem 3.2 and its proof in Appendix A establish convergence in the strong operator topology, uniformly over fibers, under the assumption that the connection 1-forms are trace-class. This controls the quadratic forms in the infinite-dimensional setting. We will revise Section 3 to state the topology and trace-class assumption explicitly. revision: yes
-
Referee: The edge-wise coupling rules in the induced Hilbert cellular sheaf (parallel transport or connection operators between fibers) are central to preserving the structure needed for the Laplacian approximation. The paper must verify that these operators produce a kernel whose quadratic form converges to the bundle connection Laplacian without implicit finite-rank reductions; otherwise the generalization from the finite-dimensional Belkin-Niyogi setting fails for general Hilbert bundles.
Authors: The couplings are realized by parallel transport operators, which are unitary (hence isometries) between the infinite-dimensional Hilbert fibers. No finite-rank reduction is applied; the quadratic-form convergence follows from the sheaf Laplacian result in Theorem 3.2 while preserving full fiber dimension. We will add a clarifying remark in Section 2.3 confirming the generalization holds without rank reduction. revision: yes
-
Referee: The discretization step and transferability proof rely on the sampling density increasing and the sheaf Laplacian convergence. Any gap in controlling the approximation error uniformly (e.g., via explicit rates or bounds that hold in infinite dimensions) would undermine the claim that discretized HilbNets are consistent and transferable across different samplings of the same bundle.
Authors: Uniform error bounds for the discretization and transferability are derived in the proof of Theorem 4.3 (and Appendix B) under the trace-class assumption; these bounds hold in infinite dimensions and control consistency across samplings. Explicit rates are sampling-density dependent and manifold-specific. We will expand the discussion of these uniform bounds in Section 4. revision: partial
Circularity Check
No significant circularity; central convergence is an independent generalization of an external result
Full rationale
The paper's load-bearing claim is the probabilistic convergence of the induced sheaf Laplacian to the connection Laplacian on a Hilbert bundle, presented explicitly as a generalization of the external Belkin & Niyogi theorem (cited as [BELKIN20081289]). No equations reduce this convergence to a self-definition, a fitted parameter renamed as a prediction, or a chain of self-citations whose validity is assumed rather than independently established. The Hilbert cellular sheaf construction and discretization steps are derived from the sampling procedure and the external manifold Laplacian result without circular reduction; the framework is therefore anchored to external benchmarks rather than to itself.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: The connection Laplacian associated with a Hilbert bundle serves as a valid convolutional operator for infinite-dimensional signals supported on a manifold.
- ad hoc to paper: Sampling the manifold induces a Hilbert Cellular Sheaf whose Laplacian converges in probability to the bundle connection Laplacian.
invented entities (2)
- Hilbert Cellular Sheaf: no independent evidence
- HilbNets: no independent evidence
Reference graph
Works this paper leans on
- [1] Ossama Abdel-Hamid et al. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In 2012 International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012. doi: 10.1109/ICASSP.2012.6288864.
- [2] Manasvi Aggarwal and M Narasimha Murty. Machine Learning in Social Networks: Embedding Nodes, Edges, Communities, and Graphs. Springer Nature, 2020.
- [3] Antonio Ambrosetti and Giovanni Prodi. A Primer of Nonlinear Analysis. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, UK, 1995. ISBN 9780521454057.
- [4] Scott Axelrod, Steve della Pietra, and Edward Witten. Geometric quantization of Chern–Simons gauge theory. Journal of Differential Geometry, 33(3):787–902, May 1991. doi: 10.4310/jdg/1214446565.
- [5] Jacob Bamberger, Federico Barbero, Xiaowen Dong, and Michael M. Bronstein. Bundle neural networks for message diffusion on graphs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=scI9307PLG.
- [6] S. Barbarossa and S. Sardellitti. Topological signal processing over simplicial complexes. IEEE Trans. on Signal Processing, 68:2992–3007, 2020.
- [7] Federico Barbero et al. Sheaf neural networks with connection Laplacians, 2022. URL https://arxiv.org/abs/2206.08702.
- [8] Thomas Batard. Heat equations on vector bundles—application to color image regularization. Journal of Mathematical Imaging and Vision, 41(1-2):59–85, 2011. doi: 10.1007/s10851-011-0265-3.
- [9] Claudio Battiloro, Lucia Testa, Lorenzo Giusti, Stefania Sardellitti, Paolo Di Lorenzo, and Sergio Barbarossa. Generalized simplicial attention neural networks. IEEE Transactions on Signal and Information Processing over Networks, 10:833–850, 2024.
- [10] Claudio Battiloro, Zhiyang Wang, Hans Riess, Paolo Di Lorenzo, and Alejandro Ribeiro. Tangent bundle convolutional learning: from manifolds to cellular sheaves and back. IEEE Transactions on Signal Processing, 2024.
- [11] Claudio Battiloro et al. Tangent bundle filters and neural networks: From manifolds to cellular sheaves and back. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023.
- [12] M. Faisal Beg, Michael I. Miller, Alain Trouvé, and Laurent Younes. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. International Journal of Computer Vision, 61(2):139–157, 2005. doi: 10.1023/B:VISI.0000043755.93987.aa.
- [13] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in Neural Information Processing Systems, 14, 2001.
- [14] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008. doi: 10.1016/j.jcss.2007.08.006. URL https://www.sciencedirect.com/science/article/pii/S0022000007001274.
- [15] Nicole Berline, Ezra Getzler, and Michèle Vergne. Heat Kernels and Dirac Operators. Springer Berlin, Heidelberg, 1992. doi: 10.1007/978-3-642-58088-8.
- [16] C. Bodnar et al. Weisfeiler and Lehman go topological: Message passing simplicial networks. In ICLR 2021 Workshop on Geometrical and Topological Representation Learning, 2021.
- [17] C. Bodnar et al. Weisfeiler and Lehman go cellular: CW networks. In Advances in Neural Information Processing Systems, volume 34, pp. 2625–2640. Curran Associates, Inc., 2021.
- [18] Cristian Bodnar et al. Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in GNNs, 2022.
- [19] Boris Bonev, Thorsten Kurth, Christian Hundt, Jaideep Pathak, Maximilian Baust, Karthik Kashinath, and Anima Anandkumar. Spherical Fourier neural operators: Learning stable dynamics on the sphere. In International Conference on Machine Learning, pp. 2806–2823. PMLR, 2023.
- [20] Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. Matérn Gaussian processes on Riemannian manifolds. In Advances in Neural Information Processing Systems, volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/92bf5e6240737e0326ea59846a83e076-Abstract.html.
- [21] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.
- [22] Michael M Bronstein et al. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
- [23] J. Brüning and M. Lesch. Hilbert complexes. Journal of Functional Analysis, 108:88–132, 1992.
- [24] Alessia Caponera and Domenico Marinucci. Asymptotics for spherical functional autoregressions. The Annals of Statistics, 49(1):346–369, 2021. doi: 10.1214/20-AOS1959.
- [25] Élie Cartan. Differential Calculus. Princeton University Press, Princeton, NJ, 1967.
- [26] Gengxiang Chen, Xu Liu, Qinglu Meng, Lu Chen, Changqing Liu, and Yingguang Li. Learning neural operators on Riemannian manifolds. National Science Open, 3(6):20240001, 2024.
- [27] Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. Advances in Neural Information Processing Systems, 32, 2019.
- [28] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006. doi: 10.1016/j.acha.2006.04.006.
- [29] Justin Michael Curry. Sheaves, Cosheaves and Applications. PhD thesis, University of Pennsylvania, 2014.
- [30] Gabriele D’Acunto and Claudio Battiloro. The relativity of causal knowledge. In The 41st Conference on Uncertainty in Artificial Intelligence, 2025. URL https://openreview.net/forum?id=aS8mPNs5u5.
- [31] Xiongtao Dai and Hans-Georg Müller. Principal component analysis for functional data on Riemannian manifolds and spheres. The Annals of Statistics, 46(6B):3309–3338, 2018. doi: 10.1214/17-AOS1660.
- [32] Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, and Marin Soljačić. Equivariant contrastive learning. arXiv preprint arXiv:2111.00899, 2021.
- [33] Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, and Arnaud Doucet. Riemannian score-based generative modelling. Advances in Neural Information Processing Systems, 35:2406–2422, 2022.
- [34] Pim De Haan et al. Gauge equivariant mesh CNNs: Anisotropic convolutions on geometric graphs. arXiv preprint arXiv:2003.05425, 2020.
- [35] Leonardo Di Nino, Gabriele D’Acunto, Sergio Barbarossa, and Paolo Di Lorenzo. Learning the structure of connection graphs. arXiv preprint arXiv:2510.11245, 2025.
- [36] Manfredo P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications. Birkhäuser, Boston, 1992. ISBN 978-0817634902.
- [37] Iulia Duta, Giulia Cassarà, Fabrizio Silvestri, and Pietro Liò. Sheaf hypergraph networks. Advances in Neural Information Processing Systems, 36:12087–12099, 2023.
- [38] Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4):983–1049, 2016.
- [39] Stefano Fiorini, Hakan Aktas, Iulia Duta, Stefano Coniglio, Pietro Morerio, Alessio Del Bue, and Pietro Liò. Sheaves reloaded: A directional awakening. arXiv preprint arXiv:2506.02842, 2025.
- [40] Fernando Gama et al. Convolutional neural network architectures for signals supported on graphs. IEEE Transactions on Signal Processing, 67(4):1034–1049, 2018.
- [41] Fernando Gama et al. Graphs, convolutions, and neural networks: From graph filters to graph neural networks. IEEE Signal Processing Magazine, 37:128–138, 2020. doi: 10.1109/MSP.2020.3016143.
- [42] Robert Ghrist and Hans Riess. Cellular sheaves of lattices and the Tarski Laplacian. Homology, Homotopy and Applications, 24(1):325–345, 2022.
- [43] Lorenzo Giusti, Claudio Battiloro, et al. Cell attention networks. In 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2023.
- [44] Julian J. Gould. Cellular Sheaves of Hilbert Spaces. PhD thesis, University of Pennsylvania, 2025.
- [45] Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, and Benjamin Ricaud. A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs. IEEE Transactions on Signal Processing, 66(3):817–829, 2017.
- [46] Alexander Grigor’yan. Heat Kernel and Analysis on Manifolds, volume 47 of AMS/IP Studies in Advanced Mathematics. American Mathematical Society, Providence, RI, 2009.
- [47] Enrico Grimaldi, Mario Edoardo Pandolfo, Gabriele D’Acunto, Sergio Barbarossa, and Paolo Di Lorenzo. Learning network sheaves for AI-native semantic communication. arXiv preprint arXiv:2512.03248, 2025.
- [48] Alexandre Grothendieck. A General Theory of Fibre Spaces with Structure Sheaf. Number 4. University of Kansas, Department of Mathematics, 1955.
- [49] Tyler Hanks, Hans Riess, Samuel Cohen, Trevor Gross, Matthew Hale, and James Fairbanks. Distributed multi-agent coordination over cellular sheaves. arXiv preprint arXiv:2504.02049, 2025.
- [50] Jakob Hansen and Thomas Gebhart. Sheaf neural networks, 2020. URL https://arxiv.org/abs/2012.06333.
- [51] Jakob Hansen and Robert Ghrist. Toward a spectral theory of cellular sheaves. Journal of Applied and Computational Topology, 3(4):315–358, Dec 2019. doi: 10.1007/s41468-019-00038-7.
- [52] Jakob Hansen and Robert Ghrist. Learning sheaf Laplacians from smooth signals. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5446–5450, 2019. doi: 10.1109/ICASSP.2019.8683709.
- [53] Jakob Hansen and Robert Ghrist. Opinion dynamics on discourse sheaves. SIAM Journal on Applied Mathematics, 81(5):2033–2060, 2021. doi: 10.1137/20M1341088.
- [54] Edward J Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, and Nikolay Malkin. Amortizing intractable inference in large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=Ouj6p4ca60.
- [55] Hai Huang, Yann LeCun, and Randall Balestriero. Semantic tube prediction: Beating LLM data efficiency with JEPA, 2026. URL https://arxiv.org/abs/2602.22617.
- [56] Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Teh, and Marc Deisenroth. Vector-valued Gaussian processes on Riemannian manifolds via gauge independent projected kernels. Advances in Neural Information Processing Systems, 34:17160–17169, 2021.
- [57] Feng Ji, Yanan Zhao, See Hian Lee, Kai Zhao, Wee Peng Tay, and Jielong Yang. Graph distributional signals for regularization in graph neural networks. IEEE Transactions on Signal and Information Processing over Networks, 11:670–682, Jul 2025. doi: 10.1109/tsipn.2025.3587400.
- [58] Anran Jiao, Qile Yan, John Harlim, and Lu Lu. Solving forward and inverse PDE problems on unknown manifolds via physics-informed neural operators. arXiv preprint arXiv:2407.05477, 2024.
- [59] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proc. of the 5th International Conference on Learning Representations (ICLR), 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
- [60] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, volume 25, 2012.
- [61] Nicolaas H. Kuiper. The homotopy type of the unitary group of Hilbert space. Topology, 3(1):19–30, 1965. doi: 10.1016/0040-9383(65)90067-4.
- [62] Serge Lang. Differential and Riemannian Manifolds, volume 160 of Graduate Texts in Mathematics. Springer, New York, NY, 3rd edition, 1995. ISBN 978-0-387-94338-1. doi: 10.1007/978-1-4612-4182-9.
- [63] Yann LeCun et al. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, 1998.
- [64] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: Isoperimetry and Processes, volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3). Springer-Verlag, Berlin, 1991.
- [65] Jean Leray. L’anneau d’homologie d’une représentation. Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, 222:1366–1368, 1946.
- [66] Geert Leus, Antonio G Marques, José MF Moura, Antonio Ortega, and David I Shuman. Graph signal processing: History, development, impact, and outlook. IEEE Signal Processing Magazine, 40(4):49–60, 2023.
- [67] Ron Levie, Wei Huang, et al. Transferability of spectral graph convolutional neural networks. Journal of Machine Learning Research, 22(272):1–59, 2021.
- [68] Daiyuan Li, Shreya Arya, and Robert Ghrist. Learning from frustration: Torsor CNNs on graphs. In Proceedings of the Workshop on Symmetry and Geometry in Neural Representations at NeurIPS 2025, 2025. URL https://arxiv.org/abs/2510.23288. Workshop paper.
- [69] Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=SJiHXGWAZ.
- [70] Eardi Lila, John AD Aston, and Laura M Sangalli. Smooth principal component analysis over two-dimensional manifolds with an application to neuroimaging. 2016.
- [71] Hangchen Liu, Zheng Dong, Renhe Jiang, Jiewen Deng, Jinliang Deng, Quanjun Chen, and Xuan Song. Spatio-temporal adaptive embedding makes vanilla transformer SOTA for traffic forecasting. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM), 2023. doi: 10.1145/3583780.3615160.
- [72] Luigi Malagò, Luigi Montrucchio, and Giovanni Pistone. Wasserstein Riemannian geometry of Gaussian densities. Information Geometry, 1(2):137–179, 2018. doi: 10.1007/s41884-018-0014-4.
- [73] Ivan Marisca, Jacob Bamberger, Cesare Alippi, and Michael M. Bronstein. Over-squashing in spatiotemporal graph neural networks. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2026. URL https://openreview.net/forum?id=CVp0WCw4a1.
- [74] Peter Mostowsky, Vincent Dutordoir, Iskander Azangulov, Noémie Jaquier, Michael John Hutchinson, Aditya Ravuri, Leonel Rozo, Alexander Terenin, and Viacheslav Borovitskiy. The GeometricKernels package: Heat and Matérn kernels for geometric learning on manifolds, meshes, and graphs. arXiv preprint arXiv:2407.08086, 2024.
- [75] Christoph Müller and Christoph Wockel. Equivalences of smooth and continuous principal bundles with infinite-dimensional structure group. Advances in Geometry, 9(4):605–626, Sept 2009. doi: 10.1515/advgeom.2009.032.
- [76] Liviu I. Nicolaescu. Lectures on the Geometry of Manifolds. World Scientific, Singapore, 2nd edition, 2007. ISBN 9789812708533.
- [77] Mathilde Papillon, Guillermo Bernardez, Claudio Battiloro, and Nina Miolane. TopoTune: A framework for generalized combinatorial complex neural networks, 2025. URL https://openreview.net/forum?id=2MqyCIxLSi.
- [78] Yuhan Peng, Junwen Dong, Yuzhi Zeng, Hao Li, Ce Ju, Huitao Feng, Diaaeldin Taha, Anna Wienhard, and Kelin Xia. Sheaf neural networks on SPD manifolds: Second-order geometric representation learning. arXiv preprint arXiv:2604.20308, 2026.
- [79] Peter Petersen. Riemannian Geometry, volume 171 of Graduate Texts in Mathematics. Springer, New York, 2nd edition, 2006. ISBN 978-0-387-29403-2. doi: 10.1007/978-0-387-29403-2.
- [80] I. F. Pinelis. Inequalities for distributions of sums of independent random vectors and their application to estimating a density. Theory of Probability & Its Applications, 35(3):605–607,