No Triangulation Without Representation: Generalization in Topological Deep Learning
Pith reviewed 2026-05-08 12:33 UTC · model grok-4.3
The pith
Existing topological deep learning models saturate benchmarks only when given the right representation, and even then they fail to generalize beyond the combinatorial structure of the data.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Both graph neural networks and higher-order message passing methods can saturate the extended MANTRA benchmark when given appropriate representations and feature assignments, yet they show no capacity to generalize beyond the combinatorial structure of the data when evaluated under triangulation refinement and representational diversity. This demonstrates that existing models capture discrete, scale-dependent properties of the triangulations instead of the underlying homeomorphism type or topological structure independent of scale.
What carries the argument
The protocol of representational diversity plus triangulation refinement, which preserves topological type while altering discrete realizations to test whether models learn structure independent of combinatorics.
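The refinement half of this protocol can be mimicked with the abstract barycentric subdivision, which multiplies the simplex count while leaving the homeomorphism type, and hence invariants such as the Euler characteristic, unchanged. A minimal illustrative sketch (not the authors' implementation):

```python
from itertools import combinations

def euler_characteristic(triangles):
    """chi = V - E + F for a pure 2-dimensional simplicial complex."""
    tris = {frozenset(t) for t in triangles}
    edges = {frozenset(e) for t in tris for e in combinations(t, 2)}
    verts = {v for t in tris for v in t}
    return len(verts) - len(edges) + len(tris)

def barycentric_subdivision(triangles):
    """Abstract barycentric subdivision of a 2-complex: new vertices
    are the faces of the old complex, new triangles are the chains
    vertex < edge < triangle. The topology is preserved while the
    triangle count grows sixfold."""
    new = []
    for t in triangles:
        for e in combinations(t, 2):   # edges of the triangle
            for v in e:                # vertices of the edge
                new.append((frozenset([v]), frozenset(e), frozenset(t)))
    return new

# Boundary of the tetrahedron: the smallest triangulation of the 2-sphere.
sphere = [frozenset(t) for t in combinations(range(4), 3)]
refined = barycentric_subdivision(sphere)

assert len(refined) == 6 * len(sphere)     # 4 -> 24 triangles
assert euler_characteristic(sphere) == 2   # chi(S^2) = 2
assert euler_characteristic(refined) == 2  # invariant under refinement
```

A model that has learned the homeomorphism type should be indifferent to this operation; a model that has learned combinatorial patterns at the original scale should not.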
If this is right
- Graph neural networks and higher-order message passing methods can reach benchmark saturation once representations and features are chosen appropriately.
- Performance collapses under triangulation refinement, showing dependence on the original combinatorial scale.
- No current models exhibit generalization to topological invariants independent of discrete structure.
- New inductive biases that operate directly on topological properties are required to close the identified gap.
Where Pith is reading between the lines
- Future architectures may need explicit mechanisms for computing or preserving topological invariants such as homology across different discretizations.
- The same representational sensitivity could affect generalization in other domains that use higher-order or simplicial data.
- Systematic tests across a broader set of homeomorphism types would help quantify how much current models rely on scale versus topology.
Load-bearing premise
That the chosen representations, feature assignments, and the protocol of representational diversity with triangulation refinement are sufficient to separate generalization to topological structure from learning specific combinatorial details.
What would settle it
A model that maintains high accuracy on refined triangulations of the same manifold (different combinatorics, same homeomorphism type) after training on the original set would demonstrate generalization to topological structure beyond combinatorics.
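Such a test requires moves that change the combinatorics without changing the topology. One standard example is the 2-2 bistellar (Pachner) flip on a closed surface triangulation; the sketch below is illustrative and not the paper's code:

```python
from itertools import combinations

def euler_characteristic(triangles):
    """chi = V - E + F for a pure 2-dimensional simplicial complex."""
    tris = {frozenset(t) for t in triangles}
    edges = {frozenset(e) for t in tris for e in combinations(t, 2)}
    verts = {v for t in tris for v in t}
    return len(verts) - len(edges) + len(tris)

def pachner_flip(triangles, edge):
    """2-2 bistellar flip: replace the two triangles sharing `edge`
    by the two triangles on the opposite diagonal. On a closed surface
    this changes the combinatorics but not the PL-homeomorphism type
    (Pachner, 1991)."""
    a, b = edge
    tris = [frozenset(t) for t in triangles]
    adj = [t for t in tris if a in t and b in t]
    assert len(adj) == 2, "edge must lie in exactly two triangles"
    c, = adj[0] - {a, b}   # opposite vertex in the first triangle
    d, = adj[1] - {a, b}   # opposite vertex in the second triangle
    all_edges = {frozenset(e) for t in tris for e in combinations(t, 2)}
    assert frozenset({c, d}) not in all_edges, "flip would double an edge"
    return [t for t in tris if t not in adj] + \
           [frozenset({a, c, d}), frozenset({b, c, d})]

# Octahedral triangulation of the 2-sphere.
octa = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
        (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]
flipped = pachner_flip(octa, (1, 2))

assert {frozenset(t) for t in flipped} != {frozenset(t) for t in octa}
assert euler_characteristic(octa) == euler_characteristic(flipped) == 2
```

A model that generalizes topologically should assign the same label before and after any sequence of such flips.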
Original abstract
Despite an ever-increasing interest in topological deep learning models that target higher-order datasets, there is no consensus on how to evaluate such models. This is exacerbated by the fact that topological objects permit operations, such as structural refinements, that are not appropriate for graph data. In this work, we extend MANTRA, a benchmark dataset containing manifold triangulations, to a larger class of manifolds with more diverse homeomorphism types. We show that, unlike prior claims, both graph neural networks (GNNs) and higher-order message passing (HOMP) methods can saturate the benchmark. However, we find that this is contingent on the right representation and feature assignment, emphasizing their importance in baseline models. We thus provide a novel evaluation protocol based on representational diversity and triangulation refinement. Surprisingly, we find no indication that existing models are capable of generalizing beyond the combinatorial structure of the data. This points towards a research gap in developing models that understand topological structure independent of scale. Our work thus provides the necessary scaffolding to evaluate future models and enable the development of topology-aware inductive biases.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript extends the MANTRA benchmark to a larger class of manifolds with diverse homeomorphism types. It reports that both GNNs and higher-order message passing methods can saturate the benchmark when using appropriate representations and feature assignments. Using a new protocol based on representational diversity and triangulation refinement, the authors claim that existing models show no ability to generalize beyond the combinatorial structure of the data, identifying a research gap for topology-aware inductive biases.
Significance. If the empirical findings and protocol hold, the work supplies useful scaffolding for evaluating topological deep learning models and underscores the distinction between combinatorial and topological generalization. The emphasis on representation choice and the introduction of refinement-based testing are constructive contributions that could help standardize future benchmarks.
Major comments (2)
- [Evaluation protocol and results sections] The central claim that 'existing models are incapable of generalizing beyond the combinatorial structure' (abstract) rests on the new protocol successfully isolating topological invariants from combinatorial scale. No ablations are presented to demonstrate that performance degradation on refined triangulations arises from missing topological inductive biases rather than scale sensitivity, simplex-count proxies, or refinement-induced dataset artifacts. This is load-bearing for the generalization conclusion.
- [Benchmark extension and saturation experiments] The saturation result for GNNs and HOMP is stated to be 'contingent on the right representation and feature assignment,' yet the manuscript provides no systematic comparison or controls quantifying how different feature assignments affect saturation versus the baseline MANTRA protocol. This weakens the contrast drawn with prior claims.
Minor comments (3)
- [Introduction] The abstract and introduction would benefit from an explicit statement of the new manifolds added to MANTRA and their homeomorphism types, ideally in a table.
- [Figures] Figure captions should include the key numerical takeaway (e.g., accuracy drop on refined vs. original triangulations) rather than only describing the plot.
- [Methods] Notation for triangulation refinement steps and representational diversity metrics is introduced without a dedicated definitions subsection, making the protocol harder to reproduce.
Simulated Author's Rebuttal
We thank the referee for their thoughtful and constructive review of our manuscript. The comments highlight important aspects of our evaluation protocol and experimental design that we have addressed in the revision. We respond to each major comment below.
Point-by-point responses
Referee: [Evaluation protocol and results sections] The central claim that 'existing models are incapable of generalizing beyond the combinatorial structure' (abstract) rests on the new protocol successfully isolating topological invariants from combinatorial scale. No ablations are presented to demonstrate that performance degradation on refined triangulations arises from missing topological inductive biases rather than scale sensitivity, simplex-count proxies, or refinement-induced dataset artifacts. This is load-bearing for the generalization conclusion.
Authors: We agree that the load-bearing nature of this claim requires explicit controls to rule out confounds. The refinement protocol preserves the homeomorphism type while changing the triangulation (combinatorial structure) and simplex count. In the revised manuscript we add three targeted ablations in a new subsection of the Evaluation Protocol: (i) matched-simplex-count comparisons between original and refined triangulations of the same manifold, (ii) controlled scaling experiments that increase simplex count without altering topology, and (iii) variance analysis across multiple independent refinement strategies to check for dataset artifacts. These results show that performance degradation persists even when simplex count is controlled, supporting that the models rely on specific combinatorial patterns rather than topological invariants. The Results and Discussion sections have been updated to present these controls and their implications. revision: yes
Referee: [Benchmark extension and saturation experiments] The saturation result for GNNs and HOMP is stated to be 'contingent on the right representation and feature assignment,' yet the manuscript provides no systematic comparison or controls quantifying how different feature assignments affect saturation versus the baseline MANTRA protocol. This weakens the contrast drawn with prior claims.
Authors: We acknowledge that a more systematic quantification would strengthen the contrast with prior work. The revised manuscript adds a dedicated subsection 'Impact of Representation and Feature Assignment' that reports a controlled comparison across feature types (constant, random, geometric, and learned embeddings) on both the original MANTRA and the extended benchmark. We quantify saturation thresholds, the minimal representational complexity needed to reach near-perfect accuracy, and performance curves relative to the baseline protocol. These experiments demonstrate that saturation is indeed highly sensitive to representation choice and that the extended benchmark reveals this dependence more clearly than the original MANTRA. The text now explicitly contrasts these findings with earlier claims. revision: yes
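The sensitivity to feature assignment is easy to illustrate on the 1-skeleton of a triangulation: constant features are blind to the combinatorics, while degree features already separate different triangulations of the same manifold. A toy sketch, with feature kinds that loosely mirror those named in the response (the readout and names are illustrative, not the paper's setup):

```python
from collections import Counter
from itertools import combinations

def vertex_features(triangles, kind="constant"):
    """Toy feature assignments on the 1-skeleton of a triangulation:
    'constant' carries no combinatorial signal, 'degree' exposes local
    structure. Illustrative stand-ins, not the paper's assignments."""
    edges = {frozenset(e) for t in triangles for e in combinations(t, 2)}
    deg = Counter(v for e in edges for v in e)
    if kind == "constant":
        return {v: 1.0 for v in deg}
    if kind == "degree":
        return {v: float(d) for v, d in deg.items()}
    raise ValueError(f"unknown feature kind: {kind}")

def readout(features):
    """Permutation-invariant mean readout, standing in for a model."""
    return sum(features.values()) / len(features)

# Two triangulations of the same manifold (the 2-sphere):
# tetrahedron boundary vs. octahedron boundary.
tetra = [frozenset(t) for t in combinations(range(4), 3)]
octa = [frozenset(t) for t in
        [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
         (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]]

# Constant features make the two triangulations indistinguishable ...
assert readout(vertex_features(tetra, "constant")) == \
       readout(vertex_features(octa, "constant")) == 1.0
# ... while degree features separate them (K4 skeleton vs. 4-regular).
assert readout(vertex_features(tetra, "degree")) == 3.0
assert readout(vertex_features(octa, "degree")) == 4.0
```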
Circularity Check
No significant circularity; claims rest on empirical benchmark evaluations
Full rationale
The paper extends the external MANTRA benchmark with additional manifolds and homeomorphism types, then reports model performance (GNNs and HOMP) under varied representations, feature assignments, and triangulation refinements. The central finding—that existing models saturate the benchmark only with appropriate representations but show no generalization beyond combinatorial structure—is presented as an empirical observation from these controlled experiments. No mathematical derivation, first-principles prediction, or fitted parameter is defined in terms of the target result. The evaluation protocol is introduced as a methodological contribution rather than a self-referential fit. No load-bearing self-citations or ansatz smuggling appear in the abstract or described chain; the work is self-contained against the extended external benchmark.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Manifolds admit triangulations whose combinatorial structure can be varied (e.g., by refinement) while the underlying homeomorphism type is preserved.