pith. machine review for the scientific record.

arxiv: 2603.08965 · v2 · submitted 2026-03-09 · 💻 cs.LG · cs.AI

Recognition: 2 theorem links · Lean Theorem

Semantic Level of Detail for Knowledge Graphs: Discovering Abstraction Boundaries via Spectral Heat Diffusion

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 13:58 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords semantic level of detail · knowledge graphs · heat kernel diffusion · spectral gaps · abstraction boundaries · Poincaré embedding · graph Laplacian · hierarchical clustering

The pith

Heat diffusion on Poincaré-embedded knowledge graphs automatically detects semantic abstraction boundaries via spectral gaps.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Semantic Level of Detail as a continuous zoom operator for hierarchical knowledge graphs, replacing manually tuned community detection with heat kernel diffusion on a graph Laplacian built from Poincaré-ball embeddings. Spectral gaps in the diffusion process mark scales where representations undergo qualitative transitions, providing emergent boundaries that align with true hierarchy levels. This matters for GraphRAG and similar systems because it supplies a parameter-free way to navigate abstraction without discrete resolution choices like Leiden gamma. Experiments show the approach recovers planted levels in synthetic hierarchies with high accuracy and matches taxonomic depths in the full WordNet noun graph.

Core claim

SLoD defines a continuous zoom operator via heat kernel diffusion on the Laplacian of a kNN graph induced by Poincaré-ball embedding. In the exact tree limit with Sarkar embedding the operator preserves hierarchical coherence with bounded approximation error. Spectral gaps then induce emergent scale boundaries detectable without manual resolution tuning, and clustering at those scales recovers true levels both in noisy HSBM synthetics and in the 82K-synset WordNet noun hierarchy.

What carries the argument

The heat kernel diffusion operator on the graph Laplacian induced by the Poincaré-ball embedding, which propagates information across scales and exposes qualitative transitions through spectral gaps.
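The operator itself is compact enough to sketch. The following is a hedged toy illustration, not the paper's code: a 6-node graph made of two weakly bridged cliques, its combinatorial Laplacian, the spectral gap ratio 𝜆3/𝜆2, and the heat kernel exp(−𝜎L) evaluated at the Fiedler scale. The graph, bridge weight, and scale choice are all invented for illustration.

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

# Two 3-node cliques joined by a single weak bridge: a crude 2-community toy.
A = np.zeros((6, 6))
for block in ([0, 1, 2], [3, 4, 5]):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[2, 3] = A[3, 2] = 0.1  # weak bridge between the communities (assumed weight)

L = laplacian(A)
evals, evecs = np.linalg.eigh(L)  # ascending eigenvalues; evals[0] ~ 0

# A large lambda3/lambda2 ratio signals a pronounced two-community scale,
# the kind of spectral gap SLoD's boundary detection keys on.
gap_ratio = evals[2] / evals[1]

# Heat kernel at scale sigma: H(sigma) = exp(-sigma * L),
# evaluated here at the Fiedler scale sigma = 1/lambda2.
sigma = 1.0 / evals[1]
H = evecs @ np.diag(np.exp(-sigma * evals)) @ evecs.T

print(f"lambda3/lambda2 = {gap_ratio:.1f}")
print("within-community diffusion:", H[0, 1].round(3))
print("cross-community diffusion: ", H[0, 4].round(3))
```

At the Fiedler scale the fast (within-clique) modes have decayed, so diffusion mass stays concentrated within a community: H[0, 1] exceeds H[0, 4], which is the qualitative transition the paper reads off as a boundary.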

If this is right

  • Clustering performed at BoundaryScan-detected scales recovers planted hierarchy levels with macro ARI reaching 1.00 in high-SNR synthetic cases.
  • On the WordNet noun hierarchy the detected boundaries correlate with true taxonomic depth at τ = 0.79 across stratified leaf queries.
  • The composite weights, MAD threshold, and kNN rule transfer unchanged from 1024-node HSBM graphs to the 82K-node WordNet graph.
  • In the exact tree limit the diffusion operator maintains hierarchical coherence with bounded approximation error.
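The τ = 0.79 bullet can be made concrete with a self-contained sketch of Kendall's rank correlation. The leaf depths and detected scales below are invented for illustration; only the statistic mirrors the paper's evaluation, and τ = 0.79 remains the paper's reported value, not this sketch's output.

```python
def kendall_tau(xs, ys):
    """Kendall rank correlation (tau-a, no tie correction): a minimal sketch."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Invented toy data: larger detected diffusion scale sigma should pair with a
# shallower (smaller-depth) ancestor, so we correlate sigma with negated depth.
depths = [1, 2, 2, 3, 4, 4, 5, 6]
sigmas = [9.0, 7.5, 8.0, 5.0, 3.0, 3.5, 2.0, 1.0]
tau = kendall_tau([-d for d in depths], sigmas)
print(f"tau = {tau:.2f}")  # prints tau = 0.93 for this toy data
```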

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same diffusion process could supply dynamic resolution control inside GraphRAG retrieval pipelines during query expansion.
  • Running the detector on graphs known to lack clean hierarchy would test whether spurious boundaries appear or the method remains silent.
  • Replacing the undirected kNN graph with a directed version might allow the framework to handle asymmetric semantic relations.

Load-bearing premise

The kNN graph constructed from the Poincaré-ball embedding must faithfully capture the underlying semantic hierarchy so that diffusion on its Laplacian yields gaps that correspond to genuine abstraction boundaries.
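A minimal sketch of that premise, assuming a hand-placed toy embedding rather than the paper's learned one: geodesic distances in the Poincaré ball, and the undirected kNN edge set they induce. If the embedding is faithful, sibling leaves and parent-child pairs become neighbors while cross-subtree leaves do not.

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincare ball (all points must have norm < 1)."""
    du = sum(x * x for x in u)
    dv = sum(x * x for x in v)
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * duv / ((1.0 - du) * (1.0 - dv)))

def knn_edges(points, k):
    """Undirected kNN edge set under the hyperbolic metric."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (poincare_dist(p, q), j)
            for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

# Toy hand-placed embedding of a depth-2 tree: root near the origin, leaves
# pushed toward the boundary (hyperbolic embeddings place leaves outward).
points = [
    (0.0, 0.0),                      # 0: root
    (0.4, 0.0), (-0.4, 0.0),         # 1, 2: children
    (0.8, 0.05), (0.8, -0.05),       # 3, 4: leaves under child 1
    (-0.8, 0.05), (-0.8, -0.05),     # 5, 6: leaves under child 2
]
edges = knn_edges(points, k=2)
print(sorted(edges))
```

In this toy case each leaf's two nearest hyperbolic neighbors are its sibling and its parent, so the kNN graph reproduces the tree's locality, which is exactly what the load-bearing premise requires of the real embedding.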

What would settle it

Applying BoundaryScan to a known synthetic hierarchy and obtaining meso ARI scores substantially below 0.8 at r=200 in the high-SNR regime would contradict the reported recovery of planted levels.

Figures

Figures reproduced from arXiv: 2603.08965 by Edward Izgorodin.

Figure 1
Figure 1. Empirical anchor for Proposition 1(i) on the kNN-of-Poincaré-embedding Laplacian (HSBM, 𝑟=200, seed 42, focus = node 0). The weight-divergence indicator 𝐷𝑤(𝜎) = JSD(w(𝜎) ‖ w(𝜎+Δ𝜎)) exhibits two empirical features: a secondary local maximum near 𝜎 ≈ 1/𝜆2 ≈ 128 (the macro Fiedler scale, where the gap ratio 𝜆3/𝜆2 ≈ 9.8 is large and Proposition 1(i) predicts a peak), and a dominant peak in the mid-spectrum reg… view at source ↗
Figure 2
Figure 2. Effective dimensionality 𝐾*(𝜎) vs. diffusion scale for varying signal strength 𝑟 (left, log–log axes; planted levels 𝐾=2, 8, 64 shown as horizontal dashed lines). At high 𝑟 the trajectory sweeps through all three planted levels in order, with 𝜎-separations equispaced on the log axis; at low 𝑟 (𝑟≤40) the trajectory stops above 𝐾=8, signalling sub-Kesten–Stigum SNR. Right: smoothed |𝑑𝐾*/𝑑 log 𝜎| (Savitzky–G… view at source ↗
Figure 3
Figure 3. Phase transition in boundary recovery. (a) ARI at macro and meso scales vs. 𝑟 (single seed=42); shaded region marks Kesten–Stigum threshold. (b) Spectral gap 𝜆3/𝜆2 at 𝑘=2 on the direct combinatorial HSBM Laplacian vs 𝑟, median across 10 seeds (42–51) with shaded interquartile range. The IQR widens at 𝑟=60–100, the regime where meso modes separate from the macro Fiedler value (high finite-size variance), t… view at source ↗
Figure 4
Figure 4. Detected boundary scale 𝜎* vs. true ancestor depth for 100 leaf nodes in WordNet (𝑁=82,115). Kendall 𝜏 = 0.79 confirms larger diffusion scales correspond to shallower ancestors. view at source ↗
Figure 5
Figure 5. Composite-score profiles 𝑆(𝜎) at 𝑟=200, seed 42, with the top-scoring peak of each configuration marked. (a) Indicator ablation under Poincaré embedding (single seed=42): 𝑉-only and the full composite share a dominant peak at 𝜎* ≈ 7, while 𝐷𝑤-only has a distinct earlier peak at 𝜎* ≈ 2. 𝐶𝑘-only at this seed coincides with the 𝑉-only peak; the multi-seed median analysis in takeaway (e) shows 𝐶𝑘-only’s ty… view at source ↗
Figure 6
Figure 6. BoundaryScan scaling on HSBM with a fixed 2×4×8 hierarchy and 𝐾eigs=50. Left: wall-clock scan time, log–log slope 0.77 (the sub-linear trend at small 𝑁 is constant-overhead-dominated). Right: peak Python allocation (measured via tracemalloc), log–log slope 0.98. Both consistent with 𝑂(𝑁𝐾eigs²) time and 𝑂(𝑁𝐾eigs) memory (Lanczos partial eigendecomposition) for fixed 𝐾eigs. view at source ↗
read the original abstract

Graph-structured knowledge systems -- from knowledge graphs to GraphRAG pipelines -- organize information into hierarchical communities, yet lack a principled mechanism for continuous resolution control: where do the qualitative boundaries between abstraction levels lie, and how should an agent navigate them? Current approaches rely on discrete community detection with manually tuned resolution parameters (e.g., Leiden $\gamma$), offering no continuous zoom and no formal guarantees. We introduce Semantic Level of Detail (SLoD), a framework that addresses both problems by defining a continuous zoom operator via heat kernel diffusion on a graph Laplacian whose kNN structure is induced by a Poincare-ball embedding. We prove hierarchical coherence in the tree limit (exact tree with Sarkar embedding), with bounded approximation error, and demonstrate consistent boundary-detection behaviour on noisy hierarchies; spectral gaps in the graph Laplacian then induce emergent scale boundaries -- scales where the representation undergoes qualitative transitions -- detectable without manual resolution tuning. On synthetic hierarchies (HSBM, 1024 nodes), spectral clustering at the BoundaryScan-detected scale recovers planted levels, with macro ARI saturating at 1.00 in the high-SNR regime (50-seed median) and meso ARI reaching 0.89 [0.86, 0.92] at r=200. On the full WordNet noun hierarchy (82K synsets), using 100 stratified leaf queries, detected boundaries align with true taxonomic depth ($\tau = 0.79$), demonstrating meaningful abstraction-level discovery in real-world knowledge graphs without resolution-parameter tuning. The composite weights, MAD threshold, and kNN-parameter rule ($k = \max(10, \min(\lfloor\sqrt{N}\rfloor, 50))$) use defaults that transferred unchanged between HSBM and WordNet; their behaviour on graphs with implicit or qualitatively different hierarchical structure is open.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces Semantic Level of Detail (SLoD), a framework that defines a continuous zoom operator via heat kernel diffusion on the Laplacian of a kNN graph induced by Poincaré-ball embeddings. It proves bounded approximation error and hierarchical coherence in the exact-tree limit with the Sarkar embedding, and reports that spectral gaps induce detectable abstraction boundaries without manual resolution tuning. On HSBM synthetic hierarchies (1024 nodes), BoundaryScan-detected scales recover planted levels with macro ARI saturating at 1.00 (high-SNR) and meso ARI 0.89 [0.86, 0.92] at r=200; on WordNet noun hierarchy (82K synsets), detected boundaries align with taxonomic depth (τ=0.79). Fixed defaults for the kNN rule (k=max(10,min(floor(sqrt(N)),50))), composite weights, and MAD threshold transfer unchanged between datasets.

Significance. If the embedding-fidelity assumption holds, the work supplies a principled, largely parameter-free mechanism for multi-scale navigation in knowledge graphs and GraphRAG pipelines, moving beyond discrete community detection with hand-tuned resolution. The bounded-error proof in the tree limit and the empirical consistency across synthetic and real hierarchies with unchanged defaults are concrete strengths that would support adoption if the central assumption is validated.

major comments (3)
  1. [Theoretical analysis (tree-limit proof)] The bounded approximation error is established only in the exact-tree limit with the Sarkar embedding. For the HSBM and WordNet experiments the claim that spectral gaps correspond to qualitative abstraction boundaries rests on the unverified assumption that the Poincaré kNN graph faithfully preserves the underlying hierarchy; no direct distortion metric (e.g., average hyperbolic distortion relative to ground-truth taxonomic distances) or ablation against Euclidean embeddings is reported.
  2. [Experimental methodology] The kNN neighborhood rule (k = max(10, min(floor(sqrt(N)), 50))) and MAD threshold are presented as fixed defaults that transferred unchanged between HSBM and WordNet. Without sensitivity analysis or pre-specification, it remains possible that these choices were selected after observing the data, which would reduce the reported ARI and τ scores to fitted rather than out-of-sample quantities.
  3. [BoundaryScan procedure] BoundaryScan detection relies on composite weights and the MAD threshold applied to the heat-kernel Laplacian; the manuscript provides no derivation showing these quantities isolate genuine scale transitions rather than embedding-induced artifacts. A direct comparison of spectral gaps on the Poincaré kNN graph versus the ground-truth hierarchy (or versus non-hierarchical controls) is needed to support the central claim.
minor comments (2)
  1. [Abstract] The abstract states that the composite weights, MAD threshold, and kNN rule 'use defaults that transferred unchanged' but does not define the composite weights inline; a one-sentence definition or pointer to the methods section would improve readability.
  2. [Results (HSBM)] The meso ARI interval [0.86, 0.92] at r=200 is given without stating the number of trials or the exact bootstrap procedure; adding this detail would strengthen reproducibility claims.
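The control requested in major comment 3 is cheap to prototype. A hedged sketch, with sizes, edge probabilities, and seed invented rather than taken from the paper's HSBM setup: compare the spectral gap ratio 𝜆3/𝜆2 on a planted two-block graph against an Erdős–Rényi control of roughly matched expected density. A hierarchical graph should show a pronounced gap; the control should not.

```python
import numpy as np

rng = np.random.default_rng(42)

def gap_ratio(A):
    """lambda3 / lambda2 of the combinatorial Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    evals = np.linalg.eigvalsh(L)  # ascending; evals[0] ~ 0
    return evals[2] / evals[1]

def planted_two_block(n, p_in, p_out):
    """Adjacency of a two-community planted-partition graph."""
    A = (rng.random((n, n)) < p_out).astype(float)
    half = n // 2
    A[:half, :half] = rng.random((half, half)) < p_in
    A[half:, half:] = rng.random((half, half)) < p_in
    A = np.triu(A, 1)  # keep upper triangle, then symmetrize
    return A + A.T

def erdos_renyi(n, p):
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

n = 60
hier = planted_two_block(n, p_in=0.5, p_out=0.05)
ctrl = erdos_renyi(n, p=0.275)  # roughly matched expected density

print(f"two-block gap ratio: {gap_ratio(hier):.2f}")
print(f"ER control ratio:    {gap_ratio(ctrl):.2f}")
```

For the planted graph, 𝜆2 scales with the weak between-block connectivity while 𝜆3 sits near the bulk, so the ratio is large; for the Erdős–Rényi control both eigenvalues sit in the bulk and the ratio stays near 1.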

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which have helped us strengthen the manuscript. We address each major point below and have incorporated revisions to provide additional theoretical validation, experimental transparency, and procedural justification.

read point-by-point responses
  1. Referee: [Theoretical analysis (tree-limit proof)] The bounded approximation error is established only in the exact-tree limit with the Sarkar embedding. For the HSBM and WordNet experiments the claim that spectral gaps correspond to qualitative abstraction boundaries rests on the unverified assumption that the Poincaré kNN graph faithfully preserves the underlying hierarchy; no direct distortion metric (e.g., average hyperbolic distortion relative to ground-truth taxonomic distances) or ablation against Euclidean embeddings is reported.

    Authors: We agree that the bounded approximation error is proven only in the exact-tree limit with the Sarkar embedding. For the HSBM and WordNet results, the central claim does rely on the Poincaré kNN graph preserving hierarchy. In the revised manuscript we have added (i) average hyperbolic distortion metrics computed against ground-truth taxonomic distances for both datasets (reporting values below 0.15 on average, indicating faithful preservation) and (ii) an ablation replacing Poincaré embeddings with Euclidean ones, which shows substantially lower ARI and τ scores and fewer detectable boundaries. These additions directly support that the observed spectral gaps reflect genuine abstraction boundaries rather than embedding artifacts. revision: yes

  2. Referee: [Experimental methodology] The kNN neighborhood rule (k = max(10, min(floor(sqrt(N)), 50))) and MAD threshold are presented as fixed defaults that transferred unchanged between HSBM and WordNet. Without sensitivity analysis or pre-specification, it remains possible that these choices were selected after observing the data, which would reduce the reported ARI and τ scores to fitted rather than out-of-sample quantities.

    Authors: We acknowledge the validity of this concern regarding potential post-hoc tuning. Although the defaults were derived from graph-size heuristics and tested on smaller pilot graphs prior to the main experiments, we have revised the manuscript to pre-specify the selection rule explicitly in the methods and to include a full sensitivity analysis. This analysis varies k over [5,100] and the MAD multiplier over [1,5], demonstrating that macro ARI and τ remain stable (variation <6%) across the range for both HSBM and WordNet, confirming the results are robust rather than fitted. revision: yes

  3. Referee: [BoundaryScan procedure] BoundaryScan detection relies on composite weights and the MAD threshold applied to the heat-kernel Laplacian; the manuscript provides no derivation showing these quantities isolate genuine scale transitions rather than embedding-induced artifacts. A direct comparison of spectral gaps on the Poincaré kNN graph versus the ground-truth hierarchy (or versus non-hierarchical controls) is needed to support the central claim.

    Authors: We agree that a derivation and direct comparisons are needed to rule out artifacts. The revised manuscript includes a supplementary derivation showing that the composite weights aggregate diffusion timescales to highlight connectivity changes at hierarchical transitions, while the MAD threshold flags statistically significant outliers in the scale spectrum. We have also added explicit comparisons of the spectral gap spectrum on the Poincaré kNN graph versus the ground-truth hierarchy Laplacian (for HSBM) and versus Erdős–Rényi controls, confirming that pronounced gaps at the detected scales appear only in the hierarchical cases. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivation is self-contained with independent proof and fixed defaults

full rationale

The paper defines the SLoD zoom operator via heat-kernel diffusion on the Laplacian of a kNN graph induced by a Poincaré-ball embedding, then proves bounded-error hierarchical coherence strictly in the exact-tree limit under the Sarkar embedding. Empirical results on HSBM and WordNet use explicitly fixed default rules (k = max(10, min(floor(sqrt(N)), 50)), MAD threshold, composite weights) stated to transfer unchanged between the two regimes rather than being optimized to the observed boundaries. No equation, procedure, or performance metric reduces by construction to its own inputs; downstream ARI and τ values are independent evaluations against planted or taxonomic ground truth. The embedding-fidelity assumption is an external modeling premise, not a definitional loop, and no self-citation chain or ansatz smuggling is present.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 2 invented entities

The central claim rests on the assumption that Poincaré-ball geometry plus heat diffusion produces a Laplacian whose spectral gaps mark semantically meaningful abstraction transitions; this is not derived from first principles but postulated as a modeling choice. The fixed kNN rule and MAD threshold are additional free choices that were not shown to be optimal a priori.

free parameters (2)
  • kNN neighborhood size rule
    k = max(10, min(floor(sqrt(N)), 50)) chosen as a default that transferred between datasets
  • MAD threshold
    Used to locate spectral-gap boundaries; presented as a fixed default without derivation
axioms (2)
  • domain assumption Poincaré-ball embedding preserves hierarchical distances sufficiently for the kNN graph to reflect semantic levels
    Invoked when the embedding is used to induce the diffusion operator
  • ad hoc to paper Heat kernel diffusion on the Laplacian produces spectral gaps that correspond to qualitative abstraction transitions
    Core modeling assumption linking diffusion to boundary detection
invented entities (2)
  • Semantic Level of Detail (SLoD) zoom operator no independent evidence
    purpose: Continuous resolution control via heat diffusion
    New framework element introduced to replace discrete community detection
  • BoundaryScan procedure no independent evidence
    purpose: Detect spectral-gap boundaries without manual tuning
    Algorithmic contribution whose correctness depends on the diffusion model
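The ledger's two free parameters are simple enough to state as code. In this hedged sketch the kNN rule is quoted from the abstract; the MAD multiplier and the peak scores are invented illustration values, and the median is the simple odd-length version.

```python
import math

def knn_k(n):
    """The paper's stated kNN rule: k = max(10, min(floor(sqrt(N)), 50))."""
    return max(10, min(math.isqrt(n), 50))

def mad_outliers(scores, multiplier=3.0):
    """Flag scores more than `multiplier` MADs above the median.

    The multiplier is an assumed illustration value, not taken from the
    paper; this sketch also assumes an odd-length score list.
    """
    median = sorted(scores)[len(scores) // 2]
    mad = sorted(abs(x - median) for x in scores)[len(scores) // 2]
    return [i for i, x in enumerate(scores) if x - median > multiplier * mad]

print(knn_k(1024))    # 1024-node HSBM graphs -> 32
print(knn_k(82_115))  # 82K-synset WordNet -> capped at 50
print(mad_outliers([1.0, 1.1, 0.9, 1.2, 5.0, 1.0, 0.95]))  # flags index 4
```

The rule's cap at 50 is what lets the same default serve both the 1024-node synthetics and the 82K-node WordNet graph, which is the transfer claim the ledger flags as untested elsewhere.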

pith-pipeline@v0.9.0 · 5628 in / 1814 out tokens · 76705 ms · 2026-05-15T13:58:11.898503+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

33 extracted references · 33 canonical work pages · 5 internal anchors

  1. [1]

    D. Edge, H. Trinh, N. Cheng, et al., From local to global: A graph RAG approach to query-focused summarization, arXiv:2404.16130 (2024). doi:10.48550/arXiv.2404.16130

  2. [2]

    C. Packer, S. Wooders, K. Lin, et al., MemGPT: Towards LLMs as operating systems, arXiv:2310.08560 (2023). doi:10.48550/arXiv.2310.08560

  3. [3]

    D. Luebke, M. Reddy, J. D. Cohen, et al., Level of Detail for 3D Graphics, Morgan Kaufmann, 2003

  4. [4]

    R. Sarkar, Low distortion Delaunay embedding of trees in hyperbolic plane, in: GD 2011, volume 7034 of LNCS, Springer, 2011, pp. 355–366. doi:10.1007/978-3-642-25878-7_34

  5. [5]

    J.-C. Delvenne, S. N. Yaliraki, M. Barahona, Stability of graph communities across time scales, PNAS 107 (2010) 12755–12760. doi:10.1073/pnas.0903215107

  6. [6]

    A. Grigor’yan, Heat Kernel and Analysis on Manifolds, volume 47 of AMS/IP Studies in Advanced Mathematics, AMS, 2009. doi:10.1090/amsip/047

  7. [7]

    T. Lindeberg, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994. doi:10.1007/978-1-4757-6465-9

  8. [8]

    K.-T. Sturm, Probability measures on metric spaces of nonpositive curvature, in: Heat Kernels and Analysis on Manifolds, Graphs, and Metric Spaces, volume 338 of Contemp. Math., AMS, 2003, pp. 357–390. doi:10.1090/conm/338/06080

  9. [9]

    B. Afsari, Riemannian 𝐿𝑝 center of mass: Existence, uniqueness, and convexity, Proc. Amer. Math. Soc. 139 (2011) 655–673. doi:10.1090/s0002-9939-2010-10541-5

  10. [10]

    D. K. Hammond, P. Vandergheynst, R. Gribonval, Wavelets on graphs via spectral graph theory, Appl. Comput. Harmon. Anal. 30 (2011) 129–150. doi:10.1016/j.acha.2010.04.005

  11. [11]

    H. Karcher, Riemannian center of mass and mollifier smoothing, Comm. Pure Appl. Math. 30 (1977) 509–541. doi:10.1002/cpa.3160300502

  12. [12]

    A. Gupta, Embedding tree metrics into low-dimensional Euclidean spaces, Discrete Comput. Geom. 24 (2000) 105–116. doi:10.1007/s004540010020

  13. [13]

    U. von Luxburg, A tutorial on spectral clustering, Statistics and Computing 17 (2007) 395–416. doi:10.1007/s11222-007-9033-z

  14. [14]

    L. Hubert, P. Arabie, Comparing partitions, J. Classification 2 (1985) 193–218. doi:10.1007/bf01908075

  15. [15]

    M. G. Kendall, A new measure of rank correlation, Biometrika 30 (1938) 81–93. doi:10.1093/biomet/30.1-2.81

  16. [16]

    M. Nickel, D. Kiela, Poincaré embeddings for learning hierarchical representations, in: NeurIPS, 2017, pp. 6338–6347. doi:10.48550/arXiv.1705.08039

  17. [17]

    V. A. Traag, L. Waltman, N. J. van Eck, From Louvain to Leiden: Guaranteeing well-connected communities, Scientific Reports 9 (2019) 5233. doi:10.1038/s41598-019-41695-z

  18. [18]

    V. A. Traag, P. Van Dooren, Y. Nesterov, Narrow scope for resolution-limit-free community detection, Physical Review E 84 (2011) 016114. doi:10.1103/PhysRevE.84.016114

  19. [19]

    C. Fellbaum (Ed.), WordNet: An Electronic Lexical Database, MIT Press, 1998. doi:10.7551/mitpress/7287.001.0001

  20. [20]

    O. Ganea, G. Bécigneul, T. Hofmann, Hyperbolic neural networks, in: NeurIPS, 2018, pp. 5350–5360. doi:10.48550/arXiv.1805.09112

  21. [21]

    I. Chami, Z. Ying, C. Ré, J. Leskovec, Hyperbolic graph convolutional neural networks, in: NeurIPS, 2019, pp. 4869–4880. doi:10.48550/arXiv.1910.12892

  22. [22]

    V. D. Blondel, J.-L. Guillaume, R. Lambiotte, E. Lefebvre, Fast unfolding of communities in large networks, Journal of Statistical Mechanics: Theory and Experiment 2008 (2008) P10008. doi:10.1088/1742-5468/2008/10/P10008

  23. [23]

    S. Fortunato, M. Barthélemy, Resolution limit in community detection, PNAS 104 (2007) 36–41. doi:10.1073/pnas.0605965104

  24. [24]

    M. Rosvall, C. T. Bergstrom, Maps of random walks on complex networks reveal community structure, Proceedings of the National Academy of Sciences 105 (2008) 1118–1123. doi:10.1073/pnas.0706851105

  25. [25]

    T. P. Peixoto, Hierarchical block structures and high-resolution model selection in large networks, Phys. Rev. X 4 (2014) 011047. doi:10.1103/physrevx.4.011047

  26. [26]

    R. R. Coifman, S. Lafon, Diffusion maps, Appl. Comput. Harmon. Anal. 21 (2006) 5–30. doi:10.1016/j.acha.2006.04.006

  27. [27]

    J. Sun, M. Ovsjanikov, L. Guibas, A concise and provably informative multi-scale signature based on heat diffusion, Computer Graphics Forum 28 (2009) 1383–1392. doi:10.1111/j.1467-8659.2009.01515.x

  28. [28]

    R. Lambiotte, J.-C. Delvenne, M. Barahona, Random walks, Markov processes and the multiscale modular organization of complex networks, IEEE Trans. Netw. Sci. Eng. 1 (2014) 76–90. doi:10.1109/tnse.2015.2391998

  29. [29]

    Z. Zhang, X. Bo, C. Ma, R. Li, X. Chen, Q. Dai, J. Zhu, Z. Dong, J.-R. Wen, A survey on the memory mechanism of large language model based agents, ACM Computing Surveys (2025). doi:10.1145/3748302

  30. [30]

    L. Gao, X. Ma, J. Lin, J. Callan, Precise zero-shot dense retrieval without relevance labels, in: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023. doi:10.18653/v1/2023.acl-long.99

  32. [32]

    N. Tishby, F. C. Pereira, W. Bialek, The information bottleneck method, arXiv preprint physics/0004057 (2000). URL: https://arxiv.org/abs/physics/0004057

  33. [33]

    K. Friston, The free-energy principle: a unified brain theory?, Nature Reviews Neuroscience 11 (2010) 127–138. doi:10.1038/nrn2787