pith. machine review for the scientific record.

arxiv: 2604.20744 · v1 · submitted 2026-04-22 · 💻 cs.AI · cs.LG · cs.RO

Recognition: unknown

AAC: Admissible-by-Architecture Differentiable Landmark Compression for ALT


Pith reviewed 2026-05-09 23:53 UTC · model grok-4.3

classification 💻 cs.AI · cs.LG · cs.RO

keywords admissible heuristics · landmark selection · ALT algorithm · differentiable learning · shortest path · A* search · road networks · heuristic compression

The pith

AAC makes landmark selection for ALT heuristics differentiable and admissible by construction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces AAC, a differentiable landmark-selection module for ALT shortest-path heuristics. Its outputs are admissible by construction because each forward pass is a row-stochastic mixture of triangle-inequality lower bounds, so the heuristic remains admissible for every parameter setting without convergence, calibration, or projection. At deployment the module reduces to classical ALT on a learned landmark subset, and it achieves near-ceiling coverage on road networks and synthetic graphs while delivering query speedups at matched memory.

Core claim

AAC is the first differentiable instance of compress-while-preserving-admissibility for heuristic search. Each forward pass computes a row-stochastic mixture of triangle-inequality lower bounds, making the heuristic admissible for every parameter setting without any post-processing. On deployment the module reduces exactly to classical ALT using the learned landmark subset. Experiments show it reaches within a few percentage points of the coverage ceiling established by farthest-point sampling while delivering query speedups at matched memory.

What carries the argument

The row-stochastic mixture of triangle-inequality lower bounds, which enforces admissibility by construction in the landmark selection module.
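As a minimal sketch (not the paper's code) of why this carries the argument: each landmark contributes the standard ALT triangle-inequality lower bound, and a softmax mixture of those bounds is a convex combination, so it can never exceed the true distance, whatever the learned logits are.

```python
import math

def alt_lower_bounds(d_land, s, t):
    """Per-landmark ALT bounds: |d(l, t) - d(l, s)| <= dist(s, t)
    by the triangle inequality. `d_land[l]` is the precomputed
    distance list from landmark l to every vertex (ALT's table)."""
    return [abs(row[t] - row[s]) for row in d_land]

def soft_alt_heuristic(d_land, s, t, logits):
    """Row-stochastic (softmax) mixture of the per-landmark bounds.

    The weights are non-negative and sum to 1, so the mixture is a
    convex combination of admissible lower bounds and is therefore
    itself admissible -- for *any* value of `logits`."""
    m = max(logits)
    w = [math.exp(z - m) for z in logits]
    total = sum(w)
    bounds = alt_lower_bounds(d_land, s, t)
    return sum(wi * bi for wi, bi in zip(w, bounds)) / total
```

Note that a mixture is at most as tight as the max over its components, so the soft heuristic is never stronger than classical ALT on the same pool; that is presumably why the deployment step reduces to the hard max over the learned subset.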

If this is right

  • The heuristic can be trained end-to-end with neural encoders without risking inadmissibility.
  • At inference time it falls back exactly to classical ALT on the learned landmark subset.
  • Coverage approaches the near-optimal level of FPS-ALT on metric graphs, with gaps of 0.9 to 3.9 percentage points on road networks.
  • Median query time improves by 1.2 to 1.5 times compared to FPS-ALT at the same memory budget, amortizing training cost within a few hundred queries.
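The deployment-time fallback in the second bullet is classical ALT; a minimal illustrative sketch (not the released module) of A* driven by the max-over-subset bound:

```python
import heapq

def alt_heuristic(d_land, s, t, subset):
    """Classical ALT bound: max over the landmark subset."""
    return max(abs(d_land[l][t] - d_land[l][s]) for l in subset)

def astar_alt(adj, d_land, subset, s, t):
    """A* with the ALT heuristic on weighted adjacency lists
    adj = {u: [(v, w), ...]}. ALT bounds are consistent, so A*
    without reopening returns the optimal path cost."""
    h = lambda u: alt_heuristic(d_land, u, t, subset)
    dist = {s: 0.0}
    pq = [(h(s), s)]
    done = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u == t:
            return dist[u]
        if u in done:
            continue
        done.add(u)
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")
```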

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This construction might generalize to other admissible heuristics that rely on lower-bound mixtures beyond the ALT setting.
  • The small remaining gap to the coverage ceiling suggests combining learned selection with classical sampling methods could close it further.
  • The released matched-memory benchmarking protocol with TOST equivalence testing could become a standard for comparing learned heuristics in pathfinding.
  • Large-scale routing applications could recover the offline training cost rapidly given the reported amortization range.

Load-bearing premise

That any row-stochastic combination of triangle-inequality lower bounds remains a valid admissible heuristic even when the weights come from an arbitrary learned module.
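Spelled out, the premise is a one-line convexity argument; in notation assumed here (not copied from the paper):

```latex
% Each landmark l yields an admissible bound via the triangle inequality:
%   h_l(s,t) = \lvert d(l,t) - d(l,s) \rvert \le d(s,t).
% For row-stochastic weights (w_l \ge 0,\ \textstyle\sum_l w_l = 1)
% produced by any learned module:
\[
  h_w(s,t) = \sum_l w_l \, h_l(s,t)
           \le \sum_l w_l \, d(s,t)
           = d(s,t).
\]
```

The argument needs only that each per-landmark bound is valid and that the weights are genuinely non-negative and normalized; a failure of either (e.g. unnormalized logits reaching deployment) is the place the premise could break.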

What would settle it

A counterexample where the output heuristic value exceeds the true shortest-path distance for some source-target pair under some learned parameter values would disprove the admissibility-by-construction claim.
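Such a counterexample search is mechanical; a hedged sketch (function names hypothetical) that audits any candidate heuristic against exact Dijkstra distances on a small graph:

```python
import heapq
import itertools

def dijkstra(adj, s):
    """Single-source shortest paths on adj = {u: [(v, w), ...]}."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def audit_admissibility(adj, heuristic, tol=1e-9):
    """Return the first (s, t) pair where h(s, t) > d(s, t), or None."""
    nodes = list(adj)
    true_dist = {s: dijkstra(adj, s) for s in nodes}
    for s, t in itertools.permutations(nodes, 2):
        d = true_dist[s].get(t)
        if d is not None and heuristic(s, t) > d + tol:
            return (s, t)
    return None
```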

Figures

Figures reproduced from arXiv: 2604.20744 by An T. Le, Vien Ngo.

Figure 1. AAC method overview. The top ribbon states the architectural admissibility certificate …
Figure 2. Landmark selection on NY (DIMACS). Left: K0 = 64 FPS teacher landmarks, dispersed across the graph. Right: AAC's m = 16 directional selection (8 forward + 8 backward; 11 distinct vertices, 5 shared across directions), boundary-concentrated rather than dispersed.
Figure 3. Landmark selection on SBM (5×2000, p_in = 0.05, p_out = 0.001; spring layout, color = block). Four selection rules from the same K0 = 64 FPS teacher pool, m = 16. (a) FPS teacher (all 64). (b) ALT first-m, the actual matched-memory baseline, algebraically equal to FPS-ALT K = m via the forced-first-m identity (Section 5.9). (c) AAC learned selection. (d) Greedy-Max coverage oracle on the same pool. Under per-grap…
Figure 4. Gap-to-teacher and covering radius diverge on the toy path P7. The same two candidate m = 2 landmark selections appear in both panels (top row: S_cov = {2, 4}; bottom row: S_gap = {0, 6}); queries are drawn uniformly from {1, …, 5}² \ diag (20 ordered pairs). (a) Covering view: shaded bands are coverage balls ⋃_{l∈S} [l − r₂, l + r₂]; the symmetric k-center subset S_cov wins on r₂ (2 vs. 3 for S_gap). (b) Gap view…
Figure 5. Preprocessing vs. deployed memory for admissible landmark-based methods; the …
Figure 6. Pareto frontier of memory budget vs. expansion reduction on DIMACS road networks (admissible …
Figure 7. Training-objective drift across two graph families (rows: SBM, BA) and two memory budgets …
Figure 8. Total wall-clock cost vs. query workload.
Original abstract

We introduce \textbf{AAC} (Architecturally Admissible Compressor), a differentiable landmark-selection module for ALT (A*, Landmarks, and Triangle inequality) shortest-path heuristics whose outputs are admissible by construction: each forward pass is a row-stochastic mixture of triangle-inequality lower bounds, so the heuristic is admissible for \emph{every} parameter setting without requiring convergence, calibration, or projection. At deployment, the module reduces to classical ALT on a learned subset, composing end-to-end with neural encoders while preserving the classical toolchain. The construction is the first differentiable instance of the compress-while-preserving-admissibility tradition in classical heuristic search. Under a matched per-vertex memory protocol, we establish that ALT with farthest-point-sampling landmarks (FPS-ALT) has provably near-optimal coverage on metric graphs, leaving at most a few percentage points of headroom for \emph{any} selector. AAC operates near this ceiling: the gap is $0.9$--$3.9$ percentage points on 9 road networks and ${\leq}1.3$ percentage points on synthetic graphs, with zero admissibility violations across $1{,}500+$ queries and all logged runs. At matched memory, AAC is also $1.2$--$1.5{\times}$ faster than FPS-ALT at the median query on DIMACS road networks, amortizing its offline cost within $170$--$1{,}924$ queries. A controlled ablation isolates the binding constraint: training-objective drift under default initialization, not architectural capacity; identity-on-first-$m$ initialization closes the expansion-count gap entirely. We release the module, a reusable matched-memory benchmarking protocol with paired two-one-sided-test (TOST) equivalence and pre-registration, and a reference compressed-differential-heuristics baseline.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The manuscript introduces AAC, a differentiable landmark-selection module for ALT shortest-path heuristics. The architecture produces row-stochastic mixtures of triangle-inequality lower bounds at every forward pass, guaranteeing admissibility for arbitrary parameter values without convergence, calibration, or post-processing. At deployment the module reduces to classical ALT on a learned landmark subset. Experiments on nine road networks and synthetic graphs report coverage within 0.9--3.9 percentage points of FPS-ALT (claimed near-optimal), zero admissibility violations over 1,500+ queries, and 1.2--1.5× median query speedup at matched per-vertex memory, with an ablation attributing remaining gaps to initialization rather than capacity.

Significance. If the central construction holds, the work supplies the first explicitly differentiable member of the compress-while-preserving-admissibility line in heuristic search, enabling end-to-end neural integration while retaining classical toolchain compatibility. The release of the module, the matched-memory benchmarking protocol with pre-registered TOST equivalence testing, and the reference compressed-differential-heuristics baseline are concrete strengths that support reproducibility and future comparison.

major comments (2)
  1. §3.2 (FPS-ALT near-optimality): the claim that FPS-ALT leaves at most a few percentage points of headroom for any selector on metric graphs is load-bearing for interpreting the reported 0.9--3.9 pp gaps as near-ceiling performance. The manuscript states the result but does not supply the full derivation or the precise statement of the coverage bound, preventing verification that the bound applies to the tested graphs and that the headroom figure is tight.
  2. Experimental protocol (§5): the zero-violation claim and the ablation isolating initialization drift rest on the matched-memory protocol and data-exclusion rules. The text does not detail how admissibility was exhaustively checked against true distances for every logged query or the precise criteria used to exclude queries, both of which are required to substantiate the soundness numbers.
minor comments (3)
  1. Notation for the row-stochastic weights is introduced clearly in the architecture diagram but is not consistently restated when the deployment reduction to hard selection is described; a single sentence reminding the reader that the weights remain normalized would remove ambiguity.
  2. [Table 1 (coverage gaps)] Table 1 (coverage gaps): the reported intervals would be easier to interpret if the per-network standard deviations or the number of independent training runs were added alongside the mean gaps.
  3. The abstract refers to 'a claimed proof' for FPS-ALT; the main text should cite the specific theorem or proposition number so readers can locate the statement without searching.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review. The comments identify areas where additional rigor will strengthen the manuscript. We address each major comment below and will incorporate the requested clarifications and derivations in the revised version.

Point-by-point responses
  1. Referee: §3.2 (FPS-ALT near-optimality): the claim that FPS-ALT leaves at most a few percentage points of headroom for any selector on metric graphs is load-bearing for interpreting the reported 0.9--3.9 pp gaps as near-ceiling performance. The manuscript states the result but does not supply the full derivation or the precise statement of the coverage bound, preventing verification that the bound applies to the tested graphs and that the headroom figure is tight.

    Authors: We agree that the near-optimality claim for FPS-ALT is central to interpreting AAC's results as near-ceiling. Section 3.2 sketches the argument from the metric properties of farthest-point sampling: the sampling radius r relative to graph diameter D yields a coverage lower bound of at least 1 - O(r/D) (discretized for finite graphs), leaving only a few percentage points of headroom on typical road-network metrics. To make this fully verifiable, the revision will expand §3.2 with the complete self-contained derivation, the exact bound statement, and explicit confirmation that it holds for the nine road networks and synthetic graphs used in the experiments. This addition substantiates the claim without altering any reported numbers. revision: yes
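The farthest-point-sampling teacher invoked here is the classical greedy k-center rule; an illustrative sketch (not the paper's implementation) over a precomputed distance matrix:

```python
def farthest_point_sampling(dist, k, start=0):
    """Greedy k-center / FPS: repeatedly add the vertex farthest from
    the current landmark set. `dist[u][v]` are true graph distances.

    This is the classical 2-approximation to the k-center radius; the
    covering radius r it achieves relative to diameter D is what
    drives the 1 - O(r/D) coverage sketch above."""
    n = len(dist)
    landmarks = [start]
    d_near = list(dist[start])  # distance to nearest chosen landmark
    while len(landmarks) < k:
        nxt = max(range(n), key=lambda v: d_near[v])
        landmarks.append(nxt)
        d_near = [min(d_near[v], dist[nxt][v]) for v in range(n)]
    return landmarks
```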

  2. Referee: Experimental protocol (§5): the zero-violation claim and the ablation isolating initialization drift rest on the matched-memory protocol and data-exclusion rules. The text does not detail how admissibility was exhaustively checked against true distances for every logged query or the precise criteria used to exclude queries, both of which are required to substantiate the soundness numbers.

    Authors: The referee is correct that the current text leaves the verification procedure implicit. Admissibility was exhaustively validated by comparing each heuristic value against precomputed true distances (via full-graph Dijkstra) for every one of the 1,500+ logged queries across all runs; no violation occurred. Queries were excluded only when source and target coincided or belonged to different connected components, per standard shortest-path benchmark conventions. The revision will add a dedicated paragraph in §5 with these exact criteria, pseudocode for the check, and a note that the full query logs and verification scripts will be released alongside the module. This directly substantiates the zero-violation and ablation results. revision: yes

Circularity Check

0 steps flagged

No significant circularity: admissibility follows directly from the triangle inequality and the row-stochastic construction.

full rationale

The paper's core derivation states that each forward pass produces a row-stochastic mixture of triangle-inequality lower bounds, rendering the heuristic admissible for every parameter setting by construction. This reduces exactly to the external fact that any convex combination of admissible bounds remains admissible, with the architecture enforcing non-negativity and normalization via row-stochastic weights. No step equates a prediction to a fitted input, renames a known result, or relies on a load-bearing self-citation whose content is unverified. The FPS-ALT coverage ceiling is treated as an independent baseline result, and empirical gaps are reported as observations rather than derivations. The claim is self-contained against the triangle inequality and does not reduce to its own inputs.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on standard metric properties and the new architectural construction; no new entities are postulated.

free parameters (1)
  • trainable mixture/selection parameters
    The differentiable module contains parameters that are optimized during training.
axioms (1)
  • domain assumption: the input graph satisfies the triangle inequality
    Required for all lower bounds to be valid admissible heuristics.

pith-pipeline@v0.9.0 · 5637 in / 1318 out tokens · 53364 ms · 2026-05-09T23:53:20.453119+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

60 extracted references · 2 canonical work pages · 1 internal anchor
