pith. machine review for the scientific record.

arxiv: 2605.08679 · v1 · submitted 2026-05-09 · 💻 cs.SI · cs.AI · cs.LG

Recognition: 2 Lean theorem links

Attention-based graph neural networks: a survey

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:02 UTC · model grok-4.3

classification 💻 cs.SI · cs.AI · cs.LG
keywords attention-based graph neural networks · graph attention networks · graph transformers · survey · taxonomy · graph recurrent attention networks

The pith

A two-level taxonomy organizes attention-based graph neural networks into three developmental stages.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper addresses the absence of a systematic overview by proposing a two-level taxonomy for attention-based graph neural networks. The upper level divides models into three developmental stages: graph recurrent attention networks, graph attention networks, and graph transformers. The lower level examines typical architectures within each stage. The survey then reviews the methods in detail, provides a table comparing model characteristics, and discusses open issues along with future directions.

Core claim

The paper claims that attention-based GNNs can be systematically surveyed through a two-level taxonomy based on development history and architectural perspectives, enabling a detailed review of how attention mechanisms adaptively select discriminative features and filter noisy information while preserving graph topological structures.
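For orientation, the mechanism this refers to is the edge-wise attention weighting standardized by GAT. The following is the textbook formulation (Veličković et al.), shown as a reference point rather than notation taken from this survey:

```latex
% Per-edge attention coefficient and aggregation in GAT (textbook form).
\alpha_{ij} \;=\;
  \frac{\exp\big(\mathrm{LeakyReLU}(\mathbf{a}^{\top}[\mathbf{W}h_i \,\|\, \mathbf{W}h_j])\big)}
       {\sum_{k \in \mathcal{N}(i)} \exp\big(\mathrm{LeakyReLU}(\mathbf{a}^{\top}[\mathbf{W}h_i \,\|\, \mathbf{W}h_k])\big)},
\qquad
h_i' \;=\; \sigma\Big(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\,\mathbf{W}h_j\Big)
```

The softmax runs only over the neighborhood N(i), which is how attention can up-weight discriminative neighbors and damp noisy ones while aggregation still follows the graph's edges, i.e., topology is preserved.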

What carries the argument

The two-level taxonomy that first groups models by developmental stages (graph recurrent attention networks, graph attention networks, graph transformers) and then by architectural perspectives within each stage.
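A minimal sketch of the taxonomy as a lookup structure. The three upper-level stage names are the paper's own; the models listed under each stage are illustrative guesses drawn from titles in the reference list, not the paper's actual lower-level assignment.

```python
# Two-level taxonomy rendered as a data structure: upper level = developmental
# stage, lower level = example architectures (illustrative, not the paper's).
TAXONOMY = {
    "graph recurrent attention networks": ["Gated Graph Sequence NN", "GaAN"],
    "graph attention networks": ["GAT", "GATv2", "SPAGAN", "HAN"],
    "graph transformers": ["Graph-BERT", "Graphormer", "Universal Graph Transformer"],
}

def stage_of(model: str) -> str:
    """Upper-level lookup: the developmental stage a model belongs to."""
    for stage, models in TAXONOMY.items():
        if any(model.lower() == m.lower() for m in models):
            return stage
    # A model that lands here is exactly the falsifier named under
    # "What would settle it" below.
    raise KeyError(f"{model!r} fits no stage")

print(stage_of("Graphormer"))  # -> graph transformers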

If this is right

  • A characteristics table allows direct comparison of advantages and disadvantages across models.
  • Identified open issues direct attention to unresolved problems for future research.
  • The taxonomy helps position new contributions within the existing literature.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The taxonomy may require periodic updates as new architectures emerge beyond current stages.
  • Cross-referencing this survey with general GNN reviews could reveal how attention specifically alters performance patterns.
  • Empirical tests on benchmark datasets could check whether models from different stages show consistent differences in behavior; a minimal sketch follows this list.
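A hedged sketch of that empirical probe, using PyTorch Geometric on Cora. Picking GATConv and TransformerConv as stand-ins for the GAT and graph-transformer stages is our assumption, as are all hyperparameters; a real comparison would need multiple seeds and datasets.

```python
# Train one model from two of the taxonomy's stages on the same benchmark
# and compare test accuracy. Illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv, TransformerConv

data = Planetoid(root="data", name="Cora")[0]  # 7-class citation benchmark

class TwoLayer(torch.nn.Module):
    def __init__(self, conv_cls, in_dim, num_classes, hidden=8, heads=8):
        super().__init__()
        self.conv1 = conv_cls(in_dim, hidden, heads=heads)  # concat -> hidden*heads
        self.conv2 = conv_cls(hidden * heads, num_classes, heads=1)

    def forward(self, x, edge_index):
        return self.conv2(F.elu(self.conv1(x, edge_index)), edge_index)

for name, conv in [("GAT stage", GATConv), ("transformer stage", TransformerConv)]:
    torch.manual_seed(0)
    model = TwoLayer(conv, data.num_node_features, 7)
    opt = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)
    for _ in range(100):  # short training run
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        F.cross_entropy(out[data.train_mask], data.y[data.train_mask]).backward()
        opt.step()
    pred = model(data.x, data.edge_index).argmax(dim=-1)
    acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean().item()
    print(f"{name}: test accuracy {acc:.3f}")
```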

Load-bearing premise

That no prior systematic overview existed and that the two-level taxonomy organizes the literature without significant omissions or overlaps.

What would settle it

Publication of a major attention-based GNN model that cannot be placed into any of the three developmental stages or does not match the described typical architectures.

Original abstract

Graph neural networks (GNNs) aim to learn well-trained representations in a lower-dimension space for downstream tasks while preserving the topological structures. In recent years, attention mechanism, which is brilliant in the fields of natural language processing and computer vision, is introduced to GNNs to adaptively select the discriminative features and automatically filter the noisy information. To the best of our knowledge, due to the fast-paced advances in this domain, a systematic overview of attention-based GNNs is still missing. To fill this gap, this paper aims to provide a comprehensive survey on recent advances in attention-based GNNs. Firstly, we propose a novel two-level taxonomy for attention-based GNNs from the perspective of development history and architectural perspectives. Specifically, the upper level reveals the three developmental stages of attention-based GNNs, including graph recurrent attention networks, graph attention networks, and graph transformers. The lower level focuses on various typical architectures of each stage. Secondly, we review these attention-based methods following the proposed taxonomy in detail and summarize the advantages and disadvantages of various models. A model characteristics table is also provided for a more comprehensive comparison. Thirdly, we share our thoughts on some open issues and future directions of attention-based GNNs. We hope this survey will provide researchers with an up-to-date reference regarding applications of attention-based GNNs. In addition, to cope with the rapid development in this field, we intend to share the relevant latest papers as an open resource at https://github.com/sunxiaobei/awesome-attention-based-gnns.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript is a survey on attention-based graph neural networks (GNNs). It claims that no systematic overview exists due to rapid advances and proposes a novel two-level taxonomy: an upper level with three developmental stages (graph recurrent attention networks, graph attention networks, and graph transformers) plus a lower level of typical architectures within each stage. The paper reviews methods following the taxonomy, summarizes advantages and disadvantages of various models, provides a model characteristics comparison table, discusses open issues and future directions, and maintains a GitHub repository for latest papers.

Significance. If the taxonomy is shown to be comprehensive and the reviews accurate, the survey would provide a useful organized reference for the graph machine learning community, particularly by tracing the historical development of attention mechanisms in GNNs and highlighting architectural variants. The comparison table and commitment to an open GitHub resource for updates would add practical value in a fast-moving field.

major comments (1)
  1. [Abstract] The assertion that 'a systematic overview of attention-based GNNs is still missing' is presented without an explicit literature search protocol, keyword list, date bounds, or side-by-side comparison with prior GNN surveys that already cover attention mechanisms (e.g., GAT or Graphormer sections in general GNN reviews). This leaves the novelty of the two-level taxonomy (developmental stages plus architectural perspectives) and the gap-filling justification unverified and potentially vulnerable to post-hoc adjustment.
minor comments (2)
  1. The abstract refers to a 'model characteristics table' for comprehensive comparison; ensure the table explicitly includes dimensions such as computational complexity, scalability to large graphs, and empirical performance on standard benchmarks to maximize its utility (an illustrative row schema follows this list).
  2. The GitHub link (https://github.com/sunxiaobei/awesome-attention-based-gnns) is presented as an open resource for latest papers; the manuscript should clarify the update frequency and curation criteria to support its role as a living reference.
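An illustrative row schema for the comparison dimensions suggested in minor comment 1. Column names are ours; the complexity entry is the standard per-layer cost of GAT's edge-wise attention, and fields we cannot verify from the abstract alone are left empty rather than invented.

```python
# One row of the suggested model-characteristics table, as a plain dict.
gat_row = {
    "model": "GAT",
    "stage": "graph attention networks",
    "per_layer_time": "O(|V|·F·F' + |E|·F')",  # F, F': input/output feature dims
    "scalability": "large graphs via sparse attention over edges",
    "benchmark_accuracy": None,  # to be filled from the paper's table
}
```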

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our survey. We address the major comment below and will revise the manuscript accordingly to strengthen the justification for our claims.

Point-by-point responses
  1. Referee: [Abstract] The assertion that 'a systematic overview of attention-based GNNs is still missing' is presented without an explicit literature search protocol, keyword list, date bounds, or side-by-side comparison with prior GNN surveys that already cover attention mechanisms (e.g., GAT or Graphormer sections in general GNN reviews). This leaves the novelty of the two-level taxonomy (developmental stages plus architectural perspectives) and the gap-filling justification unverified and potentially vulnerable to post-hoc adjustment.

    Authors: We agree that the abstract and introduction would benefit from greater transparency on this point. In the revised version, we will add a dedicated paragraph (or short subsection) describing our literature search protocol, including the primary keywords (e.g., 'attention-based graph neural networks', 'graph attention networks', 'graph transformers', 'GAT', 'Graphormer'), the time window (papers published up to the submission date), and the sources consulted (arXiv, Google Scholar, major venues such as NeurIPS, ICML, ICLR, KDD). We will also insert a concise comparison with representative prior GNN surveys, noting that while several existing reviews mention attention mechanisms (often as subsections within broader taxonomies), our two-level taxonomy is distinguished by its explicit organization around developmental stages (graph recurrent attention networks, graph attention networks, and graph transformers) together with architectural variants within each stage. This historical-evolutionary lens, combined with the model characteristics table and the maintained GitHub repository, provides a perspective not emphasized in prior works. These additions will make the novelty claim verifiable without changing the survey's core structure or content.

    Revision: yes.
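A minimal sketch of what a reproducible version of that search step could look like, using the public arXiv API. The keyword list mirrors the rebuttal; the result cap and parsing details are our assumptions, since the authors' actual protocol is unspecified.

```python
# Query the public arXiv API (http://export.arxiv.org/api/query) for each
# survey keyword and print (date, title) pairs. Standard library only.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
KEYWORDS = [
    "attention-based graph neural networks",
    "graph attention networks",
    "graph transformers",
]

def search_arxiv(keyword: str, max_results: int = 5):
    """Return (title, published) pairs for one keyword query."""
    query = urllib.parse.urlencode({
        "search_query": f'all:"{keyword}"',
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    return [
        (e.findtext(f"{ATOM}title", "").strip(), e.findtext(f"{ATOM}published", ""))
        for e in feed.findall(f"{ATOM}entry")
    ]

for kw in KEYWORDS:
    print(kw)
    for title, published in search_arxiv(kw):
        print(f"  {published[:10]}  {title}")
```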

Circularity Check

0 steps flagged

Survey paper with no derivations, equations, or fitted predictions exhibits no circularity

Full rationale

This manuscript is an explicit literature review that organizes existing attention-based GNN papers under a proposed two-level taxonomy and summarizes advantages/disadvantages. It contains no mathematical derivations, equations, parameter fittings, predictions, or self-referential claims that reduce to inputs by construction. The assertion that no prior systematic overview exists is a standard survey framing (not a derivation), and the taxonomy is presented as an organizational contribution rather than a forced result from self-citation or ansatz. Per the guidelines, honest non-findings are expected for self-contained reviews lacking the enumerated circular patterns; the paper is therefore scored 0 with empty steps.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a literature survey that introduces a taxonomy for organizing prior work. No free parameters, mathematical axioms, or new scientific entities are postulated.

pith-pipeline@v0.9.0 · 5599 in / 1000 out tokens · 52885 ms · 2026-05-12T01:02:16.872286+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear

    Relation between the paper passage and the cited Recognition theorem: unclear.

    Paper passage: "We propose a novel two-level taxonomy for attention-based GNNs from the perspective of development history and architectural perspectives. Specifically, the upper level reveals the three developmental stages of attention-based GNNs, including graph recurrent attention networks, graph attention networks, and graph transformers."

  • IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · LogicNat · tag: unclear

    Relation between the paper passage and the cited Recognition theorem: unclear.

    Paper passage: "The attention mechanism in GNNs allows the neural networks to learn a dynamic and adaptive aggregation of the neighborhood..."

What do these tags mean?

  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

300 extracted references · 300 canonical work pages · 2 internal anchors

  [1] Graph neural networks: A review of methods and applications. AI Open, 2020.
  [2] Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations (ICLR 2017).
  [3] Are we really making much progress? Revisiting, benchmarking and refining heterogeneous graph neural networks. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining.
  [4] A comprehensive survey on community detection with deep learning. IEEE Transactions on Neural Networks and Learning Systems.
  [5] Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems.
  [6] Nonlinear dimensionality reduction by locally linear embedding. Science, 2000.
  [7] DeepWalk: Online learning of social representations. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  [8] LINE: Large-scale information network embedding. Proceedings of the 24th International Conference on World Wide Web.
  [9] node2vec: Scalable feature learning for networks. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  [10] Effective and scalable clustering on massive attributed graphs. Proceedings of the Web Conference 2021.
  [11] A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020.
  [12] Diffusion improves graph learning. arXiv preprint arXiv:1911.05485.
  [13] Graph attention networks. International Conference on Learning Representations.
  [14] Spectral networks and locally connected networks on graphs. International Conference on Learning Representations (ICLR 2014).
  [15] Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems.
  [16] Inductive representation learning on large graphs. Advances in Neural Information Processing Systems.
  [17] How powerful are graph neural networks? International Conference on Learning Representations.
  [18] Neural message passing for quantum chemistry. International Conference on Machine Learning, 2017.
  [19] How to find your friendly neighborhood: Graph attention design with self-supervision. The Ninth International Conference on Learning Representations (ICLR 2021).
  [20] Attention models in graphs: A survey. ACM Transactions on Knowledge Discovery from Data (TKDD), 2019.
  [21] Gated graph sequence neural networks. Proceedings of ICLR'16.
  [22] GaAN: Gated attention networks for learning on large and spatiotemporal graphs. 34th Conference on Uncertainty in Artificial Intelligence (UAI 2018).
  [23] Efficient graph generation with graph recurrent attention networks. Advances in Neural Information Processing Systems.
  [24] Graph-BERT: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140.
  [25] Universal graph transformer self-attention networks. arXiv preprint arXiv:1909.11855.
  [26] Representation learning on graphs with jumping knowledge networks. International Conference on Machine Learning, 2018.
  [27] Improving graph attention networks with large margin-based constraints. arXiv preprint arXiv:1910.11945.
  [28] How attentive are graph attention networks? International Conference on Learning Representations.
  [29] Graph transformer.
  [30] Transformers generalize DeepSets and can be extended to graphs & hypergraphs. Advances in Neural Information Processing Systems.
  [31] Graph learning: A survey. IEEE Transactions on Artificial Intelligence, 2021.
  [32] Attention mechanisms in computer vision: A survey. Computational Visual Media, 2022.
  [33] A general survey on attention mechanisms in deep learning. IEEE Transactions on Knowledge and Data Engineering.
  [34] An attentive survey of attention models. ACM Transactions on Intelligent Systems and Technology (TIST), 2021.
  [35] Efficient transformers: A survey. ACM Computing Surveys (CSUR).
  [36] Attention-based graph neural network for semi-supervised learning. arXiv preprint arXiv:1803.03735.
  [37] Path-augmented graph transformer network. arXiv preprint arXiv:1905.12712.
  [38] Graph transformer networks. Advances in Neural Information Processing Systems.
  [39] Transformers meet visual learning understanding: A comprehensive review. arXiv preprint arXiv:2203.12944.
  [40] SPAGAN: Shortest path graph attention network. Proceedings of the 28th International Joint Conference on Artificial Intelligence.
  [41] Learning signed network embedding via graph attention. Proceedings of the AAAI Conference on Artificial Intelligence.
  [42] Relational graph attention networks. arXiv preprint arXiv:1904.05811.
  [43] Heterogeneous graph attention network. The World Wide Web Conference.
  [44] Graph embedding with hierarchical attentive membership. Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining.
  [45] Watch your step: Learning node embeddings via graph attention. Advances in Neural Information Processing Systems.
  [46] Understanding attention and generalization in graph neural networks. Advances in Neural Information Processing Systems.
  [47] Hyper-SAGNN: A self-attention based graph neural network for hypergraphs. International Conference on Learning Representations (ICLR).
  [48] Towards deeper graph neural networks. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  [49] Graph attention multi-layer perceptron. arXiv preprint arXiv:2206.04355.
  [50] Beyond low-frequency information in graph convolutional networks. Proceedings of the AAAI Conference on Artificial Intelligence.
  [51] Is heterophily a real nightmare for graph neural networks to do node classification? arXiv preprint arXiv:2109.05641.
  [52] AM-GCN: Adaptive multi-channel graph convolutional networks. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  [53] Semi-supervised and self-supervised classification with multi-view graph neural networks. Proceedings of the 30th ACM International Conference on Information & Knowledge Management.
  [54] Dynamic graph representation learning via self-attention networks. arXiv preprint arXiv:1812.09430.
  [55] Temporal network embedding with micro- and macro-dynamics. Proceedings of the 28th ACM International Conference on Information and Knowledge Management.
  [56] Graph-guided network for irregularly sampled multivariate time series. International Conference on Learning Representations (ICLR).
  [57] Multivariate time-series anomaly detection via graph attention network. 2020 IEEE International Conference on Data Mining (ICDM).
  [58] A survey of transformers. arXiv preprint arXiv:2106.04554.
  [59] Gated graph recurrent neural networks. IEEE Transactions on Signal Processing, 2020.
  [60] Graph classification using structural attention. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  [61] GeniePath: Graph neural networks with adaptive receptive paths. Proceedings of the AAAI Conference on Artificial Intelligence.
  [62] The graph neural network model. IEEE Transactions on Neural Networks, 2008.
  [63] Diverse message passing for attribute with heterophily. Advances in Neural Information Processing Systems.
  [64] Personalized PageRank graph attention networks. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.
  [65] Graph representation learning via hard and channel-wise attention networks. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  [66] Learning conjoint attentions for graph neural nets. Advances in Neural Information Processing Systems.
  [67] Neighborhood attention networks with adversarial learning for link prediction. IEEE Transactions on Neural Networks and Learning Systems, 2020.
  [68] Embedding temporal network via neighborhood formation. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
  [69] Hyperbolic graph attention network. IEEE Transactions on Big Data.
  [70] Hyperbolic graph convolutional neural networks. Advances in Neural Information Processing Systems.
  [71] Hype-HAN: Hyperbolic hierarchical attention network for semantic embedding. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence.
  [72] Distilling knowledge from graph convolutional networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  [73] Dynamic attentive graph learning for image restoration. Proceedings of the IEEE/CVF International Conference on Computer Vision.
  [74] Structured co-reference graph attention for video-grounded dialogue. Proceedings of the AAAI Conference on Artificial Intelligence.
  [75] Co-GAT: A co-interactive graph attention network for joint dialog act recognition and sentiment classification. Proceedings of the AAAI Conference on Artificial Intelligence.
  [76] Entity-aware dependency-based deep graph attention network for comparative preference classification. Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2020).
  [77] Syntax-aware aspect level sentiment classification with graph attention networks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
  [78] Graph attention topic modeling network. Proceedings of The Web Conference 2020.
  [79] GRAM: Graph-based attention model for healthcare representation learning. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  [80] Inductive link prediction with interactive structure learning on attributed graph. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2021.
Showing first 80 references.