pith. machine review for the scientific record.

arxiv: 2605.04652 · v1 · submitted 2026-05-06 · 💻 cs.CL

Recognition: unknown

CHE-TKG: Collaborative Historical Evidence and Evolutionary Dynamics Learning for Temporal Knowledge Graph Reasoning

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 16:09 UTC · model grok-4.3

classification 💻 cs.CL
keywords: temporal knowledge graph reasoning · dual-view learning · historical evidence · evolutionary dynamics · contrastive alignment · relation decomposition · future event prediction · temporal graphs

The pith

Modeling historical evidence and evolutionary dynamics as separate aligned views improves prediction of future events in temporal knowledge graphs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Existing methods for temporal knowledge graph reasoning typically emphasize either long-term historical patterns or short-term changes but rarely exploit both together. The paper introduces CHE-TKG as a dual-view framework that builds one graph to capture stable historical evidence and structural regularities and a second graph to model evolutionary dynamics and recent transitions. Dedicated encoders process each view, while relation decomposition and a contrastive alignment objective help the model extract and combine their complementary signals. A sympathetic reader would care because accurate forecasting of future facts matters for applications that track evolving real-world relations over time. If the approach holds, it would mean better use of both persistent constraints and changing patterns without forcing a single representation to handle both.

Core claim

CHE-TKG explicitly separates historical evidence from evolutionary dynamics: a historical evidence graph captures long-term structural regularities and stable relational constraints, while an evolutionary dynamics graph models temporal transitions and recent changes. Each view is processed by a dedicated encoder, and relation decomposition plus a contrastive alignment objective capture and exploit the views' complementary predictive signals for reasoning about future events.
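
The abstract does not spell out how the two graphs are built. One plausible reading, sketched below with invented rules (cumulative frequencies for the historical view, a recent-timestamp window for the evolutionary view), shows how a single set of timestamped quadruples can yield two structurally different views:

```python
from collections import Counter

def build_views(quads, t_query, window=3):
    """Split timestamped quadruples (head, rel, tail, ts) into two views.

    A minimal sketch of the dual-view idea, NOT the paper's exact
    construction rules: the historical-evidence view accumulates every
    fact before the query time (order-insensitive frequencies), while
    the evolutionary-dynamics view keeps only time-stamped facts from
    the last `window` timestamps.
    """
    historical = Counter()   # (h, r, t) -> cumulative frequency
    evolutionary = set()     # (h, r, t, ts) from the recent window
    for h, r, t, ts in quads:
        if ts >= t_query:
            continue                         # never peek at the future
        historical[(h, r, t)] += 1           # long-term regularities
        if ts >= t_query - window:
            evolutionary.add((h, r, t, ts))  # recent transitions
    return historical, evolutionary

quads = [(0, 0, 1, 0), (0, 0, 1, 1), (1, 1, 2, 4)]
hist, evo = build_views(quads, t_query=5)
# hist counts (0, 0, 1) twice; evo keeps only the fact at timestamp 4
```

Any such split makes the referee's complementarity concern concrete: the windowed facts are a subset of the cumulative ones unless the edge-formation rules genuinely differ.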

What carries the argument

The collaborative dual-view learning framework that constructs a historical evidence graph and an evolutionary dynamics graph, processes them with dedicated encoders, and aligns them through relation decomposition plus a contrastive objective.
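
The contrastive alignment term is the load-bearing piece here. The paper does not give its form; a common choice for aligning two views of the same entities is a symmetric InfoNCE loss, sketched below with an illustrative temperature (an assumption, not the paper's setting):

```python
import numpy as np

def contrastive_alignment(z_hist, z_evo, temperature=0.1):
    """Symmetric InfoNCE-style alignment between two views.

    Rows of z_hist and z_evo are embeddings of the SAME N entities from
    the historical and evolutionary encoders; matching rows are
    positives, all other in-batch rows are negatives. The temperature
    and the symmetric form are illustrative choices, not the paper's.
    """
    z1 = z_hist / np.linalg.norm(z_hist, axis=1, keepdims=True)
    z2 = z_evo / np.linalg.norm(z_evo, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (N, N) scaled cosine sims

    def xent_diag(l):
        # cross-entropy with the diagonal as the target class per row
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

z = np.eye(4)                                   # toy, perfectly aligned views
low = contrastive_alignment(z, z)               # near zero: views agree
high = contrastive_alignment(z, np.roll(z, 1, axis=0))  # large: misaligned
```

The loss drops when the two encoders place the same entity nearby and rises when they disagree, which is the "reinforce rather than duplicate" behavior claimed in the bullets below.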

If this is right

  • The model achieves state-of-the-art performance on multiple TKG reasoning benchmarks by using both long-term regularities and recent changes.
  • Relation decomposition isolates predictive signals that belong primarily to one view or the other.
  • Contrastive alignment ensures the two views reinforce each other rather than duplicate information.
  • The framework supports better handling of both stable relational constraints and dynamic temporal shifts in the same prediction task.
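
Relation decomposition is likewise named but not specified. One standard realization (basis decomposition in the style of R-GCN, assumed here for illustration) writes each relation operator as a mixture of a few shared basis matrices, which would let the two views share relational structure while keeping relation-specific weights:

```python
import numpy as np

rng = np.random.default_rng(0)
num_relations, num_bases, dim = 8, 3, 4

# Shared basis matrices V_b and per-relation mixing coefficients a_{rb}.
bases = rng.normal(size=(num_bases, dim, dim))
coeffs = rng.normal(size=(num_relations, num_bases))

def relation_matrix(r):
    """W_r = sum_b a_{rb} V_b: each relation operator is a weighted
    combination of shared bases, far fewer parameters than one full
    matrix per relation (8*3 + 3*16 values instead of 8*16)."""
    return np.einsum('b,bij->ij', coeffs[r], bases)

W = relation_matrix(2)    # (4, 4) operator for relation 2
```

Under this reading, "isolating predictive signals" would amount to different views learning different mixing coefficients over the same bases.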

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The explicit separation into two views could make it simpler to inspect which temporal aspect drives a given future-event prediction.
  • The same dual-view construction might transfer to other sequential prediction problems that mix stable structure with changing dynamics.
  • If the contrastive objective succeeds across datasets, it points to a general strategy for avoiding redundancy when fusing multi-scale temporal signals.

Load-bearing premise

The two constructed views supply genuinely complementary predictive signals that dedicated encoders and the alignment objective can capture without information loss or overfitting to the benchmarks.

What would settle it

An ablation study on standard benchmarks such as ICEWS or GDELT showing that removing either the historical evidence view, the evolutionary dynamics view, or the contrastive alignment objective produces no drop in reasoning performance would falsify the value of the collaborative separation.

Figures

Figures reproduced from arXiv: 2605.04652 by Guoxi Sun, Jiarui Liang, Shuai-Long Lei, Xiaobin Zhu, Xu-Cheng Yin, Zhiyu Fang.

Figure 1: An illustrative example of the two views, including graph construction on the ICEWS …
Figure 2: CHE-TKG constructs a historical evidence graph and an evolutionary dynamics graph from …
Figure 2: Overall architecture of CHE-TKG (a). The framework consists of spatio-temporal initializa…
Figure 3: Effect of the upper bound N on the evolutionary dynamics graph; panels (a) ICEWS14s and (b) ICEWS18 report MRR and H1.
Figure 4: Effect of the contrastive alignment weight …
Figure 5: Robustness analysis under Gaussian noise perturbations.
Figure 6: t-SNE visualization of entity and relation embeddings under different settings.
Figure 7: Efficiency–performance trade-off comparisons.
Figure 8: Test-time efficiency comparison of CHE-TKG against open-source baselines, including RE-GCN [30], TiRGN [27], and LogCL [9].
Original abstract

Temporal knowledge graph (TKG) reasoning aims to predict future events from historical facts. A key challenge lies in jointly capturing two sources of predictive information in TKGs: historical evidence and evolutionary dynamics. However, existing methods typically focus on only one of these sources, which limits the ability to fully exploit the complementary predictive signals in TKGs. To address this, we propose CHE-TKG, a novel collaborative dual-view learning framework for TKG reasoning. CHE-TKG explicitly separates and jointly models historical evidence and evolutionary dynamics, aiming to learn and exploit their complementary predictive signals. Specifically, CHE-TKG constructs a historical evidence graph to capture long-term structural regularities and stable relational constraints, alongside an evolutionary dynamics graph to model temporal transitions and recent changes, with dedicated encoders for each view. We further employ relation decomposition and a contrastive alignment objective to better capture the predictive signals across the two views. Extensive experiments demonstrate that CHE-TKG achieves state-of-the-art performance on multiple benchmarks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes CHE-TKG, a collaborative dual-view learning framework for temporal knowledge graph (TKG) reasoning. It explicitly separates historical evidence (long-term structural regularities and stable relational constraints) from evolutionary dynamics (temporal transitions and recent changes) by constructing two graphs from the same timestamped triples, applies dedicated encoders to each view, and uses relation decomposition together with a contrastive alignment objective to capture and exploit their complementary predictive signals, claiming state-of-the-art performance on multiple benchmarks.

Significance. If the two constructed views supply genuinely distinct predictive signals that can be aligned without redundancy or information loss, the dual-view design would constitute a meaningful architectural advance over single-focus TKG methods. The explicit separation and joint modeling could improve exploitation of both stable constraints and dynamic changes in temporal data.

major comments (2)
  1. [Section 3 (Method)] The central claim requires that the historical evidence graph and evolutionary dynamics graph supply genuinely complementary signals. Both graphs are built from the identical set of timestamped triples; the manuscript does not describe an explicit partitioning mechanism (e.g., strict temporal cutoffs, orthogonal feature extraction, or disjoint edge sets) that would prevent the evolutionary signal from already being latent in the historical graph. Consequently, the subsequent relation decomposition and contrastive objective may align redundant rather than complementary representations, so observed gains could stem from added capacity rather than the dual-view design.
  2. [Section 4 (Experiments)] To substantiate that the dual-view architecture drives the reported SOTA results rather than extra parameters, the experiments section should include ablations that (i) compare the full model against single-view baselines with matched parameter counts and (ii) quantify the incremental benefit of the contrastive alignment term. Without such controls, the claim that the two views provide complementary signals remains unproven.
minor comments (1)
  1. [Abstract] The abstract asserts SOTA performance from extensive experiments yet supplies no numerical results, baseline names, or improvement margins; including at least one concrete metric (e.g., MRR or Hits@10 delta on a standard benchmark) would strengthen the summary.
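
For context on the metrics the minor comment asks for: MRR averages the reciprocal rank of the gold entity across queries, and Hits@k is the fraction of queries where the gold entity ranks within the top k. A minimal sketch from 1-based ranks:

```python
def mrr_hits(ranks, k=10):
    """Mean Reciprocal Rank and Hits@k from 1-based ranks of the gold
    entity, the standard TKG link-prediction metrics."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits

# Four queries whose gold entities ranked 1st, 2nd, 5th, and 20th:
mrr, h10 = mrr_hits([1, 2, 5, 20], k=10)   # mrr = 0.4375, h10 = 0.75
```

Reporting even one such delta against a named baseline would make the abstract's SOTA claim checkable.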

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and insightful comments on our manuscript. We have carefully considered the points raised and provide detailed point-by-point responses below. Where appropriate, we have revised the manuscript to address the concerns and strengthen the presentation of our dual-view framework.

Point-by-point responses
  1. Referee: [Section 3 (Method)] The central claim requires that the historical evidence graph and evolutionary dynamics graph supply genuinely complementary signals. Both graphs are built from the identical set of timestamped triples; the manuscript does not describe an explicit partitioning mechanism (e.g., strict temporal cutoffs, orthogonal feature extraction, or disjoint edge sets) that would prevent the evolutionary signal from already being latent in the historical graph. Consequently, the subsequent relation decomposition and contrastive objective may align redundant rather than complementary representations, so observed gains could stem from added capacity rather than the dual-view design.

    Authors: We thank the referee for this important observation. While both graphs are indeed derived from the same timestamped triples, their construction differs substantially in intent and implementation. The historical evidence graph aggregates all historical triples cumulatively to encode long-term structural regularities and stable relational constraints. In contrast, the evolutionary dynamics graph extracts transition-focused edges that emphasize changes between consecutive timestamps, thereby isolating recent temporal dynamics. This distinction is realized through different edge formation rules and feature aggregation strategies, as outlined in Section 3.2, together with dedicated encoders and the relation decomposition step. We acknowledge that the original manuscript could have made this partitioning more explicit. We have therefore revised Section 3 to include a dedicated subsection with pseudocode and a clearer description of the temporal aggregation versus differential transition mechanisms, demonstrating how redundancy is minimized. We believe these clarifications show that the contrastive alignment operates on genuinely complementary signals rather than redundant ones. revision: yes

  2. Referee: [Section 4 (Experiments)] To substantiate that the dual-view architecture drives the reported SOTA results rather than extra parameters, the experiments section should include ablations that (i) compare the full model against single-view baselines with matched parameter counts and (ii) quantify the incremental benefit of the contrastive alignment term. Without such controls, the claim that the two views provide complementary signals remains unproven.

    Authors: We agree that controlled ablations with matched capacity are necessary to isolate the benefit of the dual-view design. In the revised manuscript we have added a new subsection (Section 4.3) containing the requested experiments. We compare the full CHE-TKG model against historical-only and evolutionary-only single-view variants whose hidden dimensions and layer counts are adjusted to match the parameter count of the full model. We also report performance with the contrastive alignment objective ablated. The results indicate that the full collaborative model consistently outperforms the capacity-matched single-view baselines, and that removing the contrastive term leads to a measurable drop, supporting that the gains arise from the joint modeling of complementary signals rather than increased model capacity alone. revision: yes
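
The capacity matching the authors describe can be made concrete with a toy parameter-count model. The accounting below (params ≈ encoders × layers × d²) is a deliberate simplification for illustration; a real ablation needs exact per-module counts:

```python
import math

def matched_hidden_dim(dual_dim, dual_encoders=2, layers=2):
    """Choose a single-view hidden size whose parameter count matches a
    dual-view model, under a toy cost model params ~ encoders*layers*d^2.
    Illustrative accounting only; real ablations need exact counts."""
    dual_params = dual_encoders * layers * dual_dim ** 2
    return math.isqrt(dual_params // layers)

# A two-encoder model at d=200 is matched by one encoder at roughly d=282
# (the single view gets sqrt(2) times the width).
d_single = matched_hidden_dim(200)
```

Only with such matched single-view baselines can a gain be attributed to the dual-view design rather than to extra capacity.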

Circularity Check

0 steps flagged

No circularity in CHE-TKG derivation chain

full rationale

The paper proposes an explicit dual-view construction (a historical evidence graph for long-term regularities plus an evolutionary dynamics graph for temporal transitions) from the same timestamped triples, followed by dedicated encoders, relation decomposition, and contrastive alignment. This is presented as a design decision motivated by the stated challenge of complementary signals, not derived from self-citation, a fitted parameter renamed as a prediction, or a self-definitional loop. No equations reduce the claimed predictive signals to tautological inputs, and the SOTA claims rest on benchmark experiments rather than any load-bearing reduction to prior author work. The derivation chain is therefore free of circularity; validation rests on external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 2 invented entities

The central claim rests on the assumption that historical evidence and evolutionary dynamics are complementary and can be modeled separately then aligned; the paper introduces two new graph constructions and a contrastive objective whose effectiveness is asserted but not independently verified in the abstract.

free parameters (2)
  • contrastive alignment hyperparameters
    Weighting and temperature parameters for the contrastive objective are likely fitted or chosen to optimize benchmark performance.
  • encoder architecture choices
    Specific layer counts, dimensions, and relation decomposition parameters are model-specific and not detailed in the abstract.
axioms (1)
  • domain assumption Historical evidence and evolutionary dynamics supply complementary predictive signals in TKGs
    Invoked to justify the dual-view construction and alignment; appears in the motivation section of the abstract.
invented entities (2)
  • historical evidence graph no independent evidence
    purpose: Capture long-term structural regularities and stable relational constraints
    Newly constructed view from the TKG data; no independent evidence outside the model performance is provided.
  • evolutionary dynamics graph no independent evidence
    purpose: Model temporal transitions and recent changes
    Newly constructed view from the TKG data; no independent evidence outside the model performance is provided.

pith-pipeline@v0.9.0 · 5485 in / 1390 out tokens · 64424 ms · 2026-05-08T16:09:28.232349+00:00 · methodology


Reference graph

Works this paper leans on

59 extracted references · 1 canonical work page · 1 internal anchor

  1. [1]

    Beamqa: Multi-hop knowledge graph question answering with sequence-to-sequence prediction and beam search

    Farah Atif, Ola El Khatib, and Djellel Difallah. Beamqa: Multi-hop knowledge graph question answering with sequence-to-sequence prediction and beam search. InProceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 781–790, 2023

  2. [2]

    Long Bai, Zixuan Li, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng, and Tat-Seng Chua. G2s: A general-to-specific learning framework for temporal knowledge graph forecasting with large language models.Findings of the Association for Computational Linguistics: ACL, 2025

  3. [3]

    Translating embeddings for modeling multi-relational data.Advances in neural information processing systems, 2013

    Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data.Advances in neural information processing systems, 2013

  4. [4]

    ICEWS Coded Event Data, 2015

    Elizabeth Boschee, Jennifer Lautenschlager, Sean O’Brien, Steve Shellman, James Starz, and Michael Ward. ICEWS Coded Event Data, 2015

  5. [5]

    Ling Chen, Xing Tang, Weiqi Chen, Yuntao Qian, Yansheng Li, and Yongjun Zhang. Dacha: A dual graph convolution based temporal knowledge graph representation learning method using historical relation.ACM Transactions on Knowledge Discovery from Data, pages 1–18, 2021

  6. [6]

    Decrl: A deep evolutionary clustering jointed temporal knowledge graph representation learning approach.Advances in Neural Information Processing Systems, 37:55204–55227, 2024

    Qian Chen and Ling Chen. Decrl: A deep evolutionary clustering jointed temporal knowledge graph representation learning approach.Advances in Neural Information Processing Systems, 37:55204–55227, 2024

  7. [7]

    Beyond entity correlations: Disentangling event causal puzzles in temporal knowledge graphs

    Qian Chen, Jinyu Zhang, and Ling Chen. Beyond entity correlations: Disentangling event causal puzzles in temporal knowledge graphs. InInternational Conference on Learning Repre- sentations, 2026

  8. [8]

    Htccn: Temporal causal convolutional networks with hawkes process for extrapolation reasoning in temporal knowledge graphs

    Tingxuan Chen, Jun Long, Liu Yang, Zidong Wang, Yongheng Wang, and Xiongnan Jin. Htccn: Temporal causal convolutional networks with hawkes process for extrapolation reasoning in temporal knowledge graphs. InProceedings of the North American Chapter of the Association for Computational Linguistics, pages 4056–4066, 2024

  9. [9]

    Local-global history-aware contrastive learning for temporal knowledge graph reasoning

    Wei Chen, Huaiyu Wan, Yuting Wu, Shuyuan Zhao, Jiayaqi Cheng, Yuxin Li, and Youfang Lin. Local-global history-aware contrastive learning for temporal knowledge graph reasoning. In IEEE International Conference on Data Engineering (ICDE), pages 733–746. IEEE, 2024

  10. [10]

    Dynamic knowledge graph based multi- event forecasting

    Songgaojun Deng, Huzefa Rangwala, and Yue Ning. Dynamic knowledge graph based multi- event forecasting. InProceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pages 1585–1595, 2020

  11. [11]

    Enhancing complex question answering over knowledge graphs through evidence pattern retrieval

    Wentao Ding, Jinmao Li, Liangchuan Luo, and Yuzhong Qu. Enhancing complex question answering over knowledge graphs through evidence pattern retrieval. InProceedings of the ACM Web Conference, pages 2106–2115, 2024

  12. [12]

    Adaptive path-memory network for temporal knowledge graph reasoning

    Hao Dong, Zhiyuan Ning, Pengyang Wang, Ziyue Qiao, Pengfei Wang, Yuanchun Zhou, and Yanjie Fu. Adaptive path-memory network for temporal knowledge graph reasoning. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 2086–2094, 2023

  13. [13]

    Hawkes based representation learning for reasoning over scale-free community-structured temporal knowledge graphs

    Yuwei Du, Xinyue Liu, Wenxin Liang, Linlin Zong, and Xianchao Zhang. Hawkes based representation learning for reasoning over scale-free community-structured temporal knowledge graphs. InProceedings of the International Conference on Computational Linguistics, pages 2935–2946, 2025

  14. [14]

    Transformer-based reasoning for learning evolutionary chain of events on temporal knowledge graph

    Zhiyu Fang, Shuai-Long Lei, Xiaobin Zhu, Chun Yang, Shi-Xue Zhang, Xu-Cheng Yin, and Jingyan Qin. Transformer-based reasoning for learning evolutionary chain of events on temporal knowledge graph. InProceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 70–79, 2024. 10

  15. [15]

    Arbitrary time information modeling via polynomial approximation for temporal knowledge graph embedding

    Zhiyu Fang, Jingyan Qin, Xiaobin Zhu, Chun Yang, and Xu-Cheng Yin. Arbitrary time information modeling via polynomial approximation for temporal knowledge graph embedding. InProceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation, pages 1455–1465, 2024

  16. [16]

    On the evaluation of methods for temporal knowledge graph forecasting

    Julia Gastinger, Timo Sztyler, Lokesh Sharma, and Anett Schuelke. On the evaluation of methods for temporal knowledge graph forecasting. InNeurIPS Temporal Graph Learning Workshop, 2022

  17. [17]

    Learning neural ordinary equations for forecasting future links on temporal knowledge graphs

    Zhen Han, Zifeng Ding, Yunpu Ma, Yujia Gu, and V olker Tresp. Learning neural ordinary equations for forecasting future links on temporal knowledge graphs. InProceedings of the Conference on Empirical Methods in Natural Language Processing, pages 8352–8364, 2021

  18. [18]

    Confi- dence is not timeless: Modeling temporal validity for rule-based temporal knowledge graph forecasting

    Rikui Huang, Wei Wei, Xiaoye Qu, Shengzhe Zhang, Dangyang Chen, and Yu Cheng. Confi- dence is not timeless: Modeling temporal validity for rule-based temporal knowledge graph forecasting. InProceedings of the Annual Meeting of the Association for Computational Linguistics, pages 10783–10794, 2024

  19. [19]

    Recurrent event network: Autoregressive structure inference over temporal knowledge graphs

    Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. InProceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 6669–6683, 2020

  20. [20]

    Tensor decompositions for temporal knowledge base completion

    Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. Tensor decompositions for temporal knowledge base completion. InInternational Conference on Learning Representations, 2020

  21. [21]

    Deriving validity time in knowledge graph

    Julien Leblay and Melisachew Wudage Chekol. Deriving validity time in knowledge graph. In Proceedings of the ACM Web Conference, 2018

  22. [22]

    Temporal knowledge graph forecasting without knowledge using in-context learning

    Dong-Ho Lee, Kian Ahrabian, Woojeong Jin, Fred Morstatter, and Jay Pujara. Temporal knowledge graph forecasting without knowledge using in-context learning. InProceedings of the Conference on Empirical Methods in Natural Language Processing, pages 544–557, 2023

  23. [23]

    Gdelt: Global data on events, location, and tone

    Kalev Leetaru and Philip A Schrodt. Gdelt: Global data on events, location, and tone. InISA Annual Convention, pages 1–49, 2013

  24. [24]

    Teast: Temporal knowledge graph embedding via archimedean spiral timeline

    Jiang Li, Xiangdong Su, and Guanglai Gao. Teast: Temporal knowledge graph embedding via archimedean spiral timeline. InProceedings of the Annual Meeting of the Association for Computational Linguistics, pages 15460–15474, 2023

  25. [25]

    Tr-rules: Rule-based model for link forecasting on temporal knowledge graph considering temporal redundancy

    Ningyuan Li, E Haihong, Shi Li, Mingzhi Sun, Tianyu Yao, Meina Song, Yong Wang, and Haoran Luo. Tr-rules: Rule-based model for link forecasting on temporal knowledge graph considering temporal redundancy. InFindings of the Association for Computational Linguistics: EMNLP, pages 7885–7894, 2023

  26. [26]

    Infer: A neural-symbolic model for extrapolation reasoning on temporal knowledge graph

    Ningyuan Li, E Haihong, Tianyu Yao, Tianyi Hu, Yuhan Li, Haoran Luo, Meina Song, and Yifan Zhu. Infer: A neural-symbolic model for extrapolation reasoning on temporal knowledge graph. InInternational Conference on Learning Representations, 2025

  27. [27]

    Tirgn: Time-guided recurrent graph network with local-global historical patterns for temporal knowledge graph reasoning

    Yujia Li, Shiliang Sun, and Jing Zhao. Tirgn: Time-guided recurrent graph network with local-global historical patterns for temporal knowledge graph reasoning. InProceedings of the International Joint Conference on Artificial Intelligence, pages 2152–2158, 2022

  28. [28]

    Complex evolutional pattern learning for temporal knowledge graph reasoning

    Zixuan Li, Saiping Guan, Xiaolong Jin, Weihua Peng, Yajuan Lyu, Yong Zhu, Long Bai, Wei Li, Jiafeng Guo, and Xueqi Cheng. Complex evolutional pattern learning for temporal knowledge graph reasoning. InProceedings of the Annual Meeting of the Association for Computational Linguistics, pages 290–296, 2022

  29. [29]

    Hismatch: Historical structure matching based temporal knowledge graph reasoning

    Zixuan Li, Zhongni Hou, Saiping Guan, Xiaolong Jin, Weihua Peng, Long Bai, Yajuan Lyu, Wei Li, Jiafeng Guo, and Xueqi Cheng. Hismatch: Historical structure matching based temporal knowledge graph reasoning. InFindings of the Association for Computational Linguistics: EMNLP, pages 7328–7338, 2022. 11

  30. [30]

    Temporal knowledge graph reasoning based on evolutional representation learning

    Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. Temporal knowledge graph reasoning based on evolutional representation learning. InProceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 408–417, 2021

  31. [31]

    Learn from relational correlations and periodic events for temporal knowledge graph reasoning

    Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, and Xinwang Liu. Learn from relational correlations and periodic events for temporal knowledge graph reasoning. InProceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1559–1568, 2023

  32. [32]

    Gentkg: Generative forecasting on temporal knowledge graph with large language models

    Ruotong Liao, Xu Jia, Yangzhe Li, Yunpu Ma, and V olker Tresp. Gentkg: Generative forecasting on temporal knowledge graph with large language models. InFindings of the Association for Computational Linguistics: NAACL, pages 4303–4317, 2024

  33. [33]

    Retia: relation-entity twin-interact aggregation for temporal knowledge graph extrapolation

    Kangzheng Liu, Feng Zhao, Guandong Xu, Xianzhi Wang, and Hai Jin. Retia: relation-entity twin-interact aggregation for temporal knowledge graph extrapolation. InIEEE International Conference on Data Engineering (ICDE), pages 1761–1774. IEEE, 2023

  34. [34]

    Joint knowledge graph completion and question answering

    Lihui Liu, Boxin Du, Jiejun Xu, Yinglong Xia, and Hanghang Tong. Joint knowledge graph completion and question answering. InProceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1098–1108, 2022

  35. [35]

    Tlogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs

    Yushan Liu, Yunpu Ma, Marcel Hildebrandt, Mitchell Joblin, and V olker Tresp. Tlogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4120–4127, 2022

  36. [36]

    Terdy: Temporal relation dynamics through frequency decom- position for temporal knowledge graph completion

    Ziyang Liu and Chaokun Wang. Terdy: Temporal relation dynamics through frequency decom- position for temporal knowledge graph completion. InProceedings of the Annual Meeting of the Association for Computational Linguistics, pages 9611–9622, 2025

  37. [37]

    Temporal knowledge graph completion using box embeddings

    Johannes Messner, Ralph Abboud, and Ismail Ilkan Ceylan. Temporal knowledge graph completion using box embeddings. InProceedings of the AAAI Conference on Artificial Intelligence, pages 7779–7787, 2022

  38. [38]

    Gradient starvation: A learning proclivity in neural networks

    Mohammad Pezeshki, Oumar Kaba, Yoshua Bengio, Aaron C Courville, Doina Precup, and Guillaume Lajoie. Gradient starvation: A learning proclivity in neural networks. InAdvances in Neural Information Processing Systems, pages 1256–1272, 2021

  39. [39]

    Which shortcut cues will dnns choose? a study from the parameter-space perspective

    Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, and Sangdoo Yun. Which shortcut cues will dnns choose? a study from the parameter-space perspective. InInternational Conference on Learning Representations, 2022

  40. [40]

    GLU Variants Improve Transformer

    Noam Shazeer. Glu variants improve transformer.arXiv preprint arXiv:2002.05202, 2020

  41. [41]

    Graph hawkes transformer for extrapolated reasoning on temporal knowledge graphs

    Haohai Sun, Shangyi Geng, Jialun Zhong, Han Hu, and Kun He. Graph hawkes transformer for extrapolated reasoning on temporal knowledge graphs. InProceedings of the Conference on Empirical Methods in Natural Language Processing, pages 7481–7493, 2022

  42. [42]

    Editkg: Editing knowledge graph for recommendation

    Gu Tang, Xiaoying Gan, Jinghe Wang, Bin Lu, Lyuwen Wu, Luoyi Fu, and Chenghu Zhou. Editkg: Editing knowledge graph for recommendation. InProceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 112–122, 2024

  43. [43]

    Anre: Analogical replay for temporal knowledge graph forecasting

    Guo Tang, Zheng Chu, Wenxiang Zheng, Junjia Xiang, Yizhuo Li, Weihao Zhang, Ming Liu, and Bing Qin. Anre: Analogical replay for temporal knowledge graph forecasting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 4632–4650, 2025

  44. [44]

    Gtrl: An entity group-aware temporal knowledge graph representa- tion learning method.IEEE Transactions on Knowledge and Data Engineering, 36(9):4707– 4721, 2023

    Xing Tang and Ling Chen. Gtrl: An entity group-aware temporal knowledge graph representa- tion learning method.IEEE Transactions on Knowledge and Data Engineering, 36(9):4707– 4721, 2023

  45. [45]

    Dhyper: A recurrent dual hypergraph neural network for event prediction in temporal knowledge graphs.ACM Transactions on Information Systems, 42(5):1–23, 2024

    Xing Tang, Ling Chen, Hongyu Shi, and Dandan Lyu. Dhyper: A recurrent dual hypergraph neural network for event prediction in temporal knowledge graphs.ACM Transactions on Information Systems, 42(5):1–23, 2024. 12

  46. [46]

    Large language models-guided dynamic adaptation for temporal knowledge graph reasoning.Advances in Neural Information Processing Systems, pages 8384–8410, 2024

    Jiapu Wang, Kai Sun, Linhao Luo, Wei Wei, Yongli Hu, Alan W Liew, Shirui Pan, and Baocai Yin. Large language models-guided dynamic adaptation for temporal knowledge graph reasoning.Advances in Neural Information Processing Systems, pages 8384–8410, 2024

  47. [47]

    Mixed-curvature manifolds interaction learning for knowledge graph-aware recommendation

    Jihu Wang, Yuliang Shi, Han Yu, Xinjun Wang, Zhongmin Yan, and Fanyu Kong. Mixed-curvature manifolds interaction learning for knowledge graph-aware recommendation. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 372–382, 2023

  48. [48]

    Chain-of-history reasoning for temporal knowledge graph forecasting

    Yuwei Xia, Ding Wang, Qiang Liu, Liang Wang, Shu Wu, and Xiao-Yu Zhang. Chain-of-history reasoning for temporal knowledge graph forecasting. In Findings of the Association for Computational Linguistics: ACL, pages 16144–16159, 2024

  49. [49]

    Inductive representation learning on temporal graphs

    Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. In International Conference on Learning Representations, 2020

  50. [50]

    Retrieval-augmented generation with knowledge graphs for customer service question answering

    Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, and Zheng Li. Retrieval-augmented generation with knowledge graphs for customer service question answering. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2905–2909, 2024

  51. [51]

    Simple but effective compound geometric operations for temporal knowledge graph completion

    Rui Ying, Mengting Hu, Jianfeng Wu, Yalan Xie, Xiaoyi Liu, Zhunheng Wang, Ming Jiang, Hang Gao, Linlin Zhang, and Renhong Cheng. Simple but effective compound geometric operations for temporal knowledge graph completion. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 11074–11086, 2024

  52. [52]

    Temporal knowledge graph reasoning with dynamic memory enhancement

    Fuwei Zhang, Zhao Zhang, Fuzhen Zhuang, Yu Zhao, Deqing Wang, and Hongwei Zheng. Temporal knowledge graph reasoning with dynamic memory enhancement. IEEE Transactions on Knowledge and Data Engineering, 36(11):7115–7128, 2024

  53. [53]

    Temporal knowledge graph representation learning with local and global evolutions

    Jiasheng Zhang, Shuang Liang, Yongpan Sheng, and Jie Shao. Temporal knowledge graph representation learning with local and global evolutions. Knowledge-Based Systems, 251:109234, 2022

  54. [54]

    Historically relevant event structuring for temporal knowledge graph reasoning

    Jinchuan Zhang, Ming Sun, Chong Mu, Jinhao Zhang, Quanjiang Guo, and Ling Tian. Historically relevant event structuring for temporal knowledge graph reasoning. In IEEE International Conference on Data Engineering (ICDE), pages 3179–3192. IEEE, 2025

  55. [55]

    Learning latent relations for temporal knowledge graph reasoning

    Mengqi Zhang, Yuwei Xia, Qiang Liu, Shu Wu, and Liang Wang. Learning latent relations for temporal knowledge graph reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 12617–12631, 2023

  56. [56]

    Learning long- and short-term representations for temporal knowledge graph reasoning

    Mengqi Zhang, Yuwei Xia, Qiang Liu, Shu Wu, and Liang Wang. Learning long- and short-term representations for temporal knowledge graph reasoning. In Proceedings of the ACM Web Conference, pages 2412–2422, 2023

A Related Work

A.1 Differences from Local-Based Methods

Our method differs fundamentally from prior approaches such as HGLS [56], LogCL [9], and His…

…decomposes relation embeddings in the frequency domain to capture long- and short-term dynamics.

A.3 Extrapolation Reasoning

Beyond embedding-based methods, extrapolation approaches can also be categorized into rule-based and LLM-based methods.

Rule-Based TKGR. TLogic [35] extracts temporal logical rules via non-increasing temporal random walks, estimates…

…incorporates temporal validity, fact frequency, and embedding information for extrapolation reasoning, while DaeMon [12] learns continuous and implicit path representations through neural networks without explicitly constructing logical rules.

LLM-Based TKGR. Recently, several studies explore leveraging the inductive and reasoning capabilities of large lan…
