Recognition: 2 theorem links
· Lean Theorem · CoDCL: Counterfactual-Inspired Augmentation Contrastive Learning for Temporal Link Prediction in Social Networks
Pith reviewed 2026-05-16 09:45 UTC · model grok-4.3
The pith
CoDCL adds counterfactual data augmentation to contrastive learning to improve temporal link prediction in evolving networks.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CoDCL is a dynamic network learning framework that integrates counterfactual-inspired augmentation with contrastive learning. It generates high-quality counterfactual data by combining a dynamic treatments design with efficient structural neighborhood exploration to quantify temporal changes in interaction patterns, and it operates as a plug-and-play module that lets existing temporal graph models adapt to complex evolving structures.
What carries the argument
Counterfactual-inspired augmentation strategy using dynamic treatments design and structural neighborhood exploration to create augmented data for contrastive learning in temporal graphs.
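The contrastive side of this setup can be illustrated with a minimal sketch. The combined objective quoted later on this page, L_total = α·L_f + (1−α)·L_c, weights a factual and a counterfactual InfoNCE term; the function names, cosine similarity, negative-sampling scheme, and α weighting below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Single-anchor InfoNCE: -log( exp(sim(a,p)/tau) / sum_k exp(sim(a,k)/tau) ),
    where the sum runs over the positive plus all negatives."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability before exponentiation
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

def combined_contrastive_loss(z, z_factual, z_counterfactual, negatives, alpha=0.5, tau=0.1):
    """Hypothetical alpha-weighted blend of a factual and a counterfactual
    InfoNCE term, mirroring L_total = alpha*L_f + (1 - alpha)*L_c."""
    l_f = info_nce(z, z_factual, negatives, tau)
    l_c = info_nce(z, z_counterfactual, negatives, tau)
    return alpha * l_f + (1 - alpha) * l_c
```

An aligned positive yields a much smaller loss than a misaligned one, which is the behavior the augmented views are meant to exploit.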
If this is right
- Existing temporal graph models gain improved adaptation to emerging structural changes without any architectural modifications.
- Prediction performance rises on multiple real-world social network datasets compared to current baselines.
- Models become more robust to complex temporal environments by learning from quantified interaction pattern changes.
Where Pith is reading between the lines
- The plug-and-play design suggests the same augmentation approach could transfer to other dynamic graph tasks like node classification or community detection.
- Causal augmentation ideas may apply beyond social networks to domains such as traffic flow or financial transaction graphs.
- Controlled tests on synthetic networks with known temporal shifts could isolate whether the augmentation truly captures causal mechanisms.
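The last point above can be made concrete with a toy generator: a temporal edge stream with a ground-truth structural shift (a switch from within-community to cross-community linking at a fixed time), against which an augmentation's claimed causal sensitivity could be checked. The two-community design and all parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def synthetic_temporal_edges(n_nodes=20, n_steps=100, shift_at=50, seed=0):
    """Toy temporal edge stream with a known structural shift:
    before `shift_at`, links form inside one of two equal communities;
    afterwards, every link crosses the community boundary."""
    rng = random.Random(seed)
    half = n_nodes // 2
    edges = []
    for t in range(n_steps):
        u = rng.randrange(n_nodes)
        if t < shift_at:
            base = 0 if u < half else half   # sample v from u's own community
        else:
            base = half if u < half else 0   # sample v from the other community
        v = base + rng.randrange(half)
        while v == u:                        # avoid self-loops, stay in the chosen community
            v = base + rng.randrange(half)
        edges.append((t, u, v))
    return edges
```

Because the regime switch is known exactly, one can test whether an augmentation method detects it rather than merely fitting noise.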
Load-bearing premise
The strategy combining dynamic treatments with structural neighborhood exploration generates high-quality counterfactual data that accurately quantifies temporal changes without introducing bias or artifacts.
What would settle it
An experiment that swaps the counterfactual augmentation for random perturbations and finds no remaining performance gains on the same datasets would indicate the specific counterfactual design is not the driver of improvement.
read the original abstract
Temporal link prediction is crucial for rapidly growing social networks. Existing methods often overlook the underlying causal mechanisms that drive link formation, making it difficult for algorithms to adapt to complex structures that continuously evolve over time. To enable prediction models to adapt to complex temporal environments, they need to be robust to emerging structural changes. We propose a dynamic network learning framework CoDCL, which combines counterfactual-inspired augmentation with contrastive learning to address this deficiency. Furthermore, we devise a comprehensive strategy to generate high-quality counterfactual data, combining a dynamic treatments design with efficient structural neighborhood exploration to quantify the temporal changes in interaction patterns. Crucially, the entire CoDCL is designed as a plug-and-play universal module that can be seamlessly integrated into various existing temporal graph models without requiring architectural modifications. Extensive experiments conducted on multiple real-world datasets demonstrate that CoDCL significantly outperforms state-of-the-art baselines in temporal link prediction, highlighting the effectiveness of integrating counterfactual-inspired data augmentation into dynamic representation learning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes CoDCL, a plug-and-play framework for temporal link prediction that integrates counterfactual-inspired data augmentation with contrastive learning. It devises a strategy combining dynamic treatments design with structural neighborhood exploration to generate counterfactual data quantifying temporal changes in interaction patterns, and claims this enables better adaptation to evolving network structures. Extensive experiments on multiple real-world datasets are reported to show significant outperformance over state-of-the-art baselines.
Significance. If the counterfactual augmentation strategy produces causally meaningful data without bias or artifacts, the work could meaningfully advance dynamic graph representation learning by improving robustness to structural evolution; the universal plug-and-play design would further increase its utility across existing temporal GNN architectures.
major comments (1)
- [Method (counterfactual generation strategy)] The central claim that the dynamic treatments design plus structural neighborhood exploration generates high-quality counterfactual data without bias or artifacts is load-bearing but unsupported: no explicit formulation of treatment assignment, no sensitivity analysis on neighborhood parameters, and no experiments on synthetic graphs with known ground-truth causality are described. This directly affects the validity of the reported performance gains on real-world datasets.
minor comments (1)
- [Abstract] The abstract would benefit from naming the specific real-world datasets and reporting quantitative improvement margins (e.g., AUC or MRR deltas) to allow immediate assessment of the claimed gains.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback on our manuscript. We address the major comment point by point below and outline the revisions we will make to strengthen the presentation of the counterfactual generation strategy.
read point-by-point responses
- Referee: [Method (counterfactual generation strategy)] The central claim that the dynamic treatments design plus structural neighborhood exploration generates high-quality counterfactual data without bias or artifacts is load-bearing but unsupported: no explicit formulation of treatment assignment, no sensitivity analysis on neighborhood parameters, and no experiments on synthetic graphs with known ground-truth causality are described. This directly affects the validity of the reported performance gains on real-world datasets.
Authors: We appreciate the referee highlighting the need for stronger support of the counterfactual generation claims. In the revised manuscript we will add an explicit mathematical formulation of the treatment assignment mechanism within the dynamic treatments design, including the precise criteria used to define interventions on interaction patterns. We will also include a dedicated sensitivity analysis varying the neighborhood exploration parameters (e.g., depth and size) and report their effects on both counterfactual quality metrics and downstream link prediction performance. Regarding synthetic graphs with known ground-truth causality, we agree this would provide valuable additional validation; we will generate and evaluate on controlled synthetic temporal networks in the revision to quantify bias and artifacts, thereby directly addressing the concern about the validity of gains on real-world data.
Revision: yes
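The promised sensitivity analysis could take the shape of a simple grid sweep over the neighborhood-exploration parameters. The parameter names (`depth`, `size`) and the grid values below are assumptions for illustration; the paper does not specify them.

```python
from itertools import product

def sensitivity_sweep(run_experiment, depths=(1, 2, 3), sizes=(10, 20, 50)):
    """Grid over hypothetical neighborhood-exploration parameters; returns
    {(depth, size): score} so parameter sensitivity can be tabulated."""
    return {(d, s): run_experiment(depth=d, size=s)
            for d, s in product(depths, sizes)}
```

Reporting the full grid (rather than a single best configuration) would show whether the gains are stable or hinge on one parameter setting.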
Circularity Check
No circularity in derivation chain
full rationale
The paper presents CoDCL as a plug-and-play module that combines counterfactual-inspired augmentation with contrastive learning, with the central claims resting on experimental results from independent real-world datasets. No equations, self-citations, or fitted parameters are shown to reduce, by construction, to the framework's own inputs; the counterfactual generation strategy is described as an independently devised method rather than a self-definitional or renamed result. The derivation chain remains self-contained and is validated against external benchmarks.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "We propose a dynamic network learning framework CoDCL, which combines counterfactual-inspired augmentation with contrastive learning... dynamic treatments design with efficient structural neighborhood exploration"
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Counterfactual contrastive loss employs an InfoNCE-based framework... L_total = α·L_f + (1−α)·L_c"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Anders Aamand, Justin Chen, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Nicholas Schiefer, Sandeep Silwal, and Tal Wagner. Exponentially improving the complexity of simulating the Weisfeiler-Lehman test with graph neural networks. Advances in Neural Information Processing Systems, 35:27333–27346, 2022.
- [2] Unai Alvarez-Rodriguez, Federico Battiston, Guilherme Ferraz de Arruda, Yamir Moreno, Matjaž Perc, and Vito Latora. Evolutionary dynamics of higher-order interactions in social networks. Nature Human Behaviour, 5(5):586–595, 2021.
- [3] Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, and Mehrdad Mahdavi. Do we really need complicated model architectures for temporal networks? In International Conference on Learning Representations, 2023.
- [4] Yuxin Dong, Jianhua Yao, Jiajing Wang, Yingbin Liang, Shuhan Liao, and Minheng Xiao. Dynamic fraud detection: Integrating reinforcement learning into graph neural networks. In 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), pages 818–823. IEEE, 2024.
- [5] Riccardo Guidotti. Counterfactual explanations and how to find them: literature review and benchmarking. Data Mining and Knowledge Discovery, 38(5):2770–2824, 2024.
- [6] Yinxuan Huang, Ke Liang, Yanyi Huang, Xiang Zeng, Kai Chen, and Bin Zhou. Social recommendation via graph-level counterfactual augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(1):334–342, 2025.
- [7] Ming Jin, Yuan-Fang Li, and Shirui Pan. Neural temporal walks: Motif-aware representation learning on continuous-time dynamic graphs. In NeurIPS, 2022.
- [8] Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation learning for dynamic graphs: A survey. J. Mach. Learn. Res., 21:70:1–70:73, 2020.
- [9] Eoin M Kenny and Mark T Keane. On generating plausible counterfactual and semi-factual explanations for deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11575–11585, 2021.
- [10] Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1269–, 2019.
- [11] Yuhong Luo and Pan Li. Neighborhood-aware scalable temporal network representation learning. In The First Learning on Graphs Conference, 2022.
- [12] Yao Ma, Ziyi Guo, Zhaochun Ren, Jiliang Tang, and Dawei Yin. Streaming graph neural networks. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 719–728. ACM, 2020.
- [13] Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, and Jundong Li. CLEAR: Generative counterfactual explanations on graphs. Advances in Neural Information Processing Systems, 35:25895–25907, 2022.
- [14] Thomas Melistas, Nikos Spyrou, Nefeli Gkouti, Pedro Sanchez, Athanasios Vlontzos, Yannis Panagakis, Giorgos Papanastasiou, and Sotirios A Tsaftaris. Benchmarking counterfactual image generation. Advances in Neural Information Processing Systems, 37:133207–133230, 2024.
- [15] Farimah Poursafaei, Andy Huang, Kellin Pelrine, and Reihaneh Rabbany. Towards better evaluation for dynamic link prediction. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.
- [16] Bardh Prenkaj, Mario Villaizán-Vallelado, Tobias Leemann, and Gjergji Kasneci. Unifying evolution, explanation, and discernment: A generative approach for dynamic graph counterfactuals. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2420–2431, 2024.
- [17] Zhan Qu, Daniel Gomm, and Michael Färber. CoDy: Counterfactual explainers for dynamic graphs. In Forty-second International Conference on Machine Learning, 2025.
- [18] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In ICML 2020 Workshop on Graph Representation Learning, 2020.
- [19] Donald B Rubin. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322–331, 2005.
- [20] Dylan Slack, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. Counterfactual explanations can be manipulated. Advances in Neural Information Processing Systems, 34:62–75, 2021.
- [21] Juntao Tan, Shijie Geng, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Yunqi Li, and Yongfeng Zhang. Learning and evaluating graph neural network explanations based on counterfactual and factual reasoning. In Proceedings of the ACM Web Conference 2022, pages 1018–1027, 2022.
- [22] Yuxing Tian, Yiyan Qi, and Fan Guo. FreeDyG: Frequency enhanced continuous-time dynamic graph model for link prediction. In The Twelfth International Conference on Learning Representations, 2024.
- [23] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. DyRep: Learning representations over dynamic graphs. In 7th International Conference on Learning Representations. OpenReview.net, 2019.
- [24] Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, and Hongxia Yang. TCL: Transformer-based dynamic graph modelling via contrastive learning. CoRR, abs/2105.07944, 2021.
- [25] Zhe Wang, Sheng Zhou, Jiawei Chen, Zhen Zhang, Binbin Hu, Yan Feng, Chun Chen, and Can Wang. Dynamic graph transformer with correlated spatial-temporal positional encoding. In Proceedings of the Eighteenth ACM International Conference on Web Search and Data Mining, pages 60–69, 2025.
- [26] Chunjing Xiao, Shikang Pang, Xovee Xu, Xuan Li, Goce Trajcevski, and Fan Zhou. Counterfactual data augmentation with denoising diffusion for graph anomaly detection. IEEE Transactions on Computational Social Systems, 11(6):7555–7567, 2024.
- [27] Zhouhang Xie, Sameer Singh, Julian McAuley, and Bodhisattwa Prasad Majumder. Factual and informative review generation for explainable recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 13816–13824, 2023.
- [28] Da Xu, Chuanwei Ruan, Evren Körpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. In 8th International Conference on Learning Representations. OpenReview.net, 2020.
- [29] Le Yu, Leilei Sun, Bowen Du, and Weifeng Lv. Towards better dynamic graph learning: New architecture and unified library. Advances in Neural Information Processing Systems, 36:67686–67700, 2023.
- [30] Muhan Zhang and Yixin Chen. Inductive matrix completion based on graph neural networks. In ICLR, 2020.
- [31] Guozhen Zhang, Tian Ye, Depeng Jin, and Yong Li. An attentional multi-scale co-evolving model for dynamic link prediction. In Proceedings of the ACM Web Conference 2023, pages 429–437, 2023.
- [32] Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. Learning from counterfactual links for link prediction. In International Conference on Machine Learning, pages 26911–26926. PMLR, 2022.
- [33] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal A. C. Xhonneux, and Jian Tang. Neural Bellman-Ford networks: A general graph neural network framework for link prediction. In NeurIPS, pages 29476–29490, 2021.