Pith · machine review for the scientific record

arxiv: 2601.22427 · v2 · submitted 2026-01-30 · 💻 cs.LG · cs.AI

Recognition: 2 theorem links · Lean Theorem

CoDCL: Counterfactual-Inspired Augmentation Contrastive Learning for Temporal Link Prediction in Social Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 09:45 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords temporal link prediction · counterfactual augmentation · contrastive learning · dynamic networks · data augmentation · social networks · graph representation learning

The pith

CoDCL adds counterfactual data augmentation to contrastive learning to improve temporal link prediction in evolving networks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces CoDCL, a framework that generates counterfactual examples of link formation so that models learn representations robust to changing network structure. It combines this augmentation with contrastive learning, and designs a generation strategy built on dynamic treatments and structural neighborhood exploration. The module is designed to plug into existing temporal graph models without architectural changes. A reader would care because social networks evolve rapidly, and better predictions could support more reliable recommendations and trend analysis over time.

Core claim

CoDCL is a dynamic network learning framework that integrates counterfactual-inspired augmentation with contrastive learning. It generates high-quality counterfactual data by combining dynamic treatments design with efficient structural neighborhood exploration to quantify temporal changes in interaction patterns, enabling adaptation to complex evolving structures as a plug-and-play module for existing temporal graph models.

What carries the argument

Counterfactual-inspired augmentation strategy using dynamic treatments design and structural neighborhood exploration to create augmented data for contrastive learning in temporal graphs.
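The mechanism pairs each node's temporal embedding with a counterfactually augmented view and pulls the pair together under a contrastive objective. The paper's own loss is not reproduced here; a standard InfoNCE formulation over two views is a minimal sketch of how such an objective is typically wired up, with `info_nce_loss` and all shapes assumed for illustration rather than taken from the paper:

```python
import numpy as np

def info_nce_loss(z_orig, z_aug, temperature=0.5):
    """InfoNCE contrastive loss between original and augmented node embeddings.

    z_orig, z_aug: (n_nodes, dim) arrays; row i of each view is a positive pair,
    and every other cross-view row serves as a negative.
    """
    # L2-normalise so inner products are cosine similarities
    z_orig = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    sim = z_orig @ z_aug.T / temperature           # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positive pairs sit on the diagonal
```

Under this kind of loss, an augmented view that preserves a node's identity lowers the loss relative to an unrelated view, which is exactly what a well-behaved counterfactual augmentation would need to do.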

If this is right

  • Existing temporal graph models gain improved adaptation to emerging structural changes without any architectural modifications.
  • Prediction performance rises on multiple real-world social network datasets compared to current baselines.
  • Models become more robust to complex temporal environments by learning from quantified interaction pattern changes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The plug-and-play design suggests the same augmentation approach could transfer to other dynamic graph tasks like node classification or community detection.
  • Causal augmentation ideas may apply beyond social networks to domains such as traffic flow or financial transaction graphs.
  • Controlled tests on synthetic networks with known temporal shifts could isolate whether the augmentation truly captures causal mechanisms.
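The controlled-test idea in the last bullet only needs a timestamped edge stream whose interaction pattern shifts at a known time, so the "temporal change" the augmentation is supposed to quantify has a ground truth. A minimal generator, with every name and regime parameter invented for illustration, could look like:

```python
import numpy as np

def synthetic_shift_network(n_nodes=20, n_events=500, t_shift=250, seed=0):
    """Timestamped edge list whose community structure flips at t_shift.

    Before the shift, edges fall mostly within two planted communities;
    after it, mostly across them -- so the temporal change in interaction
    patterns is known exactly and can serve as ground truth.
    """
    rng = np.random.default_rng(seed)
    half = n_nodes // 2
    events = []
    for t in range(n_events):
        within = (t < t_shift)           # regime flips at t_shift
        if rng.random() < 0.1:           # 10% noise edges either way
            within = not within
        if within:
            block = rng.integers(2)
            u, v = rng.choice(half, size=2, replace=False) + block * half
        else:
            u = rng.integers(half)
            v = rng.integers(half, n_nodes)
        events.append((int(u), int(v), t))
    return events
```

A model (or augmentation) that truly captures the causal mechanism should detect the flip at `t_shift`; one that merely memorises pre-shift structure should not.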

Load-bearing premise

The strategy combining dynamic treatments with structural neighborhood exploration generates high-quality counterfactual data that accurately quantifies temporal changes without introducing bias or artifacts.

What would settle it

An experiment that swaps the counterfactual augmentation for random perturbations and finds no remaining performance gains on the same datasets would indicate the specific counterfactual design is not the driver of improvement.
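Such an ablation needs a causally blind control condition to swap in. A sketch of that random-perturbation baseline — `random_perturbation` and its drop/add rates are our assumptions, not the paper's procedure — might be:

```python
import random

def random_perturbation(edges, n_nodes, drop_p=0.1, add_p=0.1, seed=0):
    """Control augmentation for the ablation: randomly drop existing
    timestamped edges and add spurious ones, with no causal logic at all.

    edges: list of (u, v, t) tuples. Returns a perturbed copy.
    """
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() > drop_p]   # random drops
    n_add = int(len(edges) * add_p)
    for _ in range(n_add):                               # random spurious edges
        u, v = rng.sample(range(n_nodes), 2)
        t = rng.choice(edges)[2]                         # reuse an observed timestamp
        kept.append((u, v, t))
    return kept
```

If plugging this in place of the counterfactual generator leaves the reported gains intact, the specific counterfactual design is doing little work; if the gains vanish, the design is carrying the result.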

Figures

Figures reproduced from arXiv: 2601.22427 by Duxin Chen, Hantong Feng, Wenwu Yu.

Figure 1: A simple toy example of counterfactual-enhanced link prediction. [PITH_FULL_IMAGE:figures/full_fig_p002_1.png]
Figure 2: The framework of the proposed counterfactual data augmentation dynamic network contrastive learning. [PITH_FULL_IMAGE:figures/full_fig_p004_2.png]
Figure 3: Ablation study results under both transductive and inductive settings. [PITH_FULL_IMAGE:figures/full_fig_p005_3.png]
Figure 4: Results of hyper-parameter sensitivity study in dynamic networks. [PITH_FULL_IMAGE:figures/full_fig_p007_4.png]
Figure 5: Results of hyper-parameter sensitivity study in dynamic networks. [PITH_FULL_IMAGE:figures/full_fig_p007_5.png]
Original abstract

Temporal link prediction is crucial for rapidly growing social networks. Existing methods often overlook the underlying causal mechanisms that drive link formation, making it difficult for algorithms to adapt to complex structures that continuously evolve over time. To enable prediction models to adapt to complex temporal environments, they need to be robust to emerging structural changes. We propose a dynamic network learning framework CoDCL, which combines counterfactual-inspired augmentation with contrastive learning to address this deficiency. Furthermore, we devise a comprehensive strategy to generate high-quality counterfactual data, combining a dynamic treatments design with efficient structural neighborhood exploration to quantify the temporal changes in interaction patterns. Crucially, the entire CoDCL is designed as a plug-and-play universal module that can be seamlessly integrated into various existing temporal graph models without requiring architectural modifications. Extensive experiments conducted on multiple real-world datasets demonstrate that CoDCL significantly outperforms state-of-the-art baselines in temporal link prediction, highlighting the effectiveness of integrating counterfactual-inspired data augmentation into dynamic representation learning.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper proposes CoDCL, a plug-and-play framework for temporal link prediction that integrates counterfactual-inspired data augmentation with contrastive learning. It devises a strategy combining dynamic treatments design with structural neighborhood exploration to generate counterfactual data quantifying temporal changes in interaction patterns, and claims this enables better adaptation to evolving network structures. Extensive experiments on multiple real-world datasets are reported to show significant outperformance over state-of-the-art baselines.

Significance. If the counterfactual augmentation strategy produces causally meaningful data without bias or artifacts, the work could meaningfully advance dynamic graph representation learning by improving robustness to structural evolution; the universal plug-and-play design would further increase its utility across existing temporal GNN architectures.

major comments (1)
  1. [Method (counterfactual generation strategy)] The central claim that the dynamic treatments design plus structural neighborhood exploration generates high-quality counterfactual data without bias or artifacts is load-bearing but unsupported: no explicit formulation of treatment assignment, no sensitivity analysis on neighborhood parameters, and no experiments on synthetic graphs with known ground-truth causality are described. This directly affects the validity of the reported performance gains on real-world datasets.
minor comments (1)
  1. [Abstract] The abstract would benefit from naming the specific real-world datasets and reporting quantitative improvement margins (e.g., AUC or MRR deltas) to allow immediate assessment of the claimed gains.
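For reference, the two metrics the referee asks the authors to report are straightforward to compute from model scores. A minimal, library-free sketch (function names and the per-positive negative-sampling convention are ours, not the paper's):

```python
def mrr(pos_scores, neg_scores_per_pos):
    """Mean reciprocal rank for link prediction: each positive edge's score is
    ranked against its own set of sampled negative scores (rank 1 = best)."""
    total = 0.0
    for pos, negs in zip(pos_scores, neg_scores_per_pos):
        rank = 1 + sum(1 for n in negs if n > pos)  # ties favour the positive
        total += 1.0 / rank
    return total / len(pos_scores)

def auc(pos_scores, neg_scores):
    """Probability a random positive outranks a random negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```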

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. We address the major comment point by point below and outline the revisions we will make to strengthen the presentation of the counterfactual generation strategy.

Point-by-point responses
  1. Referee: [Method (counterfactual generation strategy)] The central claim that the dynamic treatments design plus structural neighborhood exploration generates high-quality counterfactual data without bias or artifacts is load-bearing but unsupported: no explicit formulation of treatment assignment, no sensitivity analysis on neighborhood parameters, and no experiments on synthetic graphs with known ground-truth causality are described. This directly affects the validity of the reported performance gains on real-world datasets.

    Authors: We appreciate the referee highlighting the need for stronger support of the counterfactual generation claims. In the revised manuscript we will add an explicit mathematical formulation of the treatment assignment mechanism within the dynamic treatments design, including the precise criteria used to define interventions on interaction patterns. We will also include a dedicated sensitivity analysis varying the neighborhood exploration parameters (e.g., depth and size) and report their effects on both counterfactual quality metrics and downstream link prediction performance. Regarding synthetic graphs with known ground-truth causality, we agree this would provide valuable additional validation; we will generate and evaluate on controlled synthetic temporal networks in the revision to quantify bias and artifacts, thereby directly addressing the concern about the validity of gains on real-world data. revision: yes
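The sensitivity analysis promised in the rebuttal amounts to a grid sweep over the neighborhood exploration parameters. A skeleton, with `evaluate` standing in for the authors' unpublished train-and-score loop (an assumption, not the paper's API):

```python
from itertools import product

def sensitivity_grid(depths, sizes, evaluate):
    """Sweep neighborhood exploration depth and size, recording a downstream
    metric (e.g. AUC or MRR) for each (depth, size) pair."""
    results = {}
    for depth, size in product(depths, sizes):
        results[(depth, size)] = evaluate(depth=depth, size=size)
    return results
```

Reporting the resulting grid would directly address the referee's concern that the neighborhood parameters are untested.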

Circularity Check

0 steps flagged

No circularity in derivation chain

full rationale

The paper presents CoDCL as a plug-and-play module that combines counterfactual-inspired augmentation with contrastive learning, with the central claims resting on experimental results from independent real-world datasets. No equations, self-citations, or fitted parameters are shown to reduce by construction to the inputs; the counterfactual generation strategy is described as an independently devised method rather than a self-definitional or renamed result. The derivation chain remains self-contained and is checked against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This abstract-only review surfaces no explicit free parameters, axioms, or invented entities; the counterfactual generation is described at a conceptual level, without detailing any fitted quantities or background assumptions.

pith-pipeline@v0.9.0 · 5473 in / 1038 out tokens · 22069 ms · 2026-05-16T09:45:56.578578+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

