pith. machine review for the scientific record.

arxiv: 2604.07361 · v2 · submitted 2026-04-01 · 💻 cs.LG

Recognition: 3 theorem links · Lean Theorem

BLEG: LLM Functions as Powerful fMRI Graph-Enhancer for Brain Network Analysis

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 23:16 UTC · model grok-4.3

classification 💻 cs.LG
keywords BLEG · LLM · GNN · fMRI · brain networks · instruction tuning · graph enhancement · alignment loss

The pith

An LLM can enhance GNN performance on fMRI brain network tasks by generating and aligning augmented text representations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper introduces BLEG, a method that uses a large language model to improve graph neural networks for analyzing brain networks derived from fMRI data. GNNs alone face challenges from sparse features and limited domain knowledge in these neurographs. The approach prompts the LLM to create augmented texts from the graph data, applies instruction tuning to develop enhanced textual representations at lower cost, aligns them with the GNN through coarsened training and logit matching, and then finetunes an adapter for specific tasks. If effective, this shows how LLMs' generalization abilities can be leveraged to overcome uni-modal limitations in brain imaging analysis without full model retraining.

Core claim

BLEG divides the process into three stages: first, the LLM is prompted to generate augmented texts for the fMRI graph data; second, LLM-LM instruction tuning produces enhanced textual representations while the GNN is trained for coarsened alignment; finally, an adapter is finetuned for downstream tasks, with an alignment loss between the LM and GNN logits to further boost the representations.
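
A minimal sketch of one plausible reading of that alignment term, assuming a distillation-style KL divergence between temperature-softened LM and GNN class logits. The abstract does not give the loss's functional form, so the temperature `tau`, weight `lam`, and KL direction below are illustrative assumptions rather than reported details.

```python
# Hedged sketch: a logit-matching alignment loss between an LM and a GNN,
# read as knowledge distillation. The exact form BLEG uses is not stated in
# the abstract; tau (temperature) and lam (loss weight) are assumed values.
import torch
import torch.nn.functional as F

def alignment_loss(gnn_logits: torch.Tensor,
                   lm_logits: torch.Tensor,
                   tau: float = 2.0) -> torch.Tensor:
    """KL between temperature-softened LM (target) and GNN (student) distributions."""
    p_lm = F.softmax(lm_logits / tau, dim=-1)            # soft targets from the LM head
    log_p_gnn = F.log_softmax(gnn_logits / tau, dim=-1)  # GNN predictions in log space
    return F.kl_div(log_p_gnn, p_lm, reduction="batchmean") * tau ** 2

def total_loss(gnn_logits, lm_logits, labels, lam: float = 0.5):
    """Task cross-entropy plus the weighted alignment term (weighting scheme assumed)."""
    return F.cross_entropy(gnn_logits, labels) + lam * alignment_loss(gnn_logits, lm_logits)
```

Whether BLEG uses KL, a mean-squared error on logits, or a contrastive objective, and how the coarsened alignment of stage two differs from this stage-three term, would need to be checked against the released code.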

What carries the argument

The BLEG three-stage pipeline that uses LLM prompting for text augmentation from fMRI graphs, instruction tuning for alignment, and logit matching to enhance GNN representations.

If this is right

  • GNNs gain improved ability to handle sparse fMRI features through LLM-derived knowledge.
  • Downstream brain network analysis tasks achieve higher performance across datasets.
  • LLM enhancement occurs at relatively lower computational cost than directly tuning the LLM (a parameter-efficient tuning sketch follows this list).
  • Alignment via logit matching ensures consistent representations between text and graph modalities.
  • The method generalizes to various brain network datasets.
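
On the cost point above: the paper cites LoRA [14], and one common way to realize "relatively lower cost" is to instruction-tune a small LM with low-rank adapters while the LLM is only prompted, never updated. A minimal sketch with the `peft` library follows; the BERT-class backbone, rank, and target modules are placeholders, not BLEG's reported configuration.

```python
# Illustrative only: parameter-efficient tuning of a small LM with LoRA adapters.
# Whether BLEG uses peft, this backbone, or these hyperparameters is an assumption.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bert-base-uncased"  # placeholder backbone, not the paper's choice
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

lora_cfg = LoraConfig(
    r=8,                                # low-rank dimension (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # attention projections in BERT-style models
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()      # only adapter weights are trainable
```

Trained this way, only the adapter parameters are updated on the (augmented text, label) pairs, which is one concrete reading of the abstract's lower-cost claim.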

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the alignment works, similar LLM enhancers could apply to other graph-based scientific data like molecular structures.
  • Testing the method on clinical datasets with patient outcomes could reveal practical diagnostic benefits.
  • Future work might explore direct integration without the adapter stage for even tighter coupling.

Load-bearing premise

That the augmented texts generated by prompting an LLM from fMRI graphs produce representations that, when aligned with GNNs via instruction tuning and logit matching, meaningfully improve performance on brain network tasks.

What would settle it

Running experiments where the LLM-generated texts are replaced with generic or unrelated text and checking whether the performance gains on downstream tasks vanish compared to the original BLEG method.
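
A minimal harness for that control, assuming a hypothetical `train_and_eval` function that wraps whatever training loop the released code provides; nothing below is from the paper.

```python
# Sketch of the proposed ablation: swap the LLM-generated augmented texts for
# shuffled or generic texts and see whether the downstream gains survive.
# `train_and_eval(graphs, texts, labels)` is a hypothetical scoring harness.
import random

def run_text_ablation(graphs, llm_texts, labels, train_and_eval, seed=0):
    rng = random.Random(seed)
    conditions = {
        "llm_augmented": list(llm_texts),                              # original BLEG texts
        "shuffled": rng.sample(list(llm_texts), len(llm_texts)),       # breaks graph-text pairing
        "generic": ["This is an fMRI brain graph."] * len(llm_texts),  # content-free control
    }
    return {name: train_and_eval(graphs, texts, labels)                # e.g. mean AUC over folds
            for name, texts in conditions.items()}
```

If the score for the LLM-augmented condition is not clearly above the shuffled and generic controls, the gains look more like generic regularization than transferred graph-specific knowledge.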

Figures

Figures reproduced from arXiv: 2604.07361 by Jiaxing Li, Rui Dong, Weihuang Zheng, Youyong Kong, Zitong Wang.

Figure 1
Figure 1. An illustration of our method: GNN-based methods have limited performance, LLM methods require great training cost, and our method aims to enhance the GNN's performance with much less training cost. view at source ↗
Figure 2
Figure 2. The overall framework of BLEG. (1) We prompt LLM to generate augmented text data for input graph. (2) … view at source ↗
Figure 4
Figure 4. Few shot results on different ratios. view at source ↗
Figure 3
Figure 3. (a)–(b) k-shot experiments on different datasets. (c) Ablation studies on different datasets. (d) Biomarker visualizations on ABIDE, (e) ADHD and (f) zhongdaxinxiang datasets. view at source ↗
Figure 5
Figure 5. Text generation for ZDXX dataset from LM. view at source ↗
Figure 6
Figure 6. Prompt design for a given FC graph (prompt components P_i^G, P_i^D, P_i^Q); the panel includes an example JSON "analysis" of an ABIDE graph under the AAL template. view at source ↗
Figure 7
Figure 7. An example prompt response from the LLM for the ABIDE dataset. view at source ↗
Figure 8
Figure 8. Preprocessing of fMRI data and construction of the FC dataset. view at source ↗
read the original abstract

Graph Neural Networks (GNNs) have been widely used in diverse brain network analysis tasks based on preprocessed functional magnetic resonance imaging (fMRI) data. However, their performances are constrained due to high feature sparsity and inherent limitations of domain knowledge within uni-modal neurographs. Meanwhile, large language models (LLMs) have demonstrated powerful representation capabilities. Combining LLMs with GNNs presents a promising direction for brain network analysis. While LLMs and MLLMs have emerged in neuroscience, integration of LLMs with graph-based data remains unexplored. In this work, we deal with these issues by incorporating LLM's powerful representation and generalization capabilities. Considering great cost for directly tuning LLMs, we instead function LLM as enhancer to boost GNN's performance on downstream tasks. Our method, namely BLEG, can be divided into three stages. We firstly prompt LLM to get augmented texts for fMRI graph data, then we design a LLM-LM instruction tuning method to get enhanced textual representations at a relatively lower cost. GNN is trained together for coarsened alignment. Finally we finetune an adapter after GNN for given downstream tasks. Alignment loss between LM and GNN logits is designed to further enhance GNN's representation. Extensive experiments on different datasets confirmed BLEG's superiority. Code can be available at https://github.com/KamonRiderDR/BLEG.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes BLEG, a three-stage method to enhance GNNs for fMRI brain network analysis by using LLMs as enhancers: (1) prompting an LLM to generate augmented texts from fMRI graphs, (2) LLM-LM instruction tuning to obtain enhanced textual representations while jointly training the GNN with coarsened alignment, and (3) adapter finetuning on the GNN for downstream tasks with an added alignment loss between LM and GNN logits. The central claim is that this approach overcomes GNN limitations from feature sparsity and limited domain knowledge, with superiority confirmed by extensive experiments on multiple datasets.

Significance. If the experimental results hold, the work could meaningfully advance multimodal integration of LLMs with GNNs in neuroscience by leveraging LLM generalization to augment sparse neurographs. The explicit code repository link (https://github.com/KamonRiderDR/BLEG) is a strength that supports reproducibility.

major comments (2)
  1. [Abstract] The assertion that 'Extensive experiments on different datasets confirmed BLEG's superiority' is unsupported by any quantitative results, baseline comparisons, dataset details, performance metrics, or ablation studies, yet it is load-bearing for the central claim.
  2. [Method] The graph-to-text serialization step is underspecified (no example prompts, node/edge encoding details, or serialization procedure), so it is impossible to verify whether the LLM augmentation transfers graph-specific structural signal or merely adds generic LLM priors; this directly affects the validity of the alignment losses and downstream gains.
minor comments (1)
  1. [Abstract] The phrase 'LLM-LM instruction tuning' is introduced without a brief definition or reference to the specific tuning objective.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive comments. We address each major point below and will revise the manuscript to improve clarity and substantiation of claims.

read point-by-point responses
  1. Referee: [Abstract] The assertion that 'Extensive experiments on different datasets confirmed BLEG's superiority' is unsupported by any quantitative results, baseline comparisons, dataset details, performance metrics, or ablation studies, yet it is load-bearing for the central claim.

    Authors: We agree that the abstract would be strengthened by including specific quantitative highlights. In the revised version, we will update the abstract to briefly report key metrics (e.g., accuracy or AUC improvements over baselines on the primary datasets) while retaining the overall length constraints. This will directly support the superiority claim with concrete evidence from the experiments section. revision: yes

  2. Referee: [Method] The graph-to-text serialization step is underspecified (no example prompts, node/edge encoding details, or serialization procedure), so it is impossible to verify whether the LLM augmentation transfers graph-specific structural signal or merely adds generic LLM priors; this directly affects the validity of the alignment losses and downstream gains.

    Authors: We acknowledge that the current description of the graph-to-text serialization is insufficient for full reproducibility and verification of structural signal preservation. In the revised manuscript, we will expand Section 3.1 to include: (1) the exact prompt templates used for LLM augmentation, (2) detailed node feature and edge encoding procedures (including how fMRI connectivity values are serialized), and (3) the step-by-step serialization algorithm. These additions will clarify how graph structure is conveyed to the LLM and support the rationale for the subsequent alignment losses. The linked code repository already implements these steps and can be referenced in the revision. revision: yes
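
To make the serialization question concrete: one plausible, entirely hypothetical graph-to-text scheme keeps only the top-k strongest functional connections between named AAL regions and wraps them in a question prompt. The actual BLEG templates, thresholds, and wording live in the authors' repository; the function below is a sketch, not their method.

```python
# Hypothetical graph-to-text serialization for an fMRI functional-connectivity
# (FC) matrix: keep the k strongest region-pair connections and render them as
# text for an LLM prompt. Region names, k, and the wording are assumptions.
import numpy as np

def serialize_fc_graph(fc: np.ndarray, region_names, k: int = 10) -> str:
    """fc: symmetric (N, N) FC matrix; region_names: length-N AAL region labels."""
    iu = np.triu_indices_from(fc, k=1)       # indices of unique region pairs
    strengths = fc[iu]
    top = np.argsort(strengths)[::-1][:k]    # k strongest edges
    lines = [
        f"{region_names[iu[0][j]]} -- {region_names[iu[1][j]]} (strength {strengths[j]:.2f})"
        for j in top
    ]
    return ("The strongest functional connections in this brain network are:\n"
            + "\n".join(lines)
            + "\nDescribe the connectivity pattern and any clinically relevant structure.")
```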

Circularity Check

0 steps flagged

No significant circularity; empirical method with no self-referential derivations

full rationale

The paper describes a three-stage empirical pipeline (LLM prompting for text augmentation from fMRI graphs, instruction tuning with coarsened alignment, and adapter fine-tuning with logit-matching loss) but presents no equations, uniqueness theorems, or derivations. Claims of superiority rest on experimental results rather than any mathematical reduction to fitted parameters or self-citations. No load-bearing self-citation chains, ansatz smuggling, or renaming of known results appear in the provided text. The method is self-contained against external benchmarks and does not reduce its central improvement claim to a construction by definition.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract provides no explicit free parameters, axioms, or invented entities beyond standard LLM and GNN components; any hyperparameters such as alignment loss weights are not detailed.

pith-pipeline@v0.9.0 · 5559 in / 1079 out tokens · 33569 ms · 2026-05-13T23:16:04.571235+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 4 internal anchors

  1. [1]

    GPT-4 Technical Report

    Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

  2. [2]

    MedBLIP: Bootstrapping language-image pre-training from 3D medical images and texts

    Qiuhui Chen and Yi Hong. MedBLIP: Bootstrapping language-image pre-training from 3D medical images and texts. In Proceedings of the Asian Conference on Computer Vision, pages 2404–2420, 2024.

  3. [3]

    Computational approaches to fMRI analysis

    Jonathan D Cohen, Nathaniel Daw, Barbara Engelhardt, Uri Hasson, Kai Li, Yael Niv, Kenneth A Norman, Jonathan Pillow, Peter J Ramadge, Nicholas B Turk-Browne, et al. Computational approaches to fMRI analysis. Nature Neuroscience, 20(3):304–313, 2017.

  4. [4]

    The ADHD-200 consortium: a model to advance the translational potential of neuroimaging in clinical neuroscience

    ADHD-200 Consortium. The ADHD-200 consortium: a model to advance the translational potential of neuroimaging in clinical neuroscience. Frontiers in Systems Neuroscience, 6:62, 2012.

  5. [5]

    Interpretable graph neural networks for connectome-based brain disorder analysis

    Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, and Carl Yang. Interpretable graph neural networks for connectome-based brain disorder analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 375–385. Springer, 2022.

  6. [6]

    Major depressive disorder: hypothesis, mechanism, prevention and treatment

    Lulu Cui, Shu Li, Siman Wang, Xiafang Wu, Yingyu Liu, Weiyang Yu, Yijun Wang, Yong Tang, Maosheng Xia, and Baoman Li. Major depressive disorder: hypothesis, mechanism, prevention and treatment. Signal Transduction and Targeted Therapy, 9(1):30, 2024.

  7. [7]

    The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism

    Adriana Di Martino, Chao-Gan Yan, Qingyang Li, Erin Denio, Francisco X Castellanos, Kaat Alaerts, Jeffrey S Anderson, Michal Assaf, Susan Y Bookheimer, Mirella Dapretto, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry, 19(6):659–667, 2014.

  8. [8]

    Functional connectivity signatures of major depressive disorder: machine learning analysis of two multicenter neuroimaging studies

    Selene Gallo, Ahmed El-Gazzar, Paul Zhutovsky, Rajat M Thomas, Nooshin Javaheripour, Meng Li, Lucie Bartova, Deepti Bathula, Udo Dannlowski, Christopher Davey, et al. Functional connectivity signatures of major depressive disorder: machine learning analysis of two multicenter neuroimaging studies. Molecular Psychiatry, 28(7):3013–3022.

  9. [9]

    The Llama 3 Herd of Models

    Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

  10. [10]

    Inductive representation learning on large graphs

    Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pages 1024–1034, 2017.

  11. [11]

    Intrinsic hippocampal functional connectivity underlying rigid memory in children and adolescents with autism spectrum disorder: A case–control study

    Teruo Hashimoto, Susumu Yokota, Yutaka Matsuzaki, and Ryuta Kawashima. Intrinsic hippocampal functional connectivity underlying rigid memory in children and adolescents with autism spectrum disorder: A case–control study. Autism, 25(7):1901–1912, 2021.

  12. [12]

    Harnessing explanations: LLM-to-LM interpreter for enhanced text-attributed graph representation learning

    Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, and Bryan Hooi. Harnessing explanations: LLM-to-LM interpreter for enhanced text-attributed graph representation learning. arXiv preprint arXiv:2305.19523, 2023.

  13. [13]

    Autism spectrum disorder: a review

    Tomoya Hirota and Bryan H King. Autism spectrum disorder: a review. JAMA, 329(2):157–168.

  14. [14]

    LoRA: Low-rank adaptation of large language models

    Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models, 2021.

  15. [15]

    BrainNPT: Pre-training transformer networks for brain network classification

    Jinlong Hu, Yangmin Huang, Nan Wang, and Shoubin Dong. BrainNPT: Pre-training transformer networks for brain network classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2024.

  16. [16]

    Med-MoE: Mixture of domain-specific experts for lightweight medical vision-language models

    Songtao Jiang, Tuo Zheng, Yan Zhang, Yeying Jin, Li Yuan, and Zuozhu Liu. Med-MoE: Mixture of domain-specific experts for lightweight medical vision-language models. arXiv preprint arXiv:2404.10237, 2024.

  17. [17]

    Semi-supervised classification with graph convolutional networks

    Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.

  18. [18]

    BioBERT: a pre-trained biomedical language representation model for biomedical text mining

    Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240, 2020.

  19. [19]

    LLaVA-Med: Training a large language-and-vision assistant for biomedicine in one day

    Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. LLaVA-Med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems, 36:28541–28564.

  20. [20]

    BrainGNN: Interpretable brain graph neural network for fMRI analysis

    Xiaoxiao Li, Yuan Zhou, Nicha Dvornek, Muhan Zhang, Siyuan Gao, Juntang Zhuang, Dustin Scheinost, Lawrence H Staib, Pamela Ventola, and James S Duncan. BrainGNN: Interpretable brain graph neural network for fMRI analysis. Medical Image Analysis, 74:102233, 2021.

  21. [21]

    DeepSeek-V3 Technical Report

    Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.

  22. [22]

    Visual instruction tuning

    Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36:34892–34916, 2023.

  23. [23]

    One for all: Towards training one graph model for all classification tasks

    Hao Liu, Jiarui Feng, Lecheng Kong, Ningyue Liang, Dacheng Tao, Yixin Chen, and Muhan Zhang. One for all: Towards training one graph model for all classification tasks, 2024.

  24. [24]

    Altered resting-state dynamic functional brain networks in major depressive disorder: Findings from the REST-meta-MDD consortium

    Yicheng Long, Hengyi Cao, Chaogan Yan, Xiao Chen, Le Li, Francisco Xavier Castellanos, Tongjian Bai, Qijing Bo, Guanmao Chen, Ningxuan Chen, et al. Altered resting-state dynamic functional brain networks in major depressive disorder: Findings from the REST-meta-MDD consortium. NeuroImage: Clinical, 26:102163, 2020.

  25. [25]

    BioGPT: generative pre-trained transformer for biomedical text generation and mining

    Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6):bbac409, 2022.

  26. [26]

    Systematic evaluation of fMRI data-processing pipelines for consistent functional connectomics

    Andrea I Luppi, Helena M Gellersen, Zhen-Qi Liu, Alexander RD Peattie, Anne E Manktelow, Ram Adapa, Adrian M Owen, Lorina Naci, David K Menon, Stavros I Dimitriadis, et al. Systematic evaluation of fMRI data-processing pipelines for consistent functional connectomics. Nature Communications, 15(1):4745, 2024.

  27. [27]

    The default mode network in autism

    Aarthi Padmanabhan, Charles J Lynch, Marie Schaer, and Vinod Menon. The default mode network in autism. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 2(6):476–486, 2017.

  28. [28]

    Neuroimaging insights into autism spectrum disorder: Structural and functional brain

    Mahie Patil, Nofel Iftikhar, and Latha Ganti. Neuroimaging insights into autism spectrum disorder: Structural and functional brain. Health Psychology Research, 12:123439, 2024.

  29. [29]

    MMGPL: Multimodal medical data analysis with graph prompt learning

    Liang Peng, Songyue Cai, Zongqian Wu, Huifang Shang, Xiaofeng Zhu, and Xiaoxiao Li. MMGPL: Multimodal medical data analysis with graph prompt learning. Medical Image Analysis, 97:103225, 2024.

  30. [30]

    A multimetric systematic review of fMRI findings in patients with MDD receiving ECT

    Daniel Porta-Casteràs, Marta Cano, Joan A Camprodon, Colleen Loo, Diego Palao, Carles Soriano-Mas, and Narcís Cardoner. A multimetric systematic review of fMRI findings in patients with MDD receiving ECT. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 108:110178, 2021.

  31. [31]

    Capabilities of Gemini Models in Medicine

    Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, et al. Capabilities of Gemini models in medicine. arXiv preprint arXiv:2404.18416, 2024.

  32. [32]

    REST: a toolkit for resting-state functional magnetic resonance imaging data processing

    Xiao-Wei Song, Zhang-Ye Dong, Xiang-Yu Long, Su-Fang Li, Xi-Nian Zuo, Chao-Zhe Zhu, Yong He, Chao-Gan Yan, and Yu-Feng Zang. REST: a toolkit for resting-state functional magnetic resonance imaging data processing. PLoS ONE, 6(9):e25031, 2011.

  33. [33]

    GraphGPT: Graph instruction tuning for large language models

    Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, and Chao Huang. GraphGPT: Graph instruction tuning for large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 491–500, 2024.

  34. [34]

    QwQ-32B: Embracing the power of reinforcement learning

    Qwen Team. QwQ-32B: Embracing the power of reinforcement learning, 2025. Accessed: 2025-09-

  35. [35]

    Constructing high-order functional connectivity networks with temporal information from fMRI data

    Yingzhi Teng, Kai Wu, Jing Liu, Yifan Li, and Xiangyi Teng. Constructing high-order functional connectivity networks with temporal information from fMRI data. IEEE Transactions on Medical Imaging, 2024.

  36. [36]

    The WU-Minn Human Connectome Project: an overview

    David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, WU-Minn HCP Consortium, et al. The WU-Minn Human Connectome Project: an overview. NeuroImage, 80:62–79, 2013.

  37. [37]

    Graph attention networks

    Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations (ICLR).

  38. [38]

    BrainBERT: Self-supervised representation learning for intracranial recordings

    Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, and Andrei Barbu. BrainBERT: Self-supervised representation learning for intracranial recordings. arXiv preprint arXiv:2302.14367.

  39. [39]

    Four social brain regions, their dysfunctions, and sequelae, extensively explain autism spectrum disorder symptomatology

    Charles SE Weston. Four social brain regions, their dysfunctions, and sequelae, extensively explain autism spectrum disorder symptomatology. Brain Sciences, 9(6):130, 2019.

  40. [40]

    Metrics for graph comparison: a practitioner's guide

    Peter Wills and François G Meyer. Metrics for graph comparison: a practitioner's guide. PLoS ONE, 15(2):e0228728, 2020.

  41. [41]

    Representing long-range context for graph neural networks with global attention

    Zhanghao Wu, Paras Jain, Matthew Wright, Azalia Mirhoseini, Joseph E Gonzalez, and Ion Stoica. Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34:13266–13279.

  42. [42]

    Contrastive graph pooling for explainable classification of brain networks

    Jiaxing Xu, Qingtian Bian, Xinhang Li, Aihu Zhang, Yiping Ke, Miao Qiao, Wei Zhang, Wei Khang Jeremy Sim, and Balázs Gulyás. Contrastive graph pooling for explainable classification of brain networks. IEEE Transactions on Medical Imaging.

  43. [43]

    Qwen3 Technical Report

    An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, et al. Qwen3 technical report, 2025.

  44. [44]

    Functional connectivity network fusion with dynamic thresholding for MCI diagnosis

    Xi Yang, Yan Jin, Xiaobo Chen, Han Zhang, Gang Li, and Dinggang Shen. Functional connectivity network fusion with dynamic thresholding for MCI diagnosis. In Machine Learning in Medical Imaging: 7th International Workshop, MLMI 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 17, 2016, Proceedings 7, pages 246–253. Springer, 2016.

  45. [45]

    A comprehensive review on sparse representation and compressed perception in optical image reconstruction

    Jia Yi, Huilin Jiang, Xiaoyong Wang, and Yong Tan. A comprehensive review on sparse representation and compressed perception in optical image reconstruction. Archives of Computational Methods in Engineering, 31(5):3197–3209, 2024.

  46. [46]

    Connectivity-based brain network supports restricted and repetitive behaviors in autism spectrum disorder across development

    Anyi Zhang, Lin Liu, Suhua Chang, Le Shi, Peng Li, Jie Shi, Lin Lu, Yanping Bao, and Jiajia Liu. Connectivity-based brain network supports restricted and repetitive behaviors in autism spectrum disorder across development. Frontiers in Psychiatry, 13:874090, 2022.

  47. [47]

    Multimodal fusion on low-quality data: A comprehensive survey

    Qingyang Zhang, Yake Wei, Zongbo Han, Huazhu Fu, Xi Peng, Cheng Deng, Qinghua Hu, Cai Xu, Jie Wen, Di Hu, et al. Multimodal fusion on low-quality data: A comprehensive survey. arXiv preprint arXiv:2404.18947, 2024.

  48. [48]

    A-GCL: Adversarial graph contrastive learning for fMRI analysis to diagnose neurodevelopmental disorders

    Shengjie Zhang, Xiang Chen, Xin Shen, Bohan Ren, Ziqi Yu, Haibo Yang, Xi Jiang, Dinggang Shen, Yuan Zhou, and Xiao-Yong Zhang. A-GCL: Adversarial graph contrastive learning for fMRI analysis to diagnose neurodevelopmental disorders. Medical Image Analysis, 90:102932, 2023.

  49. [49]

    Can LLM graph reasoning generalize beyond pattern memorization?

    Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, and Yulia Tsvetkov. Can LLM graph reasoning generalize beyond pattern memorization? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2289–2305, 2024.

  50. [50]

    Multimodal clinical trial outcome prediction with large language models

    Wenhao Zheng, Liaoyaqi Wang, Dongshen Peng, Hongxia Xu, Yun Li, Hongtu Zhu, Tianfan Fu, and Huaxiu Yao. Multimodal clinical trial outcome prediction with large language models. arXiv preprint arXiv:2402.06512, 2024.