Recognition: 3 theorem links
BLEG: LLM Functions as Powerful fMRI Graph-Enhancer for Brain Network Analysis
Pith reviewed 2026-05-13 23:16 UTC · model grok-4.3
The pith
An LLM can enhance GNN performance on fMRI brain network tasks by generating and aligning augmented text representations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
BLEG divides the process into three stages: first, the LLM is prompted to generate augmented texts for the fMRI graph data; second, LLM-LM instruction tuning produces enhanced textual representations while the GNN is trained jointly for coarsened alignment; and third, an adapter is finetuned for downstream tasks, with an alignment loss between the LM and GNN logits to further boost the representations.
What carries the argument
The BLEG three-stage pipeline that uses LLM prompting for text augmentation from fMRI graphs, instruction tuning for alignment, and logit matching to enhance GNN representations.
If this is right
- GNNs gain improved ability to handle sparse fMRI features through LLM-derived knowledge.
- Downstream brain network analysis tasks achieve higher performance across datasets.
- LLM enhancement occurs at relatively lower computational cost compared to direct tuning.
- Alignment via logit matching ensures consistent representations between text and graph modalities.
- The method generalizes to various brain network datasets.
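The logit-matching alignment mentioned above can be sketched directly from the loss the review quotes later, $\mathcal{L}_{\mathrm{align}} = \frac{1}{N}\sum_i \| Z^G_i/\|Z^G_i\|_2 - Z^T_i/\|Z^T_i\|_2 \|_2^2$. The sketch below is a minimal NumPy transcription of that formula, assuming `z_graph` and `z_text` are the GNN and LM per-sample logits ($Z^G$, $Z^T$); it is not the authors' implementation.

```python
import numpy as np

def alignment_loss(z_graph: np.ndarray, z_text: np.ndarray, eps: float = 1e-8) -> float:
    """Mean squared distance between L2-normalized GNN and LM logits.

    z_graph, z_text: (N, d) arrays of per-sample logits/embeddings.
    """
    g = z_graph / (np.linalg.norm(z_graph, axis=1, keepdims=True) + eps)
    t = z_text / (np.linalg.norm(z_text, axis=1, keepdims=True) + eps)
    return float(np.mean(np.sum((g - t) ** 2, axis=1)))

# Embeddings that agree up to scale incur (near-)zero loss after normalization.
z = np.array([[3.0, 4.0], [1.0, 0.0]])
print(alignment_loss(z, 2 * z))  # ≈ 0.0
```

Because both sides are normalized first, the loss only penalizes directional disagreement between the two modalities, not differences in magnitude.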
Where Pith is reading between the lines
- If the alignment works, similar LLM enhancers could apply to other graph-based scientific data like molecular structures.
- Testing the method on clinical datasets with patient outcomes could reveal practical diagnostic benefits.
- Future work might explore direct integration without the adapter stage for even tighter coupling.
Load-bearing premise
That the augmented texts generated by prompting an LLM from fMRI graphs produce representations that, when aligned with GNNs via instruction tuning and logit matching, meaningfully improve performance on brain network tasks.
What would settle it
Running experiments where the LLM-generated texts are replaced with generic or unrelated text and checking whether the performance gains on downstream tasks vanish compared to the original BLEG method.
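A minimal harness for that ablation might look as follows. Everything here is a synthetic stand-in: `evaluate` is a dummy nearest-centroid classifier replacing the real BLEG pipeline, and the "generic text" condition is simulated by label-shuffling the informative text features. Only the experimental logic (compare real vs. swapped text, report the accuracy gap) reflects the proposed test.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(text_features: np.ndarray, graph_features: np.ndarray,
             labels: np.ndarray) -> float:
    """Stand-in for the BLEG downstream evaluation: nearest-centroid accuracy
    on concatenated text + graph features. Replace with the real pipeline."""
    x = np.hstack([text_features, graph_features])
    centroids = np.stack([x[labels == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return float((pred == labels).mean())

# Synthetic stand-ins: informative text features vs. label-shuffled "generic" ones.
labels = rng.integers(0, 2, size=200)
graph = rng.normal(size=(200, 8))                         # weak graph signal
text = labels[:, None] + 0.5 * rng.normal(size=(200, 4))  # LLM-derived signal
generic = rng.permutation(text)                           # breaks the text-label link

gap = evaluate(text, graph, labels) - evaluate(generic, graph, labels)
print(f"accuracy gain from informative text: {gap:.2f}")
```

If the gap collapses to zero when the real LLM texts are swapped out, the enhancement would be attributable to generic priors rather than graph-specific signal.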
Original abstract
Graph Neural Networks (GNNs) have been widely used in diverse brain network analysis tasks based on preprocessed functional magnetic resonance imaging (fMRI) data. However, their performance is constrained by high feature sparsity and the limited domain knowledge inherent in uni-modal neurographs. Meanwhile, large language models (LLMs) have demonstrated powerful representation capabilities, and combining LLMs with GNNs presents a promising direction for brain network analysis. While LLMs and MLLMs have emerged in neuroscience, the integration of LLMs with graph-based data remains unexplored. In this work, we address these issues by incorporating the LLM's powerful representation and generalization capabilities. Considering the great cost of directly tuning LLMs, we instead use the LLM as an enhancer to boost the GNN's performance on downstream tasks. Our method, BLEG, can be divided into three stages. We first prompt the LLM to obtain augmented texts for fMRI graph data; we then design an LLM-LM instruction tuning method to obtain enhanced textual representations at relatively low cost, with the GNN trained jointly for coarsened alignment. Finally, we finetune an adapter after the GNN for the given downstream tasks. An alignment loss between the LM and GNN logits is designed to further enhance the GNN's representation. Extensive experiments on different datasets confirm BLEG's superiority. Code is available at https://github.com/KamonRiderDR/BLEG.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes BLEG, a three-stage method to enhance GNNs for fMRI brain network analysis by using LLMs as enhancers: (1) prompting an LLM to generate augmented texts from fMRI graphs, (2) LLM-LM instruction tuning to obtain enhanced textual representations while jointly training the GNN with coarsened alignment, and (3) adapter finetuning on the GNN for downstream tasks with an added alignment loss between LM and GNN logits. The central claim is that this approach overcomes GNN limitations from feature sparsity and limited domain knowledge, with superiority confirmed by extensive experiments on multiple datasets.
Significance. If the experimental results hold, the work could meaningfully advance multimodal integration of LLMs with GNNs in neuroscience by leveraging LLM generalization to augment sparse neurographs. The explicit code repository link (https://github.com/KamonRiderDR/BLEG) is a strength that supports reproducibility.
Major comments (2)
- [Abstract] Abstract: the assertion that 'Extensive experiments on different datasets confirmed BLEG's superiority' is unsupported by any quantitative results, baseline comparisons, dataset details, performance metrics, or ablation studies, which is load-bearing for the central claim.
- [Method] Method description: the graph-to-text serialization step is underspecified (no example prompts, node/edge encoding details, or serialization procedure), so it is impossible to verify whether the LLM augmentation transfers graph-specific structural signal or merely adds generic LLM priors; this directly affects the validity of the alignment losses and downstream gains.
Minor comments (1)
- [Abstract] Abstract: the phrase 'LLM-LM instruction tuning' is introduced without a brief definition or reference to the specific tuning objective.
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive comments. We address each major point below and will revise the manuscript to improve clarity and substantiation of claims.
read point-by-point responses
Referee: [Abstract] Abstract: the assertion that 'Extensive experiments on different datasets confirmed BLEG's superiority' is unsupported by any quantitative results, baseline comparisons, dataset details, performance metrics, or ablation studies, which is load-bearing for the central claim.
Authors: We agree that the abstract would be strengthened by including specific quantitative highlights. In the revised version, we will update the abstract to briefly report key metrics (e.g., accuracy or AUC improvements over baselines on the primary datasets) while retaining the overall length constraints. This will directly support the superiority claim with concrete evidence from the experiments section. revision: yes
Referee: [Method] Method description: the graph-to-text serialization step is underspecified (no example prompts, node/edge encoding details, or serialization procedure), so it is impossible to verify whether the LLM augmentation transfers graph-specific structural signal or merely adds generic LLM priors; this directly affects the validity of the alignment losses and downstream gains.
Authors: We acknowledge that the current description of the graph-to-text serialization is insufficient for full reproducibility and verification of structural signal preservation. In the revised manuscript, we will expand Section 3.1 to include: (1) the exact prompt templates used for LLM augmentation, (2) detailed node feature and edge encoding procedures (including how fMRI connectivity values are serialized), and (3) the step-by-step serialization algorithm. These additions will clarify how graph structure is conveyed to the LLM and support the rationale for the subsequent alignment losses. The linked code repository already implements these steps and can be referenced in the revision. revision: yes
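The paper does not publish its serialization procedure, which is exactly the referee's objection, so the sketch below is purely a hypothetical illustration of what a graph-to-text step could look like: hypothetical ROI names, an invented prompt template, and a top-k edge listing heuristic, none of which are confirmed to match BLEG's actual format.

```python
import numpy as np

# Hypothetical ROI names; BLEG's real prompt template is unpublished.
ROI_NAMES = ["PCC", "mPFC", "L-Hippocampus", "R-Hippocampus"]

def serialize_fmri_graph(conn: np.ndarray, top_k: int = 3) -> str:
    """Turn an ROI-by-ROI functional connectivity matrix into a text prompt
    by listing the strongest (absolute) connections, one per line."""
    n = conn.shape[0]
    edges = [(abs(conn[i, j]), i, j) for i in range(n) for j in range(i + 1, n)]
    edges.sort(reverse=True)
    lines = [f"{ROI_NAMES[i]} -- {ROI_NAMES[j]}: r={conn[i, j]:+.2f}"
             for _, i, j in edges[:top_k]]
    return ("Brain functional connectivity graph, strongest connections:\n"
            + "\n".join(lines)
            + "\nDescribe likely network-level characteristics of this subject.")

conn = np.array([[1.0, 0.8, -0.1, 0.2],
                 [0.8, 1.0, 0.3, -0.6],
                 [-0.1, 0.3, 1.0, 0.5],
                 [0.2, -0.6, 0.5, 1.0]])
print(serialize_fmri_graph(conn))
```

Publishing a concrete procedure like this, along with the exact prompts, would let readers judge how much graph structure actually reaches the LLM.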
Circularity Check
No significant circularity; empirical method with no self-referential derivations
Full rationale
The paper describes a three-stage empirical pipeline (LLM prompting for text augmentation from fMRI graphs, instruction tuning with coarsened alignment, and adapter fine-tuning with logit-matching loss) but presents no equations, uniqueness theorems, or derivations. Claims of superiority rest on experimental results rather than any mathematical reduction to fitted parameters or self-citations. No load-bearing self-citation chains, ansatz smuggling, or renaming of known results appear in the provided text. The method is self-contained against external benchmarks and does not reduce its central improvement claim to a construction by definition.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Alignment loss between LM and GNN logits is designed to further enhance GNN's representation. ... $\mathcal{L}_{\mathrm{align}} = \frac{1}{N}\sum_{i=1}^{N}\left\| \frac{Z^{G}_{i}}{\|Z^{G}_{i}\|_{2}} - \frac{Z^{T}_{i}}{\|Z^{T}_{i}\|_{2}} \right\|_{2}^{2}$"
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · LogicNat recovery · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "We firstly prompt LLM to get augmented texts for fMRI graph data, then we design a LLM-LM instruction tuning method to get enhanced textual representations"
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Extensive experiments on different datasets confirmed BLEG's superiority"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.