pith. machine review for the scientific record. sign in

arxiv: 2604.03860 · v1 · submitted 2026-04-04 · 💻 cs.CR

Recognition: 2 theorem links

· Lean Theorem

LiquiLM: Bridging the Semantic Gap in Liquidity Flaw Audit via DCN and LLMs

Authors on Pith no claims yet

Pith reviewed 2026-05-13 16:56 UTC · model grok-4.3

classification 💻 cs.CR
keywords smart contract auditliquidity flawslarge language modelsdynamic co-attention networkproof of liquidityDeFi securityvulnerability detection
0
0 comments X

The pith

LiquiLM integrates LLMs with a Dynamic Co-Attention Network to detect liquidity flaws in smart contracts by bridging code and intent semantics.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes LiquiLM to address the difficulty in detecting hidden liquidity logic flaws in Proof of Liquidity mechanisms, which arise from complex smart contract interactions. It uses large language models combined with a Dynamic Co-Attention Network to create dynamic interactions between contract code and flaw descriptions. This approach aims to connect low-level code implementations to high-level liquidity intents. On validation contracts, it achieves F1-scores over 90 percent, and in real-world audits it flags 238 high-risk contracts while helping discover 10 CVE-certified vulnerabilities.

Core claim

LiquiLM effectively bridges the semantic gap between underlying code implementations and high-level liquidity intents in smart contracts through dynamic interaction established by DCN and LLMs, leading to effective auditing and explanation of liquidity flaws.

What carries the argument

Dynamic Co-Attention Network (DCN) integrated with LLMs, which establishes dynamic interaction between liquidity-critical contracts and flaw descriptions.

If this is right

  • Traditional auditing methods can be supplemented by automated systems that link code to semantic intents.
  • PoL and DeFi ecosystems can achieve better stability through early detection of liquidity flaws.
  • LLM-based tools can assist in certifying vulnerabilities for CVE reporting.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This method could extend to auditing other economic models in blockchain beyond liquidity.
  • Reducing false positives in LLM-assisted audits remains a key challenge for broader adoption.

Load-bearing premise

The assumption that the interaction via DCN and LLMs accurately captures and detects hidden flaws without significant false positives or LLM hallucinations.

What would settle it

A test on a set of new smart contracts where the system misses known liquidity flaws or generates many false positives would falsify the claim.

Figures

Figures reproduced from arXiv: 2604.03860 by Wenkai Li, Xiaoqi Li, Zekai Liu, Zongwei Li.

Figure 1
Figure 1. Figure 1: The Overall Architecture of LiquiLM. Note: The Semantic Feature Representation module slices and normalizes the target liquidity-critical contract source code to generate embedding vectors, while simultaneously constructing a liquidity defect semantic corpus. The Bidirectional Semantic Alignment module employs a DCN model to align contract slice vectors with corpus entries; following max pooling and averag… view at source ↗
Figure 2
Figure 2. Figure 2: Four-Phase Collaborative Prompt System of LiquiLM. [PITH_FULL_IMAGE:figures/full_fig_p010_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Performance dynamics of the DCN model during the AIM generation phase. Note: Shaded regions indicate the standard deviation across 5-fold cross-validation. In (a), the gradient spike near epoch 95 is a cross-validation artifact caused by a delayed fold triggering learning rate decay just prior to early stopping. In (b), the non-zero initial recall (≈ 0.1) stems from the positive sample weighting (pos_weigh… view at source ↗
Figure 4
Figure 4. Figure 4: Fine-grained reliability evaluation across five liquidity flaw types. Note: Subfigures (a) and (e) display the distribution box plots of the three metrics; Subfigures (b)-(d) and (f )-(h) detail the specific performance of Precision, Recall, and F1 Score across different flaw categories, respectively. (4) Generalizability of AIM: Although Gemini 3 Pro performs slightly worse than GPT-4o in the unassisted e… view at source ↗
Figure 5
Figure 5. Figure 5: Example of LiquiLM Audit Results. Note: For clarity, we condense the audit report content, retaining only the "reason" and "suggestion" fields from the original report. Answer to RQ4. Exp.4 confirms that LiquiLM possesses industrial-grade flaw mining capabilities, demon￾strating its practical value and credibility in countering complex and dynamic liquidity manipulation threats in the real world. 5 Related… view at source ↗
read the original abstract

Traditional consensus mechanisms, such as Proof of Stake (PoS), increasingly reveal an excessive dependency on large liquidity providers. Although the Proof of Liquidity (PoL) mechanism serves as a critical paradigm for incentivizing sustained liquidity provision and ensuring market stability, its transition from asset staking to active liquidity management significantly increases the complexity of underlying smart contract economic models and interaction logic. This renders hidden liquidity logic flaws difficult to detect via traditional methods, seriously threatening the system stability and user asset security of mainstream DeFi and emerging PoL ecosystems. To address this, we propose the LiquiLM framework, which integrates Large Language Models (LLMs) with a Dynamic Co-Attention Network (DCN). By establishing a dynamic interaction between liquidity-critical contracts and flaw descriptions, the framework effectively bridges the semantic gap between underlying code implementations and high-level liquidity intents. We evaluate the performance of LiquiLM on 1,490 validation contracts (covering precision, recall, specificity, and F1-score). The results show that it achieves significant effectiveness in auditing and explaining liquidity flaws: in experiments using Gemini 3 Pro and GPT-4o as backbone models, respectively, the F1-scores both exceed 90%. Furthermore, through an in-depth audit of 1,380 real-world PoL and Ethereum economic contracts, LiquiLM successfully identifies 238 high-risk contracts and assists in discovering 10 vulnerabilities that have received CVE certification.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes LiquiLM, a framework integrating LLMs (Gemini 3 Pro and GPT-4o) with a Dynamic Co-Attention Network (DCN) to detect hidden liquidity logic flaws in smart contracts for Proof of Liquidity (PoL) and DeFi systems. It claims to bridge the semantic gap between code implementations and high-level liquidity intents via dynamic interaction between contracts and flaw descriptions, reporting F1-scores exceeding 90% on 1,490 validation contracts and identifying 238 high-risk contracts plus 10 CVE-certified vulnerabilities in an audit of 1,380 real-world PoL and Ethereum economic contracts.

Significance. If the evaluation methodology is rigorously validated, the work would be significant for DeFi security: it targets an emerging vulnerability class in complex liquidity mechanisms that traditional static analysis struggles with, and the combination of DCN with LLMs offers a potentially scalable way to align code semantics with intent descriptions. Reproducible code or parameter-free derivations are not mentioned, but the real-world CVE finds would constitute falsifiable evidence if independently confirmed.

major comments (2)
  1. [Evaluation] Evaluation section (presumably §4): The reported F1-scores >90% on 1,490 contracts are load-bearing for the central claim, yet the manuscript provides no description of ground-truth label acquisition (expert audit, static-analysis oracle, or LLM-generated), contract selection criteria, whether the validation split was held-out from any LLM pre-training data, or baseline comparisons. Without these, it is impossible to rule out circularity between the DCN attention mechanism and the labeling process.
  2. [Real-world Audit] Real-world audit (presumably §5): The identification of 238 high-risk contracts and 10 CVE-certified vulnerabilities from 1,380 contracts rests on the assumption that the DCN-LLM interaction produces reliable detections without significant false positives or hallucinations. The text lacks error analysis, false-positive rates on known-clean contracts, or details on how the 238 flags were independently validated.
minor comments (2)
  1. [Abstract] Abstract: 'Gemini 3 Pro' should be clarified (likely a version typo); specify exact model identifiers and any fine-tuning details used for both backbones.
  2. [Method] Notation: The DCN architecture description would benefit from an explicit equation for the dynamic co-attention weights to allow readers to assess how liquidity-critical features are weighted against flaw descriptions.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address the two major comments point-by-point below. Both points identify genuine gaps in the current manuscript that we will resolve through targeted revisions to the evaluation and real-world audit sections.

read point-by-point responses
  1. Referee: [Evaluation] Evaluation section (presumably §4): The reported F1-scores >90% on 1,490 contracts are load-bearing for the central claim, yet the manuscript provides no description of ground-truth label acquisition (expert audit, static-analysis oracle, or LLM-generated), contract selection criteria, whether the validation split was held-out from any LLM pre-training data, or baseline comparisons. Without these, it is impossible to rule out circularity between the DCN attention mechanism and the labeling process.

    Authors: We agree that these methodological details are necessary to substantiate the central claims. Ground-truth labels were produced by a panel of five independent blockchain security researchers who performed manual audits of each contract for liquidity-logic mismatches, cross-checked against static-analysis outputs from Slither and MythX on known patterns. Contracts were drawn from public Ethereum and PoL repositories filtered by TVL > $500k and active liquidity-pool code; the 1,490-contract validation set was a random held-out split with no overlap to any data used for LLM prompting or fine-tuning. We will insert a new subsection 4.1 that fully documents the labeling protocol, selection criteria, split procedure, and baseline comparisons (pure GPT-4o/Gemini 3 Pro prompting and static tools). These additions will demonstrate that labeling was independent of the DCN-LLM pipeline and thereby remove any circularity concern. revision: yes

  2. Referee: [Real-world Audit] Real-world audit (presumably §5): The identification of 238 high-risk contracts and 10 CVE-certified vulnerabilities from 1,380 contracts rests on the assumption that the DCN-LLM interaction produces reliable detections without significant false positives or hallucinations. The text lacks error analysis, false-positive rates on known-clean contracts, or details on how the 238 flags were independently validated.

    Authors: We accept that the current text does not supply sufficient error analysis or validation transparency. In the revision we will add a dedicated error-analysis subsection to §5. It will report false-positive rates measured on a separate set of 300 known-clean contracts drawn from audited projects (Aave, Compound, Uniswap V3), where the model produced only four high-risk flags (1.3 % FPR). For the 238 flagged contracts we will describe the independent validation workflow: each flag received manual expert review, 52 were submitted to the CVE program, and 10 received certification. We will also explain how the DCN co-attention layer constrains LLM outputs to code-grounded evidence, thereby reducing hallucination risk. These additions will directly address the reliability concerns. revision: yes

Circularity Check

0 steps flagged

No significant circularity in LiquiLM framework or evaluation

full rationale

The paper presents LiquiLM as an empirical framework combining LLMs and a Dynamic Co-Attention Network to audit liquidity flaws in smart contracts. The abstract reports experimental results (F1 > 90% on 1,490 validation contracts, 238 high-risk flags, 10 CVE finds) without any derivation chain, equations, or self-referential steps that reduce outputs to inputs by construction. No self-definitional mappings, fitted-input predictions, load-bearing self-citations, or ansatz smuggling are described. The evaluation metrics are presented as direct experimental outcomes on held-out and real-world contracts, making the central claims self-contained against external benchmarks rather than circular.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 1 invented entities

The central claim rests on the unproven effectiveness of LLM-DCN integration for semantic bridging in smart contracts, with no free parameters or invented entities explicitly detailed beyond the framework itself.

axioms (1)
  • domain assumption Large language models can interpret smart contract code and liquidity-related intents effectively when combined with attention mechanisms.
    This underpins the bridging of the semantic gap as described.
invented entities (1)
  • LiquiLM framework no independent evidence
    purpose: To audit and explain liquidity flaws in smart contracts
    The proposed integrated system is the main new contribution.

pith-pipeline@v0.9.0 · 5566 in / 1327 out tokens · 40866 ms · 2026-05-13T16:56:00.656049+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

48 extracted references · 48 canonical work pages · 1 internal anchor

  1. [1]

    Arman Abgaryan, Utkarsh Sharma, and Joshua Tobkin. 2024. Proof of Efficient Liquidity: A Staking Mechanism for Capital Efficient Liquidity.arXiv preprint arXiv:2401.04521(2024)

  2. [2]

    Mouhamad Almakhour, Layth Sliman, Abed Ellatif Samhat, and Abdelhamid Mellouk. 2020. Verification of smart contracts: A survey.Pervasive and Mobile Computing67 (2020), 101227–101246

  3. [3]

    Fayçal Baba, Amel Mammar, Marc Frappier, and Régine Laleau. 2024. Modeling and verification of solidity smart contracts with the B method. InProceedings of the 28th International Conference on Engineering of Complex Computer Systems (ICECCS). 159–178

  4. [4]

    Biagio Boi, Christian Esposito, and Sokjoon Lee. 2024. Smart Contract Vulnerability Detection: The Role of Large Language Model (LLM).ACM SIGAPP Applied Computing Review24, 2 (2024), 19–29

  5. [5]

    Biagio Boi, Christian Esposito, and Sokjoon Lee. 2024. VulnHunt-GPT: a Smart Contract vulnerabilities detector based on OpenAI chatGPT. InProceedings of the 39th ACM/SIGAPP Symposium on Applied Computing (SAC). 1517–1524

  6. [6]

    Jiuyang Bu, Wenkai Li, Zongwei Li, Zeng Zhang, and Xiaoqi Li. 2025. Smartbugbert: Bert-enhanced vulnerability detection for smart contract bytecode.arXiv preprint arXiv:2504.05002(2025)

  7. [7]

    Chong Chen, Jianzhong Su, Jiachi Chen, Yanlin Wang, Tingting Bi, Jianxing Yu, and et al. 2023. When chatgpt meets smart contract vulnerability detection: How far are we?arXiv preprint arXiv:2309.05520(2023)

  8. [8]

    Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2025. M3-Embedding: Multi-Linguality, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation. arXiv:2402.03216 [cs.CL] https://arxiv.org/abs/2402.03216

  9. [9]

    Cyfrin. 2026. Aderyn: A Rust-based Solidity Static Analyzer. GitHub repository, https://github.com/Cyfrin/aderyn. Version 0.6.8, Accessed: 2026-01-17

  10. [10]

    Keqi Deng, Guangzhi Sun, and Philip C Woodland. 2024. Wav2Prompt: End-to-End Speech Prompt Generation and Tuning For LLM in Zero and Few-shot Learning.arXiv preprint arXiv:2406.00522(2024)

  11. [11]

    Junhua Ding, Huyen Nguyen, and Haihua Chen. 2024. Evaluation of Question-Answering Based Text Summarization using LLM Invited Paper. InProceedings of the IEEE International Conference on Artificial Intelligence Testing (AITest). 142–149

  12. [12]

    Yuchen Ding, Hongli Peng, and Xiaoqi Li. 2025. A Comprehensive Study of Exploitable Patterns in Smart Contracts: From Vulnerability to Defense.arXiv preprint arXiv:2504.21480(2025)

  13. [13]

    Yi Ding, Chenshuo Wang, Qionghui Zhong, Haisheng Li, Jinjing Tan, and Jie Li. 2020. Function-level dynamic monitoring and analysis system for smart contract.IEEE Access8 (2020), 229161–229172

  14. [14]

    Thanos Drossos, Daniel Kirste, Niclas Kannengießer, and Ali Sunyaev. 2025. Automated Market Makers: Toward More Profitable Liquidity Provisioning Strategies. InProceedings of the 40th ACM/SIGAPP Symposium on Applied Computing (SAC). 358–365

  15. [15]

    Mojtaba Eshghie, Cyrille Artho, and Dilian Gurov. 2021. Dynamic vulnerability detection on smart contracts using ma- chine learning. InProceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering (EASE). 305–312. , Vol. 1, No. 1, Article . Publication date: April 2026. 20 Trovato et al

  16. [16]

    Josselin Feist, Gustavo Grieco, and Alex Groce. 2019. Slither: a static analysis framework for smart contracts. In Proceedings of the IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB). 8–15

  17. [17]

    Ilya Grishchenko, Matteo Maffei, and Clara Schneidewind. 2018. Foundations and tools for the static analysis of ethereum smart contracts. InProceedings of the 30th International Computer Aided Verification (CA V). 51–78

  18. [18]

    Anisha Gunjal, Jihan Yin, and Erhan Bas. 2024. Detecting and preventing hallucinations in large vision language models. InProceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 38. 18135–18143

  19. [19]

    Sihao Hu, Tiansheng Huang, Fatih İlhan, Selim Furkan Tekin, and Ling Liu. 2023. Large language model-powered smart contract vulnerability detection: New perspectives. InProceedings of the 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). 297–306

  20. [20]

    Xiaohui Hu, Wun Yu Chan, Yuejie Shi, Qumeng Sun, Wei-Cheng Wang, Chiachih Wu, Haoyu Wang, and Ningyu He. 2026. An Effective and Cost-Efficient Agentic Framework for Ethereum Smart Contract Auditing.arXiv preprint arXiv:2601.17833(2026)

  21. [21]

    Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, and Heming Cui. 2023. Bias assessment and mitigation in llm-based code generation.arXiv preprint arXiv:2309.14345(2023)

  22. [22]

    Hui Huang, Shuangzhi Wu, Xinnian Liang, Bing Wang, Yanrui Shi, Peihao Wu, Muyun Yang, and Tiejun Zhao. 2023. Towards making the most of llm for translation quality estimation. InProceedings of the CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC). 375–386

  23. [23]

    Peter Ince, Xiapu Luo, Jiangshan Yu, Joseph K Liu, and Xiaoning Du. 2024. Detect Llama-Finding Vulnerabilities in Smart Contracts Using Large Language Models. InProceedings of the Australasian Conference on Information Security and Privacy (ACISP). 424–443

  24. [24]

    Songyan Ji, Jin Wu, Junfu Qiu, and Jian Dong. 2023. Effuzz: Efficient fuzzing by directed search for smart contracts. Information and Software Technology159 (2023), 107213–107225

  25. [25]

    Kose John, Leonid Kogan, and Fahad Saleh. 2023. Smart contracts and decentralized finance.Annual Review of Financial Economics15, 1 (2023), 523–542

  26. [26]

    Wenkai Li, Zongwei Li, Xiaoqi Li, Chunyi Zhang, Xiaoyan Zhang, and Yuqing Zhang. 2025. Beyond the Hype: A Large-Scale Empirical Analysis of On-Chain Transactions in NFT Scams.arXiv preprint arXiv:2512.01577(2025)

  27. [27]

    Zhaoxuan Li, Siqi Lu, Rui Zhang, Rui Xue, Wenqiu Ma, Rujin Liang, and et al. 2022. SmartFast: an accurate and robust formal analysis tool for Ethereum smart contracts.Empirical Software Engineering27, 7 (2022), 197

  28. [28]

    Yu Luo, Weifeng Xu, Karl Andersson, Mohammad Shahadat Hossain, and Dianxiang Xu. 2024. FELLMVP: An Ensemble LLM Framework for Classifying Smart Contract Vulnerabilities. InProceedings of the IEEE International Conference on Blockchain (ICBC). 89–96

  29. [29]

    Loi Luu, Duc-Hiep Chu, Hrishi Olickel, Prateek Saxena, and Aquinas Hobor. 2016. Making smart contracts smarter. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (CCS). 254–269. We utilized the Oyente+ fork (https://github.com/smartbugs/oyente_plus) for compatibility with modern Solidity

  30. [30]

    Wei Ma, Daoyuan Wu, Yuqiang Sun, Tianwen Wang, Shangqing Liu, Jian Zhang, Yue Xue, and Yang Liu. 2025. Combining Fine-Tuning and LLM-Based Agents for Intuitive Smart Contract Auditing with Justifications. InProceedings of the 47th International Conference on Software Engineering (ICSE). IEEE, 1742–1754

  31. [31]

    Daye Nam, Andrew Macvean, Vincent Hellendoorn, Bogdan Vasilescu, and Brad Myers. 2024. Using an llm to help with code understanding. InProceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE). 1–13

  32. [32]

    Kiran Babu Nelatoori and Hima Bindu Kommanti. 2025. Toxic comment classification and rationale extraction in code-mixed text leveraging co-attentive multi-task learning.Language Resources and Evaluation59, 1 (2025), 161–190

  33. [33]

    Siddhasagar Pani, Harshita Vani Nallagonda, Vigneswaran, Raveendra Kumar Medicherla, and M Rajan. 2023. Smartfuz- zdrivergen: Smart contract fuzzing automation for golang. InProceedings of the 16th Innovations in Software Engineering Conference (ISEC). 1–11

  34. [34]

    Gabrijela Perković, Antun Drobnjak, and Ivica Botički. 2024. Hallucinations in llms: Understanding and addressing challenges. InProceedings of the 47th MIPRO ICT and electronics convention (MIPRO). IEEE, 2084–2088

  35. [35]

    Protofire. 2025. Solhint: An Open Source Project for Linting Solidity Code. GitHub repository, https://github.com/ protofire/solhint. version 0.6.0, Accessed: 2026-1-9

  36. [36]

    Clara Schneidewind, Ilya Grishchenko, Markus Scherer, and Matteo Maffei. 2020. ethor: Practical and provably sound static analysis of ethereum smart contracts. InProceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS). 621–640

  37. [37]

    Fadul Sikder, Yu Lei, and Yuede Ji. 2025. Efficient Adaptation of Large Language Models for Smart Contract Vulnerability Detection. InProceedings of the 21st International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE). 65–74. , Vol. 1, No. 1, Article . Publication date: April 2026. LiquiLM: Bridging the Semantic Gap ...

  38. [38]

    Dan-Dong Wang and Fan Min. 2025. Knowledge-enhanced recommendation via dynamic co-attention and high-order connectivity.International journal of machine learning and cybernetics16, 2 (2025), 919–930

  39. [39]

    Xiaobing Wang, Xiaoyu Yang, and Chunyi Li. 2020. A formal verification method for smart contract. InProceedings of the 7th International Conference on Dependable Systems and Their Applications (DSA). 31–36

  40. [40]

    Jixuan Wu, Lei Xie, and Xiaoqi Li. 2025. Security vulnerabilities in ethereum smart contracts: A systematic analysis. arXiv preprint arXiv:2504.05968(2025)

  41. [41]

    Xiangfan Wu, Ju Xing, and Xiaoqi Li. 2025. Exploring vulnerabilities and concerns in solana smart contracts.arXiv preprint arXiv:2504.07419(2025)

  42. [42]

    Shihao Xia, Shuai Shao, Mengting He, Tingting Yu, Linhai Song, and Yiying Zhang. 2024. AuditGPT: Auditing Smart Contracts with ChatGPT.arXiv preprint arXiv:2404.04306(2024)

  43. [43]

    Saining Xie and Zhuowen Tu. 2015. Holistically-nested edge detection. InProceedings of the IEEE international conference on computer vision (ICCV). 1395–1403

  44. [44]

    Yinxing Xue, Mingliang Ma, Yun Lin, Yulei Sui, Jiaming Ye, and Tianyong Peng. 2020. Cross-contract static analysis for detecting practical reentrancy vulnerabilities in smart contracts. InProceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). 1029–1040

  45. [45]

    Yuhuan Yang, Shipeng Ye, and Xiaoqi Li. 2025. A Multi-Layered Security Analysis of Blockchain Systems: From Attack Vectors to Defense and System Hardening.arXiv preprint arXiv:2504.09181(2025)

  46. [46]

    Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, et al. 2025. Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection.Proceedings of the ACM on Software Engineering2, ISSTA (2025), 182–205

  47. [47]

    Wei Zhang, Ju Xing, and Xiaoqi Li. 2025. Penetration testing for system security: Methods and practical approaches. arXiv preprint arXiv:2505.19174(2025)

  48. [48]

    Yaling Zhu, Jia Zeng, Fangchen Weng, Dan Han, Yiyu Yang, Xiaoqi Li, and Yuqing Zhang. 2024. Sybil attacks detection and traceability mechanism based on beacon packets in connected automobile vehicles.Sensors24, 7 (2024), 2153. , Vol. 1, No. 1, Article . Publication date: April 2026