EvoRAG: Making Knowledge Graph-based RAG Automatically Evolve through Feedback-driven Backpropagation
Pith reviewed 2026-05-10 07:57 UTC · model grok-4.3
The pith
EvoRAG attributes response feedback to individual knowledge-graph triplets and paths so the graph can refine itself and raise reasoning accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
EvoRAG establishes a closed loop in which response-level feedback is attributed to retrieved paths by measuring their utility for the final answer and then propagated to the constituent triplets, enabling the knowledge graph to be updated or filtered so that subsequent retrievals better support accurate generation.
What carries the argument
The feedback-driven backpropagation mechanism that assigns a utility score to each retrieved path based on response quality and distributes that score to the individual triplets along the path to guide graph refinement.
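The paper does not publish the equations behind this mechanism (a gap the referee report below also flags), so the following Python sketch is only one plausible reading: it assumes an equal-share split of path utility across the path's triplets and an exponential-moving-average update to damp single-response noise. `attribute_feedback`, its parameters, and the update rule are illustrative assumptions, not EvoRAG's actual API.

```python
from collections import defaultdict

def attribute_feedback(paths, feedback_score, triplet_scores, lr=0.1):
    """Distribute a response-level feedback score down to triplets.

    paths: retrieved paths, each a list of (head, relation, tail) triplets.
    feedback_score: scalar in [0, 1] rating the generated response.
    triplet_scores: running utility per triplet (defaultdict(float)), updated in place.
    lr: EMA step size; smaller values damp noise from any single response.
    """
    for path in paths:
        # Assumed rule: each triplet on a path receives an equal share
        # of the path's utility, which here is just the response score.
        share = feedback_score / len(path)
        for triplet in path:
            # Exponential moving average so one noisy response
            # cannot flip a triplet's accumulated utility.
            triplet_scores[triplet] = (1 - lr) * triplet_scores[triplet] + lr * share
    return triplet_scores
```

Under these assumptions, a triplet that repeatedly appears on paths behind well-rated responses converges toward a high utility, while one that only rides along on poorly rated responses decays toward zero.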
If this is right
- Knowledge graphs become task-adaptive without manual redesign after initial construction.
- Low-utility triplets are progressively removed, shrinking the graph while preserving or raising performance.
- The same feedback loop can be applied across successive user sessions to track shifting requirements.
- Reasoning accuracy rises because retrieved paths more closely match what actually helped produce correct answers.
- The coupling of LLM output, feedback, and graph data creates a self-improving retrieval component for real-world use.
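The pruning behavior in the second bullet above can be sketched under stated assumptions: a utility threshold plus a minimum-evidence guard so that triplets the retriever has never exercised are not mistaken for low-utility ones. Function name, threshold, and guard are illustrative, not from the paper.

```python
def refine_graph(graph, triplet_scores, keep_threshold=0.05, min_evidence=3):
    """Drop triplets whose accumulated utility stays below a threshold.

    graph: set of (head, relation, tail) triplets.
    triplet_scores: dict mapping triplet -> (utility, observation_count).
    Triplets observed fewer than min_evidence times are always kept:
    absence of evidence is not evidence of low utility.
    """
    kept = set()
    for triplet in graph:
        utility, count = triplet_scores.get(triplet, (0.0, 0))
        if count < min_evidence or utility >= keep_threshold:
            kept.add(triplet)
    return kept
```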
Where Pith is reading between the lines
- The attribution technique might be tested on non-KG retrieval stores such as vector databases if path utilities can be defined analogously.
- Over many iterations the graph could grow or shrink in ways that reduce dependence on the size of the original knowledge base.
- Combining the loop with explicit reinforcement signals from human raters could further stabilize the updates.
- Domains that supply noisy or delayed feedback would expose whether the current attribution step remains reliable.
Load-bearing premise
Response-level feedback can be accurately attributed to the contribution of individual triplets and paths without introducing substantial noise or bias that would degrade the graph over iterations.
What would settle it
A falsifying outcome: repeated iterations of the update loop produce a measurable drop in reasoning accuracy on held-out queries rather than the claimed improvement.
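That test can be operationalized as a simple evaluation loop over refinement iterations; the two callables are hypothetical stand-ins for the paper's update cycle and for accuracy on a frozen held-out query set.

```python
def evolution_curve(run_iteration, eval_heldout, n_iters=10):
    """Track held-out accuracy across refinement iterations.

    run_iteration: callable performing one feedback/refinement cycle.
    eval_heldout: callable returning accuracy on a frozen query set.
    A sustained downward trend would falsify the self-improvement claim.
    """
    accs = [eval_heldout()]
    for _ in range(n_iters):
        run_iteration()
        accs.append(eval_heldout())
    # Monotone non-increasing with a strict net drop counts as degradation.
    degrading = all(b <= a for a, b in zip(accs, accs[1:])) and accs[-1] < accs[0]
    return accs, degrading
```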
Original abstract
Knowledge Graph-based Retrieval-Augmented Generation (KG-RAG) has emerged as a promising paradigm for enhancing LLM reasoning by retrieving multi-hop paths from KGs. However, existing KG-RAG frameworks often underperform in real-world scenarios because the pre-captured knowledge dependencies are not tailored to the downstream task or its evolving requirements. These frameworks struggle to adapt to task-specific requirements and lack mechanisms to filter low-contribution knowledge during generation. We observe that feedback on generated responses offers effective supervision for improving KG quality, as it directly reflects user expectations and provides insights into the correctness and usefulness of the output. However, a key challenge lies in effectively linking response-level feedback to triplet-level contribution evaluation and knowledge updates in the KG. In this work, we propose EvoRAG, a self-evolving KG-RAG framework that leverages the feedback over generated responses to continuously refine the KG and enhance reasoning accuracy. EvoRAG introduces a feedback-driven backpropagation mechanism that attributes feedback to retrieved paths by measuring their utility for response and propagates this utility back to individual triplets, supporting fine-grained KG refinements towards more adaptive and accurate reasoning. Through EvoRAG, we establish a closed loop that couples feedback, LLM, and graph data, continuously enhancing the performance and robustness in real-world scenarios. Experimental results show that EvoRAG improves reasoning accuracy by $7.34\%$ over state-of-the-art KG-RAG frameworks. The source code has been made available at https://github.com/iDC-NEU/EvoRAG.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes EvoRAG, a self-evolving KG-RAG framework that uses feedback on generated responses to continuously refine the underlying knowledge graph. It introduces a feedback-driven backpropagation mechanism that attributes response-level utility to retrieved paths and then to individual triplets, enabling fine-grained KG updates for better task adaptation. The central empirical claim is a 7.34% improvement in reasoning accuracy over state-of-the-art KG-RAG frameworks, supported by publicly released source code.
Significance. If the attribution mechanism can be shown to be low-bias and stable, the closed-loop coupling of response feedback with KG evolution would represent a meaningful advance for adaptive retrieval-augmented generation, moving beyond static pre-captured knowledge dependencies. The public code release is a clear strength that aids reproducibility and allows direct inspection of the utility function and update rules.
Major comments (3)
- [§3.2] Feedback-driven Backpropagation: the attribution of response-level feedback to individual triplets is described at a conceptual level but lacks explicit equations defining the utility function, the path-to-triplet propagation rule, or any regularization to prevent noise accumulation; without these, it is impossible to verify that the reported 7.34% gain arises from genuine KG refinement rather than transient or biased updates.
- [Experimental results, likely §5] The 7.34% accuracy improvement is stated without error bars, statistical significance tests, or ablation studies that isolate the contribution of the backpropagation/attribution component from baseline KG-RAG retrieval; this is load-bearing because the skeptic's concern about compounding attribution noise cannot be ruled out from the presented evidence.
- [§4] KG refinement loop: no quantitative validation of attribution quality (e.g., precision/recall of attributed triplets against ground-truth contribution, or an ablation on synthetic attribution error) is provided, which is required to support the claim that the closed loop reliably improves rather than degrades the graph over iterations.
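The attribution-quality validation requested in the third comment could start from a minimal helper, assuming a labeled set of ground-truth contributing triplets exists for each query; the paper provides no such labels, so the helper and its inputs are illustrative.

```python
def attribution_quality(attributed, ground_truth):
    """Precision/recall of attributed triplets vs. a labeled contribution set.

    attributed: triplets the mechanism credited for a response.
    ground_truth: triplets a human (or oracle) judged to have contributed.
    """
    attributed, ground_truth = set(attributed), set(ground_truth)
    tp = len(attributed & ground_truth)  # correctly credited triplets
    precision = tp / len(attributed) if attributed else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```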
Minor comments (2)
- [Abstract] The abstract mentions filtering low-contribution knowledge during generation without specifying the filtering criterion or threshold; this should be clarified with a brief equation or pseudocode.
- [Figures] Figure captions and axis labels in the evolution-over-iterations plots could be expanded to explicitly show which curves correspond to the attribution step versus the baseline.
Simulated Author's Rebuttal
We thank the referee for the constructive comments. We address each major point below and will revise the manuscript accordingly to improve clarity, rigor, and validation.
Point-by-point responses
- Referee: [§3.2] Feedback-driven Backpropagation: the attribution of response-level feedback to individual triplets is described at a conceptual level but lacks explicit equations defining the utility function, the path-to-triplet propagation rule, or any regularization to prevent noise accumulation; without these, it is impossible to verify that the reported 7.34% gain arises from genuine KG refinement rather than transient or biased updates.
Authors: We agree that the current description in §3.2 is primarily conceptual and would benefit from formalization. In the revised manuscript we will add explicit equations for the utility function, the path-to-triplet propagation rule, and a regularization term to mitigate noise accumulation. These additions will enable readers to verify that the observed gains result from the intended KG refinement process. revision: yes
- Referee: [Experimental results, likely §5] The 7.34% accuracy improvement is stated without error bars, statistical significance tests, or ablation studies that isolate the contribution of the backpropagation/attribution component from baseline KG-RAG retrieval; this is load-bearing because the skeptic's concern about compounding attribution noise cannot be ruled out from the presented evidence.
Authors: We acknowledge the value of statistical rigor and component isolation. The revised experimental section will include error bars from repeated runs, statistical significance tests, and dedicated ablation studies that isolate the backpropagation/attribution mechanism from baseline KG-RAG retrieval. This will directly address concerns regarding potential compounding noise. revision: yes
- Referee: [§4] KG refinement loop: no quantitative validation of attribution quality (e.g., precision/recall of attributed triplets against ground-truth contribution, or an ablation on synthetic attribution error) is provided, which is required to support the claim that the closed loop reliably improves rather than degrades the graph over iterations.
Authors: We agree that quantitative validation of attribution quality is necessary to substantiate the closed-loop claims. We will add a dedicated analysis in §4 that reports precision/recall of attributed triplets against ground-truth contributions together with an ablation on synthetic attribution error. These results will demonstrate that the refinement loop improves rather than degrades the graph. revision: yes
Circularity Check
No significant circularity detected in the derivation chain
Full rationale
The paper proposes EvoRAG as a new self-evolving KG-RAG framework using feedback-driven backpropagation to attribute response-level signals to paths and triplets for KG updates. The central claim is an empirical 7.34% accuracy gain over SOTA baselines from experiments. No load-bearing step in the abstract or described mechanism reduces the result to a self-definition, fitted input renamed as prediction, or self-citation chain by construction. The attribution process is presented as an algorithmic contribution with external evaluation; no equations are shown that make the improvement tautological to the inputs. This is a standard empirical systems paper with independent experimental validation.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: response-level feedback directly reflects the correctness and usefulness of retrieved knowledge-graph paths.