NeuroSymActive: Differentiable Neural-Symbolic Reasoning with Active Exploration for Knowledge Graph Question Answering
Pith reviewed 2026-05-15 22:07 UTC · model grok-4.3
The pith
NeuroSymActive pairs differentiable neural-symbolic modules with value-guided Monte-Carlo exploration to answer multi-hop knowledge-graph questions more accurately and with fewer lookups than standard baselines.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
NeuroSymActive couples soft-unification style symbolic modules with a neural path evaluator and a Monte-Carlo style exploration policy that prioritizes high-value path expansions, attaining strong answer accuracy on KGQA benchmarks while reducing the number of expensive graph lookups and model calls compared to common retrieval-augmented baselines.
What carries the argument
The active value-guided Monte-Carlo exploration controller that works with soft-unification symbolic modules and a neural path evaluator to select promising reasoning paths.
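To make this concrete, below is a minimal, hypothetical sketch of what a value-guided Monte-Carlo path expansion over a knowledge graph could look like. The interfaces `kg.neighbors` and `value_net`, the thresholds, and the exploration probability are all illustrative assumptions; the review gives no implementation details of the actual controller, soft-unification modules, or evaluator.

```python
import heapq
import itertools
import random

def value_guided_expansion(kg, question_emb, seed_entities, value_net,
                           max_expansions=50, max_hops=3):
    """Hypothetical value-guided Monte-Carlo path expansion (not the paper's code).

    Assumed interfaces: kg.neighbors(entity) yields (relation, neighbor) pairs,
    and value_net(question_emb, path) scores how promising a partial path is.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares paths directly
    frontier = [(-value_net(question_emb, (e,)), next(tie), (e,)) for e in seed_entities]
    heapq.heapify(frontier)
    answers, lookups = [], 0

    while frontier and lookups < max_expansions:
        _, _, path = heapq.heappop(frontier)   # most promising partial path first
        lookups += 1                           # one expensive graph lookup per expansion
        for relation, neighbor in kg.neighbors(path[-1]):
            new_path = path + (relation, neighbor)
            v = value_net(question_emb, new_path)
            hops = (len(new_path) - 1) // 2
            if v > 0.9 or hops >= max_hops:
                # Terminal: treat the path's tail entity as a candidate answer.
                answers.append((neighbor, v))
            elif v > 0.5 or random.random() < 0.1:
                # Monte-Carlo flavour: mostly expand high-value paths, but keep
                # an occasional low-value one so the search is not purely greedy.
                heapq.heappush(frontier, (-v, next(tie), new_path))

    return sorted(answers, key=lambda a: -a[1]), lookups
```

The priority queue is where the claimed efficiency would come from: low-value paths are rarely expanded, so the lookup budget concentrates on the few regions of the graph the evaluator considers promising.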
If this is right
- The same modular design allows different symbolic reasoning components to be swapped in while keeping end-to-end gradient flow.
- Fewer graph lookups make the method practical for knowledge bases that are expensive or rate-limited to access.
- Value-guided selection focuses computation on high-reward paths, which directly lowers the total number of model evaluations needed.
- The approach supports multi-hop queries without requiring the entire graph to be embedded in a single prompt.
Where Pith is reading between the lines
- The same active-exploration pattern could be applied to other search-heavy tasks such as automated planning or theorem proving.
- Because the controller is value-guided, it may remain effective even when the underlying knowledge graph is incomplete or contains noisy facts.
- Scaling the framework to dynamic graphs that change over time would require only retraining the neural evaluator rather than redesigning the symbolic layer.
Load-bearing premise
That the soft-unification modules and the value-guided exploration policy will combine without hidden performance loss and will continue to work on graphs and question sets larger or different from those tested.
What would settle it
A controlled test on a new or larger knowledge graph would falsify the efficiency claim if NeuroSymActive required more graph lookups than a simple retrieval baseline while failing to exceed that baseline's accuracy.
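A hedged sketch of that comparison, assuming each system exposes a hypothetical interface that returns its answer and lookup count for a question:

```python
def efficiency_claim_holds(system, baseline, questions, gold_answers):
    """Sketch of the falsification test above; `system(q)` and `baseline(q)`
    are assumed to return (answer, n_lookups) and are not the paper's API."""
    def run(model):
        correct, lookups = 0, 0
        for q, gold in zip(questions, gold_answers):
            answer, n = model(q)
            correct += int(answer == gold)
            lookups += n
        return correct / len(questions), lookups / len(questions)

    sys_acc, sys_lookups = run(system)
    base_acc, base_lookups = run(baseline)
    # Falsified if the method needs more lookups than the baseline
    # while failing to beat the baseline's accuracy.
    return not (sys_lookups > base_lookups and sys_acc <= base_acc)
```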
Original abstract
Large pretrained language models and neural reasoning systems have advanced many natural language tasks, yet they remain challenged by knowledge-intensive queries that require precise, structured multi-hop inference. Knowledge graphs provide a compact symbolic substrate for factual grounding, but integrating graph structure with neural models is nontrivial: naively embedding graph facts into prompts leads to inefficiency and fragility, while purely symbolic or search-heavy approaches can be costly in retrievals and lack gradient-based refinement. We introduce NeuroSymActive, a modular framework that combines a differentiable neural-symbolic reasoning layer with an active, value-guided exploration controller for Knowledge Graph Question Answering. The method couples soft-unification style symbolic modules with a neural path evaluator and a Monte-Carlo style exploration policy that prioritizes high-value path expansions. Empirical results on standard KGQA benchmarks show that NeuroSymActive attains strong answer accuracy while reducing the number of expensive graph lookups and model calls compared to common retrieval-augmented baselines.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces NeuroSymActive, a modular framework for Knowledge Graph Question Answering that combines a differentiable neural-symbolic reasoning layer (using soft-unification style symbolic modules and a neural path evaluator) with an active value-guided Monte-Carlo exploration controller. It claims to deliver strong answer accuracy on standard KGQA benchmarks while reducing the number of expensive graph lookups and model calls relative to common retrieval-augmented baselines.
Significance. If the empirical results hold, the work offers a practical advance in neural-symbolic KGQA by showing how targeted exploration can maintain accuracy with lower retrieval and inference cost; the differentiable components enable end-to-end refinement, which is a clear methodological strength over purely symbolic or prompt-only approaches.
Minor comments (3)
- §4 (Method): the integration of the soft-unification modules with the value-guided policy is described only at a high level; adding a short pseudocode block or an explicit loss formulation would improve reproducibility (one plausible shape for such an objective is sketched after this list).
- Table 2: the reported lookup reductions lack error bars or significance tests; including these would strengthen the efficiency claims.
- §5.3 (Ablations): the contribution of the active exploration component versus a greedy baseline is shown, but the paper should explicitly state whether the neural path evaluator is frozen or jointly trained in each ablation setting.
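For illustration only, a joint objective of the kind that comment asks for might take the following shape; this is an assumption sketched for this note, not the paper's stated formulation, and the weights lambda_1 and lambda_2 are hypothetical trade-off parameters.

```latex
% Hypothetical joint objective (illustrative only, not the paper's formulation):
\mathcal{L}
  = \underbrace{\mathcal{L}_{\mathrm{ans}}}_{\text{answer cross-entropy}}
  + \lambda_1 \underbrace{\mathcal{L}_{\mathrm{val}}}_{\text{path-evaluator value regression}}
  + \lambda_2 \underbrace{\mathcal{L}_{\mathrm{expl}}}_{\text{exploration-policy term}}
```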
Simulated Author's Rebuttal
We thank the referee for the positive summary of NeuroSymActive and the recommendation for minor revision. The recognition of the framework's ability to maintain accuracy with reduced retrieval and inference costs, along with the value of its differentiable components, is appreciated. No specific major comments were raised in the report.
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper presents NeuroSymActive as a modular combination of existing neural-symbolic components (soft-unification modules, neural path evaluator) with a value-guided Monte-Carlo exploration policy. No equations or claims in the abstract or described framework reduce by construction to fitted parameters or self-citations; performance numbers are reported as empirical outcomes on standard KGQA benchmarks rather than tautological re-expressions of inputs. The derivation chain relies on integration of prior ideas with new controller logic, which remains externally testable and does not exhibit self-definitional, fitted-prediction, or uniqueness-imported circularity patterns.
Axiom & Free-Parameter Ledger