Aspect-Aware Content-Based Recommendations for Mathematical Research Papers
Pith reviewed 2026-05-07 14:14 UTC · model grok-4.3
The pith
Aspect-conditioned graph neural networks outperform prior methods for recommending mathematical research papers.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Relevance among mathematical papers is inherently aspect-driven, and conditioning a heterogeneous graph neural network on explicit aspects while jointly modeling textual semantics, citation structure, and author lineage produces superior content-based recommendations, with substantial gains over prior aspect-based methods on both small expert-annotated and large automatically-derived datasets.
What carries the argument
AchGNN, an aspect-conditioned heterogeneous graph neural network that jointly models textual semantics, citation structure, and author lineage.
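The review only lists AchGNN's components; as rough intuition for what "aspect conditioning" means operationally, here is a minimal sketch in which a shared paper embedding is scored under different aspect-specific projections. Everything below (the projection scheme, dimensions, and random data) is illustrative and is not AchGNN's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 papers with 8-dim base embeddings and 3 candidate aspects
# (e.g. proof technique, logical implication, natural generalization).
paper_emb = rng.normal(size=(4, 8))       # base paper representations
aspect_proj = rng.normal(size=(3, 8, 8))  # one projection matrix per aspect

def aspect_score(i, j, aspect):
    """Cosine similarity of two papers after projecting both into the
    subspace associated with a single aspect."""
    W = aspect_proj[aspect]
    u, v = paper_emb[i] @ W, paper_emb[j] @ W
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The same pair of papers can rank very differently under different aspects,
# which is the property aspect-aware recommendation exploits.
scores = [aspect_score(0, 1, a) for a in range(3)]
print([round(s, 3) for s in scores])
```

The point of the sketch is only that a single base representation yields aspect-dependent rankings once scoring is conditioned on an aspect; the paper's model additionally learns these representations from text, citations, and author lineage.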
If this is right
- Aspect-aware modeling enables discovery of papers linked by conceptual connections such as shared proof techniques or natural generalizations even when textual and citation overlap is minimal.
- The same architecture transfers effectively to machine learning publications, suggesting utility beyond mathematics.
- Ablation results show that aspect supervision, authorship lineage, and graph-structural signals each contribute measurably to the performance lift.
- Public release of the GoldRiM and SilverRiM datasets and code allows direct reproduction and extension by other researchers.
Where Pith is reading between the lines
- If aspect supervision reliably surfaces conceptual relatedness, the method could be adapted to other domains where papers connect through ideas rather than explicit links, such as theoretical physics or formal logic.
- Incorporating author lineage alongside aspects may help trace the development of mathematical ideas across generations of papers.
Load-bearing premise
Mathematical paper relevance is inherently driven by the specific aspects identified in the expert study, and the automatically derived SilverRiM dataset captures those aspects accurately enough for reliable model comparisons.
What would settle it
An independent expert rating study on held-out mathematical papers where AchGNN recommendations receive no higher aspect-specific relevance scores than strong baselines, or a clear performance reversal on a fresh math corpus not used in the original experiments.
Original abstract
Content-based research paper recommendation (CbRPR) has seen advances in computer science and biomedicine, but remains unexplored for mathematics, where paper relatedness is more conceptual than explicit textual or citation-based similarity. Mathematics papers may be connected through shared proof techniques, logical implications, or natural generalizations, yet exhibit minimal textual or citation overlap, rendering existing CbRPR ineffective. To address this gap, we first conduct an expert-driven study characterizing mathematical recommendations, revealing that relevance is inherently aspect-driven. Grounded in this insight, we introduce GoldRiM (small, expert-annotated) and SilverRiM (large, automatically derived), the first datasets for aspect-aware CbRPR in mathematics. Recognizing that LLM embeddings of mathematical content alone yield suboptimal representations, we propose AchGNN, an aspect-conditioned heterogeneous GNN that jointly models textual semantics, citation structure, and author lineage. Across GoldRiM and SilverRiM, AchGNN consistently outperforms prior aspect-based CbRPR methods, achieving substantial gains across all evaluated aspects. We conduct ablation studies to analyze the contributions of individual aspect supervision, authorship lineage, and graph-structural signals to AchGNN's performance. To assess domain generality, we further evaluate AchGNN on the Papers with Code dataset of machine learning publications, demonstrating that our aspect-aware approach effectively transfers beyond mathematics. We deploy our system on the MaRDI platform to help mathematicians with recommendations and release datasets and code publicly for reproducibility.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript claims that content-based research paper recommendation (CbRPR) for mathematics requires modeling aspect-driven relevance (e.g., shared proof techniques or logical implications) rather than textual or citation overlap alone. It reports an expert study establishing this, introduces GoldRiM (small expert-annotated dataset) and SilverRiM (large automatically derived dataset), proposes AchGNN (an aspect-conditioned heterogeneous GNN integrating text, citations, and author lineage), and shows AchGNN outperforming prior aspect-based CbRPR methods on both datasets across aspects, supported by ablations, transfer to the Papers with Code ML dataset, and deployment on MaRDI with public code and data release.
Significance. If the results hold, the work addresses a genuine gap in CbRPR for mathematics by grounding recommendations in conceptual aspects and providing the first dedicated datasets. The heterogeneous GNN design, ablation analysis of aspect supervision/authorship/graph signals, cross-domain transfer, and reproducibility via public release and MaRDI deployment are strengths that could enable follow-on research in other low-textual-overlap domains.
major comments (3)
- §3 (Dataset Construction): The automatic aspect-labeling procedure for SilverRiM is described as 'automatically derived' but lacks explicit details on the signals used (e.g., whether embeddings, citations, or metadata overlap with AchGNN inputs). This raises a risk of label-feature leakage that could artificially inflate AchGNN's reported gains on the larger dataset; a concrete validation (e.g., a correlation analysis or a held-out expert check) is required to support the central outperformance claim.
- §5 (Experiments): The abstract and results claim 'substantial gains' and 'consistent outperformance' across aspects on GoldRiM and SilverRiM, yet no specific metrics (precision@K, NDCG, etc.), baseline implementations, statistical significance tests, or error bars are referenced. Without these, the ablation studies cannot be assessed to determine whether aspect conditioning, authorship, or graph structure are the true drivers.
- §2 (Expert Study): The aspect taxonomy and inter-annotator agreement from the expert study are not quantified (e.g., Cohen's kappa or exact aspect definitions). Since both the GoldRiM annotation and the AchGNN conditioning rest on this taxonomy, the missing agreement metrics weaken the grounding for the aspect-driven premise.
minor comments (2)
- Abstract: Mentions ablation studies but does not quantify component contributions (e.g., 'aspect supervision improves X by Y%'); adding one sentence with the key deltas would aid readability.
- Notation (throughout): 'Aspect' is used both for expert labels and for model conditioning; a brief glossary or consistent subscripting (e.g., aspect labels vs. aspect embeddings) would prevent ambiguity in later sections.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment point by point below and will revise the manuscript to incorporate the suggested improvements for clarity and rigor.
Point-by-point responses
- Referee: §3 (Dataset Construction): The automatic aspect-labeling procedure for SilverRiM is described as 'automatically derived' but lacks explicit details on the signals used (e.g., whether embeddings, citations, or metadata overlap with AchGNN inputs). This raises a risk of label-feature leakage that could artificially inflate AchGNN's reported gains on the larger dataset; a concrete validation (e.g., a correlation analysis or a held-out expert check) is required to support the central outperformance claim.
Authors: We agree that additional explicit details are needed to fully address potential concerns about label-feature leakage. In the revised manuscript, we will expand the description in Section 3 to specify the exact signals used for automatic aspect labeling in SilverRiM (primarily citation overlap and metadata patterns, kept distinct from the textual embeddings and heterogeneous graph features fed to AchGNN). We will also add a correlation analysis between the derived aspect labels and AchGNN input features, along with results from a held-out expert validation on a random subset of SilverRiM to confirm independence and support the validity of the outperformance results. revision: yes
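The correlation analysis the authors promise could take roughly this shape: check that the automatically derived aspect labels are statistically independent of the features fed to the model. The signal names and data below are hypothetical, not drawn from SilverRiM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical leakage check: correlate an automatically derived binary
# aspect label with one scalar model-input feature (e.g. raw citation
# overlap). Both arrays here are synthetic stand-ins.
labels = rng.integers(0, 2, size=500).astype(float)   # "silver" aspect labels
feature = rng.normal(size=500)                        # a model input feature

# With one binary variable, Pearson correlation equals the point-biserial
# correlation; a large |r| would flag possible label-feature leakage.
r = np.corrcoef(labels, feature)[0, 1]
print(abs(r))
```

Since these draws are independent, |r| comes out near zero; on real data, a substantial correlation between the labeling signal and a model input would mean the benchmark partially rewards the model for reconstructing its own inputs.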
- Referee: §5 (Experiments): The abstract and results claim 'substantial gains' and 'consistent outperformance' across aspects on GoldRiM and SilverRiM, yet no specific metrics (precision@K, NDCG, etc.), baseline implementations, statistical significance tests, or error bars are referenced. Without these, the ablation studies cannot be assessed to determine whether aspect conditioning, authorship, or graph structure are the true drivers.
Authors: We acknowledge that the abstract and high-level results narrative do not reference the specific quantitative details. The full experimental section (Section 5) and appendix contain tables reporting precision@K, NDCG@K, and MAP for all aspects and datasets, along with baseline re-implementations, paired t-test p-values for statistical significance, and error bars from multiple random seeds. In the revision, we will update the abstract to mention key metrics and ensure the main results text explicitly highlights these elements, including a clearer discussion of what the ablations reveal about aspect conditioning, authorship, and graph structure. revision: yes
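For readers unfamiliar with the metrics named above, a self-contained sketch of precision@K and binary-relevance NDCG@K (the toy ranking and relevance set are invented, not taken from the paper):

```python
import math

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG@k: the DCG of the ranking divided by the DCG
    of an ideal ranking that places all relevant items first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["p3", "p1", "p7", "p2", "p9"]   # hypothetical recommendations
relevant = {"p1", "p2"}                    # hypothetical ground truth
print(precision_at_k(ranked, relevant, 5))   # 2 relevant in top 5 -> 0.4
print(ndcg_at_k(ranked, relevant, 5))
```

NDCG rewards placing relevant items early (here p1 at rank 2 counts more than p2 at rank 4), which is why it is the more informative headline metric for top-K recommendation.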
- Referee: §2 (Expert Study): The aspect taxonomy and inter-annotator agreement from the expert study are not quantified (e.g., Cohen's kappa or exact aspect definitions). Since both the GoldRiM annotation and the AchGNN conditioning rest on this taxonomy, the missing agreement metrics weaken the grounding for the aspect-driven premise.
Authors: We will revise Section 2 to include the precise definitions for each aspect in the taxonomy (e.g., 'Proof Technique' as shared methods such as induction or contradiction) and report the inter-annotator agreement from the expert study using Cohen's kappa. This will provide stronger quantitative grounding for the aspect-driven premise underlying both the datasets and AchGNN conditioning. revision: yes
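The agreement statistic the authors commit to reporting can be sketched as follows; the annotator labels are invented for illustration (three shorthand aspect classes, not the paper's actual taxonomy):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement between two annotators,
    corrected for the agreement expected by chance from their
    marginal label frequencies."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling 8 paper pairs with aspects.
ann1 = ["proof", "proof", "gen", "impl", "proof", "gen", "impl", "gen"]
ann2 = ["proof", "gen",   "gen", "impl", "proof", "gen", "proof", "gen"]
print(round(cohens_kappa(ann1, ann2), 3))
```

A kappa in this range (roughly 0.6) is conventionally read as substantial agreement; reporting it per aspect class would directly support the taxonomy's reliability.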
Circularity Check
No significant circularity; claims rest on new datasets, expert study, and independent evaluations
Full rationale
The paper grounds its approach in a new expert-driven study characterizing aspect-driven relevance for mathematical papers, introduces two fresh datasets (GoldRiM as small expert-annotated and SilverRiM as large automatically derived), proposes the AchGNN model, performs ablation studies on aspect supervision and graph signals, and evaluates transfer on the external Papers with Code dataset. No load-bearing step reduces by construction to a fitted parameter renamed as prediction, a self-definitional equivalence, or a self-citation chain whose validity is internal to the present work. All performance claims are assessed via standard held-out comparisons on the introduced benchmarks rather than tautological re-derivations of inputs.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Mathematics papers may be connected through shared proof techniques, logical implications, or natural generalizations, yet exhibit minimal textual or citation overlap.
invented entities (1)
- AchGNN (no independent evidence)