pith. machine review for the scientific record.

arxiv: 2605.11662 · v1 · submitted 2026-05-12 · 💻 cs.IR

Recognition: 1 theorem link

· Lean Theorem

HSUGA: LLM-Enhanced Recommendation with Hierarchical Semantic Understanding and Group-Aware Alignment

Authors on Pith · no claims yet

Pith reviewed 2026-05-13 01:02 UTC · model grok-4.3

classification 💻 cs.IR
keywords LLM-enhanced recommendation · sequential recommendation · hierarchical semantic understanding · group-aware alignment · preference mining · user activity levels · semantic embeddings · recommendation systems

The pith

HSUGA improves LLM-based sequential recommendations by staging preference extraction into two constrained phases and modulating semantic use according to user activity levels.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that feeding long user histories straight into an LLM makes preference summarization unreliable, and that applying the same semantic embedding to every user ignores real differences in how much history each person has. It proposes HSU to split the mining into two phases that edit preferences in controlled steps, and GAA to weaken the semantic signal for active users while strengthening it for sparse ones. Experiments on three public datasets show the combined plugin lifts performance and works on top of existing LLM recommenders. A reader would care because these fixes target the exact points where current LLM rec systems lose accuracy on typical e-commerce or streaming data.

Core claim

HSUGA introduces Hierarchical Semantic Understanding (HSU), which performs staged two-phase preference mining and models preference evolution through constrained editing operations, improving the reliability of user semantic extraction. It pairs this with Group-Aware Alignment (GAA), which adjusts the intensity of semantic utilization according to user activity level: weaker alignment for active users, stronger guidance for users with sparse historical data.
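Read mechanically, the HSU half of that claim amounts to: keep each LLM call short by summarizing fixed-length stages of the history, then fold the stage summaries into one profile through a restricted edit vocabulary rather than free-form rewriting. A runnable toy of that flow; `summarize_stage`, `propose_edits`, and the ADD/UPDATE operation set are this review's stand-ins, not the paper's actual prompts or operators:

```python
# Toy of HSU-style staged two-phase preference mining. summarize_stage
# and propose_edits are deterministic stand-ins for LLM calls; the
# ADD/UPDATE edit vocabulary is an assumption of this sketch, not the
# paper's exact operation set.
from collections import Counter

def summarize_stage(stage):
    """Phase 1 stub: condense one short stage into its dominant interests."""
    return Counter(item["genre"] for item in stage)

def propose_edits(profile, summary):
    """Phase 2 stub: reconcile the running profile with a new stage
    summary through constrained edits rather than free-form rewriting."""
    edits = []
    for genre, count in summary.items():
        if genre in profile:
            edits.append(("UPDATE", genre, profile[genre] + count))
        else:
            edits.append(("ADD", genre, count))
    return edits

def mine_preferences(history, stage_len=3):
    # Split the long sequence so each per-stage call stays short.
    stages = [history[i:i + stage_len] for i in range(0, len(history), stage_len)]
    profile = {}
    for stage in stages:
        for op, genre, value in propose_edits(profile, summarize_stage(stage)):
            if op in ("ADD", "UPDATE"):
                profile[genre] = value
    return profile

history = [{"genre": g} for g in ["rpg", "rpg", "fps", "fps", "fps", "indie"]]
print(mine_preferences(history))  # -> {'rpg': 2, 'fps': 3, 'indie': 1}
```

The point of the constraint is visible even in the toy: the profile can only change through named operations, so each stage's influence is auditable instead of being absorbed into one opaque long-sequence summary.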

What carries the argument

Hierarchical Semantic Understanding (HSU) as a two-phase preference miner using constrained editing, paired with Group-Aware Alignment (GAA) that scales semantic influence by user activity.
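The GAA half can be pictured as a single scalar per user that scales how hard training pulls the collaborative embedding toward the LLM-derived one. A sketch under assumed choices; the log-decay schedule, the 50-interaction pivot, and the weight bounds are this review's illustrations, not the paper's formula:

```python
# Sketch of group-aware alignment strength: one scalar per user that
# scales the pull toward the LLM-derived embedding. The log-decay
# schedule, pivot, and bounds are assumptions of this sketch.
import math

def alignment_weight(n_interactions, w_min=0.1, w_max=1.0, pivot=50):
    """Sparse users get strong semantic guidance (near w_max); highly
    active users keep only a weak pull (floored at w_min)."""
    if n_interactions <= 0:
        return w_max
    decay = min(math.log1p(n_interactions) / math.log1p(pivot), 1.0)
    return max(w_min, w_max - (w_max - w_min) * decay)

# The weight would scale a per-user alignment term during training, e.g.
#   loss = rec_loss + alignment_weight(n) * distance(user_emb, llm_emb)
for n in (2, 20, 200):
    print(n, round(alignment_weight(n), 3))
```

Any monotonically decreasing schedule would express the same idea; what GAA commits to is the direction of the modulation, not a particular curve.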

If this is right

  • Long interaction sequences can be turned into reliable user embeddings without exceeding LLM inference limits.
  • Sparse users gain more from semantic embeddings while active users avoid over-influence from summarized history.
  • The plugin structure lets the same HSU and GAA modules attach to different base LLM recommenders.
  • Overall accuracy rises on standard sequential recommendation benchmarks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Modeling preference change as editing steps could be tested in non-LLM sequential models that already track short-term versus long-term signals.
  • Activity-level grouping might be compared against other user partitions such as by item diversity or session length to see which produces the largest lift.
  • The two-phase structure suggests that future work could insert additional intermediate editing stages for even longer histories.
  • If the constrained edits capture genuine evolution, the same idea could be ported to cross-domain recommendation where user histories come from multiple platforms.

Load-bearing premise

That dividing the mining into two constrained editing stages reliably yields better embeddings than direct long-sequence input, and that grouping users by activity level is the right way to vary semantic strength without introducing new biases.

What would settle it

If ablation experiments on the same three benchmark datasets show that replacing the two-phase HSU with a single direct LLM call or replacing GAA with uniform alignment produces equal or higher accuracy, the claimed gains would be falsified.
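That test reduces to a four-cell grid over the two plugins. A minimal enumeration of the configurations; the variant labels are this review's, and any training harness plugged in around them would be the experimenter's, not the paper's:

```python
# Sketch of the ablation grid the falsification test describes: the
# same backbone run with each plugin either enabled or replaced by its
# trivial counterpart. Variant names are hypothetical labels.
from itertools import product

VARIANTS = {
    "hsu": ["two_phase", "single_direct_call"],       # staged mining vs. one LLM call
    "gaa": ["activity_grouped", "uniform_alignment"], # grouped vs. uniform strength
}

def ablation_grid():
    """Enumerate the four configurations whose accuracies are compared."""
    return [dict(zip(VARIANTS, combo)) for combo in product(*VARIANTS.values())]

for config in ablation_grid():
    print(config)
```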

Figures

Figures reproduced from arXiv: 2605.11662 by Dugang Liu, Guorui Li, Lei Li, Xing Tang, Zhong Ming.

Figure 1
Figure 1. The development of LLM-enabled sequential recommendation algorithms typically aims to improve two core components: semantic embedding extraction (Liu et al., 2024b, 2025a) and semantic embedding utilization (Wang et al., 2024; Qin et al., 2024). The former leverages LLMs to infer and summarize user preferences from interaction sequences, producing semantic representations, while the latter incorpora… view at source ↗
Figure 2
Figure 2. The overview of the proposed HSUGA framework [PITH_FULL_IMAGE:figures/full_fig_p003_2.png] view at source ↗
Figure 3
Figure 3. Hyper-parameter analysis on the Steam dataset: (a) impact of the number of retrieved similar users N_u^(g) on SasRec, and (b) impact of stage length on model performance. [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Figure 4
Figure 4. Prompts for user history and interest inference [PITH_FULL_IMAGE:figures/full_fig_p012_4.png] view at source ↗
read the original abstract

Large language model (LLM)-enhanced sequential recommendation typically aims to improve two core components: user semantic embedding extraction and utilization. Despite promising results, existing methods still have two limitations: 1) In the extraction stage, most methods directly input long interaction sequence fragments into LLM for preference summarization. However, excessively long sequences increase inference difficulty, making it challenging to reliably infer accurate user embeddings. 2) In the utilization stage, most methods employ the same semantic embedding utilization strategy for all users, neglecting the differences caused by user activity levels, leading to suboptimal performance. To address these issues, we propose HSUGA, which introduces a simple yet effective plugin for each of the two core components: Hierarchical Semantic Understanding (HSU) and Group-Aware Alignment (GAA). HSU performs a staged two-phase preference mining and models preference evolution through constrained editing operations, thereby improving the reliability of user semantic extraction. GAA adjusts the intensity of semantic utilization based on user activity levels, providing weaker alignment for active users and stronger guidance for users with sparse historical data. Finally, extensive experiments on three benchmark datasets demonstrate the effectiveness and compatibility of HSUGA.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The paper claims that LLM-enhanced sequential recommendation suffers from unreliable user semantic embeddings due to direct input of long interaction sequences and from suboptimal performance due to uniform semantic utilization strategies that ignore differences in user activity levels. It proposes HSUGA as a plugin with two components: Hierarchical Semantic Understanding (HSU), which performs staged two-phase preference mining and models preference evolution via constrained editing operations to improve extraction reliability, and Group-Aware Alignment (GAA), which modulates the intensity of semantic utilization according to user activity levels (weaker alignment for active users, stronger guidance for sparse-data users). The manuscript asserts that extensive experiments on three benchmark datasets demonstrate the effectiveness and compatibility of HSUGA.

Significance. If the empirical claims hold, HSUGA could offer a practical, integrable enhancement to LLM-based recommenders by addressing sequence-length inference difficulties and user heterogeneity through hierarchical processing and activity-based modulation. This targets common challenges in the field and the plugin design supports compatibility, which is a positive attribute for adoption. The emphasis on modeling preference evolution and providing stronger guidance for sparse users aligns with ongoing needs in personalized recommendation.

minor comments (2)
  1. The abstract asserts effectiveness on three benchmarks but supplies no dataset names, quantitative metrics, baseline comparisons, ablation results, or statistical details, which limits immediate assessment of the claimed improvements.
  2. The high-level descriptions of the 'constrained editing operations' in HSU and the precise adjustment mechanism in GAA would benefit from additional clarification or pseudocode even at the abstract level to aid reader understanding.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their summary of our work and for recognizing the potential practical value of HSUGA as an integrable plugin that targets sequence-length inference issues and user heterogeneity via hierarchical processing and activity-based modulation. We note that the report lists no specific major comments or criticisms, only a restatement of our claims and an 'uncertain' recommendation. We would welcome any additional questions or concerns the referee may have.

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper introduces HSU and GAA as descriptive plugins for LLM-based recommendation without any equations, derivations, or parameter-fitting steps that could reduce to self-definition or fitted inputs by construction. Effectiveness is asserted via experiments on three benchmark datasets rather than through a load-bearing derivation chain. No self-citations, uniqueness theorems, or ansatzes are invoked in a manner that collapses the central claims to prior inputs. The method descriptions remain self-contained empirical proposals.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only abstract available; no explicit free parameters, axioms, or invented entities are stated. The approach implicitly assumes that constrained editing operations constitute a valid model of preference evolution and that user activity level is a sufficient proxy for alignment intensity.

pith-pipeline@v0.9.0 · 5512 in / 1144 out tokens · 42055 ms · 2026-05-13T01:02:03.195948+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

65 extracted references · 65 canonical work pages · 2 internal anchors

  1. [1]

    Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pages 197--206

  2. [2]

    Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, and Xiangyu Zhao. 2024b. Llm-esr: Large language models enhancement for long-tailed sequential recommendation. Advances in Neural Information Processing Systems, 37:26701--26727

  3. [3]

    Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 188--197

  4. [5]

    Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 1441--1450

  5. [6]

    Seongwon Jang, Hoyeop Lee, Hyunsouk Cho, and Sehee Chung. 2020. Cities: Contextual inference of tail-item embeddings for sequential recommendation. In 2020 IEEE International Conference on Data Mining (ICDM), pages 202--211. IEEE

  6. [7]

    Kibum Kim, Dongmin Hyun, Sukwon Yun, and Chanyoung Park. 2023. Melt: Mutual enhancement of long-tailed user and item for sequential recommendation. In Proceedings of the 46th international ACM SIGIR conference on Research and development in information retrieval, pages 68--77

  7. [8]

    Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. Representation learning with large language models for recommendation. In Proceedings of the ACM web conference 2024, pages 3464--3475

  8. [9]

    Jun Hu, Wenwen Xia, Xiaolu Zhang, Chilin Fu, Weichang Wu, Zhaoxin Huan, Ang Li, Zuoli Tang, and Jun Zhou. 2024. Enhancing sequential recommendation via llm-based semantic embedding learning. In Companion Proceedings of the ACM Web Conference 2024, pages 103--111

  9. [10]

    Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging large language models for sequential recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1096--1102

  10. [11]

    George EP Box and R Daniel Meyer. 1986. An analysis for unreplicated fractional factorials. Technometrics, 28(1):11--18

  11. [12]

    Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. 2024. Towards open-world recommendation with knowledge augmentation from large language models. In Proceedings of the 18th ACM Conference on Recommender Systems, pages 12--22

  12. [13]

    Zhi Zheng, Wenshuo Chao, Zhaopeng Qiu, Hengshu Zhu, and Hui Xiong. 2024. Harnessing large language models for text-rich sequential recommendation. In Proceedings of the ACM Web Conference 2024, pages 3207--3216

  13. [15]

    Deep learning for sequential recommendation: Algorithms, influential factors, and evaluations. ACM Transactions on Information Systems (TOIS), 2020

  14. [16]

    A survey and taxonomy of sequential recommender systems for e-commerce product recommendation. SN Computer Science, 2023

  15. [17]

    Qidong Liu, Xiangyu Zhao, Yuhao Wang, Yejing Wang, Zijian Zhang, Yuqi Sun, Xiang Li, Maolin Wang, Pengyue Jia, Chong Chen, and 1 others. 2025b. Large Language Model Enhanced Recommender Systems: Methods, Applications and Trends. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 6096--6106

  16. [18]

    Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, and 1 others. 2025. How can recommender systems benefit from large language models: A survey. ACM Transactions on Information Systems, 43(2):1--47

  17. [19]

    Semantic Retrieval Augmented Contrastive Learning for Sequential Recommendation. arXiv preprint arXiv:2503.04162

  18. [20]

    Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems

  19. [21]

    Towards Interest Drift-driven User Representation Learning in Sequential Recommendation. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval

  20. [22]

    Adapting to user interest drift for poi recommendation. IEEE Transactions on Knowledge and Data Engineering, 2016

  21. [23]

    Dynamic Stage-aware User Interest Learning for Heterogeneous Sequential Recommendation. In Proceedings of the 18th ACM Conference on Recommender Systems

  22. [25]

    Dugang Liu, Shenxian Xian, Xiaolin Lin, Xiaolian Zhang, Hong Zhu, Yuan Fang, Zhen Chen, and Zhong Ming. 2024a. A practice-friendly two-stage LLM-enhanced paradigm in sequential recommendation. arXiv e-prints, pages arXiv--2406

  23. [26]

    Retrieval Augmented Cross-Domain LifeLong Behavior Modeling for Enhancing Click-through Rate Prediction. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2

  24. [27]

    Qingyue Wang, Yanhe Fu, Yanan Cao, Shuai Wang, Zhiliang Tian, and Liang Ding. 2025. Recursively summarizing enables long-term dialogue memory in large language models. Neurocomputing, 639:130193

  25. [28]

    Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, and Yanlin Wang. 2024. Memorybank: Enhancing large language models with long-term memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19724--19731

  26. [29]

    Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2023. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36:74530--74543

  27. [30]

    Rethinking memory in AI: Taxonomy, operations, topics, and future directions. arXiv preprint arXiv:2505.00675, 2025

  28. [31]

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824--24837

  29. [32]

    Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems

  30. [33]

    Cheng Qian, Emre Can Acikgoz, Qi He, Hongru Wang, Xiusi Chen, Dilek Hakkani-Tür, Gokhan Tur, and Heng Ji. On memory construction and retrieval for personalized conversational agents. arXiv preprint arXiv:2502.05589

  31. [34]

    Qidong Liu, Xian Wu, Wanyu Wang, Yejing Wang, Yuanshao Zhu, Xiangyu Zhao, Feng Tian, and Yefeng Zheng. 2025a. Llmemb: Large language model can be a good embedding generator for sequential recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 12183--12191

  32. [35]

    Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Guanfeng Liu, Fuzhen Zhuang, and Victor S Sheng. 2024. Intent contrastive learning with cross subsequences for sequential recommendation. In Proceedings of the 17th ACM international conference on web search and data mining, pages 548--556

  33. [36]

    Yingzhi He, Xiaohao Liu, An Zhang, Yunshan Ma, and Tat-Seng Chua. 2025. Llm2rec: Large language models are powerful embedding models for sequential recommendation. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 896--907

  34. [37]

    Zhikai Wang, Yanyan Shen, Zexi Zhang, Li He, Yichun Li, Hao Gu, and Yinghua Zhang. 2024. Relative contrastive learning for sequential recommendation with similarity-based positive sample selection. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pages 2493--2502

  35. [38]

    Jinman Zhao, Erxue Min, Hui Wu, Ziheng Li, Zexu Sun, Hengyi Cai, Shuaiqiang Wang, Xu Chen, and Gerald Penn. 2026. Beyond step pruning: Information theory based step-level optimization for self-refining large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 40, pages 34941--34949

  36. [42]

    George EP Box and R Daniel Meyer. 1986. An analysis for unreplicated fractional factorials. Technometrics, 28(1):11--18

  37. [43]

    Heyang Gao, Zexu Sun, Erxue Min, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Xu Chen. 2025. Solving the granularity mismatch: Hierarchical preference learning for long-horizon llm agents. arXiv preprint arXiv:2510.03253

  38. [44]

    Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging large language models for sequential recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, pages 1096--1102

  39. [45]

    Yingzhi He, Xiaohao Liu, An Zhang, Yunshan Ma, and Tat-Seng Chua. 2025. Llm2rec: Large language models are powerful embedding models for sequential recommendation. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 896--907

  40. [46]

Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939

  41. [47]

    Jun Hu, Wenwen Xia, Xiaolu Zhang, Chilin Fu, Weichang Wu, Zhaoxin Huan, Ang Li, Zuoli Tang, and Jun Zhou. 2024. Enhancing sequential recommendation via llm-based semantic embedding learning. In Companion Proceedings of the ACM Web Conference 2024, pages 103--111

  42. [48]

    Seongwon Jang, Hoyeop Lee, Hyunsouk Cho, and Sehee Chung. 2020. Cities: Contextual inference of tail-item embeddings for sequential recommendation. In 2020 IEEE International Conference on Data Mining (ICDM), pages 202--211. IEEE

  43. [49]

Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pages 197--206. https://doi.org/10.1109/ICDM.2018.00035

  44. [50]

    Kibum Kim, Dongmin Hyun, Sukwon Yun, and Chanyoung Park. 2023. Melt: Mutual enhancement of long-tailed user and item for sequential recommendation. In Proceedings of the 46th international ACM SIGIR conference on Research and development in information retrieval, pages 68--77

  45. [51]

    Lei Li, Yongfeng Zhang, Dugang Liu, and Li Chen. 2023. Large language models for generative recommendation: A survey and visionary discussions. arXiv preprint arXiv:2309.01157

  46. [52]

    Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, and 1 others. 2025. How can recommender systems benefit from large language models: A survey. ACM Transactions on Information Systems, 43(2):1--47

  47. [53]

    Dugang Liu, Shenxian Xian, Xiaolin Lin, Xiaolian Zhang, Hong Zhu, Yuan Fang, Zhen Chen, and Zhong Ming. 2024a. A practice-friendly two-stage llm-enhanced paradigm in sequential recommendation. arXiv e-prints, pages arXiv--2406

  48. [54]

    Qidong Liu, Xian Wu, Wanyu Wang, Yejing Wang, Yuanshao Zhu, Xiangyu Zhao, Feng Tian, and Yefeng Zheng. 2025a. Llmemb: Large language model can be a good embedding generator for sequential recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 12183--12191

  49. [55]

    Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, and Xiangyu Zhao. 2024b. Llm-esr: Large language models enhancement for long-tailed sequential recommendation. Advances in Neural Information Processing Systems, 37:26701--26727

  50. [56]

    Qidong Liu, Xiangyu Zhao, Yuhao Wang, Yejing Wang, Zijian Zhang, Yuqi Sun, Xiang Li, Maolin Wang, Pengyue Jia, Chong Chen, and 1 others. 2025b. Large language model enhanced recommender systems: Methods, applications and trends. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 6096--6106

  51. [57]

    Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 188--197

  52. [58]

    Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Guanfeng Liu, Fuzhen Zhuang, and Victor S Sheng. 2024. Intent contrastive learning with cross subsequences for sequential recommendation. In Proceedings of the 17th ACM international conference on web search and data mining, pages 548--556

  53. [59]

    Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. Representation learning with large language models for recommendation. In Proceedings of the ACM web conference 2024, pages 3464--3475

  54. [60]

    Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 1441--1450

  55. [61]

    Zexu Sun, Bokai Ji, Hengyi Cai, Shuaiqiang Wang, Lei Wang, Guangxia Li, and Xu Chen. 2026. Agentskiller: Scaling generalist agent intelligence through semantically integrated cross-domain data synthesis. arXiv preprint arXiv:2602.09372

  56. [62]

    Qingyue Wang, Yanhe Fu, Yanan Cao, Shuai Wang, Zhiliang Tian, and Liang Ding. 2025. Recursively summarizing enables long-term dialogue memory in large language models. Neurocomputing, 639:130193

  57. [63]

    Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2023. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36:74530--74543

  58. [64]

    Zhikai Wang, Yanyan Shen, Zexi Zhang, Li He, Yichun Li, Hao Gu, and Yinghua Zhang. 2024. Relative contrastive learning for sequential recommendation with similarity-based positive sample selection. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pages 2493--2502

  59. [65]

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824--24837

  60. [66]

    Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. 2024. Towards open-world recommendation with knowledge augmentation from large language models. In Proceedings of the 18th ACM Conference on Recommender Systems, pages 12--22

  61. [67]

    Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Wenyue Hua, Haolun Wu, Zhihan Guo, Yufei Wang, Niklas Muennighoff, and 1 others. 2025. A survey on test-time scaling in large language models: What, how, where, and how well? arXiv preprint arXiv:2503.24235

  62. [68]

    Jinman Zhao, Erxue Min, Hui Wu, Ziheng Li, Zexu Sun, Hengyi Cai, Shuaiqiang Wang, Xu Chen, and Gerald Penn. 2026. Beyond step pruning: Information theory based step-level optimization for self-refining large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 40, pages 34941--34949

  63. [69]

    Zhi Zheng, Wenshuo Chao, Zhaopeng Qiu, Hengshu Zhu, and Hui Xiong. 2024. Harnessing large language models for text-rich sequential recommendation. In Proceedings of the ACM Web Conference 2024, pages 3207--3216

  64. [70]

    Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, and Yanlin Wang. 2024. Memorybank: Enhancing large language models with long-term memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19724--19731

  65. [71]

    Chenxu Zhu, Shigang Quan, Bo Chen, Jianghao Lin, Xiaoling Cai, Hong Zhu, Xiangyang Li, Yunjia Xi, Weinan Zhang, and Ruiming Tang. 2024. Liber: Lifelong user behavior modeling based on large language models. arXiv preprint arXiv:2411.14713