pith. machine review for the scientific record.

arxiv: 2605.11447 · v1 · submitted 2026-05-12 · 💻 cs.IR · cs.AI

Recognition: 1 theorem link · Lean Theorem

Conditional Memory Enhanced Item Representation for Generative Recommendation

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 02:25 UTC · model grok-4.3

classification 💻 cs.IR · cs.AI
keywords: generative recommendation · semantic identifier · item representation · conditional memory · Engram memory · autoregressive generation · SID decoding · quantization

The pith

Conditional memory reconstructs SID-token embeddings to resolve representation conflicts in generative recommendation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Generative recommendation systems predict items by autoregressively generating their semantic identifiers, or SIDs, but current ways of turning those SIDs into usable item representations create two problems. Direct merging of token embeddings loses structural information and creates collisions, while methods that add external knowledge lose the exact token-level details needed for accurate decoding. The proposed ComeIR framework uses conditional memory to rebuild item-aware inputs from the tokens while keeping the ability to decode at the original token granularity. A reader would care because this bottleneck has limited how well generative recommenders can work in practice, and overcoming it could make such systems more reliable without needing extra networks or data.

Core claim

We propose ComeIR, a Conditional Memory enhanced Item Representation framework that reconstructs SID-token embeddings into item-aware inputs and restores the token granularity during SID decoding. Specifically, MM-guided token scoring adaptively estimates the contribution of each code within the SID, dual-level Engram memory captures intra-item code composition and inter-item transition patterns, and a memory-restoring prediction head reuses the memories during SID decoding.

What carries the argument

Dual-level Engram memory that captures stable intra-item code compositions and inter-item transition patterns, enabling adaptive reconstruction of item representations from SID tokens and their reuse during decoding.
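As a hedged sketch of what a dual-level memory keyed by discrete codes could look like: one table keyed by the full SID (intra-item composition), one keyed by the transition from the previous SID (inter-item pattern), blended at read time. The class name, dict-based storage, and fixed 0.5/0.5 blend are illustrative assumptions, not the paper's parameterization.

```python
# Sketch of a dual-level code-keyed memory in the spirit of the Engram memory
# described above; storage and blending weights are illustrative only.
from collections import defaultdict

class DualLevelMemory:
    def __init__(self, dim=4):
        self.dim = dim
        self.intra = defaultdict(lambda: [0.0] * dim)  # key: full SID (code composition)
        self.inter = defaultdict(lambda: [0.0] * dim)  # key: (prev SID, next first code)

    def write(self, prev_sid, sid, value):
        self.intra[sid] = value
        self.inter[(prev_sid, sid[0])] = value

    def read(self, prev_sid, sid):
        """Blend intra-item and inter-item entries into one item-aware vector."""
        a, b = self.intra[sid], self.inter[(prev_sid, sid[0])]
        return [0.5 * x + 0.5 * y for x, y in zip(a, b)]

mem = DualLevelMemory()
mem.write(prev_sid=(1, 2, 3), sid=(4, 5, 6), value=[1.0, 2.0, 3.0, 4.0])
print(mem.read((1, 2, 3), (4, 5, 6)))  # [1.0, 2.0, 3.0, 4.0]
```

The point of the sketch is the key structure, not the values: the same keys can be reused by a prediction head at decoding time, which is the "memory-restoring" idea.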

If this is right

  • The framework resolves both the Identity-Structure Preservation Conflict and the Input-Output Granularity Mismatch.
  • Extensive experiments confirm the effectiveness and flexibility of ComeIR across recommendation tasks.
  • Scalable performance gains follow from enlarging the conditional memory.
  • Item representations become simultaneously more item-aware and more faithful to the original SID token structure.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same memory mechanism could be tested in other autoregressive generation settings that rely on discrete codes, such as text or audio synthesis.
  • If inter-item transition patterns prove robust, the approach might lower the data volume needed to train effective generative recommenders.
  • Applying the memory restoration step to non-recommendation domains with quantized identifiers could reveal whether the dual-level structure is domain-specific.

Load-bearing premise

The dual-level Engram memory can capture stable intra-item code compositions and inter-item transition patterns that generalize beyond the training data without introducing new overfitting or requiring domain-specific tuning.

What would settle it

If enlarging the conditional memory produces no further gains, the scalability claim fails; if accuracy falls on items whose SIDs contain code combinations absent from training, the load-bearing generalization premise fails with it.

Figures

Figures reproduced from arXiv: 2605.11447 by Shengyu Zhou, Xiangyu Zhao, Xinhang Li, Yejing Wang, Ziwei Liu.

Figure 1: Overview of the GR pipeline. Quantization transforms items from features to SIDs; Representation organizes SIDs as input. (figures/full_fig_p002_1.png)
Figure 2: The overall framework of the proposed ComeIR. The code layer … (figures/full_fig_p003_2.png)
Figure 3: Scaling analysis of dual-level Engram memory on … (figures/full_fig_p007_3.png)
Figure 4: The detailed framework of the general Engram module. A discrete code sequence is converted into suffix N-gram keys … (figures/full_fig_p010_4.png)
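Figure 4's keying step (a discrete code sequence converted into suffix N-gram keys) is simple to write down. A plain-Python sketch, with the window sizes assumed rather than taken from the paper:

```python
# Build the suffix N-gram keys of a code sequence: the suffixes of length
# 1..max_n ending at the most recent code, usable as memory-lookup keys.
def suffix_ngram_keys(codes, max_n=3):
    """Return the suffixes of length 1..max_n ending at the last code."""
    return [tuple(codes[-n:]) for n in range(1, min(max_n, len(codes)) + 1)]

print(suffix_ngram_keys([7, 1, 4, 2]))  # [(2,), (4, 2), (1, 4, 2)]
```

Shorter suffixes act as back-off keys when longer ones are unseen, the same trick classical N-gram language models use.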
read the original abstract

Generative recommendation (GR) has emerged as a promising paradigm that predicts target items by autoregressively generating their semantic identifiers (SID). Most GR methods follow a quantization-representation-generation pipeline, first assigning each item a SID, then constructing input representations from SID-token embeddings, and finally predicting the target SID through autoregressive generation. Existing item-level representation constructions mainly take two forms: directly merging SID-token embeddings into a compact vector, or enriching item-level representations with external inputs through additional networks. However, these item-level constructors still expose two practical challenges: direct merging may amplify the information loss caused by quantization and ID collision while obscuring SID code relations, whereas external-input-based methods can strengthen item semantics but cannot reliably preserve the SID-structured evidence required for token-level generation. These limitations make representation construction an underexplored bottleneck, leading to two severe problems, i.e., the Identity-Structure Preservation Conflict and Input-Output Granularity Mismatch. To this end, we propose ComeIR, a Conditional Memory enhanced Item Representation framework that reconstructs SID-token embeddings into item-aware inputs and restores the token granularity during SID decoding. Specifically, MM-guided token scoring adaptively estimates the contribution of each code within the SID, dual-level Engram memory captures intra-item code composition and inter-item transition patterns, and a memory-restoring prediction head reuses the memories during SID decoding. Extensive experiments demonstrate the effectiveness and flexibility of ComeIR, and further reveal scalable gains from enlarging conditional memory.
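The abstract's quantization step (features → SID) is commonly implemented with residual quantization: each level encodes the residual left by the previous level's codebook. A minimal sketch with tiny hand-written codebooks (the values are illustrative, not from the paper):

```python
# Residual quantization sketch: each level picks the nearest codebook entry
# for the current residual, emitting one code per level; the codes form the SID.
def nearest(vec, codebook):
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(vec, entry[1]))
    return min(codebook.items(), key=dist)

def residual_quantize(feature, codebooks):
    """Return the SID (one code per level) and the final residual."""
    sid, residual = [], list(feature)
    for cb in codebooks:
        code, centroid = nearest(residual, cb)
        sid.append(code)
        residual = [r - c for r, c in zip(residual, centroid)]
    return tuple(sid), residual

codebooks = [
    {0: [0.0, 0.0], 1: [1.0, 1.0]},    # level-1 codebook (coarse)
    {0: [0.0, 0.0], 1: [0.1, -0.1]},   # level-2 codebook (finer)
]
sid, _ = residual_quantize([1.1, 0.9], codebooks)
print(sid)  # (1, 1)
```

Two items with nearby features can collide on the same SID, which is exactly the ID-collision loss the abstract says direct embedding merging can amplify.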

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript introduces ComeIR, a Conditional Memory enhanced Item Representation framework for generative recommendation. It identifies limitations in existing SID-based item representation methods that lead to the Identity-Structure Preservation Conflict and Input-Output Granularity Mismatch. ComeIR reconstructs SID-token embeddings into item-aware inputs via MM-guided token scoring, employs dual-level Engram memory to capture intra-item code compositions and inter-item transition patterns, and uses a memory-restoring prediction head to reuse these memories while restoring token granularity during autoregressive SID decoding. The authors report extensive experiments demonstrating effectiveness and scalable performance gains from enlarging the conditional memory.

Significance. If the empirical results hold under rigorous controls, this work provides a principled architectural solution to a key bottleneck in generative recommendation pipelines, potentially improving both accuracy and the fidelity of token-level generation. The dual-level memory mechanism for handling code relations and transitions represents a structured addition that could extend to other autoregressive tasks in IR, and the reported scalability with memory size offers a clear path for further gains without requiring entirely new quantization schemes.

minor comments (2)
  1. Abstract: The terms 'MM-guided token scoring' and 'dual-level Engram memory' are introduced without a one-sentence definition or reference to their later formalization; adding a brief parenthetical gloss would improve immediate readability for readers unfamiliar with the sub-area.
  2. The manuscript would benefit from an explicit statement in the experimental section on whether the reported gains remain stable under fixed hyperparameter budgets or require additional tuning relative to baselines.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive assessment of ComeIR, the recognition of its potential to address key bottlenecks in generative recommendation, and the recommendation for minor revision. We appreciate the comments on the dual-level memory mechanism and scalability with memory size. As the report does not enumerate any specific major comments, we have no point-by-point rebuttals to provide at this stage and will proceed with minor revisions to improve clarity and presentation.

Circularity Check

0 steps flagged

No significant circularity identified in the proposed framework

full rationale

The paper introduces ComeIR as an architectural enhancement to generative recommendation systems, detailing components like conditional memory for item representations without presenting any mathematical derivations or predictions that are equivalent to their inputs by construction. The claims focus on resolving specific conflicts through new mechanisms (MM-guided scoring, Engram memory, restoring head), which are presented as independent contributions rather than reductions of existing elements. No self-citations are invoked as load-bearing for uniqueness theorems or ansatzes in the provided description, and the empirical experiments are separate from any definitional circularity. This aligns with the default expectation for most papers lacking circular derivation chains.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The central claim rests on standard assumptions from the generative recommendation literature plus two new architectural inventions whose independent evidence is limited to the reported experiments.

axioms (2)
  • domain assumption Semantic identifiers obtained via quantization preserve sufficient item semantics for autoregressive generation
    Invoked in the quantization-representation-generation pipeline description
  • domain assumption User-item interaction data contains stable intra-item and inter-item patterns that can be captured by memory modules
    Underlying the dual-level Engram memory design
invented entities (2)
  • Dual-level Engram memory no independent evidence
    purpose: Capture intra-item code composition and inter-item transition patterns
    New memory structure introduced to address the identified conflicts
  • Memory-restoring prediction head no independent evidence
    purpose: Reuse memories during SID decoding to restore token granularity
    New component to fix input-output mismatch

pith-pipeline@v0.9.0 · 5571 in / 1347 out tokens · 42389 ms · 2026-05-13T02:25:37.010049+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

57 extracted references · 57 canonical work pages · 9 internal anchors

  1. [1] Yimeng Bai, Chang Liu, Yang Zhang, Dingxian Wang, Frank Yang, Andrew Rabinovich, Wenge Rong, and Fuli Feng. 2025. Bi-Level Optimization for Generative Recommendation: Bridging Tokenization and Generation. arXiv preprint arXiv:2510.21242 (2025).
  2. [2] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432 (2013).
  3. [3] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–45.
  4. [4] Zerui Chen, Heng Chang, Tianying Liu, Chuantian Zhou, Yi Cao, Jiandong Ding, Ming Liu, and Bing Qin. 2026. Beyond the Flat Sequence: Hierarchical and Preference-Aware Generative Recommendations. In Proceedings of the ACM Web Conference 2026. 7999–8007.
  5–6. [5–6] Xin Cheng, Di Luo, Xiuying Chen, Lemao Liu, Dongyan Zhao, and Rui Yan. 2023. Lift yourself up: Retrieval-augmented text generation with self-memory. Advances in Neural Information Processing Systems 36 (2023), 43780–43799.
  7. [7] Xin Cheng, Wangding Zeng, Damai Dai, Qinyu Chen, Bingxuan Wang, Zhenda Xie, Kezhao Huang, Xingkai Yu, Zhewen Hao, Yukun Li, et al. 2026. Conditional memory via scalable lookup: A new axis of sparsity for large language models. arXiv preprint arXiv:2601.07372 (2026).
  8. [8] Jiaxin Deng, Shiyao Wang, Kuo Cai, Lejian Ren, Qigen Hu, Weifeng Ding, Qiang Luo, and Guorui Zhou. 2025. OneRec: Unifying retrieve and rank with generative recommender and iterative preference alignment. arXiv preprint arXiv:2502.18965 (2025).
  9. [9] Yijie Ding, Jiacheng Li, Julian McAuley, and Yupeng Hou. 2026. Inductive generative recommendation via retrieval-based speculation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40. 14675–14683.
  10. [10] Dengzhao Fang, Jingtong Gao, Chengcheng Zhu, Yu Li, Xiangyu Zhao, and Yi Chang. 2025. Hid-VAE: Interpretable generative recommendation via hierarchical and disentangled semantic IDs. arXiv preprint arXiv:2508.04618 (2025).
  11. [11] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems. 299–315.
  12. [12] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 5484–5495.
  13. [13] Xiangming Gu, Tianyu Pang, Chao Du, Qian Liu, Fengzhuo Zhang, Cunxiao Du, Ye Wang, and Min Lin. 2024. When attention sink emerges in language models: An empirical view. arXiv preprint arXiv:2410.10781 (2024).
  14. [14] Yupeng Hou, Zhankui He, Julian McAuley, and Wayne Xin Zhao. 2023. Learning vector-quantized item representation for transferable sequential recommenders. In Proceedings of the ACM Web Conference 2023. 1162–1171.
  15. [15] Yupeng Hou, Jiacheng Li, Ashley Shin, Jinsung Jeon, Abhishek Santhanam, Wei Shao, Kaveh Hassani, Ning Yao, and Julian McAuley. 2025. Generating long semantic IDs in parallel for recommendation. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2. 956–966.
  16. [16] Zheng Hu, Yuxin Chen, Yongsen Pan, Xu Yuan, Yuting Yin, Daoyuan Wang, Boyang Xia, Zefei Luo, Hongyang Wang, Songhao Ni, et al. 2026. Stop Treating Collisions Equally: Qualification-Aware Semantic ID Learning for Recommendation at Industrial Scale. arXiv preprint arXiv:2603.00632 (2026).
  17. [17] Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 1 (2010), 117–128.
  18. [18] Slava Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing 35, 3 (1987), 400–401.
  19. [19] Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, Vol. 1. IEEE, 181–184.
  20. [20] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. 2022. Autoregressive image generation using residual quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11523–11532.
  21. [21] Lei Li, Yongfeng Zhang, Dugang Liu, and Li Chen. 2024. Large language models for generative recommendation: A survey and visionary discussions. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 10146–10159.
  22. [22] Xiaopeng Li, Bo Chen, Junda She, Shiteng Cao, You Wang, Qinlin Jia, Haiying He, Zheli Zhou, Zhao Liu, Ji Liu, et al. 2025. A Survey of Generative Recommendation from a Tri-Decoupled Perspective: Tokenization, Architecture, and Optimization. (2025).
  23. [23] Yongqi Li, Xinyu Lin, Wenjie Wang, Fuli Feng, Liang Pang, Wenjie Li, Liqiang Nie, Xiangnan He, and Tat-Seng Chua. 2024. A survey of generative search and recommendation in the era of large language models. arXiv preprint arXiv:2404.16924 (2024).
  24–25. [24–25] Zhao Li, FengYang Qi, Chuanyu Xu, Tao Zhang, Chengfu Huo, and Peng Zhang. 2026. LSIG: Long Semantic IDs for Generative Recommendation. In Proceedings of the ACM Web Conference 2026. 7779–7788.
  26. [26] Jiacheng Lin, Tian Wang, and Kun Qian. 2025. Rec-R1: Bridging generative large language models and user-centric recommendation systems via reinforcement learning. arXiv preprint arXiv:2503.24289 (2025).
  27. [27] Tao Lin. 2026. A Collision-Free Hot-Tier Extension for Engram-Style Conditional Memory: A Controlled Study of Training Dynamics. arXiv preprint arXiv:2601.16531 (2026).
  28. [28] Xinyu Lin, Chaoqun Yang, Wenjie Wang, Yongqi Li, Cunxiao Du, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. 2024. Efficient inference for large language model-based generative recommendation. arXiv preprint arXiv:2410.05165 (2024).
  29–30. [29–30] Enze Liu, Bowen Zheng, Cheng Ling, Lantao Hu, Han Li, and Wayne Xin Zhao. 2024. End-to-end learnable item tokenization for generative recommendation. arXiv preprint arXiv:2409.05546 (2024).
  31. [31] Ruiyang Ma, Teng Ma, Zhiyuan Su, Hantian Zha, Xinpeng Zhao, Xuchun Shang, Xingrui Yi, Zheng Liu, Zhu Cao, An Wu, et al. 2026. Pooling Engram Conditional Memory in Large Language Models using CXL. In Proceedings of the Sixth European Workshop on Machine Learning and Systems. 225–231.
  32. [32] Kidist Amde Mekonnen, Yubao Tang, and Maarten de Rijke. 2026. A Parametric Memory Head for Continual Generative Retrieval. arXiv preprint arXiv:2604.23388 (2026).
  33. [33] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018).
  34. [34] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.
  35–36. [35–36] Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan Hulikal Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Tran, Jonah Samost, et al. 2023. Recommender systems with generative retrieval. Advances in Neural Information Processing Systems 36 (2023), 10299–10315.
  37. [37] Anima Singh, Trung Vu, Nikhil Mehta, Raghunandan Keshavan, Maheswaran Sathiamoorthy, Yilin Zheng, Lichan Hong, Lukasz Heldt, Li Wei, Devansh Tandon, et al. 2024. Better generalization with semantic IDs: A case study in ranking for recommendations. In Proceedings of the 18th ACM Conference on Recommender Systems. 1039–1044.
  38. [38] Dan Tito Svenstrup, Jonas Hansen, and Ole Winther. 2017. Hash embeddings for efficient word representations. Advances in Neural Information Processing Systems 30 (2017).
  39. [39] Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, et al. 2023. Shall we pretrain autoregressive language models with retrieval? A comprehensive study. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 7763–7786.
  40. [40] Lean Wang, Huazuo Gao, Chenggang Zhao, Xu Sun, and Damai Dai. 2024. Auxiliary-loss-free load balancing strategy for mixture-of-experts. arXiv preprint arXiv:2408.15664 (2024).
  41. [41] Wenjie Wang, Honghui Bao, Xinyu Lin, Jizhi Zhang, Yongqi Li, Fuli Feng, See-Kiong Ng, and Tat-Seng Chua. 2024. Learnable item tokenization for generative recommendation. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2400–2409.
  42. [42] Ye Wang, Jiahao Xun, Minjie Hong, Jieming Zhu, Tao Jin, Wang Lin, Haoyuan Li, Linjun Li, Yan Xia, Zhou Zhao, et al. 2024. EAGER: Two-stream generative recommender with behavior-semantic collaboration. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 3245–3254.
  43. [43] Yejing Wang, Shengyu Zhou, Jinyu Lu, Ziwei Liu, Langming Liu, Maolin Wang, Wenlin Zhang, Feng Li, Wenbo Su, Pengjie Wang, et al. 2026. Nezha: A zero-sacrifice and hyperspeed decoding architecture for generative recommendations. In Proceedings of the ACM Web Conference 2026. 8073–8082.
  44. [44] Zesheng Wang, Longfei Xu, Weidong Deng, Huimin Yan, Kaikui Liu, and Xiangxiang Chu. 2026. IntRR: A Framework for Integrating SID Redistribution and Length Reduction. arXiv preprint arXiv:2602.20704 (2026).
  45–46. [45–46] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453 (2023).
  47. [47] Liu Yang, Fabian Paischer, Kaveh Hassani, Jiacheng Li, Shuai Shao, Zhang Gabriel Li, Yun He, Xue Feng, Nima Noorshams, Sem Park, et al. 2024. Unifying generative and dense retrieval for sequential recommendation. arXiv preprint arXiv:2411.18814 (2024).
  48. [48] Yuhao Yang, Zhi Ji, Zhaopeng Li, Yi Li, Zhonglin Mo, Yue Ding, Kai Chen, Zijian Zhang, Jie Li, Shuanglong Li, et al. 2025. Sparse meets dense: Unified generative recommendations with cascaded sparse-dense representations. arXiv preprint arXiv:2503.02453 (2025).
  49. [49] Jiaqi Zhai, Lucy Liao, Xing Liu, Yueming Wang, Rui Li, Xuan Cao, Leon Gao, Zhaojie Gong, Fangda Gu, Michael He, et al. 2024. Actions speak louder than words: Trillion-parameter sequential transducers for generative recommendations. arXiv preprint arXiv:2402.17152 (2024).
  50. [50] Zhaoqi Zhang, Haolei Pei, Jun Guo, Tianyu Wang, Yufei Feng, Hui Sun, Shaowei Liu, and Aixin Sun. 2026. OneTrans: Unified feature interaction and sequence modeling with one transformer in industrial recommender. In Proceedings of the ACM Web Conference 2026. 8162–8170.
  51. [51] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 1, 2 (2023), 1–124.
  52. [52] Zhe Zhao, Tao Liu, Shen Li, Bofang Li, and Xiaoyong Du. 2017. Ngram2vec: Learning improved word representations from ngram co-occurrence statistics. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 244–253.
  53. [53] Bowen Zheng, Yupeng Hou, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ming Chen, and Ji-Rong Wen. 2024. Adapting large language models by integrating collaborative semantics for recommendation. In 2024 IEEE 40th International Conference on Data Engineering (ICDE). IEEE, 1435–1448.
  54. [54] Guorui Zhou, Hengrui Hu, Hongtao Cheng, Huanjie Wang, Jiaxin Deng, Jinghao Zhang, Kuo Cai, Lejian Ren, Lu Ren, Liao Yu, et al. 2025. OneRec-V2 technical report. arXiv preprint arXiv:2508.20900 (2025).
  55–56. [55–56] Jieming Zhu, Mengqun Jin, Qijiong Liu, Zexuan Qiu, Zhenhua Dong, and Xiu Li. 2024. CoST: Contrastive quantization based semantic tokenization for generative recommendation. In Proceedings of the 18th ACM Conference on Recommender Systems. 969–974.
  57. [57] Yanyan Zou, Junbo Qi, Lunsong Huang, Yu Li, Kewei Xu, Jiabao Gao, Binglei Zhao, Xuanhua Yang, Sulong Xu, and Shengjie Li. 2026. GenRec: A Preference-Oriented Generative Framework for Large-Scale Recommendation. arXiv preprint arXiv:2604.14878 (2026).