Recognition: no theorem link
SpecTran: Spectral-Aware Transformer-based Adapter for LLM-Enhanced Sequential Recommendation
Pith reviewed 2026-05-16 09:20 UTC · model grok-4.3
The pith
SpecTran integrates high-dimensional LLM text embeddings into sequential recommenders through a transformer adapter that attends over the full spectral range, guided by a learnable spectral-position encoding.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
SpecTran is a spectral-aware transformer-based adapter that operates in the spectral domain, attending to the full spectrum to select and aggregate informative components from LLM embeddings. A learnable spectral-position encoding injects singular-value cues as an inductive bias that guides attention toward salient components and promotes diversity across embedding dimensions, thereby avoiding the dimension collapse of adapter methods and the rigidity of SVD-based methods.
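The excerpt does not spell out SpecTran's architecture, so the following is only a minimal sketch of the mechanism as described: each spectral component of the frozen LLM embedding matrix becomes an attention token, and a small learnable encoding of its singular value is added as a position signal. Module names, shapes, and hyperparameters here are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class SpectralAdapterSketch(nn.Module):
    """Hypothetical full-spectrum adapter: attend over every spectral component of the
    LLM text-embedding matrix and project the result to the SR backbone's dimension."""

    def __init__(self, llm_dim: int, sr_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.comp_proj = nn.Linear(llm_dim, sr_dim)        # project each spectral direction
        self.sv_enc = nn.Sequential(                        # learnable spectral-position encoding
            nn.Linear(1, sr_dim), nn.GELU(), nn.Linear(sr_dim, sr_dim))
        self.attn = nn.MultiheadAttention(sr_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(sr_dim)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (num_items, llm_dim) frozen LLM embeddings for all items
        U, S, Vh = torch.linalg.svd(text_emb, full_matrices=False)  # full spectrum, no truncation
        comp = self.comp_proj(Vh)                           # (k, sr_dim): one token per component
        pos = self.sv_enc(S.unsqueeze(-1))                  # (k, sr_dim): singular-value cue
        # Per-item token sequence: each component direction weighted by the item's coordinate,
        # plus the shared spectral-position encoding.
        tokens = U.unsqueeze(-1) * comp.unsqueeze(0) + pos.unsqueeze(0)  # (num_items, k, sr_dim)
        attended, _ = self.attn(tokens, tokens, tokens)      # attention across all k components
        return self.norm(attended.mean(dim=1))               # (num_items, sr_dim) item embeddings
```

Read this way, nothing in the spectrum is discarded up front: the attention weights decide which components contribute, and the singular values enter only through the learnable encoding that biases that decision.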
What carries the argument
Spectral-aware transformer adapter that performs full-spectrum attention guided by learnable spectral-position encoding derived from singular values
If this is right
- Recommendation accuracy rises across multiple real-world datasets and backbone architectures.
- Embedding transformations avoid the severe dimension collapse observed in prior adapter designs.
- More spectral information is retained compared with SVD truncation that keeps only top components.
- The same adapter can be plugged into different sequential recommendation models without manual retuning.
- Singular-value cues supply an inductive bias that helps the model focus on salient parts of the spectrum.
Where Pith is reading between the lines
- The same full-spectrum attention idea could be tested on non-text embeddings such as image or graph features in recommendation.
- If the spectral transformer pattern generalizes, it might replace manual dimensionality reduction steps in other embedding-fusion pipelines.
- The learnable position encoding might transfer to domains outside recommendation where high-dimensional inputs suffer from concentration in few coordinates.
- An ablation that removes the learnable encoding while keeping full-spectrum attention would isolate how much of the gain comes from the inductive bias.
Load-bearing premise
That full-spectrum attention inside a transformer adapter, combined with a learnable singular-value encoding, will reliably preserve more useful information than methods that collapse dimensions or discard most spectral components.
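As a toy illustration of what SVD truncation gives up (not from the paper; the matrix size and spectral decay below are assumed), rank-r truncation can leave a non-trivial share of the spectral energy on the table, which is the information a full-spectrum adapter would still see:

```python
import numpy as np

# Synthetic "LLM embedding matrix" with a smoothly decaying spectrum (assumed shape).
rng = np.random.default_rng(0)
E = rng.standard_normal((2000, 768)) @ np.diag(np.exp(-np.arange(768) / 150.0))

s = np.linalg.svd(E, compute_uv=False)      # full singular-value spectrum
energy = s**2 / np.sum(s**2)

for r in (8, 32, 128, len(s)):
    print(f"top-{r:>3} components retain {energy[:r].sum():6.1%} of the spectral energy")
```

Whether the discarded tail carries signal that actually helps recommendation is exactly what this premise asserts and what the experiment described next would probe.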
What would settle it
A controlled experiment on a held-out dataset or backbone in which SpecTran produces no accuracy gain over SVD or standard adapter baselines and the resulting embeddings still concentrate in a small number of dimensions.
Original abstract
Traditional sequential recommendation (SR) models learn low-dimensional item ID embeddings from user-item interactions, often overlooking textual information such as item titles or descriptions. Recent advances in Large Language Models (LLMs) have inspired a surge of research that encodes item textual information with high-dimensional semantic embeddings, and designs transformation methods to inject such embeddings into SR models. These embedding transformation strategies can be categorized into two types, both of which exhibit notable drawbacks: 1) adapter-based methods suffer from pronounced dimension collapse, concentrating information into a few dominant dimensions; 2) SVD-based methods are rigid and manual, considering only a few principal spectral components while discarding rich information in the remaining spectrum. To address these limitations, we propose SpecTran, a spectral-aware transformer-based adapter that operates in the spectral domain, attending to the full spectrum to select and aggregate informative components. A learnable spectral-position encoding injects singular-value cues as an inductive bias, guiding attention toward salient spectral components and promoting diversity across embedding dimensions. Across four real-world datasets and three SR backbones, it consistently outperforms strong baselines, achieving an average improvement of 9.17%.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes SpecTran, a spectral-aware transformer-based adapter for injecting high-dimensional LLM textual embeddings into sequential recommendation models. It operates in the spectral domain using full-spectrum attention and a learnable spectral-position encoding to address dimension collapse in adapter methods and rigidity in SVD-based methods. Empirical evaluation across four real-world datasets and three SR backbones reports consistent outperformance with an average 9.17% improvement over strong baselines.
Significance. If the gains can be rigorously attributed to the spectral mechanisms rather than capacity or optimization differences, the approach would provide a useful framework for preserving informative components when aligning LLM embeddings with SR models. The multi-dataset, multi-backbone consistency is a positive indicator of robustness, but the current lack of supporting diagnostics limits the assessed significance.
Major comments (3)
- [Abstract] The central claim that full-spectrum attention plus learnable spectral-position encoding avoids dimension collapse and yields the 9.17% gain is load-bearing, yet no singular-value spectra, effective-rank metrics, or cosine-similarity diagnostics are reported to compare SpecTran embeddings against adapter and SVD baselines.
- [Experiments] The reported average improvement lacks error bars, run-to-run variance, or statistical significance tests, and no ablation isolating the spectral attention component from the transformer adapter structure is described, leaving alternative explanations (parameter count, regularization) viable.
- [Method] The inductive bias introduced by the learnable spectral-position encoding is asserted to promote dimension diversity, but no quantitative verification (e.g., rank or entropy measures before/after) is provided to substantiate this over standard positional encodings.
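A minimal sketch of the collapse diagnostics requested above, assuming effective rank is measured as the exponential of the spectral entropy and collapse additionally shows up as high pairwise cosine similarity; function names and the pair-sampling budget are illustrative, not from the paper:

```python
import numpy as np


def effective_rank(emb: np.ndarray) -> float:
    """exp(Shannon entropy of the normalized singular-value spectrum); low values signal collapse."""
    s = np.linalg.svd(emb, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))


def mean_pairwise_cosine(emb: np.ndarray, n_pairs: int = 10_000, seed: int = 0) -> float:
    """Average cosine similarity over random item pairs; values near 1 also signal collapse."""
    rng = np.random.default_rng(seed)
    x = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    i, j = rng.integers(0, len(x), size=(2, n_pairs))
    return float((x[i] * x[j]).sum(axis=1).mean())


# Hypothetical usage: compare transformed item embeddings from each method on one dataset.
# for name, emb in {"adapter": emb_adapter, "svd_r32": emb_svd, "spectran": emb_spectran}.items():
#     print(name, effective_rank(emb), mean_pairwise_cosine(emb))
```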
Minor comments (1)
- The abstract would benefit from explicitly naming the four datasets and three SR backbones to improve immediate reproducibility assessment.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We agree that the suggested diagnostics, statistical reporting, and ablations will strengthen the empirical claims and will incorporate them in the revised manuscript.
Point-by-point responses
- Referee: [Abstract] The central claim that full-spectrum attention plus learnable spectral-position encoding avoids dimension collapse and yields the 9.17% gain is load-bearing, yet no singular-value spectra, effective-rank metrics, or cosine-similarity diagnostics are reported to compare SpecTran embeddings against adapter and SVD baselines.
Authors: We acknowledge that the current manuscript lacks these supporting diagnostics. In the revision we will add singular-value spectra plots, effective-rank metrics, and cosine-similarity comparisons between SpecTran embeddings and those from standard adapter and SVD baselines to directly demonstrate reduced dimension collapse. revision: yes
- Referee: [Experiments] The reported average improvement lacks error bars, run-to-run variance, or statistical significance tests, and no ablation isolating the spectral attention component from the transformer adapter structure is described, leaving alternative explanations (parameter count, regularization) viable.
Authors: We agree that error bars, run-to-run variance, and statistical tests are necessary. The revision will include results from multiple independent runs with mean and standard deviation, plus significance testing (a minimal sketch of such a protocol follows these responses). We will also add an ablation that isolates the spectral attention component while keeping parameter counts matched across variants; the original adapter designs already used comparable budgets, but the new ablation will rule out capacity or regularization confounds. revision: yes
- Referee: [Method] The inductive bias introduced by the learnable spectral-position encoding is asserted to promote dimension diversity, but no quantitative verification (e.g., rank or entropy measures before/after) is provided to substantiate this over standard positional encodings.
Authors: We will add quantitative verification in the revised manuscript by reporting effective rank and entropy measures on the embeddings before and after the learnable spectral-position encoding, with direct comparison to standard positional encodings to confirm the claimed increase in dimension diversity. revision: yes
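The reporting protocol promised in the second response could look like the sketch below; train_and_eval is a stand-in that returns synthetic scores so the example runs end to end, and the seed count, metric name, and numbers are placeholders rather than results:

```python
import numpy as np
from scipy import stats


def train_and_eval(model_name: str, seed: int) -> float:
    # Placeholder for a full training run: returns a synthetic NDCG@10-like score.
    # Replace with the real pipeline; the offsets below are arbitrary, not reported results.
    rng = np.random.default_rng(seed + (1000 if model_name == "SpecTran" else 0))
    base = 0.30 if model_name == "SpecTran" else 0.28
    return float(base + rng.normal(0.0, 0.005))


seeds = range(5)
spectran = np.array([train_and_eval("SpecTran", s) for s in seeds])
baseline = np.array([train_and_eval("best_baseline", s) for s in seeds])

print(f"SpecTran      {spectran.mean():.4f} +/- {spectran.std(ddof=1):.4f}")
print(f"best baseline {baseline.mean():.4f} +/- {baseline.std(ddof=1):.4f}")
t, p = stats.ttest_rel(spectran, baseline)   # paired across seeds
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```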
Circularity Check
No circularity: empirical method with no self-referential derivations
Full rationale
The paper introduces SpecTran via descriptive claims about operating in the spectral domain with full-spectrum attention and learnable position encoding to avoid dimension collapse. No equations, derivations, or first-principles steps are exhibited in the provided text that reduce the 9.17% improvement claim to a fitted parameter, self-definition, or self-citation chain. Performance gains are presented as empirical outcomes across datasets and backbones rather than mathematically forced results. No load-bearing uniqueness theorems or ansatzes imported from prior self-work are referenced. This is a standard non-circular empirical adapter proposal.