pith. machine review for the scientific record.

arxiv: 2604.13573 · v1 · submitted 2026-04-15 · 💻 cs.IR

Recognition: unknown

From Transfer to Collaboration: A Federated Framework for Cross-Market Sequential Recommendation

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 13:00 UTC · model grok-4.3

classification 💻 cs.IR
keywords: cross-market recommendation · federated learning · sequential recommendation · source degradation · negative transfer · semantic loss · multi-market collaboration

The pith

A many-to-many federated framework lets all markets improve sequential recommendations together without any market losing performance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that standard one-to-one transfer from a source market to a target market creates two problems: the source market's own accuracy drops, and differences between markets cause the transferred model to perform poorly on the target. To fix this, the authors replace the one-to-one approach with a many-to-many setup in which every market participates in a joint federated pretraining stage that learns common user behavior sequences. After this shared stage, each market runs a local fine-tuning step that adapts the model to its own item catalog using a new loss function called Semantic Soft Cross-Entropy. The new loss uses item semantic similarities to soften the usual cross-entropy penalties, reducing the harm caused by market differences during the joint phase. If the approach holds, recommendation systems could pool knowledge across isolated markets while keeping every market's local performance at least as good as training alone.
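The paper's loss is not spelled out in this summary, so the following sketch is an editorial reconstruction of what a Semantic Soft Cross-Entropy could look like: the one-hot target is replaced by a distribution over semantically similar items, with a temperature controlling how soft the supervision gets. Function names and the exact parameterization are assumptions, not the authors' definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_soft_ce(logits, target, item_sim, tau=0.1):
    """Sketch of a Semantic Soft Cross-Entropy (illustrative, not the paper's).

    logits:   (num_items,) model scores for the next item
    target:   index of the ground-truth next item
    item_sim: (num_items, num_items) item semantic-similarity matrix
    tau:      temperature; as tau -> 0 this recovers ordinary CE
    """
    # soft target: the similarity row of the true item, sharpened by tau
    q = softmax(item_sim[target] / tau)
    p = softmax(logits)
    return float(-(q * np.log(p + 1e-12)).sum())
```

With `item_sim = np.eye(num_items)` and a small `tau`, the soft target collapses back to one-hot and the value coincides with ordinary cross-entropy, matching the intuition that softening only matters when items share semantics.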

Core claim

The central claim is that a many-to-many collaboration paradigm, formed by federated pretraining on shared behavior-level patterns followed by market-specific local fine-tuning, together with the Semantic Soft Cross-Entropy loss, overcomes source degradation and negative transfer in cross-market sequential recommendation.

What carries the argument

The many-to-many collaboration paradigm that performs federated pretraining to capture shared behavior sequences and then applies local fine-tuning with a market-specific adaptation module, enabled by the Semantic Soft Cross-Entropy loss that incorporates shared semantic information to stabilize collaborative optimization.
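Read mechanically, the two stages could be sketched as below. This is a hypothetical reconstruction, not the authors' code: plain FedAvg stands in for the federated pretraining round, and a LoRA-style low-rank adapter stands in for the market-specific adaptation module (the low-rank parameter 𝑟 in Figures 7 and 13 points that way, but the details here are assumed).

```python
import numpy as np

def fedavg(market_weights, market_sizes):
    """One round of federated averaging over market-local encoder weights.

    market_weights: list of weight arrays, one per market
    market_sizes:   number of training sequences contributed by each market
    """
    total = sum(market_sizes)
    return sum(w * (n / total) for w, n in zip(market_weights, market_sizes))

def local_adapter(W_shared, r, rng):
    """Market-specific low-rank adaptation: W = W_shared + A @ B.

    Only A (d x r) and B (r x d) would be trained locally, so the shared
    behavior encoder stays intact for every other market.
    """
    d = W_shared.shape[0]
    A = rng.normal(scale=0.01, size=(d, r))
    B = np.zeros((r, d))  # zero init: the adapter contributes nothing before training
    return W_shared + A @ B
```

The zero-initialized adapter is the standard LoRA trick: each market starts exactly at the shared pretrained weights and only then drifts toward its own item-level preferences.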

If this is right

  • Every participating market obtains higher sequential recommendation accuracy than it would achieve by training alone.
  • No market experiences a drop in its own performance when it contributes to the shared pretraining stage.
  • The semantic soft loss reduces the optimization difficulty caused by item catalog and behavior differences across markets.
  • Local fine-tuning after pretraining successfully captures item-level preferences that are unique to each market.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pretraining-plus-local-adaptation pattern could be tested in other settings where data must stay isolated, such as hospital-specific medical recommendations.
  • Defining semantic similarity between items may require different methods when markets sell completely unrelated product types.
  • The framework raises the practical question of how many markets can join the federated stage before communication costs outweigh the accuracy gains.
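For that last point, a rough and purely illustrative communication estimate (none of these numbers come from the paper): each federated round moves the shared behavior encoder up and down once per market.

```python
def round_traffic_gb(num_markets, encoder_params, bytes_per_param=4):
    """Per-round communication: each market uploads and downloads the shared
    behavior encoder once. Figures are illustrative, not from the paper."""
    return 2 * num_markets * encoder_params * bytes_per_param / 1e9

# e.g. a hypothetical 10M-parameter encoder shared by 16 markets moves
# 1.28 GB of fp32 weights per round
traffic = round_traffic_gb(16, 10_000_000)
```

Traffic grows linearly in the number of markets and rounds, so the break-even point the bullet asks about depends mostly on how many pretraining rounds the shared stage needs.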

Load-bearing premise

Market differences can be handled during joint training simply by softening the cross-entropy loss with semantic similarities, without the softening step itself creating new mismatches or biases across markets.

What would settle it

An experiment in which the source market's recommendation accuracy after the federated pretraining stage falls below the accuracy obtained by training that market completely on its own data.

Figures

Figures reproduced from arXiv: 2604.13573 by Han Yu, Haoxuan Li, Honglei Zhang, Jundong Chen, Xiangmou Qu, Yidong Li.

Figure 1: Paradigm comparison. One-to-one transfer suffers …
Figure 2: Our framework consists of two stages: the first stage lever…
Figure 2: Overview of FeCoSR for CMR. I. (Orange) Federated pretraining with textual modality for cross-market behavior-level …
Figure 3: Inter-market heterogeneity under CE and S…
Figure 4: Distributions of inter-user representation similari…
Figure 6: Effect of the temperature parameter 𝜏 during federated pretraining, evaluated by HR@10.
Figure 7: Effect of the low-rank 𝑟 during local fine-tuning, evaluated by HR@10.
Figure 8: Inter-market heterogeneity under CE and S…
Figure 9: Under the CE objective, the similarity values are concen…
Figure 10: Distributions of inter-user representation similarities before and after behavior encoder fine-tuning.
Figure 11: Visualization of item embeddings before and after incorporating the ID modality.
Figure 12: Effect of the temperature parameter 𝜏 during federated pretraining, evaluated by NDCG@10.
Figure 13: Effect of the low-rank 𝑟 during local fine-tuning, evaluated by NDCG@10.
Original abstract

Cross-market recommendation (CMR) aims to enhance recommendation performance across multiple markets. Due to its inherent characteristics, i.e., data isolation, non-overlapping users, and market heterogeneity, CMR introduces unique challenges and fundamentally differs from cross-domain recommendation (CDR). Existing CMR approaches largely inherit CDR by adopting the one-to-one transfer paradigm, where a model is pretrained on a source market and then fine-tuned on a target market. However, such a paradigm suffers from CH1. source degradation, where the source market sacrifices its own performance for the target markets, and CH2. negative transfer, where market heterogeneity leads to suboptimal performance in target markets. To address these challenges, we propose FeCoSR, a novel federated collaboration framework for cross-market sequential recommendation. Specifically, to tackle CH1, we introduce a many-to-many collaboration paradigm that enables all markets to jointly participate in and benefit from training. It consists of a federated pretraining stage for capturing shared behavior-level patterns, followed by local fine-tuning for market-specific item-level preferences. For CH2, we theoretically and empirically show that vanilla Cross-Entropy (CE) exacerbates market heterogeneity, undermining federated optimization. To address this, we propose a Semantic Soft Cross-Entropy (S^2CE) that leverages shared semantic information to facilitate collaborative behavioral learning across markets. Then, we design a market-specific adaptation module during fine-tuning to capture local item preferences. Extensive experiments on the real-world datasets demonstrate the advantages of FeCoSR over other methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes FeCoSR, a federated collaboration framework for cross-market sequential recommendation. It replaces the one-to-one transfer paradigm with a many-to-many approach consisting of federated pretraining to capture shared behavior-level patterns across markets and local fine-tuning for market-specific item preferences, thereby addressing source degradation (CH1). To handle negative transfer (CH2) arising from market heterogeneity, the authors claim to theoretically and empirically demonstrate that vanilla cross-entropy exacerbates heterogeneity during federated optimization and introduce Semantic Soft Cross-Entropy (S^2CE) that leverages shared semantic information; a market-specific adaptation module is added during fine-tuning. Experiments on real-world datasets are said to show advantages over existing methods.

Significance. If the central claims hold, the work would be significant for cross-market recommendation by establishing a collaborative federated alternative to transfer-based methods, allowing all markets to benefit without source degradation and providing a mechanism to reduce heterogeneity via semantic soft labels. The many-to-many paradigm and S^2CE represent a constructive shift in handling data isolation and non-overlapping users/items, with potential to influence federated recsys research; the use of real datasets is a positive aspect.

major comments (2)
  1. [Abstract] Abstract: the claim of a 'theoretical and empirical' demonstration that vanilla CE exacerbates market heterogeneity (undermining federated optimization) and that S^2CE reliably mitigates it is asserted without any derivation, equation, or proof sketch visible; this is load-bearing for the justification of replacing CE and for the assertion that semantic sharing avoids new negative transfer effects.
  2. [Method] Method (S^2CE definition): the formulation of how shared semantic information is extracted, how soft targets are computed, and how they are aggregated across non-overlapping markets (e.g., via FedAvg) is not specified; without this, it is unclear whether S^2CE counters CE-induced heterogeneity or introduces noise when item semantics are not aligned.
minor comments (2)
  1. [Abstract] The abstract refers to 'extensive experiments on the real-world datasets' but provides no information on the specific datasets, baselines, evaluation metrics, or statistical significance tests; these details are needed to assess the empirical support.
  2. [Method] Clarify the precise distinction and interaction between 'behavior-level patterns' captured in federated pretraining and 'item-level preferences' handled in local fine-tuning, including any equations governing the transition between stages.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. The comments highlight important areas for improving clarity and rigor, particularly around the theoretical justification and the precise formulation of S^2CE. We address each major comment below and will incorporate revisions to strengthen the manuscript.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the claim of a 'theoretical and empirical' demonstration that vanilla CE exacerbates market heterogeneity (undermining federated optimization) and that S^2CE reliably mitigates it is asserted without any derivation, equation, or proof sketch visible; this is load-bearing for the justification of replacing CE and for the assertion that semantic sharing avoids new negative transfer effects.

    Authors: We acknowledge that the abstract asserts the theoretical demonstration without a visible sketch, which reduces accessibility. The manuscript contains supporting analysis (Section 3.2) showing that vanilla CE amplifies label distribution divergence across markets during FedAvg updates, with empirical validation in the experiments. To address the concern directly, we will add a concise proof sketch (key inequality on heterogeneity increase) and reference to the relevant equations in the revised abstract and introduction, ensuring the claim is self-contained. revision: yes

  2. Referee: [Method] Method (S^2CE definition): the formulation of how shared semantic information is extracted, how soft targets are computed, and how they are aggregated across non-overlapping markets (e.g., via FedAvg) is not specified; without this, it is unclear whether S^2CE counters CE-induced heterogeneity or introduces noise when item semantics are not aligned.

    Authors: We agree the current description lacks sufficient detail on the mechanics. Shared semantics are derived from aligned item metadata (categories and textual descriptions) available across markets; soft targets are computed via a similarity-weighted distribution over semantically related items (using cosine similarity on metadata embeddings). These replace hard labels in the loss, and updates are aggregated with standard FedAvg. A semantic alignment regularizer mitigates misalignment noise. In revision we will insert explicit equations for soft-target generation, a pseudocode algorithm, and discussion of the alignment step to clarify how heterogeneity is reduced without introducing new noise. revision: yes
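A minimal sketch of the soft-target construction this response describes, assuming cosine similarity over item metadata embeddings as stated; the helper names and the temperature normalization are illustrative, not the manuscript's equations.

```python
import numpy as np

def cosine_sim_matrix(emb):
    """Pairwise cosine similarity between item metadata embeddings."""
    norm = np.linalg.norm(emb, axis=1, keepdims=True)
    unit = emb / np.clip(norm, 1e-12, None)
    return unit @ unit.T

def soft_targets(target_idx, emb, tau=0.2):
    """Similarity-weighted distribution over semantically related items,
    replacing the hard one-hot label for the true next item."""
    sims = cosine_sim_matrix(emb)[target_idx]
    z = (sims - sims.max()) / tau  # shift for numerical stability
    q = np.exp(z)
    return q / q.sum()
```

Because an item's cosine similarity with itself is maximal, the true item always keeps the largest target mass, while metadata-similar items absorb the rest; the alignment regularizer mentioned above would sit on top of this and is not sketched here.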

Circularity Check

0 steps flagged

No load-bearing circularity; new framework components presented as independent construction

Full rationale

The derivation chain introduces FeCoSR with a many-to-many federated pretraining stage plus S^2CE loss as a novel response to identified challenges CH1/CH2. The claim that vanilla CE exacerbates heterogeneity is stated as a theoretical/empirical finding rather than a definitional equivalence or fitted-parameter renaming. No equations in the abstract reduce the proposed semantic soft labels or adaptation module to prior fitted quantities by construction. Self-citations are not load-bearing for the central result, and the method is not a re-expression of existing patterns under new coordinates. This is the common honest non-finding for a methods paper whose improvements rest on empirical validation rather than tautological reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

Review performed on abstract only; full paper details on parameters, proofs, or new entities unavailable. Assumes standard federated learning constraints and market heterogeneity as given.

axioms (1)
  • domain assumption Data isolation, non-overlapping users, and market heterogeneity are inherent to CMR and differ fundamentally from CDR.
    Stated directly in the abstract as the basis for why existing CDR methods fail.
invented entities (1)
  • Semantic Soft Cross-Entropy (S^2CE) no independent evidence
    purpose: Leverage shared semantic information to facilitate collaborative behavioral learning and mitigate negative transfer from market heterogeneity.
    Introduced as the key replacement for vanilla CE during federated pretraining.

pith-pipeline@v0.9.0 · 5589 in / 1458 out tokens · 25268 ms · 2026-05-10T13:00:26.837513+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

45 extracted references · 2 canonical work pages

  1. [1]

    Samarth Bhargav, Mohammad Aliannejadi, and Evangelos Kanoulas. 2023. Market-aware models for efficient cross-market recommendation. In European Conference on Information Retrieval. 134–149

  2. [2]

    Hamed Bonab, Mohammad Aliannejadi, Ali Vardasbi, Evangelos Kanoulas, and James Allan. 2021. Cross-market product recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 110–119

  3. [3]

    Jiangxia Cao, Xin Cong, Tingwen Liu, and Bin Wang. 2022. Item similarity mining for multi-market recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2249–2254

  4. [4]

    Gaode Chen, Xinghua Zhang, Yijun Su, Yantong Lai, Ji Xiang, Junbo Zhang, and Yu Zheng. 2023. Win-win: a privacy-preserving federated framework for dual-target cross-domain recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 4149–4156

  5. [5]

    Jundong Chen, Honglei Zhang, Haoxuan Li, Chunxu Zhang, Zhiwei Li, and Yidong Li. 2025. Beyond personalization: Federated recommendation with calibration via low-rank decomposition. arXiv preprint arXiv:2506.09525 (2025)

  6. [6]

    Jundong Chen, Honglei Zhang, Chunxu Zhang, Fangyuan Luo, and Yidong Li. Breaking the aggregation bottleneck in federated recommendation: A personalized model merging approach. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40. 14547–14555

  8. [8]

    Yen-Chi Chen. 2017. A tutorial on kernel density estimation and recent advances. Biostatistics & Epidemiology 1, 1 (2017), 161–187

  9. [9]

    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171–4186

  10. [10]

    Xiaoqiang Gui, Bowen Chen, Qiaoyu Tan, Jun Wang, Yongqing Zheng, Qingzhong Li, Lizhen Cui, and Guoxian Yu. 2026. Federated Recommendation via Stochastic Aggregation and Consistency Inference. ACM Transactions on Information Systems (2026)

  11. [11]

    Lei Guo, Ziang Lu, Junliang Yu, Quoc Viet Hung Nguyen, and Hongzhi Yin. 2024. Prompt-enhanced federated content representation learning for cross-domain recommendation. In Proceedings of the ACM Web Conference 2024. 3139–3149

  12. [12]

    Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, and Ji-Rong Wen. 2022. Towards universal sequence representation learning for recommender systems. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 585–593

  13. [13]

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Liang Wang, Weizhu Chen, et al. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations

  14. [14]

    Jun Hu, Bryan Hooi, Bingsheng He, and Yinwei Wei. 2025. Modality-independent graph neural networks with global transformers for multimodal recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39. 11790–11798

  15. [15]

    Zheng Hu, Satoshi Nakagawa, Shi-Min Cai, Fuji Ren, and Jiawen Deng. 2024. Enhancing cross-market recommendations by addressing negative transfer and leveraging item co-occurrences. Information Systems 124 (2024), 102388

  16. [16]

    HyeoungGuk Kang, Donghoon Lee, and Hyunsouk Cho. 2023. Outlier-aware Cross-Market Product Recommendation. In 2023 IEEE International Conference on Big Data and Smart Computing (BigComp). 120–123

  17. [17]

    Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining. 197–206

  18. [18]

    Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine 37, 3 (2020), 50–60

  19. [19]

    Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2020. On the convergence of FedAvg on non-IID data. In International Conference on Learning Representations

  20. [20]

    Yuyuan Li, Junjie Fang, Fengyuan Yu, Xichun Sheng, Tianyu Du, Xuyang Teng, Shaowei Jiang, Linbo Jiang, Jianan Lin, and Chaochao Chen. 2026. FedAU2: Attribute Unlearning for User-Level Federated Recommender Systems with Adaptive and Robust Adversarial Training. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40. 23310–23318

  21. [21]

    Yichen Li, Yijing Shan, Yi Liu, Haozhao Wang, Cheng Wang, Yi Wang, Ruixuan Li, et al. 2025. Efficient Knowledge Transfer in Federated Recommendation for Joint Venture Ecosystem. In The 39th Annual Conference on Neural Information Processing Systems

  22. [22]

    Zhiwei Li, Guodong Long, and Tianyi Zhou. 2024. Federated recommendation with additive personalization. In International Conference on Learning Representations

  23. [23]

    Qidong Liu, Jiaxi Hu, Yutian Xiao, Xiangyu Zhao, Jingtong Gao, Wanyu Wang, Qing Li, and Jiliang Tang. 2024. Multimodal recommender systems: A survey. Comput. Surveys 57, 2 (2024), 1–17

  24. [24]

    Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. 43–52

  26. [26]

    Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. 1273–1282

  27. [27]

    Jiaming Qian, Xinting Liao, Xiangmou Qu, Zhihui Fu, Xingyu Lou, Changwang Zhang, Pengyang Zhou, Zijun Zhou, Jun Wang, and Chaochao Chen. 2025. Personalized Federated Recommendation with Multi-Faceted User Representation and Global Consistent Prototype. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 2399–2408

  28. [28]

    Zehua Sun, Yonghui Xu, Yong Liu, Wei He, Lanju Kong, Fangzhao Wu, Yali Jiang, and Lizhen Cui. 2024. A survey on federated recommendation systems. IEEE Transactions on Neural Networks and Learning Systems 36, 1 (2024), 6–20

  29. [29]

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Annual Conference on Neural Information Processing Systems, Vol. 30

  30. [30]

    Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 5797–5808

  31. [31]

    Chen Wang, Ziwei Fan, Liangwei Yang, Mingdai Yang, Xiaolong Liu, Zhiwei Liu, and Philip Yu. 2024. Pre-training with transferable attention for addressing market shifts in cross-market sequential recommendation. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2970–2979

  32. [32]

    Li Wang, Shoujin Wang, Quangui Zhang, Qiang Wu, and Min Xu. 2025. Federated user preference modeling for privacy-preserving cross-domain recommendation. IEEE Transactions on Multimedia (2025)

  33. [33]

    Yueqi Xie, Peilin Zhou, and Sunghun Kim. 2022. Decoupled side information fusion for sequential recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1611–1621

  34. [34]

    Tianzi Zang, Yanmin Zhu, Haobing Liu, Ruohan Zhang, and Jiadi Yu. 2022. A survey on cross-domain recommendation: taxonomies, methods, and future directions. ACM Transactions on Information Systems 41, 2 (2022), 1–39

  35. [35]

    Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, and Bo Yang. 2023. Dual personalization on federated recommendation. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence. 4558–4566

  36. [36]

    Chunxu Zhang, Guodong Long, Tianyi Zhou, Zijian Zhang, Peng Yan, and Bo Yang. 2024. GPFedRec: Graph-guided personalization for federated recommendation. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 4131–4142

  37. [37]

    Honglei Zhang, Haoxuan Li, Jundong Chen, Sen Cui, Kunda Yan, Abudukelimu Wuerkaixi, Xin Zhou, Zhiqi Shen, and Yidong Li. 2026. Beyond similarity: Personalized federated recommendation with composite aggregation. ACM Transactions on Information Systems 44, 2 (2026), 1–28

  38. [38]

    Honglei Zhang, Zhiwei Li, Haoxuan Li, Xin Zhou, Jie Zhang, and Yidong Li. TransFR: Transferable federated recommendation with adapter tuning on pre-trained language models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 40. 28212–28220

  40. [40]

    Honglei Zhang, Fangyuan Luo, Jun Wu, Xiangnan He, and Yidong Li. 2023. LightFR: Lightweight federated recommendation with privacy-preserving matrix factorization. ACM Transactions on Information Systems 41, 4 (2023), 1–28

  41. [41]

    Hongyu Zhang, Dongyi Zheng, Xu Yang, Jiyuan Feng, and Qing Liao. 2024. FedDCSR: Federated cross-domain sequential recommendation via disentangled representation learning. In Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). 535–543

  42. [42]

    Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, et al. 2021. RecBole: Towards a unified, comprehensive and efficient framework for recommendation algorithms. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 4653–4664

  43. [43]

    Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, et al. 2024. Recommender systems in the era of large language models (LLMs). IEEE Transactions on Knowledge and Data Engineering 36, 11 (2024), 6889–6907

  44. [44]

    Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 1893–1902

  45. [45]

    Yongchun Zhu, Zhenwei Tang, Yudan Liu, Fuzhen Zhuang, Ruobing Xie, Xu Zhang, Leyu Lin, and Qing He. 2022. Personalized transfer of user preferences for cross-domain recommendation. In Proceedings of the 15th ACM International Conference on Web Search and Data Mining. 1507–1515

    Yongchun Zhu, Zhenwei Tang, Yudan Liu, Fuzhen Zhuang, Ruobing Xie, Xu Zhang, Leyu Lin, and Qing He. 2022. Personalized transfer of user preferences for cross-domain recommendation. InProceedings of the 15th ACM International Conference on Web Search and Data Mining. 1507–1515. A Detailed Theoretical Analysis Remark 1 (Cross-Market Shared Behavior and Mark...