pith. machine review for the scientific record.

arxiv: 1511.06939 · v4 · submitted 2015-11-21 · 💻 cs.LG · cs.IR · cs.NE

Recognition: 2 theorem links · Lean Theorem

Session-based Recommendations with Recurrent Neural Networks

Authors on Pith no claims yet

Pith reviewed 2026-05-11 21:04 UTC · model grok-4.3

classification 💻 cs.LG cs.IR cs.NE
keywords session-based recommendations · recurrent neural networks · recommender systems · ranking loss · sequence modeling · next-item prediction · short sessions

The pith

Recurrent neural networks can generate more accurate recommendations from short user sessions than item-to-item methods.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows how recurrent neural networks can be adapted to session-based recommender systems, where only brief sequences of user interactions are available instead of long personal histories. Traditional matrix factorization struggles here, so systems often fall back to simple item-to-item similarity. By treating each session as a sequence and training an RNN to predict the next item, with a ranking loss tailored to recommendations, the model captures order and context that similarity misses. Tests on two datasets confirm clear gains over common baselines, making sequence modeling practical for real-world sites with limited data.

Core claim

We apply recurrent neural networks to session-based recommendations and argue that modeling the whole session provides more accurate results than item-to-item approaches. Several modifications to classic RNNs, including a ranking loss function, make the model viable for the task. Experimental results on two datasets show marked improvements over widely used approaches.

What carries the argument

A recurrent neural network trained on item sequences within sessions, using a ranking loss to score and predict the next item.
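The mechanism can be made concrete with a small sketch: a GRU consumes a session's item embeddings one click at a time, and the final hidden state scores every catalog item as the candidate next click. This is an illustrative toy, not the paper's implementation; the standard GRU gating equations are used, but the dimensions, random initialization, and the linear scoring layer `V` are assumptions chosen for the example.

```python
import numpy as np

def gru_step(x, h, W, U, b):
    """One standard GRU step: x is the current item's embedding, h the session
    state. W, U, b hold the update (z), reset (r), and candidate (n) weights."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = 1 / (1 + np.exp(-(x @ Wz + h @ Uz + bz)))   # update gate
    r = 1 / (1 + np.exp(-(x @ Wr + h @ Ur + br)))   # reset gate
    n = np.tanh(x @ Wn + (r * h) @ Un + bn)         # candidate state
    return (1 - z) * h + z * n

rng = np.random.default_rng(0)
n_items, d_emb, d_hid = 50, 8, 16                   # illustrative sizes
E = rng.normal(0, 0.1, (n_items, d_emb))            # item embedding table
W = [rng.normal(0, 0.1, (d_emb, d_hid)) for _ in range(3)]
U = [rng.normal(0, 0.1, (d_hid, d_hid)) for _ in range(3)]
b = [np.zeros(d_hid) for _ in range(3)]
V = rng.normal(0, 0.1, (d_hid, n_items))            # output scoring layer

session = [3, 17, 42]                               # clicked item IDs so far
h = np.zeros(d_hid)
for item in session:                                # fold the session into h
    h = gru_step(E[item], h, W, U, b)
scores = h @ V                                      # one score per catalog item
top5 = np.argsort(-scores)[:5]                      # recommended next items
print(top5)
```

In the trained model the ranking loss shapes `V` and the GRU weights so that the true next item lands near the top of this score list.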

If this is right

  • Recommendations become feasible in domains that lack long-term user profiles.
  • Sequential order in user behavior can be directly exploited for higher accuracy.
  • A ranking loss lets neural sequence models align with the top-N recommendation goal.
  • The method applies immediately to e-commerce and media sites that see only short visits.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same session-modeling idea could be combined with any available long-term user data when it exists.
  • Variable-length sessions may require additional techniques such as attention to maintain performance.
  • The gains suggest testing RNNs on other short-interaction tasks like app navigation or news reading.
  • Larger-scale experiments could check whether the approach remains effective as session volume grows.

Load-bearing premise

Short sessions contain enough sequential signal for the RNN to learn patterns that improve recommendations beyond simple item similarity.

What would settle it

Running the same RNN model on the paper's datasets and finding no improvement over item-to-item baselines when sessions average fewer than three items.
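Such a replication turns on per-event ranking metrics. A minimal sketch of the two standard session-based metrics, recall@k and MRR@k, is below; the 100-item catalog, the target index, and the score bump are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def recall_at_k(ranked, target, k=20):
    """1 if the true next item appears in the top-k recommendations, else 0."""
    return int(target in ranked[:k])

def mrr_at_k(ranked, target, k=20):
    """Reciprocal rank of the true next item, 0 if it falls outside the top-k."""
    top = list(ranked[:k])
    return 1.0 / (top.index(target) + 1) if target in top else 0.0

# One synthetic test event: scores over a 100-item catalog, true next item = 42
rng = np.random.default_rng(2)
scores = rng.normal(size=100)
scores[42] += 3.0                       # pretend the model ranks item 42 highly
ranked = np.argsort(-scores)            # catalog sorted by descending score
print(recall_at_k(ranked, 42), mrr_at_k(ranked, 42))
```

Averaging these two numbers over all test events yields the recall@20 and MRR@20 figures a replication would compare against the item-to-item baseline.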

read the original abstract

We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes applying recurrent neural networks (specifically GRUs) to session-based recommendations, where only short session data is available instead of long user histories. It introduces modifications such as a ranking loss function to make RNNs practical for this task, arguing that modeling the entire session yields more accurate recommendations than item-to-item approaches. Experimental results on two datasets are claimed to show marked improvements over widely used baselines.

Significance. If the results hold under scrutiny, this work demonstrates that sequential models can extract useful patterns from short sessions and outperform simple co-occurrence methods in real-world recommendation scenarios without long user profiles, with direct applicability to e-commerce and content platforms.

major comments (2)
  1. [Abstract] The central claim of 'marked improvements' and 'modeling the whole session' is presented without any quantitative metrics, baseline names, ablation results, or session-length statistics. This makes it impossible to verify whether gains come from session-level dynamics or from the custom ranking loss and hyperparameter choices.
  2. [Introduction / Experimental results] The assumption that short sessions (frequently length 1–3) contain sufficient sequential signal for an RNN to outperform item-to-item baselines is load-bearing but untested in the provided summary. If most sessions are very short, the hidden state carries little history beyond the last item, reducing the model to learned similarity and undermining the headline claim.
minor comments (2)
  1. [Experiments] Add a table or figure showing session-length distribution and per-length performance breakdown to address the short-session concern.
  2. [Method] Clarify the exact ranking loss formulation and how it differs from standard cross-entropy or BPR loss.
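For contrast, here is a hedged sketch of the two losses the comment names: a BPR-style pairwise ranking loss against sampled negatives, and full-softmax cross-entropy. Both follow their standard textbook definitions; the scores, the positive index, and the ten sampled negatives are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bpr_loss(scores, pos, neg_idx):
    """Pairwise BPR loss: push the positive item's score above each sampled
    negative's score via -log sigmoid of the score difference."""
    diff = scores[pos] - scores[neg_idx]
    return -np.mean(np.log(1 / (1 + np.exp(-diff))))

def cross_entropy_loss(scores, pos):
    """Standard softmax cross-entropy over the full catalog, for contrast."""
    z = scores - scores.max()                    # stabilized log-softmax
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[pos]

rng = np.random.default_rng(1)
scores = rng.normal(size=100)                    # model scores over 100 items
pos = 7                                          # the item actually clicked next
negs = rng.choice(np.delete(np.arange(100), pos), size=10, replace=False)
print(bpr_loss(scores, pos, negs), cross_entropy_loss(scores, pos))
```

The practical difference is cost and focus: BPR compares the positive only against a handful of sampled negatives, while cross-entropy normalizes over the entire catalog.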

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. The comments highlight opportunities to strengthen the presentation of our results and clarify the role of sequential modeling in short sessions. We have revised the manuscript to incorporate additional quantitative details and analysis as described below.

read point-by-point responses
  1. Referee: [Abstract] The central claim of 'marked improvements' and 'modeling the whole session' is presented without any quantitative metrics, baseline names, ablation results, or session-length statistics. This makes it impossible to verify whether gains come from session-level dynamics or from the custom ranking loss and hyperparameter choices.

    Authors: We agree that the abstract would benefit from greater specificity. In the revised version, we have updated the abstract to name the two datasets, identify the main baselines (item-to-item and matrix factorization), and report the relative improvements in recall@20 and MRR@20. Space limitations preclude including full ablation tables in the abstract, but the experimental section now explicitly compares the ranking loss against cross-entropy and includes session-length distribution statistics (mean and median lengths) in the data description. revision: yes

  2. Referee: [Introduction / Experimental results] The assumption that short sessions (frequently length 1–3) contain sufficient sequential signal for an RNN to outperform item-to-item baselines is load-bearing but untested in the provided summary. If most sessions are very short, the hidden state carries little history beyond the last item, reducing the model to learned similarity and undermining the headline claim.

    Authors: The experimental results already demonstrate outperformance on real-world datasets whose sessions are predominantly short. To directly address the concern, the revised manuscript adds a per-session-length breakdown showing that the GRU model improves over item-to-item baselines for sessions of length 2 and 3, with gains diminishing but remaining positive at length 1. This indicates that the learned item embeddings and the training objective on longer sessions within the same dataset provide benefits even when the test session is short. We have also clarified in the introduction that the model is trained on complete sessions of varying lengths, allowing it to capture sequential patterns where they exist. revision: partial

Circularity Check

0 steps flagged

No circularity: empirical RNN application with independent experimental validation

full rationale

The paper applies RNNs (with GRU) to session-based recommendation as a new domain, introduces a ranking loss and other practical modifications to the standard architecture, and evaluates via experiments on two datasets (including RSC15). The derivation chain consists of standard RNN forward passes followed by a custom loss for ranking; no equation defines a target quantity in terms of itself, no fitted parameter is relabeled as a prediction, and no load-bearing premise reduces to a self-citation. The central claim of improved accuracy rests on held-out test performance rather than on any tautological reduction, rendering the work self-contained.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach rests on the standard assumption that user sessions exhibit sequential structure amenable to RNN modeling and on conventional neural-network training practices.

free parameters (1)
  • RNN architecture hyperparameters
    Hidden size, learning rate, and loss-function parameters chosen or tuned for the recommendation task.
axioms (1)
  • domain assumption Sessions contain learnable sequential dependencies
    Invoked when stating that modeling the whole session improves accuracy over item-to-item methods.

pith-pipeline@v0.9.0 · 5457 in / 1079 out tokens · 36431 ms · 2026-05-11T21:04:12.369582+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • Foundation.HierarchyEmergence hierarchy_emergence_forces_phi · unclear

    Relation between the paper passage and the cited Recognition theorem.

    We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Forward citations

Cited by 39 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Asymmetric Generative Recommendation via Multi-Expert Projection and Multi-Faceted Hierarchical Quantization

    cs.IR 2026-05 unverdicted novelty 7.0

    AsymRec decouples input and output representations in generative recommendation via multi-expert semantic projection and multi-faceted hierarchical quantization, outperforming prior models by 15.8% on average.

  2. F-GRPO: Factorized Group-Relative Policy Optimization for Unified Candidate Generation and Ranking

    cs.LG 2026-05 unverdicted novelty 7.0

    F-GRPO factorizes group-relative policy optimization into generation and ranking phases within one autoregressive sequence, using order-invariant coverage and position-aware utility rewards to improve top-ranked perfo...

  3. Why Users Go There: World Knowledge-Augmented Generative Next POI Recommendation

    cs.AI 2026-05 unverdicted novelty 7.0

    AWARE augments generative next-POI recommendation with LLM agents that produce user-anchored narratives capturing events, culture, and trends, delivering up to 12.4% relative gains on three real datasets.

  4. Every Preference Has Its Strength: Injecting Ordinal Semantics into LLM-Based Recommenders

    cs.IR 2026-05 unverdicted novelty 7.0

    OSA improves LLM-based recommenders by anchoring ordinal preference levels as numeric tokens in the model's latent space to retain fine-grained strength information when fusing collaborative signals.

  5. Similar Users-Augmented Interest Network

    cs.IR 2026-04 unverdicted novelty 7.0

    SUIN improves CTR prediction by augmenting target user sequences with similar users' behaviors via embedding-based retrieval, user-specific position encoding, and user-aware target attention.

  6. Objective Shaping with Hard Negatives: Windowed Partial AUC Optimization for RL-based LLM Recommenders

    cs.IR 2026-04 unverdicted novelty 7.0

    Beam-search negatives induce partial AUC optimization in GRPO for LLM recommenders; Windowed Partial AUC and TAWin improve Top-K alignment on four datasets.

  7. Break the Optimization Barrier of LLM-Enhanced Recommenders: A Theoretical Analysis and Practical Framework

    cs.IR 2026-04 unverdicted novelty 7.0

    TF-LLMER resolves optimization barriers in LLM-enhanced recommenders through embedding normalization and Rec-PCA that aligns semantic representations with collaborative co-occurrence graphs.

  8. Beyond One-Size-Fits-All: Adaptive Test-Time Augmentation for Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 7.0

    AdaTTA is an actor-critic RL framework that selects sequence-specific test-time augmentations and improves recommendation metrics by up to 26% over fixed augmentation strategies on four datasets.

  9. Retrieval Augmented Conversational Recommendation with Reinforcement Learning

    cs.IR 2026-04 unverdicted novelty 7.0

    RAR retrieves candidate items from a 300k-movie corpus then uses LLM generation with RL feedback to produce context-aware recommendations that outperform baselines on benchmarks.

  10. Fusion and Alignment Enhancement with Large Language Models for Tail-item Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 7.0

    FAERec fuses collaborative ID embeddings with LLM semantic embeddings using adaptive gating and dual-level alignment to enhance tail-item sequential recommendations.

  11. Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations

    cs.LG 2024-02 unverdicted novelty 7.0

    HSTU-based generative recommenders with 1.5 trillion parameters scale as a power law with compute up to GPT-3 scale, outperform baselines by up to 65.8% NDCG, run 5-15x faster than FlashAttention2 on long sequences, a...

  12. RRCM: Ranking-Driven Retrieval over Collaborative and Meta Memories for LLM Recommendation

    cs.IR 2026-05 unverdicted novelty 6.0

    RRCM trains an LLM to dynamically retrieve from collaborative and meta memories using group relative policy optimization driven by final top-k recommendation quality.

  13. An Embarrassingly Simple Graph Heuristic Reveals Shortcut-Solvable Benchmarks for Sequential Recommendation

    cs.IR 2026-05 conditional novelty 6.0

    A simple graph heuristic without training or sequence encoders matches or outperforms trained generative recommenders on 10 of 14 sequential recommendation benchmarks by exploiting local transition and feature shortcuts.

  14. Bridging Textual Profiles and Latent User Embeddings for Personalization

    cs.IR 2026-05 unverdicted novelty 6.0

    BLUE aligns LLM-generated textual user profiles with embedding-based recommendation objectives via reinforcement learning and next-item text supervision, yielding better zero-shot performance and cross-domain transfer...

  15. Bridging Passive and Active: Enhancing Conversation Starter Recommendation via Active Expression Modeling

    cs.IR 2026-05 unverdicted novelty 6.0

    PA-Bridge bridges passive conversation starter recommendations with active user expressions via adversarial distribution alignment and semantic discretization, yielding 0.54% higher feature penetration in online tests.

  16. DynamicPO: Dynamic Preference Optimization for Recommendation

    cs.IR 2026-05 unverdicted novelty 6.0

    DynamicPO prevents preference optimization collapse in multi-negative DPO by adaptively selecting boundary-critical negatives and calibrating per-sample optimization strength, yielding higher recommendation accuracy o...

  17. The Attention Market: Interpreting Online Fair Re-ranking as Manifold Optimization under Walrasian Equilibrium

    cs.IR 2026-04 unverdicted novelty 6.0

    Fair re-ranking is equivalent to gradient descent on a ranking manifold under Walrasian equilibrium in an attention market, yielding the ManifoldRank algorithm that adjusts gradients for supply-side fairness costs and...

  18. Modeling Behavioral Intensity and Transitions for Generative Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    BITRec improves generative multi-behavior recommendation by modeling behavioral intensity via separated pathways and transitions via learnable relation matrices, reporting 15-23% gains on large retail datasets.

  19. WPGRec: Wavelet Packet Guided Graph Enhanced Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    WPGRec is a new sequential recommender that performs multi-scale temporal modeling via stationary wavelet packets and injects high-order collaborative information through scale-aligned graph propagation with energy-aw...

  20. GraphRAG-IRL: Personalized Recommendation with Graph-Grounded Inverse Reinforcement Learning and LLM Re-ranking

    cs.IR 2026-04 unverdicted novelty 6.0

    GraphRAG-IRL fuses graph-grounded MaxEnt IRL pre-ranking with persona-guided LLM re-ranking to deliver up to 16.8% NDCG@10 gains over IRL-only baselines on MovieLens and consistent 4-6% gains on KuaiRand.

  21. Multi-LLM Token Filtering and Routing for Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    MLTFR combines user-guided token filtering with a multi-LLM mixture-of-experts and Fisher-weighted consensus expert to deliver stable gains in corpus-free sequential recommendation.

  22. Federated User Behavior Modeling for Privacy-Preserving LLM Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    SF-UBM enables privacy-preserving cross-domain LLM recommendation by federating semantic item representations, distilling domain knowledge, and aligning preferences into LLM soft prompts.

  23. Behavior-Aware Dual-Channel Preference Learning for Heterogeneous Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    BDPL improves heterogeneous sequential recommendation by constructing behavior-aware subgraphs, aggregating via cascade GNN, and enhancing representations with preference-level contrastive learning before adaptive fus...

  24. RoTE: Coarse-to-Fine Multi-Level Rotary Time Embedding for Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    RoTE is a multi-level rotary time embedding module that explicitly models time spans in sequential recommendation and improves NDCG@5 by up to 20.11% when added to standard backbones on public benchmarks.

  25. MOSAIC: Multi-Domain Orthogonal Session Adaptive Intent Capture for Prescient Recommendations

    cs.IR 2026-04 unverdicted novelty 6.0

    MOSAIC decomposes user intent into three orthogonal components via a triple-encoder architecture with adversarial training and dynamic gating to outperform baselines in multi-domain session recommendations.

  26. ReRec: Reasoning-Augmented LLM-based Recommendation Assistant via Reinforcement Fine-tuning

    cs.IR 2026-04 unverdicted novelty 6.0

    ReRec uses reinforcement fine-tuning with dual-graph reward shaping, reasoning-aware advantage estimation, and online curriculum scheduling to improve LLM reasoning and performance in recommendation tasks.

  27. Leveraging LLMs and Heterogeneous Knowledge Graphs for Persona-Driven Session-Based Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    A persona-driven SBRS framework learns unsupervised user personas from an LLM-initialized heterogeneous KG and incorporates them into data-driven sequential recommenders, reporting consistent gains over session-histor...

  28. From Clues to Generation: Language-Guided Conditional Diffusion for Cross-Domain Recommendation

    cs.IR 2026-04 unverdicted novelty 6.0

    LGCD creates pseudo-overlapping user data via LLM reasoning and uses conditional diffusion to generate target-domain user representations for inter-domain sequential recommendation without real overlapping users.

  29. Pay Attention to Sequence Split: Uncovering the Impacts of Sub-Sequence Splitting on Sequential Recommendation Models

    cs.IR 2026-04 conditional novelty 6.0

    Sub-sequence splitting interferes with fair evaluation in sequential recommendation models and enhances performance only when paired with particular splitting, targeting, and loss function choices.

  30. FLAME: Condensing Ensemble Diversity into a Single Network for Efficient Sequential Recommendation

    cs.IR 2026-04 conditional novelty 6.0

    FLAME condenses ensemble diversity into a single network via modular ensemble simulation and guided mutual learning during training, delivering ensemble-level performance with single-network inference speed on sequent...

  31. HSUGA: LLM-Enhanced Recommendation with Hierarchical Semantic Understanding and Group-Aware Alignment

    cs.IR 2026-05 unverdicted novelty 5.0

    HSUGA improves LLM-enhanced sequential recommendation via staged hierarchical semantic understanding for better preference extraction and group-aware alignment that varies intensity by user activity level.

  32. TwiSTAR: Think Fast, Think Slow, Then Act, Generative Recommendation with Adaptive Reasoning

    cs.IR 2026-05 unverdicted novelty 5.0

    TwiSTAR learns to switch between fast SID retrieval and slow rationale-generating reasoning in generative recommendation, yielding better accuracy-latency trade-offs on three datasets.

  33. Compressed Video Aggregator: Content-driven Module for Efficient Micro-Video Recommendation

    cs.LG 2026-05 unverdicted novelty 5.0

    CVA aggregates frozen VFM embeddings via latent reasoning to create compact video embeddings for efficient micro-video recommendation, delivering consistent performance gains and orders-of-magnitude efficiency improvements.

  34. From Hidden Profiles to Governable Personalization: Recommender Systems in the Age of LLM Agents

    cs.IR 2026-04 unverdicted novelty 5.0

    LLM agents enable a shift in recommender systems from opaque hidden profiles to governable, inspectable, and portable user representations.

  35. SLSREC: Self-Supervised Contrastive Learning for Adaptive Fusion of Long- and Short-Term User Interests

    cs.IR 2026-04 unverdicted novelty 5.0

    SLSRec disentangles long- and short-term user interests via self-supervised contrastive learning and fuses them adaptively with attention, outperforming prior models on three public recommendation benchmarks.

  36. R3-REC: Reasoning-Driven Recommendation via Retrieval-Augmented LLMs over Multi-Granular Interest Signals

    cs.IR 2026-03 unverdicted novelty 5.0

    R3-REC unifies multi-level intent reasoning, semantic extraction, long-short interest mining, and collaborative enhancement in a retrieval-augmented LLM to boost sequential recommendation metrics.

  37. OneRec: Unifying Retrieve and Rank with Generative Recommender and Iterative Preference Alignment

    cs.IR 2025-02 unverdicted novelty 5.0

    OneRec unifies retrieval and ranking in a generative recommender using session-wise decoding and iterative DPO-based preference alignment, achieving real-world gains on Kuaishou.

  38. Driving Engagement in Daily Fantasy Sports with a Scalable and Urgency-Aware Ranking Engine

    cs.IR 2026-04 unverdicted novelty 4.0

    An urgency-aware adaptation of the Deep Interest Network with temporal encodings and listwise neuralNDCG loss delivers a 9% nDCG@1 lift over an optimized LightGBM baseline on a 650k-user industrial DFS dataset.

  39. TME-PSR: Time-aware, Multi-interest, and Explanation Personalization for Sequential Recommendation

    cs.IR 2026-04 unverdicted novelty 4.0

    TME-PSR improves sequential recommendation accuracy and explanation quality by personalizing temporal rhythms, fine-grained interests, and recommendation-explanation alignment using a dual-view time encoder, multihead...

Reference graph

Works this paper leans on

21 extracted references · 21 canonical work pages · cited by 39 Pith papers

  1. [1]

    On the Properties of Neural Machine Translation: Encoder-Decoder Approaches

    Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014

  2. [2]

    Rmsprop and equilibrated adaptive learning rates for non-convex optimization

    Dauphin, Yann N, de Vries, Harm, Chung, Junyoung, and Bengio, Yoshua. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015

  3. [3]

    The YouTube video recommendation system

    Davidson, James, Liebald, Benjamin, Liu, Junning, et al. The YouTube video recommendation system. In RecSys'10: ACM Conf. on Recommender Systems, pp. 293–296, 2010. ISBN 978-1-60558-906-0

  4. [4]

    Adaptive subgradient methods for online learning and stochastic optimization

    Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12: 2121–2159, 2011

  5. [5]

    Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback

    Hidasi, B. and Tikk, D. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In ECML-PKDD'12, Part II, number 7524 in LNCS, pp. 67–82. Springer, 2012

  6. [6]

    General factorization framework for context-aware recommendations

    Hidasi, Balázs and Tikk, Domonkos. General factorization framework for context-aware recommendations. Data Mining and Knowledge Discovery, pp. 1–30, 2015. ISSN 1384-5810. doi:10.1007/s10618-015-0417-y. URL http://dx.doi.org/10.1007/s10618-015-0417-y

  7. [7]

    Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups

    Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6): 82–97, 2012

  8. [8]

    Factorization meets the neighborhood: a multifaceted collaborative filtering model

    Koren, Y. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In SIGKDD'08: ACM Int. Conf. on Knowledge Discovery and Data Mining, pp. 426–434, 2008

  9. [9]

    Matrix factorization techniques for recommender systems

    Koren, Yehuda, Bell, Robert, and Volinsky, Chris. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37, 2009

  10. [10]

    Amazon.com recommendations: Item-to-item collaborative filtering

    Linden, G., Smith, B., and York, J. Amazon.com recommendations: Item-to-item collaborative filtering. Internet Computing, IEEE, 7(1): 76–80, 2003

  11. [11]

    Enlister: Baidu's recommender system for the biggest Chinese Q&A website

    Liu, Qiwen, Chen, Tianjian, Cai, Jing, and Yu, Dianhai. Enlister: Baidu's recommender system for the biggest Chinese Q&A website. In RecSys-12: Proc. of the 6th ACM Conf. on Recommender Systems, pp. 285–288, 2012

  12. [12]

    BPR: Bayesian personalized ranking from implicit feedback

    Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In UAI'09: 25th Conf. on Uncertainty in Artificial Intelligence, pp. 452–461, 2009. ISBN 978-0-9749039-5-8

  13. [13]

    ImageNet large scale visual recognition challenge

    Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael S., Berg, Alexander C., and Li, Fei-Fei. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. URL http://arxiv.org/abs/1409.0575

  14. [14]

    Restricted Boltzmann machines for collaborative filtering

    Salakhutdinov, Ruslan, Mnih, Andriy, and Hinton, Geoffrey. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pp. 791–798. ACM, 2007

  15. [15]

    Item-based collaborative filtering recommendation algorithms

    Sarwar, Badrul, Karypis, George, Konstan, Joseph, and Riedl, John. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pp. 285–295. ACM, 2001

  16. [16]

    An MDP-based recommender system

    Shani, Guy, Brafman, Ronen I, and Heckerman, David. An MDP-based recommender system. In Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence, pp. 453–460. Morgan Kaufmann Publishers Inc., 2002

  17. [17]

    CLiMF: Learning to maximize reciprocal rank with collaborative less-is-more filtering

    Shi, Yue, Karatzoglou, Alexandros, Baltrunas, Linas, Larson, Martha, Oliver, Nuria, and Hanjalic, Alan. CLiMF: Learning to maximize reciprocal rank with collaborative less-is-more filtering. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pp. 139–146, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1270-7. doi:10.1145/236595...

  18. [18]

    Gaussian ranking by matrix factorization

    Steck, Harald. Gaussian ranking by matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, pp. 115–122, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3692-5. doi:10.1145/2792838.2800185. URL http://doi.acm.org/10.1145/2792838.2800185

  19. [19]

    Deep content-based music recommendation

    Van den Oord, Aaron, Dieleman, Sander, and Schrauwen, Benjamin. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651, 2013

  20. [20]

    Collaborative deep learning for recommender systems

    Wang, Hao, Wang, Naiyan, and Yeung, Dit-Yan. Collaborative deep learning for recommender systems. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1235–1244, New York, NY, USA, 2015. ACM

  21. [21]

    Maximum margin matrix factorization for collaborative ranking

    Weimer, Markus, Karatzoglou, Alexandros, Le, Quoc Viet, and Smola, Alex. Maximum margin matrix factorization for collaborative ranking. Advances in neural information processing systems, 2007