Behavior Sequence Transformer for E-commerce Recommendation in Alibaba
Citing papers (2)
- Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations
  HSTU-based generative recommenders with 1.5 trillion parameters scale as a power law with compute up to GPT-3 scale, outperform baselines by up to 65.8% in NDCG, run 5-15x faster than FlashAttention2 on long sequences, and improve online A/B metrics by 12.4%. (A sketch of the HSTU attention block follows this list.)
- PRISM: Refracting the Entangled User Behavior Space for E-Commerce Search
  PRISM improves e-commerce search robustness by modeling preference-relevance interactions via preference rectification, LLM-driven semantic anchoring with prototypes, and preference-conditioned evidence routing. (An illustrative routing sketch also follows.)
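The HSTU summary credits its speed over FlashAttention2 to replacing softmax attention with a pointwise alternative. Below is a minimal, single-head PyTorch sketch of that idea as the paper describes it: SiLU-activated pointwise attention whose aggregated values are gated by a learned branch. It omits the relative positional/temporal attention bias of the full HSTU layer, and the class name, sequence-length normalization, and causal masking are simplifying assumptions for illustration, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class HSTUBlock(nn.Module):
    """Minimal single-head sketch of an HSTU-style layer.

    Replaces softmax attention with pointwise SiLU attention and gates
    the aggregated values with a learned branch U, following the layer
    description in "Actions Speak Louder than Words". The paper's
    relative positional/temporal attention bias is omitted here.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # A single projection produces the four streams U, V, Q, K.
        self.in_proj = nn.Linear(d_model, 4 * d_model)
        self.norm = nn.LayerNorm(d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        n = x.size(1)
        u, v, q, k = F.silu(self.in_proj(x)).chunk(4, dim=-1)
        # Pointwise attention: SiLU instead of softmax, masked causally
        # and scaled by sequence length rather than a partition function.
        scores = F.silu(q @ k.transpose(-2, -1)) / n
        causal = torch.tril(torch.ones(n, n, dtype=torch.bool, device=x.device))
        attended = (scores * causal) @ v
        # Gate the aggregated values, project back, add the residual.
        return x + self.out_proj(self.norm(attended) * u)


# Example usage (shapes are illustrative):
# y = HSTUBlock(d_model=64)(torch.randn(2, 512, 64))  # -> (2, 512, 64)
```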
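The PRISM entry names preference-conditioned evidence routing without specifics, so the following is a purely hypothetical sketch under one plausible reading: a softmax gate over evidence sources computed from a (rectified) preference embedding. The class name, shapes, and gating choice are assumptions for illustration, not PRISM's published design.

```python
import torch
import torch.nn.functional as F
from torch import nn


class PreferenceConditionedRouter(nn.Module):
    """Hypothetical sketch of preference-conditioned evidence routing.

    Assumes routing means: compute softmax gates over K evidence
    sources from a user-preference embedding, then mix the per-source
    evidence vectors with those gates. PRISM's actual design may
    differ; every name and shape here is illustrative only.
    """

    def __init__(self, d_pref: int, num_sources: int):
        super().__init__()
        self.gate = nn.Linear(d_pref, num_sources)

    def forward(self, pref: torch.Tensor, evidence: torch.Tensor) -> torch.Tensor:
        # pref:     (batch, d_pref)            rectified preference embedding
        # evidence: (batch, num_sources, d)    per-source evidence vectors
        weights = F.softmax(self.gate(pref), dim=-1)          # (batch, K)
        return (weights.unsqueeze(-1) * evidence).sum(dim=1)  # (batch, d)
```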