Asymmetric Generative Recommendation via Multi-Expert Projection and Multi-Faceted Hierarchical Quantization
AsymRec decouples input and output representations in generative recommendation via multi-expert semantic projection and multi-faceted hierarchical quantization, outperforming prior models by 15.8% on average.
In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval.
8 Pith papers cite this work.
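The headline method's multi-faceted hierarchical quantization is only named above, not specified. As one illustration, hierarchical quantization is often realized as residual quantization, where each codebook level encodes what the previous level left over; a minimal sketch, assuming plain Euclidean codebooks (the function names and toy codebooks are hypothetical, not AsymRec's actual scheme):

```python
# Hypothetical sketch of hierarchical (residual) quantization for semantic item
# codes: each level quantizes the residual left by the previous level, yielding
# a coarse-to-fine code per item. AsymRec's "multi-faceted" variant is not
# modeled here.

def nearest(codebook, vec):
    """Index of the codeword closest to vec (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

def hierarchical_quantize(vec, codebooks):
    """Return one code index per level plus the final residual norm."""
    codes, residual = [], list(vec)
    for codebook in codebooks:
        idx = nearest(codebook, residual)
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, codebook[idx])]
    return codes, sum(r * r for r in residual) ** 0.5

# Two toy levels: a coarse codebook, then a finer one for the leftover residual.
coarse = [[1.0, 0.0], [0.0, 1.0]]
fine = [[0.1, 0.0], [0.0, 0.1]]
codes, err = hierarchical_quantize([1.1, 0.05], [coarse, fine])
```

Each item thus receives a coarse-to-fine code sequence, which a generative recommender can then emit token by token.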
citing papers
-
RecRM-Bench: Benchmarking Multidimensional Reward Modeling for Agentic Recommender Systems
RecRM-Bench is a new large-scale benchmark dataset and framework for multi-dimensional reward modeling in agentic recommender systems, spanning instruction following, factual consistency, query-item relevance, and user behavior prediction.
-
Don't Be a Pot Stirrer! Authorized Vector Data Retrieval via Access-Aware Indexing
Veda and EffVeda partition vectors into disjoint role-combination blocks, apply lattice-based copy and merge operations within a storage budget, index large nodes with HNSW, and use coordinated search with distance bounds to deliver higher throughput at high recall.
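Setting aside the lattice copy/merge operations and HNSW indexing, the core access-aware idea can be sketched directly: vectors are grouped into blocks keyed by their role combination, and a query scans only blocks the user's roles authorize. The block semantics below (a block is visible when the user holds every role it requires) and the brute-force per-block scan are assumptions for illustration, not Veda's actual design:

```python
# Hypothetical sketch of access-aware retrieval with role-combination blocks.
# Brute-force search stands in for Veda's HNSW indexes and distance bounds.

def build_blocks(items):
    """items: list of (vector, required_roles). Group by frozen role set."""
    blocks = {}
    for vec, roles in items:
        blocks.setdefault(frozenset(roles), []).append(vec)
    return blocks

def search(blocks, user_roles, query, k=1):
    """Nearest vectors among blocks whose required roles the user holds."""
    candidates = []
    for roles, vecs in blocks.items():
        if roles <= user_roles:  # user holds every role the block requires
            candidates.extend(vecs)
    candidates.sort(key=lambda v: sum((a - b) ** 2 for a, b in zip(v, query)))
    return candidates[:k]

blocks = build_blocks([
    ([0.0, 0.0], {"public"}),
    ([1.0, 1.0], {"admin"}),
])
# A public-only user never sees the admin block, even though it is closer.
result = search(blocks, {"public"}, [1.0, 1.0])
```

Partitioning by role combination keeps authorization out of the inner search loop: no per-vector permission check is needed once a block is admitted.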
-
BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models
BEAR adds a beam-search-aware regularizer to LLM fine-tuning for recommendation, forcing positive-item tokens to rank within the top-B candidates at each decoding step so they are not pruned prematurely.
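The exact form of BEAR's regularizer is not given above; one natural way to encode "the positive token must survive a beam of size B" is a hinge penalty on the gap between the positive token's logit and the B-th largest logit at each step. A minimal sketch under that assumption (the margin and loss form are illustrative, not BEAR's actual objective):

```python
# Hypothetical hinge-style beam-search-aware penalty: at each decoding step,
# penalize the positive item's token whenever its logit falls below the B-th
# largest logit (i.e., a beam of size B would prune it).

def beam_rank_penalty(step_logits, positive_ids, beam_size, margin=0.0):
    penalty = 0.0
    for logits, pos in zip(step_logits, positive_ids):
        bth_best = sorted(logits, reverse=True)[beam_size - 1]
        # Hinge: zero once the positive token clears the beam cutoff by `margin`.
        penalty += max(0.0, bth_best - logits[pos] + margin)
    return penalty

# Two decoding steps, vocabulary of 4 tokens, beam of size 2.
step_logits = [[2.0, 1.0, 0.5, 0.1],   # positive token 0 is rank 1 -> no penalty
               [2.0, 1.0, 0.5, 0.1]]   # positive token 3 is rank 4 -> penalized
loss = beam_rank_penalty(step_logits, positive_ids=[0, 3], beam_size=2)
```

Added to the usual next-token loss, such a term directly targets the failure mode the summary describes: a positive item whose early tokens rank just outside the beam is never recoverable at later steps.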
-
SemaCDR: LLM-Powered Transferable Semantics for Cross-Domain Sequential Recommendation
SemaCDR builds a unified semantic space with LLM-generated domain-agnostic features and adaptive fusion, improving cross-domain sequential recommendation over baseline methods.
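SemaCDR's fusion network is not detailed above; a common realization of "adaptive fusion" is a learned gate that mixes the LLM-derived domain-agnostic view with the domain-specific one. A minimal sketch, assuming a single sigmoid gate computed from both views (the gate form and all names here are hypothetical):

```python
import math

# Hypothetical sketch of adaptive fusion: a sigmoid gate, computed from both
# embedding views, decides how much of the LLM-derived domain-agnostic vector
# to mix with the domain-specific one.

def adaptive_fuse(semantic, domain, gate_weights):
    """fused = g * semantic + (1 - g) * domain, g = sigmoid(w . [semantic; domain])."""
    score = sum(w * x for w, x in zip(gate_weights, semantic + domain))
    g = 1.0 / (1.0 + math.exp(-score))
    return [g * s + (1.0 - g) * d for s, d in zip(semantic, domain)]

# With zero gate weights the gate sits at 0.5: an even blend of both views.
fused = adaptive_fuse([1.0, 0.0], [0.0, 1.0], gate_weights=[0.0] * 4)
```

Because the gate is input-dependent, the model can lean on transferable LLM semantics in a cold target domain and on behavioral features where interaction data is rich.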
-
FAVE: Flow-based Average Velocity Establishment for Sequential Recommendation
FAVE replaces multi-step flow generation with a learned global average velocity from a semantic anchor prior, delivering SOTA accuracy and roughly 10x faster inference on recommendation benchmarks.
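The speedup claimed for FAVE comes from collapsing the flow's many integration steps into one jump with an average velocity. The contrast can be sketched with a toy velocity field (the field, the closed-form average, and the function names are illustrative; FAVE learns its average velocity from a semantic anchor prior, which is not modeled here):

```python
# Sketch contrasting multi-step flow integration with a single step using an
# average velocity: x1 = x0 + v_bar(x0) covers all of t in [0, 1] at once.

def euler_integrate(x0, velocity, steps):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with Euler steps."""
    x, dt = list(x0), 1.0 / steps
    for i in range(steps):
        v = velocity(x, i * dt)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

def one_step(x0, avg_velocity):
    """Single jump using the time-averaged velocity over [0, 1]."""
    v = avg_velocity(x0)
    return [xi + vi for xi, vi in zip(x0, v)]

# Toy field v(x, t) = (1, 2t): its time-average over [0, 1] is (1, 1).
field = lambda x, t: [1.0, 2.0 * t]
avg = lambda x: [1.0, 1.0]

multi = euler_integrate([0.0, 0.0], field, steps=1000)
single = one_step([0.0, 0.0], avg)
```

The thousand-step integration and the single averaged step land at (numerically) the same endpoint, which is the source of the roughly 10x inference speedup the summary reports.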
-
FLAME: Condensing Ensemble Diversity into a Single Network for Efficient Sequential Recommendation
FLAME condenses ensemble diversity into a single network via modular ensemble simulation and guided mutual learning during training, delivering ensemble-level performance with single-network inference speed on sequential recommendation tasks.
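The "guided mutual learning" ingredient can be made concrete with a standard formulation: several simulated members produce predictions, their averaged distribution acts as the teacher, and each member is pulled toward it by a KL term. This is a generic sketch under that assumption; FLAME's modular simulation inside one network is abstracted away:

```python
import math

# Hypothetical sketch of mutual learning across simulated ensemble members:
# the ensemble-average distribution is the teacher, and each member pays
# KL(teacher || member).

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mutual_learning_loss(member_logits):
    probs = [softmax(lg) for lg in member_logits]
    teacher = [sum(col) / len(probs) for col in zip(*probs)]  # ensemble average
    return sum(
        sum(t * math.log(t / q) for t, q in zip(teacher, p))
        for p in probs
    )

# Identical members already agree with their average, so the loss is zero;
# disagreeing members are penalized.
loss_same = mutual_learning_loss([[1.0, 0.0], [1.0, 0.0]])
loss_diff = mutual_learning_loss([[1.0, 0.0], [0.0, 1.0]])
```

At inference only one network runs, so the diversity paid for at training time costs nothing at serving time.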
-
Structural and Disentangled Adaptation of Large Vision Language Models for Multimodal Recommendation
SDA uses structural alignment as a soft teacher and gated low-rank expert paths to adapt LVLMs for multimodal recommendation, reporting average gains of 6.15% in Hit@10 and 8.64% in NDCG@10, with larger improvements on long-tail items, on Amazon datasets.
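A "gated low-rank expert path" typically means a LoRA-style branch B_e A_e added to a frozen base projection and scaled by a gate in [0, 1]. A minimal forward-pass sketch under that reading (SDA's structural-alignment teacher and gate network are not modeled; gates are passed in directly, and all names are hypothetical):

```python
# Hypothetical sketch of gated low-rank expert paths: the frozen base
# projection is augmented by rank-r branches B_e @ A_e, each scaled by a gate.

def matvec(mat, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def gated_lowrank_forward(x, base_w, experts, gates):
    """y = base_w @ x + sum_e gates[e] * B_e @ (A_e @ x)."""
    y = matvec(base_w, x)
    for (a, b), g in zip(experts, gates):
        delta = matvec(b, matvec(a, x))  # low-rank update, rank = len(a)
        y = [yi + g * di for yi, di in zip(y, delta)]
    return y

base = [[1.0, 0.0], [0.0, 1.0]]            # identity base projection (frozen)
expert = ([[1.0, 1.0]],                    # A: 1x2 down-projection (rank 1)
          [[1.0], [0.0]])                  # B: 2x1 up-projection
# Gate closed: the expert contributes nothing; gate open: it adds B @ A @ x.
y_closed = gated_lowrank_forward([2.0, 3.0], base, [expert], gates=[0.0])
y_open = gated_lowrank_forward([2.0, 3.0], base, [expert], gates=[1.0])
```

Keeping the base weights frozen and routing adaptation through small gated branches is what lets such methods specialize (e.g., for long-tail items) without disturbing the pretrained LVLM.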