pith. machine review for the scientific record.

arxiv: 2602.05408 · v2 · submitted 2026-02-05 · 💻 cs.IR

Recognition: no theorem link

Rich-Media Re-Ranker: A User Satisfaction-Driven LLM Re-ranking Framework for Rich-Media Search

Authors on Pith · no claims yet

Pith reviewed 2026-05-16 07:27 UTC · model grok-4.3

classification 💻 cs.IR
keywords re-ranking · LLM re-ranker · rich-media search · user satisfaction · query planner · VLM evaluator · reinforcement learning · information retrieval

The pith

The Rich-Media Re-Ranker decomposes session queries into sub-queries and uses an LLM evaluator informed by VLM visual signals to score results across relevance, quality, novelty, information gain, and presentation for higher user search satisfaction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a re-ranking method that first runs a Query Planner over a user's sequence of query refinements to break the original request into complementary sub-queries. It then feeds each candidate result, together with visual signals produced by a VLM evaluator, into an LLM-based re-ranker that judges the result against five explicit principles: content relevance, content quality, information gain, novelty, and cover-image presentation. Multi-task reinforcement learning is applied to both the VLM and LLM components so they adapt to the search scenario. Experiments show the framework beats prior re-rankers, and the system has been deployed in a large industrial search engine where it raised engagement and satisfaction metrics. A sympathetic reader would care because current search systems often return results that miss part of what the user actually wants once visual and session context are ignored.

Core claim

By combining a Query Planner that decomposes session queries into clear sub-queries with an LLM re-ranker that performs holistic scoring on signals from both text and a VLM-based visual evaluator, guided by the five principles of relevance, quality, information gain, novelty, and visual presentation, the framework produces rankings that better satisfy multifaceted user intents.

What carries the argument

The Rich-Media Re-Ranker pipeline: a Query Planner that decomposes session queries, a VLM evaluator supplying visual signals, and an LLM re-ranker that applies multi-facet scoring principles, all tuned by multi-task reinforcement learning.
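The component chain above can be sketched end to end. Everything below is hypothetical: the function names, the heuristic query classifier, the stubbed VLM/LLM calls, and the uniform facet weighting are placeholders for machinery the paper does not publish.

```python
# Illustrative sketch of the Rich-Media Re-Ranker pipeline; all names,
# thresholds, and weights here are assumptions, not the paper's implementation.
FACETS = ["relevance", "quality", "information_gain", "novelty", "presentation"]

def query_planner(query, session):
    # The paper's planner classifies each query as complex, broad-needs, or
    # simple, then decomposes the first two types using session context.
    # This word-count heuristic is a stand-in for that LLM classification.
    if len(query.split()) > 6:
        kind = "complex"
    elif len(session) > 1:
        kind = "broad-needs"
    else:
        kind = "simple"
    if kind == "simple":
        return [query]
    # Placeholder decomposition; the real planner infers intent with an LLM.
    return [f"{query} :: {s}" for s in session] or [query]

def vlm_signals(candidate, query):
    # Stand-in for the VLM evaluator's cover-image relevance/quality signals.
    return {"img_relevance": 0.8, "img_quality": 0.9}

def llm_facet_scores(candidate, sub_queries, signals):
    # Stand-in for the LLM re-ranker's holistic judgment on the five facets.
    return {f: 0.5 for f in FACETS} | {"presentation": signals["img_quality"]}

def rerank(query, session, candidates):
    sub_queries = query_planner(query, session)
    def score(c):
        s = llm_facet_scores(c, sub_queries, vlm_signals(c, query))
        return sum(s.values()) / len(s)  # uniform facet weighting, assumed
    return sorted(candidates, key=score, reverse=True)
```

The stubs make the data flow explicit: planner output and VLM signals both feed the LLM scorer, and only the final aggregate orders the list.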

If this is right

  • Decomposing queries into sub-queries gives broader coverage of latent user intents within a session.
  • Incorporating VLM visual signals allows explicit scoring of cover-image presentation quality.
  • The multi-facet scoring principles produce a single holistic ranking that balances relevance, novelty, and information gain.
  • Multi-task reinforcement learning improves scenario adaptability of both the visual evaluator and the re-ranker.
  • The method yields measurable lifts in engagement and satisfaction when deployed at industrial scale.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same query-planning plus multi-signal LLM evaluation pattern could be tested in non-rich-media verticals such as product search or news recommendation.
  • If the five scoring principles prove stable across domains, they could serve as a reusable template for other LLM-based ranking systems.
  • A controlled A/B test that isolates the contribution of the visual VLM signals versus the query planner would clarify which component drives most of the observed lift.

Load-bearing premise

The VLM evaluator and LLM re-ranker, after multi-task reinforcement learning, assign reliable scores on relevance, quality, novelty, information gain, and visual presentation without systematic bias or overfitting.

What would settle it

Deploy the framework in the live search system and measure whether online engagement rates and satisfaction metrics remain flat or decline relative to the prior production ranker.
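As a concrete instance of what settling it online would involve, here is a minimal two-proportion z-test on engagement rates; the traffic numbers and the choice of test are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of the online A/B check: a pooled two-proportion z-test
# comparing engagement rates of the new re-ranker against the prior ranker.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    # Returns (z statistic, two-sided p-value) under the pooled-rate null.
    p_a, p_b = success_a / n_a, success_b / n_b
    pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Illustrative numbers only: 52,000 engaged of 1M sessions vs. 50,000 of 1M.
z, p = two_proportion_z(52_000, 1_000_000, 50_000, 1_000_000)
print(round(z, 2), p < 0.05)
```

Flat or declining rates would yield z near or below zero, which is exactly the outcome that would falsify the core claim.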

Figures

Figures reproduced from arXiv: 2602.05408 by Feicheng Li, Jianxin Li, Ligang Zhou, Qingyun Sun, Ying Nie, Zeyang Tang, Zhiming Peng, Zihao Guo.

Figure 1
Figure 1. Comparison between original ranking and desired …
Figure 2
Figure 2. The workflow of the Query Planner. The planner classifies each query into one of three predefined types: complex query, broad-needs query, or simple query; for complex and broad-needs queries it leverages session context 𝑆 to infer genuine user intent and decompose the request into simpler, executable sub-queries whose results can meet the diverse needs of users.
Figure 3
Figure 3. Example of cover image value. The caption spill quotes the cover-image scoring rules for each candidate doc_i ∈ C(Q) of query Q: Srel(c_i) = VLM(P_rel; Q, Img_i) (1) and Squal(c_i) = VLM(P_qual; Img_i) (2), where Img_i denotes the cover image of doc_i, P_rel constitutes the rule set for relevance assessment defining the semantic alignment criteria between query and cover image, and P_qual represents the quality assessment rule set specifying comprehensive visual evaluation dimensions …
Figure 4
Figure 4. Framework of our Rich-Media Re-Ranker. (a) We first leverage a Session-aware Query Planner to capture the user's …
Figure 5
Figure 5. Case study on the online rich-media search system. Left: online system; right: our Rich-Media Re-Ranker.
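The Figure 3 caption quotes the per-candidate cover-image scoring calls, Srel(c_i) = VLM(P_rel; Q, Img_i) and Squal(c_i) = VLM(P_qual; Img_i). A minimal sketch of that interface follows, with a hypothetical vlm() stub and placeholder prompts standing in for the paper's real vision-language model and rule sets.

```python
# Sketch of the cover-image scoring interface from Eqs. (1)-(2) in Figure 3.
# The prompts and the vlm() stub are placeholders, not the paper's artifacts.
P_REL = "Rate semantic alignment between the query and the cover image (0-1)."
P_QUAL = "Rate the visual quality of the cover image (0-1)."

def vlm(prompt, image, query=None):
    # Hypothetical stub: a deployed system would call an actual VLM here.
    return 0.75 if query is not None else 0.9

def score_cover(query, cover_image):
    s_rel = vlm(P_REL, cover_image, query=query)  # Srel(c_i) = VLM(P_rel; Q, Img_i)
    s_qual = vlm(P_QUAL, cover_image)             # Squal(c_i) = VLM(P_qual; Img_i)
    return {"rel": s_rel, "qual": s_qual}
```

Note the asymmetry the equations encode: relevance is conditioned on the query, while quality is judged from the image alone.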
read the original abstract

Re-ranking plays a crucial role in modern information search systems by refining the ranking of initial search results to better satisfy user information needs. However, existing methods show two notable limitations in improving user search satisfaction: inadequate modeling of multifaceted user intents and neglect of rich side information such as visual perception signals. To address these challenges, we propose the Rich-Media Re-Ranker framework, which aims to enhance user search satisfaction through multi-dimensional and fine-grained modeling. Our approach begins with a Query Planner that analyzes the sequence of query refinements within a session to capture genuine search intents, decomposing the query into clear and complementary sub-queries to enable broader coverage of users' potential intents. Subsequently, moving beyond primary text content, we integrate richer side information of candidate results, including signals modeling visual content generated by the VLM-based evaluator. These comprehensive signals are then processed alongside carefully designed re-ranking principle that considers multiple facets, including content relevance and quality, information gain, information novelty, and the visual presentation of cover images. Then, the LLM-based re-ranker performs the holistic evaluation based on these principles and integrated signals. To enhance the scenario adaptability of the VLM-based evaluator and the LLM-based re-ranker, we further enhance their capabilities through multi-task reinforcement learning. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art baselines. Notably, the proposed framework has been deployed in a large-scale industrial search system, yielding substantial improvements in online user engagement rates and satisfaction metrics.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes the Rich-Media Re-Ranker framework for refining search results in rich-media systems. It introduces a Query Planner to decompose session queries into complementary sub-queries, incorporates visual signals via a VLM-based evaluator, and applies an LLM-based re-ranker that performs holistic scoring according to explicit principles covering content relevance, quality, information gain, novelty, and visual presentation of cover images. The VLM and LLM components are adapted via multi-task reinforcement learning. The manuscript claims that extensive experiments show significant outperformance over state-of-the-art baselines and that the framework has been deployed in a large-scale industrial search system, producing substantial gains in online user engagement and satisfaction metrics.

Significance. If the empirical claims are substantiated, the work would be significant for information retrieval because it offers a concrete, multi-faceted approach to modeling user intent and rich-media signals with LLMs and VLMs, moving beyond text-only re-ranking. Successful industrial deployment would constitute strong evidence of practical utility and could influence production search pipelines.

major comments (2)
  1. [Abstract] Abstract and Experiments section: The central claims of significant outperformance over SOTA baselines and substantial online improvements in engagement rates are asserted without any quantitative metrics, baseline descriptions, statistical significance tests, or details on data collection, splits, or filtering. This absence makes the primary empirical contribution impossible to evaluate.
  2. [Method] Method and Experiments sections: The assumption that the VLM-based evaluator and LLM-based re-ranker produce reliable holistic scores on relevance, quality, novelty, information gain, and visual presentation after multi-task reinforcement learning is not supported by reported validation against held-out human judgments or inter-annotator agreement statistics. This is load-bearing for both the offline gains and the online deployment claims.
minor comments (1)
  1. [Method] The description of the re-ranking principles and the integration of VLM signals would benefit from explicit pseudocode or a formal listing of input features to aid reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback highlighting the need for stronger empirical substantiation. We will revise the manuscript to address both major comments by incorporating additional quantitative details and validation evidence where available from our experiments and deployment data.

read point-by-point responses
  1. Referee: [Abstract] Abstract and Experiments section: The central claims of significant outperformance over SOTA baselines and substantial online improvements in engagement rates are asserted without any quantitative metrics, baseline descriptions, statistical significance tests, or details on data collection, splits, or filtering. This absence makes the primary empirical contribution impossible to evaluate.

    Authors: We agree that the abstract presents the claims at a high level without specific numbers, and the Experiments section would benefit from more explicit upfront details on these aspects to aid evaluation. In the revised version, we will update the abstract to include key quantitative results (e.g., relative improvements over baselines in offline metrics such as NDCG and online engagement lifts with p-values). We will also expand the Experiments section with dedicated subsections detailing the baselines, data collection and filtering procedures, train/validation/test splits, and statistical significance testing methodology. revision: yes

  2. Referee: [Method] Method and Experiments sections: The assumption that the VLM-based evaluator and LLM-based re-ranker produce reliable holistic scores on relevance, quality, novelty, information gain, and visual presentation after multi-task reinforcement learning is not supported by reported validation against held-out human judgments or inter-annotator agreement statistics. This is load-bearing for both the offline gains and the online deployment claims.

    Authors: We acknowledge that direct validation of the multi-facet scoring reliability against human judgments is important for substantiating the framework's core components. While the multi-task RL was trained using implicit user feedback signals from the industrial deployment (which serves as a form of real-world validation), we agree this does not fully replace explicit human evaluation. In the revision, we will add a new subsection in Experiments reporting correlation analysis between the VLM/LLM scores and held-out human annotations, along with inter-annotator agreement statistics. revision: yes
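The rebuttal commits to reporting NDCG and score-vs-human correlation. Both can be sketched with stdlib-only code; these are generic textbook implementations, not the authors' evaluation pipeline, and the sample scores are invented.

```python
import math

def ndcg(ranked_rels, k=None):
    # NDCG@k: DCG of the produced ranking divided by DCG of the ideal ranking.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    rels = ranked_rels[:k]
    ideal = sorted(ranked_rels, reverse=True)[:k]
    return dcg(rels) / dcg(ideal) if dcg(ideal) > 0 else 0.0

def spearman(a, b):
    # Spearman rho: Pearson correlation of the two rank vectors (ties averaged).
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            for t in range(i, j + 1):
                r[order[t]] = (i + j) / 2 + 1  # 1-based average rank
            i = j + 1
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / math.sqrt(var_a * var_b)

# Invented example: graded relevance of a ranked list, and model scores
# against held-out human labels with the same ordering.
print(ndcg([3, 2, 1], k=3))                         # ideal order → 1.0
print(spearman([0.9, 0.4, 0.7], [5, 2, 4]) > 0.99)  # identical ordering
```

A high rho between model facet scores and human annotations is exactly the validation evidence the referee's second major comment asks for.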

Circularity Check

0 steps flagged

No circularity: framework claims rest on empirical experiments and deployment, not self-referential derivations

full rationale

The paper presents a descriptive framework (Query Planner, VLM evaluator, LLM re-ranker with multi-task RL) whose performance claims are supported by experiments and industrial deployment metrics rather than any closed mathematical derivation. No equations, fitted parameters renamed as predictions, or self-citation chains appear in the provided text that would reduce outputs to inputs by construction. The re-ranking logic is explicitly rule-based on the listed facets (relevance, quality, novelty, information gain, visual presentation), so its claims are checked against external benchmarks and deployment outcomes rather than derived from the framework's own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The framework rests on the unproven premise that current LLMs and VLMs can jointly and reliably judge the five listed facets from the supplied signals; no independent verification of this capability is provided.

axioms (1)
  • domain assumption LLMs can perform reliable holistic evaluation of search results when given textual descriptions of content relevance, quality, information gain, novelty, and visual presentation signals.
    This is the core premise that allows the LLM-based re-ranker to replace traditional scoring functions.

pith-pipeline@v0.9.0 · 5590 in / 1383 out tokens · 38032 ms · 2026-05-16T07:27:29.587057+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 8 internal anchors

  1. [1]

    Abdelrahman Abdallah, Jamshid Mozafari, Bhawna Piryani, and Adam Jatowt

  2. [2]

    Dear: Dual-stage document reranking with reasoning agents via llm distillation. arXiv preprint arXiv:2508.16998 (2025)

  3. [3]

    Qingyao Ai, Keping Bi, Jiafeng Guo, and W Bruce Croft. 2018. Learning a deep listwise context model for ranking refinement. In The 41st international ACM SIGIR conference on research & development in information retrieval. 135–144

  4. [4]

    Irwan Bello, Sayali Kulkarni, Sagar Jain, Craig Boutilier, Ed Chi, Elad Eban, Xiyang Luo, Alan Mackey, and Ofer Meshi. 2018. Seq2Slate: Re-ranking and slate optimization with RNNs. arXiv preprint arXiv:1810.02019 (2018)

  5. [5]

    Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al.

  6. [6]

    Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems. 7–10

  7. [7]

    Ziqi Dai, Xin Zhang, Mingxin Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, Wenjie Li, and Min Zhang. 2025. Supervised Fine-Tuning or Contrastive Learning? Towards Better Multimodal LLM Reranking. arXiv preprint arXiv:2510.14824 (2025)

  8. [8]

    Yufei Feng, Yu Gong, Fei Sun, Junfeng Ge, and Wenwu Ou. 2021. Revisit recommender system in the permutation prospective. arXiv preprint arXiv:2102.12057 (2021)

  9. [9]

    Yufei Feng, Binbin Hu, Yu Gong, Fei Sun, Qingwen Liu, and Wenwu Ou. 2021. GRN: Generative Rerank Network for Context-wise Recommendation. arXiv preprint arXiv:2104.00860 (2021)

  10. [10]

    Jingtong Gao, Bo Chen, Xiangyu Zhao, Weiwen Liu, Xiangyang Li, Yichao Wang, Wanyu Wang, Huifeng Guo, and Ruiming Tang. 2025. Llm4rerank: Llm-based auto-reranking framework for recommendations. In Proceedings of the ACM on Web Conference 2025. 228–239

  11. [11]

    Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 (2025)

  12. [12]

    Ray Jiang, Sven Gowal, Timothy A Mann, and Danilo J Rezende. 2018. Beyond greedy ranking: Slate optimization via list-CVAE. arXiv preprint arXiv:1803.01682 (2018)

  13. [13]

    Can Jin, Hongwu Peng, Anxiang Zhang, Nuo Chen, Jiahui Zhao, Xi Xie, Kuangzheng Li, Shuya Feng, Kai Zhong, Caiwen Ding, et al. 2025. Rankflow: A multi-role collaborative reranking workflow utilizing large language models. In Companion Proceedings of the ACM on Web Conference 2025. 2484–2493

  14. [14]

    Xiao Lin, Xiaokai Chen, Chenyang Wang, Hantao Shu, Linfeng Song, Biao Li, and Peng Jiang. 2024. Discrete conditional diffusion for reranking in recommendation. In Companion Proceedings of the ACM Web Conference 2024. 161–169

  15. [15]

    Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. 2024. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437 (2024)

  16. [16]

    Shuchang Liu, Qingpeng Cai, Zhankui He, Bowen Sun, Julian McAuley, Dong Zheng, Peng Jiang, and Kun Gai. 2023. Generative flow network for listwise recommendation. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1524–1534

  17. [17]

    Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, and Yongfeng Zhang. 2021. Variation control and evaluation for generative slate recommendations. In Proceedings of the Web Conference 2021. 436–448

  18. [18]

    Wenhan Liu, Xinyu Ma, Weiwei Sun, Yutao Zhu, Yuchen Li, Dawei Yin, and Zhicheng Dou. 2025. Reasonrank: Empowering passage ranking with strong reasoning ability. arXiv preprint arXiv:2508.07050 (2025)

  19. [19]

    Wenhan Liu, Xinyu Ma, Yutao Zhu, Lixin Su, Shuaiqiang Wang, Dawei Yin, and Zhicheng Dou. 2025. CoRanking: Collaborative Ranking with Small and Large Ranking Agents. arXiv preprint arXiv:2503.23427 (2025)

  20. [20]

    Wenyu Mao, Shuchang Liu, Hailan Yang, Xiaobei Wang, Xiaoyu Yang, Xu Gao, Xiang Li, Lantao Hu, Han Li, Kun Gai, et al. 2025. Robust denoising neural reranker for recommender systems. arXiv e-prints (2025), arXiv–2509

  21. [21]

    Yue Meng, Cheng Guo, Yi Cao, Tong Liu, and Bo Zheng. 2025. A generative re-ranking model for list-level multi-objective optimization at taobao. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. 4213–4218

  22. [22]

    Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xueqi Cheng, and Jirong Wen

  23. [23]

    Setrank: Learning a permutation-invariant ranking model for information retrieval. In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval. 499–508

  24. [24]

    Changhua Pei, Yi Zhang, Yongfeng Zhang, Fei Sun, Xiao Lin, Hanxiao Sun, Jian Wu, Peng Jiang, Junfeng Ge, Wenwu Ou, et al. 2019. Personalized re-ranking for recommendation. In Proceedings of the 13th ACM conference on recommender systems. 3–11

  25. [25]

    Qi Pi, Guorui Zhou, Yujing Zhang, Zhe Wang, Lejian Ren, Ying Fan, Xiaoqiang Zhu, and Kun Gai. 2020. Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2685–2692

  26. [26]

    Yuxin Ren, Qiya Yang, Yichun Wu, Wei Xu, Yalong Wang, and Zhiqiang Zhang

  27. [27]

    Non-autoregressive generative models for reranking recommendation. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 5625–5634

  28. [28]

    Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 (2024)

  29. [29]

    Xiaowen Shi, Fan Yang, Ze Wang, Xiaoxu Wu, Muzhi Guan, Guogang Liao, Wang Yongkang, Xingxing Wang, and Dong Wang. 2023. PIER: Permutation-Level Interest-Based End-to-End Re-ranking Framework in E-commerce. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 4823–4831

  30. [30]

    Qwen Team. 2024. Qwen2.5: A Party of Foundation Models. https://qwenlm.github.io/blog/qwen2.5/

  31. [31]

    Qwen Team. 2025. Qwen3 Technical Report. arXiv:2505.09388 [cs.CL] https://arxiv.org/abs/2505.09388

  32. [32]

    Pinhuan Wang, Zhiqiu Xia, Chunhua Liao, Feiyi Wang, and Hang Liu. 2025. REALM: Recursive Relevance Modeling for LLM-based Document Re-Ranking. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing. 23875–23889

  33. [33]

    Shuli Wang, Yinqiu Huang, Changhao Li, Yuan Zhou, Yonggang Liu, Yongqiang Zhang, Yinhua Zhu, Haitao Wang, and Xingxing Wang. 2025. You only evaluate once: A tree-based rerank method at meituan. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 6136–6143

  34. [34]

    Shuli Wang, Xue Wei, Senjie Kou, Chi Wang, Wenshuai Chen, Qi Tang, Yinhua Zhu, Xiong Xiao, and Xingxing Wang. 2025. NLGR: Utilizing Neighbor Lists for Generative Rerank in Personalized Recommendation Systems. In Companion Proceedings of the ACM on Web Conference 2025. 530–537

  35. [35]

    Yunjia Xi, Weiwen Liu, Xinyi Dai, Ruiming Tang, Weinan Zhang, Qing Liu, Xiuqiang He, and Yong Yu. 2021. Context-aware reranking with utility maximization for recommendation. arXiv preprint arXiv:2110.09059 (2021)

  36. [36]

    Yunjia Xi, Weiwen Liu, Jieming Zhu, Xilong Zhao, Xinyi Dai, Ruiming Tang, Weinan Zhang, Rui Zhang, and Yong Yu. 2022. Multi-level interaction reranking with user behavior history. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1336–1346

  37. [37]

    Mingjun Xu, Jinhan Dong, Jue Hou, Zehui Wang, Sihang Li, Zhifeng Gao, Renxin Zhong, and Hengxing Cai. 2025. MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval. arXiv preprint arXiv:2506.12364 (2025)

  38. [38]

    Hailan Yang, Zhenyu Qi, Shuchang Liu, Xiaoyu Yang, Xiaobei Wang, Xiang Li, Lantao Hu, Han Li, and Kun Gai. 2025. Comprehensive list generation for multi-generator reranking. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2298–2308

  39. [39]

    Kaike Zhang, Xiaobei Wang, Xiaoyu Yang, Shuchang Liu, Hailan Yang, Xiang Li, Fei Sun, and Qi Cao. 2025. From generation to consumption: Personalized list value estimation for re-ranking. arXiv preprint arXiv:2508.02242 (2025)

  40. [40]

    Le Zhang, Bo Wang, Xipeng Qiu, Siva Reddy, and Aishwarya Agrawal. 2025. REARANK: Reasoning Re-ranking Agent via Reinforcement Learning. arXiv preprint arXiv:2505.20046 (2025)

  41. [41]

    Zhaoqi Zhang, Haolei Pei, Jun Guo, Tianyu Wang, Yufei Feng, Hui Sun, Shaowei Liu, and Aixin Sun. 2025. OneTrans: Unified Feature Interaction and Sequence Modeling with One Transformer in Industrial Recommender. arXiv preprint arXiv:2510.26104 (2025)

  42. [42]

    Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1059–1068

  43. [43]

    Jie Zhu, Zhifang Fan, Xiaoxie Zhu, Yuchen Jiang, Hangyu Wang, Xintian Han, Haoran Ding, Xinmin Wang, Wenlin Zhao, Zhen Gong, et al. 2025. Rankmixer: Scaling up ranking models in industrial recommenders. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 6309–6316

  44. [44]

    Shengyao Zhuang, Xueguang Ma, Bevan Koopman, Jimmy Lin, and Guido Zuccon. 2025. Rank-r1: Enhancing reasoning in llm-based document rerankers via reinforcement learning. arXiv preprint arXiv:2503.06034 (2025)

  45. [45]

    Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2024. A setwise approach for effective and highly efficient zero-shot ranking with large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 38–47

  46. [46]

    Tao Zhuang, Wenwu Ou, and Zhirong Wang. 2018. Globally optimized mutual influence aware ranking in e-commerce search. arXiv preprint arXiv:1805.08524 (2018)

  47. [47]

    Prioritize results highly relevant to the user query that provide an overview or cover core themes

  48. [48]

    Subsequently supplement with results from different perspectives, details, or novel angles. Ensure each new result offers significant information gain while avoiding redundancy

  49. [49]

    When information value is comparable, prioritize results with higher cover value, higher content quality, or higher click-through and content completion rates

  50. [50]

    For each candidate, reduce consideration if their content does not align with the corresponding Intent Dimension. User Query: ... Candidate results are as follows: [ID] 1 [Title] ... [Content] ... [Intent Dimension] ... [Publish Time] ... [Cover Image Relevant] ... [Cover Image Quality] ... [Click-Through Rate] ... [Content Completion Rate] ... [ID] 2 ...