How Generative AI Disrupts Search: An Empirical Study of Google Search, Gemini, and AI Overviews
Pith reviewed 2026-05-07 05:49 UTC · model grok-4.3
The pith
Generative AI overviews appear for 51.5% of real-user queries and retrieve substantially different sources than traditional Google search.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
For 51.5% of the 11,500 queries, an AI Overview is generated and placed above the organic results. The sources returned by traditional Google search, AI Overviews, and Gemini exhibit an average Jaccard similarity below 0.2. Traditional search favors popular and institutional sites in the government and education domains, while the generative systems favor Google-owned content. Sites that block Google's AI crawler are less likely to appear in AI Overviews. AI Overviews also vary more across repeated runs of an identical query and change more readily when the query is edited slightly.
What carries the argument
The public benchmark of 11,500 user queries, together with pairwise Jaccard similarity of retrieved sources and domain-category analysis across the three systems.
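The pairwise comparison rests on set-based Jaccard similarity over the retrieved sources. A minimal Python sketch, using hypothetical domain lists in place of real query results (the paper's actual data and any URL-normalization rules are not shown here):

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of retrieved sources."""
    a, b = set(a), set(b)
    if not (a | b):
        return 1.0  # both empty: treat as identical
    return len(a & b) / len(a | b)

# Hypothetical source domains for a single query (illustrative only).
organic = ["nih.gov", "cdc.gov", "mayoclinic.org", "webmd.com"]
aio     = ["youtube.com", "mayoclinic.org", "blogspot.com"]
gemini  = ["youtube.com", "reddit.com", "quora.com"]

pairwise = {
    ("organic", "aio"):    jaccard(organic, aio),     # 1 shared / 6 total
    ("organic", "gemini"): jaccard(organic, gemini),  # no overlap -> 0.0
    ("aio", "gemini"):     jaccard(aio, gemini),      # 1 shared / 5 total
}
```

Averaging such pairwise values over all queries is what yields the below-0.2 figures the paper reports.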
If this is right
- Institutional and popular non-Google sites lose relative visibility when AI Overviews are shown.
- Websites that block AI crawlers reduce their chance of appearing in generative answers even if their content remains crawlable by traditional search.
- Users receive different information on the same topic depending on whether an AI Overview is displayed.
- Low consistency across runs and sensitivity to small edits reduce reliability on controversial questions.
- Publishers and search providers need new revenue arrangements to keep an open information ecosystem sustainable.
Where Pith is reading between the lines
- Over repeated queries the preference for Google-owned sources could gradually concentrate attention around a smaller set of platforms.
- Independent publishers may need to develop AI-specific optimization tactics or negotiate direct licensing deals to regain visibility.
- Competition authorities could treat the observed self-preferencing as a new form of gatekeeping that warrants scrutiny.
- Similar experiments on other generative search products would show whether the pattern is specific to Google or general across the industry.
Load-bearing premise
The 11,500 queries represent ordinary user behavior, and the measured differences in sources and consistency are produced by the generative components rather than by other unmeasured system changes or by how the queries were chosen.
What would settle it
A new sample of queries drawn from actual search logs that yields either Jaccard similarity above 0.4 between the three systems or no detectable preference for Google-owned domains in the generative results would falsify the central claims.
Original abstract
Generative AI is being increasingly integrated into web search for the convenience it provides users. In this work, we aim to understand how generative AI disrupts web search by retrieving and presenting the information and sources differently from traditional search engines. We introduce a public benchmark dataset of 11,500 user queries to support our study and future research of generative search. We compare the search results returned by Google's search engine, the accompanying AI Overview (AIO), and Gemini Flash 2.5 for each query. We have made several key findings. First, we find that for 51.5% of representative, real-user queries, AIOs are generated, and are displayed above the organic search results. Controversial questions frequently result in an AIO. Second, we show that the retrieved sources are substantially different for each search engine (<0.2 average Jaccard similarity). Traditional Google search is significantly more likely to retrieve information from popular or institutional websites in government or education, while generative search engines are significantly more likely to retrieve Google-owned content. Third, we observe that websites that block Google's AI crawler are significantly less likely to be retrieved by AIOs, despite having access to the content. Finally, AIOs are less consistent when processing two runs of the same query, and are less robust to minor query edits. Our findings have important implications for understanding how generative search impacts website visibility, the effectiveness of generative engine optimization techniques, and the information users receive. We call for revenue frameworks to foster a sustainable and mutually beneficial ecosystem for publishers and generative search providers.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents an empirical study of how generative AI affects web search by comparing traditional Google organic results, AI Overviews (AIOs), and Gemini Flash 2.5 responses across a new public benchmark of 11,500 user queries. It reports that AIOs are generated and shown above organic results for 51.5% of queries (with higher rates for controversial questions), that retrieved sources differ substantially across the three systems (average Jaccard similarity <0.2), that generative systems favor Google-owned sources while traditional search favors government/education domains, that sites blocking Google's AI crawler appear less often in AIOs, and that AIOs exhibit lower consistency across repeated queries and lower robustness to small query edits.
Significance. If the sampling and attribution claims hold, the work supplies a large-scale, reproducible dataset and concrete measurements of shifts in source visibility and result stability caused by generative components. These have direct implications for publisher economics, the viability of generative engine optimization, and user information diets. The public release of the 11,500-query benchmark is a clear strength that supports follow-on research.
major comments (3)
- [Abstract and Data Collection section] The central claim that the 11,500 queries are 'representative, real-user queries' (Abstract) is load-bearing for every headline statistic (51.5% AIO rate, source-type skews, Jaccard <0.2). The manuscript supplies no sampling protocol, source logs, topic stratification, temporal window, or validation against real query distributions, leaving open the possibility that selection effects (especially the noted over-representation of controversial queries) drive the reported differences.
- [Results (source retrieval and type analysis)] The attribution of source differences (government/education vs. Google-owned content) and the Jaccard <0.2 similarity specifically to the generative AI layer (Abstract, Results on source analysis) is not isolated from other system variations. No ablation, matched-pair design, or control for underlying retriever/ranker differences between Google Search, AIO, and Gemini is described; therefore the causal claim that generative components produce the observed skews cannot be verified from the current evidence.
- [Results (crawler-blocking analysis)] The finding that sites blocking Google's AI crawler are 'significantly less likely to be retrieved by AIOs' (Abstract) is central to the publisher-visibility implications, yet the manuscript does not detail how blocking status was determined, whether content was independently verified as accessible, or the statistical test and effect size used to establish significance.
minor comments (2)
- [Abstract and Results] The average Jaccard similarity is stated only as '<0.2'; reporting the precise mean, standard deviation, and per-pair distributions (perhaps in a table) would improve precision and allow readers to assess the magnitude of the difference.
- [Consistency and robustness subsection] The consistency and robustness experiments would benefit from an explicit description of the similarity metric used to compare AIO outputs across runs or edits (e.g., whether it is source-set Jaccard, content overlap, or embedding similarity) and from example query-edit pairs.
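The manuscript does not state which metric the consistency experiments use (the point of the minor comment above). One plausible choice, sketched here under that assumption, is the mean pairwise Jaccard similarity of the source sets returned by repeated runs of a query; the domain sets below are illustrative, not the paper's data:

```python
from itertools import combinations

def run_consistency(runs):
    """Mean pairwise Jaccard similarity over the source sets from repeated
    runs of the same query; 1.0 means identical sources on every run."""
    sims = [len(a & b) / len(a | b) if (a | b) else 1.0
            for a, b in combinations(runs, 2)]
    return sum(sims) / len(sims)

# Hypothetical AIO source sets from three runs of one query.
runs = [
    {"youtube.com", "support.google.com", "reddit.com"},
    {"youtube.com", "support.google.com", "quora.com"},
    {"youtube.com", "blogspot.com", "quora.com"},
]
score = run_consistency(runs)  # 0.4 with these illustrative sets
```

Robustness to small query edits can be scored the same way, comparing the source set of the original query against that of each edited variant.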
Simulated Author's Rebuttal
We thank the referee for their thoughtful and constructive comments, which have helped clarify areas where the manuscript can be strengthened. We address each major comment point by point below, with honest indications of where revisions will be incorporated.
Point-by-point responses
- Referee: [Abstract and Data Collection section] The central claim that the 11,500 queries are 'representative, real-user queries' (Abstract) is load-bearing for every headline statistic (51.5% AIO rate, source-type skews, Jaccard <0.2). The manuscript supplies no sampling protocol, source logs, topic stratification, temporal window, or validation against real query distributions, leaving open the possibility that selection effects (especially the noted over-representation of controversial queries) drive the reported differences.
Authors: We agree that the manuscript would benefit from greater transparency on query sampling. The 11,500 queries constitute a new public benchmark drawn from real-user queries with intentional coverage of diverse topics, including controversial ones. We did not include a complete sampling protocol in the initial submission. We will expand the Data Collection section to describe the query sources, collection timeframe, topic coverage, and stratification approach. We will also revise the abstract to describe the queries as 'a diverse set of real-user queries' and add a Limitations section addressing potential selection effects and generalizability. The public release of the full dataset supports independent assessment of its properties. revision: yes
- Referee: [Results (source retrieval and type analysis)] The attribution of source differences (government/education vs. Google-owned content) and the Jaccard <0.2 similarity specifically to the generative AI layer (Abstract, Results on source analysis) is not isolated from other system variations. No ablation, matched-pair design, or control for underlying retriever/ranker differences between Google Search, AIO, and Gemini is described; therefore the causal claim that generative components produce the observed skews cannot be verified from the current evidence.
Authors: We agree that the design does not isolate the generative layer via ablations or matched-pair controls, as the study compares the three systems in their deployed form. We will revise the Results and Discussion sections to clarify that observed differences (including source-type skews and low Jaccard similarity) are between the systems as experienced by users and may involve multiple factors beyond the generative components. A new Limitations section will explicitly note the black-box nature of the systems and the absence of controls for underlying retriever/ranker variations. We will adjust language in the abstract and results to describe associations with the integration of generative AI rather than direct causal attribution to the generative layer alone. revision: partial
- Referee: [Results (crawler-blocking analysis)] The finding that sites blocking Google's AI crawler are 'significantly less likely to be retrieved by AIOs' (Abstract) is central to the publisher-visibility implications, yet the manuscript does not detail how blocking status was determined, whether content was independently verified as accessible, or the statistical test and effect size used to establish significance.
Authors: We agree that the methods for the crawler-blocking analysis require additional detail. Blocking status was determined via inspection of robots.txt files for directives targeting Google's AI crawlers, with independent verification through direct content access attempts. Statistical significance was assessed with a chi-squared test. We will add a dedicated subsection describing the exact user agents checked, verification procedures, the statistical test, p-values, and effect size to the relevant Results section. revision: yes
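The two methodological steps the authors describe can be sketched in Python. `Google-Extended` is Google's published robots.txt token for AI-training opt-out, but the exact user agents the authors checked, and the contingency counts below, are assumptions for illustration:

```python
from urllib.robotparser import RobotFileParser

def blocks_ai_crawler(robots_txt, url="https://example.com/article"):
    """True if the given robots.txt text disallows the Google-Extended
    token (Google's AI-training opt-out) for the URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch("Google-Extended", url)

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction); compare against 3.84, the p < 0.05
    critical value at 1 degree of freedom."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

blocking = "User-agent: Google-Extended\nDisallow: /\n"
allowing = "User-agent: *\nAllow: /\n"

# Illustrative counts (not the paper's numbers): blocking sites retrieved
# vs. not retrieved by AIOs, and the same split for non-blocking sites.
stat = chi2_2x2(30, 70, 60, 40)
```

In practice the statistic would be computed from the observed retrieval counts for blocking and non-blocking sites, and an effect size such as the odds ratio reported alongside the p-value.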
Circularity Check
No circularity: purely empirical measurements with direct comparisons
full rationale
The manuscript is an empirical measurement study that collects a fixed dataset of 11,500 queries, runs them through three systems (Google organic, AIO, Gemini), and reports observed frequencies, Jaccard similarities, and source-type distributions. No equations, fitted parameters, or derivations appear; the headline statistics (51.5% AIO rate, <0.2 Jaccard, source skews) are direct counts and comparisons from the collected data. No self-citation is invoked to justify core claims, no ansatz is smuggled, and no result is renamed as a prediction. The study is self-contained against external benchmarks and does not reduce any finding to a quantity defined by the authors' own modeling choices.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: The 11,500 queries are representative of real-user search behavior.