pith. machine review for the scientific record.

arxiv: 2210.07316 · v3 · submitted 2022-10-13 · 💻 cs.CL · cs.IR · cs.LG

Recognition: 1 theorem link

MTEB: Massive Text Embedding Benchmark

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 10:11 UTC · model grok-4.3

classification 💻 cs.CL · cs.IR · cs.LG
keywords text embeddings · benchmark · evaluation · semantic textual similarity · clustering · reranking · multilingual · leaderboard

The pith

A new benchmark shows no single text embedding method performs best across all tasks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Evaluations of text embeddings have long focused on narrow sets of datasets from a single task, leaving it unclear how well models transfer to other uses like clustering or reranking. The paper introduces MTEB as a broader test covering eight tasks, fifty-eight datasets, and one hundred twelve languages. Benchmarking thirty-three models on this suite reveals that performance rankings shift sharply depending on the task. This indicates the field has not settled on one embedding approach that scales to top results everywhere. The benchmark supplies open code and a public leaderboard to make future comparisons more consistent.

Core claim

The paper establishes the Massive Text Embedding Benchmark (MTEB) that spans eight embedding tasks across fifty-eight datasets and one hundred twelve languages. By evaluating thirty-three models on MTEB, the work finds that no particular text embedding method dominates across all tasks, which suggests the field has yet to converge on a universal text embedding method scaled sufficiently for state-of-the-art results on every embedding task.

What carries the argument

The Massive Text Embedding Benchmark (MTEB), a standardized collection of eight tasks and fifty-eight datasets that measures text embedding performance across diverse applications.
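MTEB ships as an open-source Python package with a single entry point for running any subset of its tasks against a model that exposes an encode() method. The snippet below is a minimal sketch of that usage, following the pattern documented in the paper's repository (github.com/embeddings-benchmark/mteb) around the time of release; class and argument names may have changed in later package versions, and the model name here is only an example.

```python
# Minimal sketch: evaluating a sentence-transformers model on the English
# subset of MTEB, following the repository's documented usage at release
# time. Exact class and argument names may differ in current versions.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model_name = "all-MiniLM-L6-v2"  # example model; any encoder with encode() works
model = SentenceTransformer(model_name)

# Restrict to English tasks here; the full benchmark spans 8 task types,
# 58 datasets, and 112 languages.
evaluation = MTEB(task_langs=["en"])
evaluation.run(model, output_folder=f"results/{model_name}")
```

Keeping the whole evaluation behind one run() call is what lets the paper's thirty-three-model comparison be reproduced with the same harness.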

If this is right

  • Embedding models must be tested on multiple tasks instead of relying on semantic similarity alone.
  • Progress requires either new general methods or task-aware selection rather than one-size-fits-all scaling.
  • A public leaderboard will allow direct tracking of improvements across the full set of tasks.
  • Developers will need to weigh task-specific strengths when choosing an embedding for a given application.
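As an illustration of the task-specific weighing in the last point, the sketch below picks a model per task from a table of per-task scores; the model names and numbers are hypothetical placeholders, not figures from the paper or its leaderboard.

```python
# Illustrative sketch of task-aware model selection. The score table is
# hypothetical placeholder data, not results reported in the paper.
from typing import Dict

# Per-task scores, e.g. exported from a leaderboard (placeholder values).
scores: Dict[str, Dict[str, float]] = {
    "model_a": {"STS": 84.1, "Clustering": 41.2, "Retrieval": 49.0},
    "model_b": {"STS": 80.3, "Clustering": 46.8, "Retrieval": 43.5},
    "model_c": {"STS": 78.9, "Clustering": 39.5, "Retrieval": 52.7},
}

def pick_model(task: str) -> str:
    """Return the model with the highest score on the given task."""
    return max(scores, key=lambda m: scores[m].get(task, float("-inf")))

print(pick_model("Clustering"))  # -> model_b under these placeholder scores
print(pick_model("Retrieval"))   # -> model_c
```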

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Research groups may shift from single-task optimization to methods designed for balanced performance across the eight categories.
  • The benchmark could become a default check for any new embedding model before it is released.
  • Task-specific fine-tuning or routing mechanisms might emerge as practical ways to handle the observed specialization.

Load-bearing premise

The eight tasks and fifty-eight datasets chosen for MTEB represent the full range of real-world embedding applications so that scores on MTEB predict usefulness elsewhere.

What would settle it

A single new embedding model that ranks first on every one of the eight MTEB tasks at once, or a follow-up study showing that MTEB scores fail to predict performance in previously untested practical applications.

read the original abstract

Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 3 minor

Summary. The manuscript introduces the Massive Text Embedding Benchmark (MTEB), spanning 8 tasks, 58 datasets, and 112 languages. By evaluating 33 models on this suite, the authors establish the most comprehensive text embedding benchmark to date and report that no single embedding method achieves top performance across all tasks.

Significance. If the reported results hold, MTEB supplies a standardized, multi-task evaluation resource that directly addresses the prior limitation of narrow, single-task assessments (e.g., STS-only). The open-source code, public leaderboard, and fully reproducible experimental setup constitute concrete strengths that enable community verification and incremental progress tracking.

minor comments (3)
  1. §3.2: The criteria used to select the 58 datasets within each task are stated at a high level; adding a short paragraph or table listing the primary inclusion/exclusion rules would improve transparency without altering the central claim.
  2. Table 2: The reported scores for the 33 models would benefit from an additional column or footnote indicating the number of runs or standard deviation, even if the main text already notes single-run evaluation.
  3. Figure 3: The radar-chart comparison of top models is visually effective, but the legend ordering does not match the task order in the caption; reordering would reduce reader cross-referencing.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive review and recommendation to accept the manuscript. We are pleased that the significance of MTEB as a standardized, multi-task benchmark for text embeddings is recognized, along with the value of the open-source code and public leaderboard.

Circularity Check

0 steps flagged

Pure empirical benchmark with no circular derivation

full rationale

The paper introduces MTEB as a benchmark spanning 8 tasks and 58 datasets, evaluates 33 models, and reports that no single embedding method dominates all tasks. This finding is a direct empirical observation from external datasets and model performances, with no equations, fitted parameters, or self-citations forming a load-bearing derivation chain. The task selection is presented as a practical choice rather than derived from prior results in a circular manner.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

This is an empirical benchmark paper. It introduces no free parameters, no new axioms beyond standard assumptions about vector similarity, and no invented entities.

axioms (1)
  • standard math: Text embeddings can be meaningfully compared via cosine similarity or dot product on vector representations.
    Invoked in the definition of the STS and retrieval tasks.
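A minimal sketch of the two comparisons the axiom refers to, with random vectors standing in for real embeddings: cosine similarity as used for STS-style scoring, and dot product as used for retrieval-style ranking.

```python
# Minimal sketch of embedding comparison: cosine similarity (STS-style)
# and dot product (retrieval-style). Random vectors stand in for real
# embeddings; only numpy is required.
import numpy as np

rng = np.random.default_rng(0)
query_emb = rng.normal(size=384)        # stand-in for an encoded query
doc_embs = rng.normal(size=(5, 384))    # stand-ins for encoded documents

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between a vector and each row of a matrix."""
    a_norm = a / np.linalg.norm(a)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b_norm @ a_norm

cos_scores = cosine_similarity(query_emb, doc_embs)  # STS-style scoring
dot_scores = doc_embs @ query_emb                    # retrieval-style scoring
ranking = np.argsort(-dot_scores)                    # best match first
print(cos_scores, ranking)
```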

pith-pipeline@v0.9.0 · 5488 in / 1098 out tokens · 29336 ms · 2026-05-15T10:11:33.391781+00:00 · methodology

discussion (0)


Forward citations

Cited by 21 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AcquisitionSynthesis: Targeted Data Generation using Acquisition Functions

    cs.CL 2026-05 unverdicted novelty 7.0

    AcquisitionSynthesis uses acquisition functions as rewards to train generators that produce higher-quality synthetic data, delivering 2-7% gains on math, medical QA, and coding tasks with improved robustness to forgetting.

  2. Much of Geospatial Web Search Is Beyond Traditional GIS

    cs.IR 2026-05 unverdicted novelty 7.0

    Analysis of 1.01 million unfiltered Bing queries identifies 18% as geospatial, dominated by transactional categories like costs (15.3%) that exceed traditional GIS scope.

  3. Led to Mislead: Adversarial Content Injection for Attacks on Neural Ranking Models

    cs.IR 2026-05 unverdicted novelty 7.0

    CRAFT is a supervised LLM framework using retrieval-augmented generation, self-refinement, fine-tuning, and preference optimization to create fluent adversarial content that boosts target ranks in neural ranking model...

  4. MMEB-V3: Measuring the Performance Gaps of Omni-Modality Embedding Models

    cs.IR 2026-04 unverdicted novelty 7.0

    MMEB-V3 benchmark shows omni-modality embedding models fail to enforce instruction-specified modality constraints and exhibit asymmetric, query-biased retrieval.

  5. mEOL: Training-Free Instruction-Guided Multimodal Embedder for Vector Graphics and Image Retrieval

    cs.CV 2026-04 unverdicted novelty 7.0

    mEOL creates aligned embeddings for text, images, and SVGs using instruction-guided MLLM one-word summaries and semantic SVG rewriting, outperforming baselines on a new text-to-SVG retrieval benchmark.

  6. C-Pack: Packed Resources For General Chinese Embeddings

    cs.CL 2023-09 accept novelty 7.0

    C-Pack releases a new Chinese embedding benchmark, large training dataset, and optimized models that outperform priors by up to 10% on C-MTEB while also delivering English SOTA results.

  7. Sliced Inner Product Gromov-Wasserstein Distances

    stat.ML 2026-05 unverdicted novelty 6.0

    A sliced IGW distance is introduced with closed-form 1D expressions, rotational invariance, and studied structural and computational properties for efficient data alignment.

  8. Is Textual Similarity Invariant under Machine Translation? Evidence Based on the Political Manifesto Corpus

    cs.CL 2026-05 unverdicted novelty 6.0

    Machine translation preserves embedding similarity structure for ten languages but distorts it for four in the Manifesto Corpus, via a new non-inferiority testing framework.

  9. MIPIC: Matryoshka Representation Learning via Self-Distilled Intra-Relational and Progressive Information Chaining

    cs.CL 2026-04 unverdicted novelty 6.0

    MIPIC trains nested Matryoshka representations via self-distilled intra-relational alignment with top-k CKA and progressive information chaining across depths, yielding competitive performance especially at extreme lo...

  10. JUÁ -- A Benchmark for Information Retrieval in Brazilian Legal Text Collections

    cs.IR 2026-04 accept novelty 6.0

    JUÁ is a new heterogeneous benchmark for Brazilian legal IR that distinguishes retrieval methods and shows domain-adapted models excel on aligned subsets while BM25 stays competitive elsewhere.

  11. Semantic Data Processing with Holistic Data Understanding

    cs.DB 2026-04 unverdicted novelty 6.0

    HoldUp uses LLM-guided clustering to provide holistic dataset context for semantic operators, yielding up to 33% higher classification accuracy and 30% higher scoring accuracy than row-by-row LLM processing across 15 ...

  12. EmbeddingGemma: Powerful and Lightweight Text Representations

    cs.CL 2025-09 unverdicted novelty 6.0

    A 300M-parameter open embedding model sets new SOTA on MTEB for its size class and matches models twice as large while staying effective when compressed.

  13. NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models

    cs.CL 2024-05 accept novelty 6.0

    NV-Embed achieves first place on the MTEB leaderboard across 56 tasks by combining a latent attention layer, causal-mask removal, two-stage contrastive training, and data curation for LLM-based embedding models.

  14. StarCoder 2 and The Stack v2: The Next Generation

    cs.SE 2024-02 accept novelty 6.0

    StarCoder2-15B matches or beats CodeLlama-34B on code tasks despite being smaller, and StarCoder2-3B outperforms prior 15B models, with open weights and exact training data identifiers released.

  15. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    cs.CL 2022-11 unverdicted novelty 6.0

    BLOOM is a 176B-parameter open-access multilingual language model trained on the ROOTS corpus that achieves competitive performance on benchmarks, with improved results after multitask prompted finetuning.

  16. Measuring Embedding Sensitivity to Authorial Style in French: Comparing Literary Texts with Language Model Rewritings

    cs.CL 2026-05 unverdicted novelty 5.0

    Embeddings reliably capture authorial stylistic features in French literary texts, and these signals persist after LLM rewriting while showing model-specific patterns.

  17. How Does Chunking Affect Retrieval-Augmented Code Completion? A Controlled Empirical Study

    cs.SE 2026-05 conditional novelty 5.0

    Function-based chunking underperforms other strategies in RAG code completion by 3.57-5.64 points, with context length as the dominant factor.

  18. Towards Better Static Code Analysis Reports: Sentence Transformer-based Filtering of Non-Actionable Alerts

    cs.SE 2026-04 conditional novelty 5.0

    STAF applies sentence embeddings from transformers to classify SCA findings, reaching 89% F1 and beating prior filters by 11% within projects and 6% across projects.

  19. Text-as-Signal: Quantitative Semantic Scoring with Embeddings, Logprobs, and Noise Reduction

    cs.CL 2026-03 unverdicted novelty 5.0

    A configurable pipeline turns text corpora into quantitative semantic signals via embeddings, logprobs, and UMAP-based noise reduction for document positioning and corpus profiling.

  20. Text Embeddings by Weakly-Supervised Contrastive Pre-training

    cs.CL 2022-12 unverdicted novelty 5.0

    E5 text embeddings trained with weakly-supervised contrastive pre-training on CCPairs outperform BM25 on BEIR zero-shot and achieve top results on MTEB, beating much larger models.

  21. Domain-Adaptive Dense Retrieval for Brazilian Legal Search

    cs.IR 2026-05 unverdicted novelty 4.0

    Mixed training of Qwen3-Embedding-4B on legal data plus SQuAD-pt yields higher average NDCG@10 (0.447), MRR@10 (0.595), and MAP@10 (0.308) across six Portuguese retrieval datasets than legal-only or base models, with ...