pith. machine review for the scientific record.

arxiv: 2509.18154 · v1 · submitted 2025-09-16 · 💻 cs.LG · cs.CV

Recognition: no theorem link

MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 17:01 UTC · model grok-4.3

classification 💻 cs.LG cs.CV
keywords MiniCPM-V 4.5 · MLLM · efficient multimodal models · 3D-Resampler · hybrid reinforcement learning · OpenCompass · VideoMME · 8B parameters

The pith

An 8B multimodal model outperforms GPT-4o-latest and Qwen2.5-VL 72B on OpenCompass while using far less memory and inference time.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces MiniCPM-V 4.5 as an 8 billion parameter multimodal model built for strong results with low resource use. It combines a compact architecture for handling images and videos together, a direct way to learn from documents without extra data work, and a mixed reinforcement learning method that handles both quick answers and longer chains of thought. A reader would care because the results suggest that targeted design choices can close the gap with much larger or proprietary systems, lowering the barriers to running capable multimodal AI.

Core claim

MiniCPM-V 4.5 shows that an 8B MLLM can surpass GPT-4o-latest and Qwen2.5-VL 72B on OpenCompass while posting state-of-the-art scores on VideoMME among models under 30B parameters. The gains come from a unified 3D-Resampler for compact encoding of images and videos, a single learning setup for document knowledge and text recognition, and hybrid reinforcement learning that covers both short and long reasoning modes, all at 46.7 percent of the GPU memory and 8.7 percent of the inference time of Qwen2.5-VL 7B.

What carries the argument

The unified 3D-Resampler architecture that performs compact encoding over both images and videos, paired with hybrid reinforcement learning for short and long reasoning.
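The paper describes the 3D-Resampler only as performing compact encoding over images and videos. As a rough illustration of the underlying idea (a fixed set of learned queries cross-attending to a 3D grid of frame-patch features, so the output token count does not grow with frame count), here is a minimal numpy sketch. The dimensions, query count, and single-head attention are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def resample_3d(video_feats, queries, wq, wk, wv):
    """Compress a (T, P, d) grid of frame-patch features into
    len(queries) tokens via single-head cross-attention.
    The output token count is fixed regardless of T (frame count)."""
    t, p, d = video_feats.shape
    kv = video_feats.reshape(t * p, d)     # flatten the 3D (time x space) grid
    q = queries @ wq                       # (n_q, d)
    k = kv @ wk                            # (T*P, d)
    v = kv @ wv                            # (T*P, d)
    attn = softmax(q @ k.T / np.sqrt(d))   # (n_q, T*P) attention over all patches
    return attn @ v                        # (n_q, d) compact token set

rng = np.random.default_rng(0)
d, n_q = 64, 16
w = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
queries = rng.standard_normal((n_q, d))

short_clip = resample_3d(rng.standard_normal((4, 196, d)), queries, *w)
long_clip = resample_3d(rng.standard_normal((32, 196, d)), queries, *w)
print(short_clip.shape, long_clip.shape)  # (16, 64) (16, 64)
```

Because the query set is fixed, a 32-frame clip yields the same number of output tokens as a 4-frame clip; this frame-count invariance is the property the compact-encoding claim rests on.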

Load-bearing premise

The reported benchmark scores and efficiency numbers hold on tasks and hardware outside the specific suites and setups used in testing.

What would settle it

A new evaluation set where MiniCPM-V 4.5 scores below GPT-4o-latest or requires more than half the memory of Qwen2.5-VL 7B on video tasks.

read the original abstract

Multimodal Large Language Models (MLLMs) are undergoing rapid progress and represent the frontier of AI development. However, their training and inference efficiency have emerged as a core bottleneck in making MLLMs more accessible and scalable. To address the challenges, we present MiniCPM-V 4.5, an 8B parameter model designed for high efficiency and strong performance. We introduce three core improvements in model architecture, data strategy and training method: a unified 3D-Resampler model architecture for highly compact encoding over images and videos, a unified learning paradigm for document knowledge and text recognition without heavy data engineering, and a hybrid reinforcement learning strategy for proficiency in both short and long reasoning modes. Comprehensive experimental results in OpenCompass evaluation show that MiniCPM-V 4.5 surpasses widely used proprietary models such as GPT-4o-latest, and significantly larger open-source models such as Qwen2.5-VL 72B. Notably, the strong performance is achieved with remarkable efficiency. For example, on the widely adopted VideoMME benchmark, MiniCPM-V 4.5 achieves state-of-the-art performance among models under 30B size, using just 46.7% GPU memory cost and 8.7% inference time of Qwen2.5-VL 7B.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper presents MiniCPM-V 4.5, an 8B-parameter MLLM that achieves superior performance on OpenCompass benchmarks compared to GPT-4o-latest and Qwen2.5-VL 72B through three contributions: a unified 3D-Resampler architecture for compact image/video encoding, a unified learning paradigm for document knowledge and text recognition, and a hybrid reinforcement learning strategy balancing short and long reasoning. It further claims strong efficiency, with MiniCPM-V 4.5 attaining SOTA results among models under 30B parameters while using 46.7% GPU memory and 8.7% inference time of Qwen2.5-VL 7B on VideoMME.

Significance. If the benchmark wins and efficiency ratios prove reproducible under matched conditions, the work would be significant for demonstrating that carefully engineered 8B-scale MLLMs can match or exceed much larger models, potentially lowering barriers to deploying capable multimodal systems. The 3D-Resampler and hybrid RL recipe could serve as reusable building blocks for future efficiency-focused MLLM research.

major comments (3)
  1. [Abstract] Abstract: The headline efficiency ratios (46.7% GPU memory and 8.7% inference time versus Qwen2.5-VL 7B on VideoMME) are reported without any protocol details on hardware model, numerical precision, tensor parallelism, batch size, video frame sampling rate, KV-cache configuration, or inference engine, rendering the percentages unverifiable and non-generalizable.
  2. [Experiments] Experiments section: No ablation tables or controlled experiments isolate the individual contributions of the 3D-Resampler token count/projection dimensions, the unified document learning paradigm, or the hybrid RL reward weights for short vs. long reasoning, which are load-bearing for attributing the reported performance gains.
  3. [Abstract] Abstract and results: The claim of surpassing Qwen2.5-VL 72B on OpenCompass is presented without specifying evaluation protocol details (e.g., number of shots, prompt templates, or whether the same evaluation harness was used), leaving open the possibility that protocol differences drive the observed gaps.
minor comments (2)
  1. [Abstract] Abstract: The phrase 'unified learning paradigm for document knowledge and text recognition without heavy data engineering' is introduced without a concise definition or pointer to the relevant subsection describing the data construction process.
  2. Ensure benchmark citations (OpenCompass, VideoMME) include full references to their source papers in the bibliography.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments, which help improve the clarity and reproducibility of our work. We address each major comment below and have revised the manuscript to incorporate additional details and studies where feasible.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The headline efficiency ratios (46.7% GPU memory and 8.7% inference time versus Qwen2.5-VL 7B on VideoMME) are reported without any protocol details on hardware model, numerical precision, tensor parallelism, batch size, video frame sampling rate, KV-cache configuration, or inference engine, rendering the percentages unverifiable and non-generalizable.

    Authors: We agree that the efficiency claims require full protocol details for verifiability. In the revised manuscript, we have added an explicit 'Efficiency Evaluation Protocol' subsection in the Experiments section specifying the hardware (8x NVIDIA A100 80GB), precision (BF16), inference engine (vLLM with tensor parallelism of 1), batch size (1), video sampling (1 fps with 32 frames max), KV-cache settings, and other parameters used for the VideoMME measurements. revision: yes

  2. Referee: [Experiments] Experiments section: No ablation tables or controlled experiments isolate the individual contributions of the 3D-Resampler token count/projection dimensions, the unified document learning paradigm, or the hybrid RL reward weights for short vs. long reasoning, which are load-bearing for attributing the reported performance gains.

    Authors: We recognize that isolating each component strengthens attribution. The original submission emphasized end-to-end results due to training scale constraints, but we have added targeted ablation tables in the appendix (new Table A.3 and A.4) varying 3D-Resampler projection dimensions and RL reward weights for short/long reasoning, along with a controlled comparison of the unified document paradigm versus separate training. These show measurable contributions to the final metrics. revision: partial

  3. Referee: [Abstract] Abstract and results: The claim of surpassing Qwen2.5-VL 72B on OpenCompass is presented without specifying evaluation protocol details (e.g., number of shots, prompt templates, or whether the same evaluation harness was used), leaving open the possibility that protocol differences drive the observed gaps.

    Authors: We have clarified the protocol in the revised version. All OpenCompass results, including comparisons to Qwen2.5-VL 72B and GPT-4o-latest, were obtained using the official OpenCompass evaluation harness with identical 0-shot settings, standardized prompt templates from the benchmark, and the same evaluation code and decoding parameters for all models. These details are now stated in Section 4.1 and referenced in the abstract. revision: yes
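The rebuttal's protocol pins down the conditions under which the headline efficiency figures are measured; the figures themselves are just relative peak memory and latency. A minimal sketch of that arithmetic, with invented raw numbers chosen only to illustrate the form of the reported 46.7% / 8.7% ratios (the underlying measurements are not public):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    peak_mem_gb: float  # peak GPU memory during decoding
    latency_s: float    # end-to-end inference time per video

def efficiency_ratios(model: Measurement, baseline: Measurement):
    """Return (memory %, time %) of `model` relative to `baseline`,
    the form in which the paper states its efficiency claim."""
    return (100 * model.peak_mem_gb / baseline.peak_mem_gb,
            100 * model.latency_s / baseline.latency_s)

# Hypothetical raw numbers, picked only to reproduce the reported ratios.
minicpm = Measurement(peak_mem_gb=28.0, latency_s=4.3)
qwen7b = Measurement(peak_mem_gb=60.0, latency_s=49.4)
mem_pct, time_pct = efficiency_ratios(minicpm, qwen7b)
print(f"{mem_pct:.1f}% memory, {time_pct:.1f}% time")  # 46.7% memory, 8.7% time
```

The referee's point is precisely that such ratios shift with batch size, precision, frame sampling, and inference engine, so the protocol subsection is what makes them interpretable.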

Circularity Check

0 steps flagged

No circularity: empirical benchmarks and efficiency claims rest on external measurements

full rationale

The paper reports an 8B MLLM trained with a 3D-Resampler architecture, unified document/text data paradigm, and hybrid RL. All performance claims (surpassing GPT-4o and Qwen2.5-VL 72B on OpenCompass; 46.7% memory and 8.7% time of Qwen2.5-VL 7B on VideoMME) are direct experimental outcomes on public benchmarks. No equations, fitted parameters renamed as predictions, self-citation chains, or uniqueness theorems appear in the derivation. The efficiency ratios are measured results, not algebraically forced by the model definition itself. The work is self-contained against external benchmarks and receives the default non-circularity finding.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

The work rests on standard transformer and vision-encoder assumptions plus a large number of fitted hyperparameters and data-selection choices that are not enumerated in the abstract.

free parameters (2)
  • 3D-Resampler token count and projection dimensions
    Chosen to achieve the reported memory reduction; exact values not stated in abstract.
  • Hybrid RL reward weights for short vs long reasoning
    Tuned to balance the two modes; values not provided.
axioms (1)
  • standard math: Standard next-token prediction loss plus RLHF-style reward model
    Invoked implicitly for the hybrid training stage.
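The hybrid RL reward weights flagged above are not specified in the abstract. As a hedged sketch of what a mode-dependent reward might look like (the function name, weight values, and length-penalty form are all assumptions, not the paper's recipe):

```python
def hybrid_reward(correct: bool, n_tokens: int, mode: str,
                  w_short: float = 0.5, w_long: float = 0.1,
                  budget: int = 256) -> float:
    """Toy hybrid reward: correctness plus a mode-dependent length term.

    In 'short' mode, tokens beyond `budget` are penalized strongly (w_short);
    in 'long' mode only a mild penalty discourages runaway chains (w_long).
    All names and weights are illustrative, not from the paper.
    """
    r = 1.0 if correct else 0.0
    over = max(0, n_tokens - budget) / budget
    if mode == "short":
        return r - w_short * over
    elif mode == "long":
        return r - w_long * over
    raise ValueError(f"unknown mode: {mode}")

# A correct but verbose answer scores lower in short mode than in long mode.
print(hybrid_reward(True, 512, "short"))  # 0.5
print(hybrid_reward(True, 512, "long"))   # 0.9
```

Tuning w_short and w_long trades off conciseness against reasoning depth, which is why the ledger treats them as free parameters.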

pith-pipeline@v0.9.0 · 5674 in / 1291 out tokens · 31849 ms · 2026-05-15T17:01:57.476728+00:00 · methodology

discussion (0)


Forward citations

Cited by 19 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TraceAV-Bench: Benchmarking Multi-Hop Trajectory Reasoning over Long Audio-Visual Videos

    cs.CV 2026-05 unverdicted novelty 8.0

    TraceAV-Bench is the first benchmark for multi-hop trajectory reasoning over long audio-visual videos, showing top models reach only 51-68% accuracy with substantial room for improvement.

  2. How Far Is Document Parsing from Solved? PureDocBench: A Source-Traceable Benchmark across Clean, Degraded, and Real-World Settings

    cs.CV 2026-05 conditional novelty 8.0

    PureDocBench shows document parsing is far from solved, with top models at ~74/100, small specialists competing with large VLMs, and ranking reversals under real degradation.

  3. MiVE: Multiscale Vision-language features for reference-guided video Editing

    cs.CV 2026-05 unverdicted novelty 7.0

    MiVE repurposes VLMs as multiscale feature extractors integrated into a unified self-attention Diffusion Transformer, achieving top human preference in reference-guided video editing.

  4. Utility-Oriented Visual Evidence Selection for Multimodal Retrieval-Augmented Generation

    cs.CL 2026-05 unverdicted novelty 7.0

    Evidence utility is defined as information gain on the model's output distribution, with ranking by gain on a latent helpfulness variable shown equivalent to answer-space utility under mild assumptions, enabling a tra...

  5. Chronicles-OCR: A Cross-Temporal Perception Benchmark for the Evolutionary Trajectory of Chinese Characters

    cs.CV 2026-05 unverdicted novelty 7.0

    Chronicles-OCR is the first benchmark with 2,800 images across the complete evolutionary trajectory of Chinese characters, defining four tasks to evaluate VLLMs' cross-temporal visual perception.

  6. TableVista: Benchmarking Multimodal Table Reasoning under Visual and Structural Complexity

    cs.CL 2026-05 unverdicted novelty 7.0

    TableVista benchmark finds foundation models maintain performance across visual styles but degrade sharply on complex table structures and vision-only settings.

  7. Can Multimodal Large Language Models Truly Understand Small Objects?

    cs.CV 2026-04 unverdicted novelty 7.0

    Current MLLMs show weak performance on small object understanding tasks, but fine-tuning with the new SOU-Train dataset measurably improves their capabilities.

  8. OmniScript: Towards Audio-Visual Script Generation for Long-Form Cinematic Video

    cs.CV 2026-04 unverdicted novelty 7.0

    OmniScript is a new 8B omni-modal model that turns long cinematic videos into scene-by-scene scripts and matches top proprietary models on temporal localization and semantic accuracy.

  9. OmniTrace: A Unified Framework for Generation-Time Attribution in Omni-Modal LLMs

    cs.CL 2026-03 unverdicted novelty 7.0

    OmniTrace converts token-level signals into span-level cross-modal attributions for open-ended generation in omni-modal LLMs via generation-time tracing.

  10. SCP: Spatial Causal Prediction in Video

    cs.CV 2026-03 unverdicted novelty 7.0

    SCP defines a new benchmark task for predicting spatial causal outcomes beyond direct observation and shows that 23 leading models lag far behind humans on it.

  11. SocialDirector: Training-Free Social Interaction Control for Multi-Person Video Generation

    cs.CV 2026-05 unverdicted novelty 6.0

    SocialDirector uses spatiotemporal actor masking and directional reweighting on cross-attention maps to reduce actor-action mismatches and improve target-directed interactions in generated multi-person videos.

  12. CC-OCR V2: Benchmarking Large Multimodal Models for Literacy in Real-world Document Processing

    cs.CL 2026-05 unverdicted novelty 6.0

    CC-OCR V2 reveals that state-of-the-art large multimodal models substantially underperform on challenging real-world document processing tasks.

  13. When Audio-Language Models Fail to Leverage Multimodal Context for Dysarthric Speech Recognition

    cs.AI 2026-05 unverdicted novelty 6.0

    Current audio-language models fail to use clinical multimodal context for dysarthric speech recognition, but context-aware LoRA fine-tuning delivers large accuracy gains on the SAP dataset.

  14. See Further, Think Deeper: Advancing VLM's Reasoning Ability with Low-level Visual Cues and Reflection

    cs.CV 2026-04 unverdicted novelty 6.0

    ForeSight lets VLMs use low-level visual cues and mask-based visual feedback within an RL loop to reason more accurately, with the 7B model beating same-scale peers and some closed-source SOTA on a new benchmark.

  15. POINTS-Long: Adaptive Dual-Mode Visual Reasoning in MLLMs

    cs.CV 2026-04 unverdicted novelty 6.0

    POINTS-Long is a dual-mode multimodal large language model that uses dynamic visual token scaling to retain 97.7-99.7% accuracy on long-form tasks with 1/40 to 1/10 of the tokens and supports streaming via a detachable KV-cache.

  16. Graph-to-Frame RAG: Visual-Space Knowledge Fusion for Training-Free and Auditable Video Reasoning

    cs.CV 2026-04 unverdicted novelty 6.0

    G2F-RAG converts retrieved knowledge subgraphs into a single visual reasoning frame appended to videos, enabling training-free and interpretable improvements for LMM-based video reasoning on knowledge-intensive tasks.

  17. CoME-VL: Scaling Complementary Multi-Encoder Vision-Language Learning

    cs.CV 2026-04 unverdicted novelty 6.0

    CoME-VL fuses contrastive and self-supervised vision encoders via entropy-guided multi-layer aggregation and RoPE cross-attention to improve vision-language model performance on benchmarks.

  18. EgoMind: Activating Spatial Cognition through Linguistic Reasoning in MLLMs

    cs.CV 2026-04 unverdicted novelty 6.0

    EgoMind activates spatial cognition in MLLMs via linguistic Role-Play Caption and Progressive Spatial Analysis, reaching competitive results on VSI-Bench, SPAR-Bench, SITE-Bench and SPBench with only 5K SFT and 20K RL...

  19. Valley3: Scaling Omni Foundation Models for E-commerce

    cs.AI 2026-05 unverdicted novelty 4.0

    Valley3 is an omni MLLM for e-commerce that uses a four-stage pre-training pipeline plus post-training for controllable reasoning and agentic search, outperforming baselines on e-commerce benchmarks while staying comp...

Reference graph

Works this paper leans on

77 extracted references · 77 canonical work pages · cited by 19 Pith papers · 8 internal anchors

  1. [1]

    Glm-4.5v and glm-4.1v-thinking: Towards versatile multimodal reasoning with scalable reinforcement learning, 2025

    GLM-V Team, Wenyi Hong, Wenmeng Yu, Xiaotao Gu, Guo Wang, Guobing Gan, Haomiao Tang, Jiale Cheng, Ji Qi, Junhui Ji, Lihang Pan, Shuaiqi Duan, Weihan Wang, Yan Wang, Yean Cheng, Zehai He, Zhe Su, Zhen Yang, Ziyang Pan, Aohan Zeng, Baoxu Wang, Bin Chen, Boyan Shi, Changyu Pang, Chenhui Zhang, Da Yin, Fan Yang, Guoqing Chen, Jiazheng Xu, Jiale Zhu, Jiali Che...

  2. [2]

    Kimi Team, Angang Du, Bohong Yin, Bowei Xing, Bowen Qu, Bowen Wang, Cheng Chen, Chenlin Zhang, Chenzhuang Du, Chu Wei, Congcong Wang, Dehao Zhang, Dikang Du, Dongliang Wang, Enming Yuan, Enzhe Lu, Fang Li, Flood Sung, Guangda Wei, Guokun Lai, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haoning Wu, Haotian Yao, Haoyu Lu, Heng Wang, Hongcheng Gao, Huabi...

  3. [3]

    Mimo-vl technical report, 2025

    LLM-Core Xiaomi, Zihao Yue, Zhenru Lin, Yifan Song, Weikun Wang, Shuhuai Ren, Shuhao Gu, Shicheng Li, Peidian Li, Liang Zhao, Lei Li, Kainan Bao, Hao Tian, Hailin Zhang, Gang Wang, Dawei Zhu, Cici, Chenhong He, Bowen Ye, Bowen Shen, Zihan Zhang, Zihan Jiang, Zhixian Zheng, Zhichao Song, Zhenbo Luo, Yue Yu, Yudong Wang, Yuanyuan Tian, Yu Tu, Yihan Yan, Yi ...

  4. [4]

    Openai platform chatgpt-4o, 2025

    OpenAI. Openai platform chatgpt-4o, 2025. Accessed: 2025-08-03

  5. [5]

    Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond, 2023

    Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond, 2023

  6. [6]

    MiniCPM-V: A GPT-4V Level MLLM on Your Phone

    Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. MiniCPM-V: A GPT-4V level MLLM on your phone. ArXiv preprint, abs/2408.01800, 2024

  7. [7]

    Qwen2.5-vl technical report, 2025

    Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025

  8. [8]

    Ovis2.5 Technical Report

    Shiyin Lu, Yang Li, Yu Xia, Yuwei Hu, Shanshan Zhao, Yanqing Ma, Zhichao Wei, Yinglun Li, Lunhao Duan, Jianshan Zhao, Yuxuan Han, Haijun Li, Wanying Chen, Junke Tang, Chengkun Hou, Zhixing Du, Tianli Zhou, Wenjie Zhang, Huping Ding, Jiahe Li, Wen Li, Gui Hu, Yiliang Gu, Siran Yang, Jiamang Wang, Hailong Sun, Yibo Wang, Hui Sun, Jinlong Huang, Yuping He...

  9. [9]

    InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models

    Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, et al. InternVL3: Exploring advanced training and test-time recipes for open-source multimodal models. ArXiv preprint, abs/2504.10479, 2025

  10. [10]

    Flash-vl 2b: Optimizing vision-language model performance for ultra-low latency and high throughput

    Bo Zhang, Shuo Li, Runhe Tian, Yang Yang, Jixin Tang, Jinhao Zhou, and Lin Ma. Flash-vl 2b: Optimizing vision-language model performance for ultra-low latency and high throughput. arXiv preprint arXiv:2505.09498, 2025

  11. [11]

    Gemini 2.0: Our latest, most capable ai model yet, 2024

    Google DeepMind. Gemini 2.0: Our latest, most capable ai model yet, 2024. First Gemini 2.0 Flash announced December 11, 2024; multimodal support for text, image, audio, native tool use

  12. [12]

    DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai D...

  13. [13]

    Llava-uhd: an lmm perceiving any aspect ratio and high- resolution images

    Zonghao Guo, Ruyi Xu, Yuan Yao, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, and Gao Huang. Llava-uhd: an lmm perceiving any aspect ratio and high- resolution images. InEuropean Conference on Computer Vision, pages 390–406. Springer, 2024

  14. [14]

    Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis, 2024

    Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, Peixian Chen, Yanwei Li, Shaohui Lin, Sirui Zhao, Ke Li, Tong Xu, Xiawu Zheng, Enhong Chen, Caifeng Shan, Ran He, and Xing Sun. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis, 2024

  15. [15]

    MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies

    Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, et al. MiniCPM: Unveiling the potential of small language models with scalable training strategies. ArXiv preprint, abs/2404.06395, 2024

  16. [16]

    LAION-5B: an open large-scale dataset for training next generation image-text models

    Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5B: an open large-scale dataset for training next generation image-text model...

  17. [17]

    Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022

    Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022

  18. [18]

    Learning transferable visual models from natural language supervision

    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proc. of ICML, volume 139 of Proceedings of Machine Learn...

  19. [19]

    Capsfusion: Rethinking image-text data at scale

    Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Yue Cao, Xinlong Wang, and Jingjing Liu. Capsfusion: Rethinking image-text data at scale. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 14022–14032. IEEE, 2024

  20. [20]

    Omnicorpus: A unified multimodal corpus of 10 billion-level images interleaved with text. ArXiv preprint, abs/2406.08418, 2024

    Qingyun Li, Zhe Chen, Weiyun Wang, Wenhai Wang, Shenglong Ye, Zhenjiang Jin, Guanzhou Chen, Yinan He, Zhangwei Gao, Erfei Cui, et al. Omnicorpus: A unified multimodal corpus of 10 billion-level images interleaved with text. ArXiv preprint, abs/2406.08418, 2024

  21. [21]

    MINT-1T: scaling open-source multimodal data by 10x: A multimodal dataset with one trillion tokens

    Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Guha, Sheng Shen, Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, and Ludwig Schmidt. MINT-1T: scaling open-source multimodal data by 10x: A multimodal dataset with one trillion tokens. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jaku...

  22. [22]

    Synthetic data for text localisation in natural images

    Ankush Gupta, Andrea Vedaldi, and Andrew Zisserman. Synthetic data for text localisation in natural images. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2315–2324. IEEE Computer Society, 2016

  23. [23]

    Frozen in time: A joint video and image encoder for end-to-end retrieval

    Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 1708–1718. IEEE, 2021

  24. [24]

    Vript: A video is worth thousands of words

    Dongjie Yang, Suyuan Huang, Chengqiang Lu, Xiaodong Han, Haoxin Zhang, Yan Gao, Yao Hu, and Hai Zhao. Vript: A video is worth thousands of words. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Infor...

  25. [25]

    OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation

    Kepan Nan, Rui Xie, Penghao Zhou, Tiehan Fan, Zhenheng Yang, Zhijie Chen, Xiang Li, Jian Yang, and Ying Tai. OpenVid-1M: A large-scale high-quality dataset for text-to-video generation. ArXiv preprint, abs/2407.02371, 2024

  26. [26]

    Wikipedia, the free encyclopedia, 2025

    Wikipedia contributors. Wikipedia, the free encyclopedia, 2025. Accessed: 2025-09-14; CC BY-SA 4.0.

  27. [27]

    OpenThoughts: Data Recipes for Reasoning Models

    Etash Guha, Ryan Marten, Sedrick Keh, Negin Raoof, Georgios Smyrnis, Hritik Bansal, Marianna Nezhurina, Jean Mercat, Trung Vu, Zayne Sprague, et al. OpenThoughts: Data recipes for reasoning models. ArXiv preprint, abs/2506.04178, 2025

  28. [28]

    Rlaif-v: Open-source ai feedback leads to super gpt-4v trustworthiness, 2024

    Tianyu Yu, Haoye Zhang, Qiming Li, Qixin Xu, Yuan Yao, Da Chen, Xiaoman Lu, Ganqu Cui, Yunkai Dang, Taiwen He, Xiaocheng Feng, Jun Song, Bo Zheng, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. Rlaif-v: Open-source ai feedback leads to super gpt-4v trustworthiness, 2024

  29. [29]

    Rlpr: Extrapolating rlvr to general domains without verifiers, 2025

    Tianyu Yu, Bo Ji, Shouli Wang, Shu Yao, Zefan Wang, Ganqu Cui, Lifan Yuan, Ning Ding, Yuan Yao, Zhiyuan Liu, Maosong Sun, and Tat-Seng Chua. Rlpr: Extrapolating rlvr to general domains without verifiers, 2025

  30. [30]

    G-llava: Solving geometric problem with multi-modal large language model, 2023

    Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong. G-llava: Solving geometric problem with multi-modal large language model, 2023

  31. [31]

    R-cot: Reverse chain-of-thought problem generation for geometric reasoning in large multimodal models

    Linger Deng, Yuliang Liu, Bohan Li, Dongliang Luo, Liang Wu, Chengquan Zhang, Pengyuan Lyu, Ziyang Zhang, Gang Zhang, Errui Ding, et al. R-cot: Reverse chain-of-thought problem generation for geometric reasoning in large multimodal models. ArXiv preprint, abs/2410.17885, 2024

  32. [32]

    Clevr-math: A dataset for compositional language, visual and mathematical reasoning, 2022

    Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning, 2022

  33. [33]

    INFOTABS: Inference on tables as semi-structured data

    Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. INFOTABS: Inference on tables as semi-structured data. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault, editors, Proc. of ACL, pages 2309–2324, Online, 2020. Association for Computational Linguistics

  34. [34]

    Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning

    Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In Proc. of ICLR. OpenReview.net, 2023

  35. [35]

    Compositional semantic parsing on semi-structured tables

    Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Chengqing Zong and Michael Strube, editors, Proc. of ACL, pages 1470–1480, Beijing, China, 2015. Association for Computational Linguistics

  36. [36]

    Tabfact: A large-scale dataset for table-based fact verification

    Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. Tabfact: A large-scale dataset for table-based fact verification. In Proc. of ICLR. OpenReview.net, 2020

  37. [37]

    TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance

    Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Proc. of ACL, pages 3277–3287, Online, 2021. Association for Computational Linguistics

  38. [38]

    FigureQA: An annotated figure dataset for visual reasoning, 2018

    Samira Ebrahimi Kahou, Adam Atkinson, Vincent Michalski, Ákos Kádár, Adam Trischler, and Yoshua Bengio. FigureQA: An annotated figure dataset for visual reasoning, 2018

  39. [39]

    Multimodal ArXiv: A dataset for improving scientific comprehension of large vision-language models

    Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal ArXiv: A dataset for improving scientific comprehension of large vision-language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proc. of ACL, pages 14369–14387, Bangkok, Thailand, 2024. Association for Computational Linguistics

  40. [40]

    DVQA: Understanding data visualizations via question answering

    Kushal Kafle, Brian L. Price, Scott Cohen, and Christopher Kanan. DVQA: Understanding data visualizations via question answering. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018

  42. [42]

    Visual-webinstruct: Scaling up multimodal instruction data through web search. ArXiv preprint, abs/2503.10582, 2025

    Yiming Jia, Jiachen Li, Xiang Yue, Bo Li, Ping Nie, Kai Zou, and Wenhu Chen. Visual-webinstruct: Scaling up multimodal instruction data through web search. ArXiv preprint, abs/2503.10582, 2025

  43. [43]

    Llama-Nemotron: Efficient reasoning models, 2025

    Akhiad Bercovich, Itay Levy, Izik Golan, Mohammad Dabbah, Ran El-Yaniv, Omri Puny, Ido Galil, Zach Moshe, Tomer Ronen, Najeeb Nabwani, Ido Shahaf, Oren Tropp, Ehud Karpas, Ran Zilberstein, Jiaqi Zeng, Soumye Singhal, Alexander Bukharin, Yian Zhang, Tugrul Konuk, Gerald Shen, Ameya Sunil Mahabaleshwarkar, Bilal Kartal, Yoshi Suhara, Olivier Delalleau, Ziji...

  44. [44]

    A multi-dimensional constraint framework for evaluating and improving instruction following in large language models, 2025

    Junjie Ye, Caishuang Huang, Zhuohan Chen, Wenjie Fu, Chenyuan Yang, Leyi Yang, Yilong Wu, Peng Wang, Meng Zhou, Xiaolong Yang, Tao Gui, Qi Zhang, Zhongchao Shi, Jianping Fan, and Xuanjing Huang. A multi-dimensional constraint framework for evaluating and improving instruction following in large language models, 2025

  45. [45]

    DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024

    Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024

  46. [46]

    Skywork-vl reward: An effective reward model for multimodal understanding and reasoning, 2025

    Xiaokun Wang, Peiyu Wang, Jiangbo Pei, Wei Shen, Yi Peng, Yunzhuo Hao, Weijie Qiu, Ai Jian, Tianyidan Xie, Xuchen Song, Yang Liu, and Yahui Zhou. Skywork-vl reward: An effective reward model for multimodal understanding and reasoning, 2025

  47. [47]

    Direct preference optimization: Your language model is secretly a reward model

    Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference...

  48. [48]

    MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI

    Xiang Yue, Yuansheng Ni, Tianyu Zheng, Kai Zhang, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for exp...

  49. [49]

    Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts

    Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In Proc. of ICLR. OpenReview.net, 2024

  50. [50]

    A diagram is worth a dozen images

    Aniruddha Kembhavi, Michael Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In European Conference on Computer Vision (ECCV), 2016

  51. [51]

    Mathverse: Does your multi-modal LLM truly see the diagrams in visual math problems?

    Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal LLM truly see the diagrams in visual math problems? In European Conference on Computer Vision, pages 169–186. Springer, 2024

  52. [52]

    LogicVista: Multimodal LLM logical reasoning benchmark in visual contexts

    Yijia Xiao, Edward Sun, Tianyu Liu, and Wei Wang. LogicVista: Multimodal LLM logical reasoning benchmark in visual contexts. ArXiv preprint, abs/2407.04973, 2024

  53. [53]

    Can MLLMs reason in multimodality? EMMA: An enhanced multimodal reasoning benchmark

    Yunzhuo Hao, Jiawei Gu, Huichen Will Wang, Linjie Li, Zhengyuan Yang, Lijuan Wang, and Yu Cheng. Can MLLMs reason in multimodality? EMMA: An enhanced multimodal reasoning benchmark. ArXiv preprint, abs/2501.05444, 2025

  54. [54]

    OCRBench: On the hidden mystery of OCR in large multimodal models

    Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, and Xiang Bai. OCRBench: On the hidden mystery of OCR in large multimodal models. Science China Information Sciences, 2024

  55. [55]

    ChartQA: A benchmark for question answering about charts with visual and logical reasoning

    Ahmed Masry, Xuan Long Do, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. ChartQA: A benchmark for question answering about charts with visual and logical reasoning. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Findings of the Association for Computational Linguistics: ACL 2022, pages 2263–2279, Dublin, Ireland, 2022. Association for...

  56. [56]

    TextVQA: Towards VQA requiring reasoning about text

    Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. TextVQA: Towards VQA requiring reasoning about text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019

  57. [57]

    DocVQA: A dataset for VQA on document images

    Minesh Mathew, Dimosthenis Karatzas, R. Manmatha, and C. V. Jawahar. DocVQA: A dataset for VQA on document images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021

  58. [58]

    Omnidocbench: Benchmarking diverse pdf document parsing with comprehensive annotations, 2024

    Linke Ouyang, Yuan Qu, Hongbin Zhou, Jiawei Zhu, Rui Zhang, Qunshu Lin, Bin Wang, Zhiyuan Zhao, Man Jiang, Xiaomeng Zhao, Jin Shi, Fan Wu, Pei Chu, Minghao Liu, Zhenxiang Li, Chao Xu, Bo Zhang, Botian Shi, Zhongying Tu, and Conghui He. Omnidocbench: Benchmarking diverse pdf document parsing with comprehensive annotations, 2024

  59. [59]

    HallusionBench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models

    Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. HallusionBench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In IEEE/CVF Conference on Computer Vision and Pattern Recogni...

  60. [60]

    Object hallucination in image captioning

    Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii, editors, Proc. of EMNLP, pages 4035–4045, Brussels, Belgium, 2018. Association for Computational Linguistics

  61. [61]

    Aligning Large Multimodal Models with Factually Augmented RLHF

    Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented RLHF. ArXiv preprint, abs/2309.14525, 2023

  62. [62]

    Mantis: Interleaved multi-image instruction tuning

    Dongfu Jiang, Xuan He, Huaye Zeng, Cong Wei, Max Ku, Qian Liu, and Wenhu Chen. Mantis: Interleaved multi-image instruction tuning. ArXiv preprint, abs/2405.01483, 2024

  63. [63]

    Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI

    Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, Jiayi Lei, Quanfeng Lu, Runjian Chen, Peng Xu, Renrui Zhang, Haozhe Zhang, Peng Gao, Yali Wang, Yu Qiao, Ping Luo, Kaipeng Zhang, and Wenqi Shao. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models toward...

  64. [64]

    Grok-1.5 vision preview. https://x.ai/news/grok-1.5v, 2024

    xAI. Grok-1.5 vision preview. https://x.ai/news/grok-1.5v, 2024

  65. [65]

    MM-IFEngine: Towards multimodal instruction following

    Shengyuan Ding, Shenxi Wu, Xiangyu Zhao, Yuhang Zang, Haodong Duan, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Dahua Lin, and Jiaqi Wang. MM-IFEngine: Towards multimodal instruction following. ArXiv preprint, abs/2504.07957, 2025

  66. [66]

    Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis

    Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. 2025

  67. [67]

    LVBench: An extreme long video understanding benchmark

    Weihan Wang, Zehai He, Wenyi Hong, Yean Cheng, Xiaohan Zhang, Ji Qi, Shiyu Huang, Bin Xu, Yuxiao Dong, Ming Ding, and Jie Tang. LVBench: An extreme long video understanding benchmark. ArXiv preprint, abs/2406.08035, 2024

  68. [68]

    Mlvu: Benchmarking multi-task long video understanding

    Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Zhengyang Liang, Shitao Xiao, Minghao Qin, Xi Yang, Yongping Xiong, Bo Zhang, et al. Mlvu: Benchmarking multi-task long video understanding. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 13691–13701, 2025

  69. [69]

    Longvideobench: A benchmark for long-context interleaved video-language understanding

    Haoning Wu, Dongxu Li, Bei Chen, and Junnan Li. Longvideobench: A benchmark for long-context interleaved video-language understanding. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information P...

  70. [70]

    Motionbench: Benchmarking and improving fine-grained video motion understanding for vision language models, 2024

    Wenyi Hong, Yean Cheng, Zhuoyi Yang, Weihan Wang, Lefan Wang, Xiaotao Gu, Shiyu Huang, Yuxiao Dong, and Jie Tang. Motionbench: Benchmarking and improving fine-grained video motion understanding for vision language models, 2024

  71. [71]

    Favor-bench: A comprehensive benchmark for fine-grained video motion understanding, 2025

    Chongjun Tu, Lin Zhang, Pengtao Chen, Peng Ye, Xianfang Zeng, Wei Cheng, Gang Yu, and Tao Chen. Favor-bench: A comprehensive benchmark for fine-grained video motion understanding, 2025

  72. [72]

    Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass, 2023

    OpenCompass Contributors. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass, 2023

  73. [73]

    Mm-vet: Evaluating large multimodal models for integrated capabilities

    Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024

  74. [74]

    Are we on the right way for evaluating large vision-language models?

    Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large vision-language models? In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information P...

  75. [75]

    Mme: A comprehensive evaluation benchmark for multimodal large language models, 2023

    Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2023

  76. [76]

    MMBench: Is your multi-modal model an all-around player?

    Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. MMBench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216–233. Springer, 2024

  77. [77]

    DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

    Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. ArXiv preprint, abs/2402.03300, 2024

A Implementation Details

Pre-training follows a WSD schedule [15] with a fixed learning rate of 5×10⁻⁵ in the s...
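The appendix fragment above references a WSD (warmup-stable-decay) learning-rate schedule with a fixed rate in the stable phase. A minimal sketch of such a schedule, assuming a linear warmup and a linear final decay; the function name, phase fractions, and decay shape here are illustrative assumptions, not values from the paper:

```python
def wsd_lr(step, total_steps, peak_lr=5e-5, warmup_frac=0.01, decay_frac=0.1):
    """Warmup-Stable-Decay learning-rate schedule (sketch).

    peak_lr mirrors the 5e-5 stable-phase rate quoted in the appendix;
    warmup_frac and decay_frac are illustrative assumptions.
    """
    warmup_steps = max(1, int(total_steps * warmup_frac))
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:
        # linear warmup from 0 up to peak_lr
        return peak_lr * step / warmup_steps
    if step < decay_start:
        # stable phase: learning rate held fixed
        return peak_lr
    # decay phase: linear anneal from peak_lr down to 0
    return peak_lr * (total_steps - step) / max(1, total_steps - decay_start)
```

The key property, relative to cosine schedules, is the long flat stable phase: checkpoints taken before the decay can be resumed or branched (e.g. for annealing on different data mixtures) without re-planning the whole schedule.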