Pith · machine review for the scientific record

arxiv: 2605.13130 · v1 · submitted 2026-05-13 · 💻 cs.AI

Recognition: 2 theorem links · Lean Theorem

GRACE: Gradient-aligned Reasoning Data Curation for Efficient Post-training

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 20:30 UTC · model grok-4.3

classification 💻 cs.AI
keywords reasoning data curation · gradient alignment · step-level scoring · post-training efficiency · trajectory consistency · LLM fine-tuning · data subset selection · internal optimization signals

The pith

GRACE scores each reasoning step by its alignment with the answer gradient and trajectory consistency to select data subsets that match or exceed full performance with 5-20 percent of the samples.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Existing curation scores whole reasoning traces, but steps inside them contribute unequally to learning. GRACE instead treats each trace as a chain of optimization events and scores every step using two internal signals: how closely its update direction matches the gradient toward the correct answer, and how well it fits the preceding steps. A single-forward-pass proxy turns token-level signals into step-level alignment estimates without reward models or labels. When these scores select subsets for post-training Qwen3-VL-2B-Instruct on MMathCoT-1M, 20 percent of the data reaches 108.8 percent of full-set performance and 5 percent retains 100.2 percent, with the same subsets transferring to other backbones.

Core claim

GRACE views each reasoning trace as a sequence of optimization events and assigns every step a score from two complementary signals: its alignment with the answer-oriented gradient direction and its consistency with the preceding trajectory. Step scores are aggregated to sample level for subset selection using only the model's internal signals. A representation-level gradient proxy computes the alignment estimate from token-level upstream activations in one forward pass, making the method scalable without external reward models or step annotations. Post-training on the resulting subsets yields performance at or above the full-data baseline with 5-20 percent of the samples.
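As a sketch of the mechanism just described, step scoring and sample-level aggregation could look like the following. The cosine similarity, the equal-weight blend `alpha`, and mean aggregation are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def grace_step_scores(step_dirs, answer_dir, alpha=0.5):
    """Score each step by (1) cosine alignment with an answer-oriented
    direction and (2) consistency with the mean of the preceding steps.
    Function names and the blending weight `alpha` are hypothetical."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    scores = []
    for i, d in enumerate(step_dirs):
        answer_align = cos(d, answer_dir)
        if i == 0:
            traj_consist = 1.0  # first step has no preceding trajectory
        else:
            traj_consist = cos(d, np.mean(step_dirs[:i], axis=0))
        scores.append(alpha * answer_align + (1 - alpha) * traj_consist)
    return scores

def sample_value(scores):
    """Aggregate step scores to one sample-level value for ranking
    (mean here; the paper's aggregation rule may differ)."""
    return float(np.mean(scores))
```

Under this reading, a trace whose steps all point toward the answer direction outscores one whose steps wander, which is exactly the ranking signal the curation needs.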

What carries the argument

Representation-level gradient proxy that estimates step-level alignment with the answer-oriented gradient from token-level upstream signals in a single forward pass.
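Read literally from the pipeline description, the proxy groups one forward pass of token-level signals by each step's token set and averages them into a per-step vector. The averaging operator here is an assumption; the paper's exact reduction may differ:

```python
import numpy as np

def step_gradient_proxies(token_signals, token_sets):
    """Hypothetical rendering of the representation-level proxy.

    token_signals: (num_tokens, dim) array of upstream signals from a
        single forward pass of a fixed scoring model.
    token_sets: list of token-index lists, one per reasoning step.
    Returns one proxy vector per step (mean over that step's tokens)."""
    return [np.mean(token_signals[idx], axis=0) for idx in token_sets]
```

Each proxy vector can then be scored against the answer token set's vector without any backward pass, which is what makes the method a single-forward-pass procedure.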

If this is right

  • Subsets selected by GRACE reach 108.8 percent of full-data performance using only 20 percent of the samples.
  • Five-percent subsets retain 100.2 percent of full-data performance.
  • The same subsets transfer effectively across different model backbones without retraining the selector.
  • Curation requires no external reward models or human step annotations.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Internal gradient signals may contain enough information to guide data selection in other post-training regimes such as instruction following or tool use.
  • Applying the same step-level filter during synthetic data generation could reduce the volume of traces that need to be created in the first place.
  • The method's reliance on a single forward pass suggests it could be inserted into online data pipelines that continually filter incoming traces.
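The last of these extensions can be made concrete as a minimal streaming filter. `score_fn` and the `threshold` value are placeholders, since the paper describes batch curation rather than an online pipeline:

```python
def filter_stream(traces, score_fn, threshold=0.6):
    """Hypothetical online filter: keep incoming traces whose
    sample-level score (computable in one forward pass) clears a
    threshold. Both the scoring function and the threshold are
    assumptions, not part of the paper's method."""
    for trace in traces:
        if score_fn(trace) >= threshold:
            yield trace
```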

Load-bearing premise

The representation-level proxy must faithfully reflect each step's true contribution to moving the model toward the correct answer without requiring full back-propagation or external supervision.

What would settle it

Training the same model on a random 5 percent subset of MMathCoT-1M and measuring whether its accuracy on held-out math reasoning benchmarks falls significantly below the GRACE-selected 5 percent subset.
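That deciding experiment reads as a straightforward harness. `train_eval` below is a stand-in for the full post-training and benchmark loop, and every name is hypothetical:

```python
import random

def compare_subsets(dataset, scores, train_eval, frac=0.05, seed=0):
    """Sketch of the settling experiment: train on the top-`frac`
    fraction ranked by curation scores vs a size-matched random subset,
    then compare held-out accuracy. `train_eval(subset) -> accuracy`
    abstracts the post-training + benchmark pipeline."""
    k = max(1, int(frac * len(dataset)))
    ranked = [x for _, x in sorted(zip(scores, dataset), key=lambda p: -p[0])]
    curated_subset = ranked[:k]
    random_subset = random.Random(seed).sample(dataset, k)
    return train_eval(curated_subset), train_eval(random_subset)
```

If the curated 5 percent does not beat the random 5 percent by a clear margin on held-out math benchmarks, the gradient-alignment signal is doing little work.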

Figures

Figures reproduced from arXiv: 2605.13130 by Jianghong Ma, Junjie Li, Ningxuan Ma, Xiaofeng Zhang, Ziao Wang.

Figure 1
Figure 1: Motivation and empirical effect of GRACE. Left: Reasoning traces are viewed as sequences …
Figure 2
Figure 2: The GRACE curation pipeline. (1) Given an input and its reasoning trace, GRACE identifies token sets for each step and the answer. (2) A fixed scoring model extracts token-level upstream signals in one forward pass and groups them by token sets. (3) Grouped signals are averaged into gradient proxies and scored by answer and trajectory alignment. (4) Step scores are aggregated into a sample value for ranking …
Figure 3
Figure 3: Transferability and training efficiency across backbones
Original abstract

Existing reasoning data curation pipelines score whole samples, treating every intermediate step as equally valuable. In reality, steps within a trace contribute very unevenly, and selecting reasoning data well requires assessing them individually. We present GRACE, a gradient-aligned curation method that views each reasoning trace as a sequence of optimization events and scores every step by two complementary signals: its alignment with the answer-oriented gradient direction, and its consistency with the preceding reasoning trajectory. Step-level scores are aggregated into a sample-level value for subset selection, using only the model's internal optimization signals and no external reward models or step annotations. To make this scalable, GRACE introduces a representation-level gradient proxy that estimates step-level alignment from token-level upstream signals in a single forward pass. Post-training Qwen3-VL-2B-Instruct on MMathCoT-1M, GRACE reaches 108.8% of the full-data performance with 20% of the data and retains 100.2% with only 5%, with subsets that transfer effectively across model backbones.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces GRACE, a curation method that scores individual reasoning steps by their alignment with the answer-oriented gradient direction and their consistency with the preceding trajectory. Scores are aggregated to select high-value subsets; a representation-level proxy enables single-forward-pass computation without external rewards or step annotations. On post-training Qwen3-VL-2B-Instruct with MMathCoT-1M, 20% and 5% GRACE subsets reach 108.8% and 100.2% of full-data performance, with cross-backbone transfer.

Significance. If the proxy is shown to track true gradient alignment and the empirical gains are robust, GRACE would offer a practical, annotation-free route to data-efficient reasoning post-training, reducing the data volume needed while preserving or exceeding full-data results.

major comments (2)
  1. [Abstract] The headline claims (108.8% of full-data performance with 20% data, 100.2% with 5%) are reported without any baseline comparison (random selection, length-based, or prior curation methods), statistical significance tests, or ablation on the two signals, so the attribution of gains specifically to gradient alignment remains unsupported.
  2. [Method] Gradient proxy description: the representation-level proxy is presented as a faithful, one-pass surrogate for step-level alignment with the answer-oriented gradient, yet no correlation coefficient, rank agreement, or direct back-propagation comparison on held-out steps is supplied; this validation is load-bearing for the optimality claim.
minor comments (1)
  1. [Abstract] The dataset name 'MMathCoT-1M' is used without stating its total size or construction details, which are needed to interpret the 5%/20% fractions.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and will revise the manuscript accordingly to strengthen the empirical support and validation.

read point-by-point responses
  1. Referee: [Abstract] The headline claims (108.8% of full-data performance with 20% data, 100.2% with 5%) are reported without any baseline comparison (random selection, length-based, or prior curation methods), statistical significance tests, or ablation on the two signals, so the attribution of gains specifically to gradient alignment remains unsupported.

    Authors: We agree that the abstract would be strengthened by explicit references to baselines and statistical tests. The full manuscript already contains comparisons to random selection and length-based curation in the experimental results, along with ablations isolating the gradient alignment and trajectory consistency signals. We will revise the abstract to mention these baselines, report statistical significance (e.g., via p-values across runs), and clarify the attribution to the proposed signals. revision: yes

  2. Referee: [Method] Gradient proxy description: the representation-level proxy is presented as a faithful, one-pass surrogate for step-level alignment with the answer-oriented gradient, yet no correlation coefficient, rank agreement, or direct back-propagation comparison on held-out steps is supplied; this validation is load-bearing for the optimality claim.

    Authors: We acknowledge that quantitative validation of the proxy is essential to support its use as a surrogate. While the manuscript motivates the proxy via its design as a representation-level approximation, we did not include direct correlation analysis in the initial version. In the revision we will add a dedicated validation subsection reporting Pearson correlation coefficients, rank agreement (e.g., Kendall tau), and comparisons to direct back-propagation on held-out steps to confirm the proxy's faithfulness. revision: yes
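The validation the rebuttal promises can be sketched with two standard statistics over proxy scores and reference alignments from full back-propagation. Nothing here is the paper's code; it assumes two equal-length score lists:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between proxy scores and reference alignments."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def kendall_tau(a, b):
    """Kendall rank correlation (tau-a): net agreement of pairwise orderings."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

High values of both on held-out steps would support the proxy's faithfulness; low rank agreement in particular would undercut its use for subset ranking.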

Circularity Check

0 steps flagged

No significant circularity: selection uses independent internal signals; performance gains are empirical

full rationale

The GRACE method scores reasoning steps via two internal signals (alignment with answer-oriented gradient direction and trajectory consistency) computed from the model's own forward-pass representations. These scores are aggregated to select subsets, which are then used for post-training; final performance numbers (e.g., 108.8% of full-data with 20% subset) are measured on external benchmarks after training. No equation reduces the reported performance to the selection criterion by construction, no parameters are fitted to the target metric, and no self-citation chain or imported uniqueness theorem carries the central claim. The proxy is an engineering approximation whose fidelity is an empirical question, not a definitional one. The derivation chain remains open and falsifiable.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Only the abstract is available, so the ledger reflects elements directly implied by the description. The method rests on standard gradient-based optimization assumptions in LLM training rather than new postulates.

axioms (1)
  • domain assumption Gradient direction computed during training indicates the value of an individual reasoning step toward the final answer
    This assumption underpins the alignment signal and the claim that no external reward model is needed.

pith-pipeline@v0.9.0 · 5491 in / 1298 out tokens · 56406 ms · 2026-05-14T20:30:21.939493+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear

    Relation between the paper passage and the cited Recognition theorem:

    "scores every step by two complementary signals: its alignment with the answer-oriented gradient direction, and its consistency with the preceding reasoning trajectory... representation-level gradient proxy that estimates step-level alignment from token-level upstream signals in a single forward pass"

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

37 extracted references · 10 canonical work pages · 2 internal anchors

  1. [1]

    Chain-of-thought prompting elicits reasoning in large language models

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Assoc...

  2. [3]

    Unlocking multimodal mathematical reasoning via process reward model

    Ruilin Luo, Zhuofan Zheng, Lei Wang, Yifan Wang, Xinzhe Ni, Zicheng Lin, Songtao Jiang, Yiyao Yu, Chufan Shi, Ruihang Chu, Jin Zeng, and Yujiu Yang. Unlocking multimodal mathematical reasoning via process reward model. In D. Belgrave, C. Zhang, H. Lin, R. Pascanu, P. Koniusz, M. Ghassemi, and N. Chen, editors, Advances in Neural Information Processing ...

  3. [4]

    LIMA: less is more for alignment

    Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. LIMA: less is more for alignment. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Informati...

  4. [5]

    Self-consistency improves chain of thought reasoning in language models

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?...

  5. [6]

    Training language models to follow instructions with human feedback

    Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human fee...

  6. [7]

    ICONS: influence consensus for vision-language data selection

    Xindi Wu, Mengzhou Xia, Rulin Shao, Zhiwei Deng, Pang Wei Koh, and Olga Russakovsky. ICONS: influence consensus for vision-language data selection. CoRR, abs/2501.00654, 2025

  7. [8]

    Less is more: High-value data selection for visual instruction tuning

    Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, and Ji-Rong Wen. Less is more: High-value data selection for visual instruction tuning. CoRR, abs/2403.09559, 2024.

  8. [9]

    Concept-skill transferability-based data selection for large vision-language models

    Jaewoo Lee, Boyang Li, and Sung Ju Hwang. Concept-skill transferability-based data selection for large vision-language models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 5060–5080, 2024

  9. [12]

    LESS: selecting influential data for targeted instruction tuning

    Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. LESS: selecting influential data for targeted instruction tuning. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024

  10. [14]

    Estimating training data influence by tracing gradient descent

    Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. Estimating training data influence by tracing gradient descent. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, ...

  11. [15]

    Understanding black-box predictions via influence functions

    Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, Proceedings of Machine Learning Research, pages 1885–1894. PMLR, 2017. URL http://proceedings.mlr.pr...

  12. [16]

    Deep batch active learning by diverse, uncertain gradient lower bounds

    Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=ryghZJBKPS

  13. [17]

    TRAK: attributing model behavior at scale

    Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. TRAK: attributing model behavior at scale. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, Proceedings o...

  14. [18]

    Hallusionbench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models

    Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. Hallusionbench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In IEEE/CVF Conference on Computer Vision and Pattern Recogni...

  15. [19]

    Learn to explain: Multimodal reasoning via thought chains for science question answering

    Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing System...

  16. [20]

    Mmbench: Is your multi-modal model an all-around player?

    Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player? In Ales Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, and Gül Varol, editors, Computer Vision - ECCV 2024 - 18th European Confe...

  17. [22]

    Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask AGI

    Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, Jiayi Lei, Quanfeng Lu, Runjian Chen, Peng Xu, Renrui Zhang, Haozhe Zhang, Peng Gao, Yali Wang, Yu Qiao, Ping Luo, Kaipeng Zhang, and Wenqi Shao. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models toward...

  18. [23]

    URL https://proceedings.mlr.press/v235/ying24a.html

  19. [24]

    Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts

    Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net,

  20. [25]

    URL https://openreview.net/forum?id=KUNzEQMWU7

  21. [26]

    Measuring multimodal mathematical reasoning with math-vision dataset

    Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual C...

  22. [27]

    Qwen2.5-vl, January 2025

    Qwen Team. Qwen2.5-vl, January 2025. URL https://qwenlm.github.io/blog/qwen2.5-vl/

  23. [28]

    Improved baselines with visual instruction tuning

    Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 26286–26296. IEEE, 2024. doi: 10.1109/CVPR52733.2024.02484. URL https://doi.org/10.1109/CVPR52733.2024.02484

  24. [29]

    Direct preference optimization: Your language model is secretly a reward model

    Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference o...

  25. [30]

    Let's Verify Step by Step

    Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023. URL https://arxiv.org/abs/2305.20050

  26. [31]

    Benchmarking Multimodal CoT Reward Model Stepwise by Visual Program

    Minghe Gao, Xuqi Liu, Zhongqi Yue, Yang Wu, Shuang Chen, Juncheng Li, Siliang Tang, Fei Wu, Tat-Seng Chua, and Yueting Zhuang. Benchmarking Multimodal CoT Reward Model Stepwise by Visual Program. In International Conference on Computer Vision, pages 1718–1728,

  27. [32]

    URL https://mlanthology.org/iccv/2025/gao2025iccv-benchmarking/

  28. [33]

    Representer point selection for explaining deep neural networks

    Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, and Pradeep K Ravikumar. Representer point selection for explaining deep neural networks. Advances in Neural Information Processing Systems, 31, 2018

  29. [34]

    Data shapley: Equitable valuation of data for machine learning

    Amirata Ghorbani and James Y. Zou. Data shapley: Equitable valuation of data for machine learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, Proceedings of Machine Learning Research, pages 2242–2251. PMLR, 2019. URL h...

  30. [35]

    Datamodels: Predicting predictions from training data

    Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting predictions from training data. In ICML, 2022

  31. [36]

    Self-evolved diverse data sampling for efficient instruction tuning

    Shengguang Wu, Keming Lu, Benfeng Xu, Junyang Lin, Qi Su, and Chang Zhou. Self-evolved diverse data sampling for efficient instruction tuning. CoRR, abs/2311.08182, 2023. doi: 10.48550/ARXIV.2311.08182. URL https://doi.org/10.48550/arXiv.2311.08182

  32. [37]

    Wizardlm: Empowering large pre-trained language models to follow complex instructions

    Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. Wizardlm: Empowering large pre-trained language models to follow complex instructions. In B. Kim, Y. Yue, S. Chaudhuri, K. Fragkiadaki, M. Khan, and Y. Sun, editors, International Conference on Learning Representations, volume 2024, pages 3074...

  33. [38]

    What makes good data for alignment? A comprehensive study of automatic data selection in instruction tuning

    Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data for alignment? A comprehensive study of automatic data selection in instruction tuning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=BTKAeLqLMw

  34. [39]

    Filter Images First, Generate Instructions Later: Pre-Instruction Data Selection for Visual Instruction Tuning

    Bardia Safaei, Faizan Siddiqui, Jiacong Xu, Vishal M. Patel, and Shao-Yuan Lo. Filter Images First, Generate Instructions Later: Pre-Instruction Data Selection for Visual Instruction Tuning. In 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14247–14256, Los Alamitos, CA, USA, June 2025. IEEE Computer Society. doi: 10.1...

  35. [40]

    Instruction mining: Instruction data selection for tuning large language models, 2024

    Yihan Cao, Yanbin Kang, Chi Wang, and Lichao Sun. Instruction mining: Instruction data selection for tuning large language models, 2024. URL https://arxiv.org/abs/2307.06290

  36. [41]

    SWIFT: A scalable lightweight infrastructure for fine-tuning

    Yuze Zhao, Jintao Huang, Jinghan Hu, Xingjun Wang, Yunlin Mao, Daoze Zhang, Zeyinzi Jiang, Zhikai Wu, Baole Ai, Ang Wang, Wenmeng Zhou, and Yingda Chen. SWIFT: A scalable lightweight infrastructure for fine-tuning. In Toby Walsh, Julie Shah, and Zico Kolter, editors, Thirty-Ninth AAAI Conference on Artificial Intelligence, Thirty-Seventh Conference on Inno...

  37. [42]

    Vlmevalkit: An open-source toolkit for evaluating large multi-modality models

    Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, and Kai Chen. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Jianfei Cai, Mohan S. Kankanhalli, Balakrishnan Prabhakaran, Susanne Boll, Ramanathan Subramanian, Liang Zheng, Vivek K. Singh, ...