Recognition: 2 theorem links
Memory-Efficient Looped Transformer: Decoupling Compute from Memory in Looped Language Models
Pith reviewed 2026-05-11 02:40 UTC · model grok-4.3
The pith
MELT shares one KV cache per layer across all reasoning loops and updates it with a learnable gate to keep memory constant.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
MELT maintains a single KV cache per layer that is shared across reasoning loops and updated over time via a learnable gating mechanism. It is obtained from a LoopLM starting model through chunk-wise training in two phases: an interpolated transition phase followed by attention-aligned distillation. The resulting models perform iterative reasoning at constant memory cost, match the performance of the original looped model, and use far less memory than architectures that retain separate caches per loop.
What carries the argument
The single shared KV cache per layer, updated by a learnable gating mechanism, which replaces per-iteration caches and removes linear growth in memory with reasoning depth.
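The shared-cache mechanism admits a compact sketch. The following is a minimal illustration only: the sigmoid gate, the convex blend of old and new cache entries, and the random projection weights are assumptions for the sketch, not the paper's actual parameterization.

```python
import numpy as np

def looped_forward_shared_cache(x, n_loops, seed=0):
    """Minimal sketch of a MELT-style shared KV cache: one K and one V
    per layer, blended in place by a sigmoid gate on each loop instead
    of appending a fresh cache per loop. Projections and gate weights
    are random stand-ins, not the paper's parameterization."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    W_k, W_v, W_g = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
    K, V = x @ W_k, x @ W_v          # shared cache; its shape never grows
    h = x
    for _ in range(n_loops):
        g = 1.0 / (1.0 + np.exp(-(h @ W_g)))  # learnable gate (sigmoid)
        K = g * K + (1.0 - g) * (h @ W_k)     # in-place blend: constant memory
        V = g * V + (1.0 - g) * (h @ W_v)
        logits = h @ K.T
        logits -= logits.max(axis=-1, keepdims=True)  # stable softmax
        attn = np.exp(logits)
        attn /= attn.sum(axis=-1, keepdims=True)
        h = attn @ V                           # one reasoning iteration
    return h, K.shape

# The cache footprint is identical at any reasoning depth:
_, shape_3 = looped_forward_shared_cache(np.ones((4, 8)), n_loops=3)
_, shape_12 = looped_forward_shared_cache(np.ones((4, 8)), n_loops=12)
assert shape_3 == shape_12 == (4, 8)
```

The point of the sketch is structural: `K` and `V` are overwritten rather than appended to, so the memory cost of the loop body is independent of `n_loops`.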
If this is right
- Reasoning depth can increase without any increase in peak memory beyond a standard transformer.
- Fine-tuned MELT models exceed the performance of non-looped LLMs of the same size on reasoning tasks.
- Memory consumption remains comparable to ordinary models and much lower than prior looped designs.
- Only a lightweight post-training procedure from an existing looped model is required.
- Iterative computation inside the embedding space becomes feasible at scales previously limited by cache size.
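The memory claims in these bullets reduce to simple accounting. A back-of-envelope sketch, with all model dimensions hypothetical:

```python
def kv_entries(n_layers, seq_len, n_heads, d_head, n_loops, shared=False):
    """Count the scalars held in the KV cache. A per-loop cache
    (Ouro-style) multiplies the footprint by the number of reasoning
    loops; a single shared cache (MELT-style) does not. The model
    dimensions used below are hypothetical."""
    per_layer = 2 * seq_len * n_heads * d_head  # K plus V
    return n_layers * per_layer * (1 if shared else n_loops)

# Hypothetical 24-layer model, 4096-token context, 16 heads of width 64:
for loops in (1, 4, 8):
    per_loop = kv_entries(24, 4096, 16, 64, loops)
    melt = kv_entries(24, 4096, 16, 64, loops, shared=True)
    print(f"loops={loops}: per-loop / shared ratio = {per_loop / melt:.0f}")
```

The ratio equals the loop count, which is exactly the linear growth the shared cache is designed to remove.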
Where Pith is reading between the lines
- The same shared-cache idea might extend to other recurrent or iterative model families that currently store growing state.
- Hardware with limited high-bandwidth memory could support much longer internal reasoning chains than before.
- Combining the constant-memory property with existing quantization or pruning methods could further reduce deployment costs.
- The gating update rule might be inspected to reveal how the model chooses what information to retain across steps.
Load-bearing premise
The gating mechanism combined with the two-phase chunk-wise training is enough to transfer full reasoning ability from the starting looped model without degradation.
What would settle it
Observing MELT memory usage that grows with the number of reasoning iterations, or finding that its accuracy on reasoning tasks falls below that of the starting LoopLM after the described training.
read the original abstract
Recurrent LLM architectures have emerged as a promising approach for improving reasoning, as they enable multi-step computation in the embedding space without generating intermediate tokens. Models such as Ouro perform reasoning by iteratively updating internal representations while retaining a standard Key-Value (KV) cache across iterations, causing memory consumption to grow linearly with reasoning depth. Consequently, increasing the number of reasoning iterations can lead to prohibitive memory usage, limiting the practical scalability of such architectures. In this work, we propose Memory-Efficient Looped Transformer (MELT), a novel architecture that decouples reasoning depth from memory consumption. Instead of using a standard KV cache per layer and loop, MELT maintains a single KV cache per layer that is shared across reasoning loops. This cache is updated over time via a learnable gating mechanism. To enable stable and efficient training under this architecture, we propose to train MELT using chunk-wise training in a two phase procedure: interpolated transition, followed by attention-aligned distillation, both from the LoopLM starting model to MELT. Empirically, we show that MELT models fine-tuned from pretrained Ouro parameters outperform standard LLMs of comparable size, while maintaining a memory footprint comparable to those models and dramatically smaller than Ouro's. Overall, MELT achieves constant-memory iterative reasoning without sacrificing LoopLM performance, using only a lightweight post-training procedure.
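The abstract names the two training phases but gives no formulas. One plausible reading can be sketched under explicit assumptions: the cosine schedule, the convex blend of teacher and student attention outputs, and the mean-squared alignment loss below are all guesses for illustration, not the paper's stated choices.

```python
import math

def transition_alpha(step, total_steps):
    """Phase 1 ('interpolated transition'): anneal from pure LoopLM
    behaviour (alpha = 1) to pure shared-cache behaviour (alpha = 0).
    The cosine schedule is an assumption; the paper does not give one."""
    t = min(step / total_steps, 1.0)
    return 0.5 * (1.0 + math.cos(math.pi * t))

def blended_attention(attn_teacher, attn_student, alpha):
    """Convex blend of per-loop (teacher) and shared-cache (student)
    attention outputs during the transition phase (hypothetical form)."""
    return [alpha * a + (1 - alpha) * b
            for a, b in zip(attn_teacher, attn_student)]

def attention_alignment_loss(maps_teacher, maps_student):
    """Phase 2 ('attention-aligned distillation'): mean squared error
    between teacher and student attention maps, one plausible choice."""
    n = sum(len(m) for m in maps_teacher)
    return sum((a - b) ** 2
               for mt, ms in zip(maps_teacher, maps_student)
               for a, b in zip(mt, ms)) / n

alpha = transition_alpha(step=250, total_steps=1000)   # mid-transition
mixed = blended_attention([1.0, 0.0], [0.6, 0.4], alpha)
```

Whatever the paper's exact losses, the chunk-wise structure matters: the student is eased from teacher behaviour toward the shared cache before attention maps are matched directly.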
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes the Memory-Efficient Looped Transformer (MELT) architecture for looped language models. It decouples reasoning depth from memory by maintaining a single shared KV cache per layer, updated via a learnable gating mechanism, rather than per-iteration caches as in Ouro. Training is performed via a two-phase chunk-wise procedure (interpolated transition then attention-aligned distillation) from a pretrained LoopLM. The paper claims that this achieves constant-memory iterative reasoning without sacrificing performance, with memory footprint comparable to standard LLMs and superior to Ouro, while outperforming standard LLMs of similar size.
Significance. If the empirical results hold, this work would be significant for enabling deeper iterative reasoning in recurrent LLMs without prohibitive memory costs, a key barrier in current looped architectures. The lightweight post-training recipe from existing models is a practical strength that could accelerate adoption. However, the current manuscript provides insufficient evidence to assess whether the claims are realized.
major comments (2)
- Abstract: The central empirical claims ('outperform standard LLMs of comparable size, while maintaining a memory footprint comparable to those models and dramatically smaller than Ouro's') are stated without any quantitative results, tables, figures, or specific benchmarks, which are load-bearing for evaluating the architecture's effectiveness.
- Training Procedure (as described in abstract): The two-phase chunk-wise training procedure is presented as sufficient to preserve LoopLM performance under the shared KV cache, but no analysis, ablations, or experiments are provided to confirm that the attention-aligned distillation prevents information loss or compounding errors in multi-loop reasoning trajectories for depths beyond 4-8 iterations.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight important areas for improving clarity and completeness, particularly around the abstract and the training procedure. We have revised the manuscript to address these points directly and provide additional details where needed. Our responses to the major comments are below.
read point-by-point responses
-
Referee: Abstract: The central empirical claims ('outperform standard LLMs of comparable size, while maintaining a memory footprint comparable to those models and dramatically smaller than Ouro's') are stated without any quantitative results, tables, figures, or specific benchmarks, which are load-bearing for evaluating the architecture's effectiveness.
Authors: We agree that the abstract would benefit from explicit quantitative support for the claims to make them more self-contained. The experimental section of the manuscript already contains the supporting results, including benchmark accuracies and memory measurements across models and depths. In the revised version, we have updated the abstract to incorporate key quantitative highlights from those experiments (e.g., performance deltas and memory scaling behavior) while referencing the relevant tables and figures. This revision strengthens the abstract without substantially increasing its length. revision: yes
-
Referee: Training Procedure (as described in abstract): The two-phase chunk-wise training procedure is presented as sufficient to preserve LoopLM performance under the shared KV cache, but no analysis, ablations, or experiments are provided to confirm that the attention-aligned distillation prevents information loss or compounding errors in multi-loop reasoning trajectories for depths beyond 4-8 iterations.
Authors: We acknowledge that the abstract provides only a high-level description and that deeper validation of the distillation step for trajectories beyond 4-8 iterations would strengthen the presentation. The manuscript reports overall performance parity with the teacher model, but does not include dedicated ablations isolating error accumulation at greater depths. In the revised manuscript we have added a new ablation subsection that measures attention alignment and downstream accuracy for depths up to 16 iterations under the two-phase procedure versus simpler baselines. These results show that attention-aligned distillation limits compounding errors relative to the teacher, although we note that exhaustive testing at extreme depths remains computationally intensive and is discussed as a limitation. revision: yes
Circularity Check
No circularity: MELT architecture and two-phase training are independently defined and empirically validated
full rationale
The paper defines MELT via an explicit architectural change (single shared KV cache per layer updated by a learnable gate) and a concrete training recipe (chunk-wise interpolated transition followed by attention-aligned distillation from a pretrained LoopLM). Neither the architecture equations nor the training losses are shown to be defined in terms of the target performance metric; the constant-memory claim follows directly from the cache-sharing design, and performance retention is asserted only after reporting empirical results on fine-tuned models. No self-citation is used to justify uniqueness or to close a derivation loop, and no fitted parameter is relabeled as a prediction. The derivation chain therefore remains self-contained against external benchmarks.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "MELT maintains a single KV cache per layer that is shared across reasoning loops. This cache is updated over time via a learnable gating mechanism."
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · LogicNat recovery theorem · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "chunk-wise training in a two phase procedure: interpolated transition, followed by attention-aligned distillation"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Rui-Jie Zhu, Zixuan Wang, Kai Hua, Tianyu Zhang, Ziniu Li, Haoran Que, Boyi Wei, Zixin Wen, Fan Yin, He Xing, Lu Li, Jiajun Shi, Kaijing Ma, Shanda Li, Taylor Kergan, Andrew Smith, Xingwei Qu, Mude Hui, Bohong Wu, Qiyang Min, Hongzhi Huang, Xun Zhou, Wei Ye, Jiaheng Liu, Jian Yang, Yunfeng Shi, Chenghua Lin, Enduo Zhao, Tianle Cai, Ge Zhang, Wenhao Huang, et al. Scaling latent reasoning via looped language models. arXiv, 2025.
- [2] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. In International Conference on Learning Representations (ICLR), 2019.
- [3] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
- [4] Guan Wang, Jin Li, Yuhao Sun, Xing Chen, Changling Liu, Yue Wu, Meng Lu, Sen Song, and Yasin Abbasi Yadkori. Hierarchical reasoning model, 2025. URL https://arxiv.org/abs/2506.21734. Version 3.
- [5] Alexia Jolicoeur-Martineau. Less is more: Recursive reasoning with tiny networks, 2025. URL https://arxiv.org/abs/2510.04871.
- [7] Tianyu Fu, Yichen You, Zekai Chen, Guohao Dai, Huazhong Yang, and Yu Wang. Think-at-hard: Selective latent iterations to improve reasoning language models, 2026. URL https://arxiv.org/abs/2511.08577.
- [8]
- [10] Harsh Kohli, Srinivasan Parthasarathy, Huan Sun, and Yuekun Yao. Loop, think, & generalize: Implicit reasoning in recurrent-depth transformers, 2026. URL https://arxiv.org/abs/2604.07822.
- [12] Liu Yang, Kangwook Lee, Robert Nowak, and Dimitris Papailiopoulos. Looped transformers are better at learning learning algorithms. arXiv preprint arXiv:2311.12424, 2024. URL https://arxiv.org/abs/2311.12424. Accepted at ICLR 2024.
- [13] Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and Tom Goldstein. Scaling up test-time compute with latent reasoning: A recurrent depth approach. arXiv preprint arXiv:2502.05171, 2025. URL https://arxiv.org/abs/2502.05171.
- [14] Hayden Prairie, Zachary Novack, Taylor Berg-Kirkpatrick, and Daniel Y. Fu. Parcae: Scaling laws for stable looped language models, 2026. URL https://arxiv.org/abs/2604.12946.
- [15] Abbas Zeitoun, Lucas Torroba-Hennigen, and Yoon Kim. Hyperloop transformers, 2026. URL https://arxiv.org/abs/2604.21254.
- [16] Noam Shazeer. Fast transformer decoding: One write-head is all you need. arXiv preprint arXiv:1911.02150, 2019. URL https://arxiv.org/abs/1911.02150.
- [17] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023.
- [18] William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, and Jonathan Ragan-Kelley. Reducing transformer key-value cache size with cross-layer attention, 2024. URL https://arxiv.org/abs/2405.12981.
- [19] DeepSeek-AI, Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Deng, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Hanwei Xu, Hao Yang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian L..., et al. DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model. arXiv, 2024.
- [20] Bohong Wu, Mengzhao Chen, Xiang Luo, Shen Yan, Qifan Yu, Fan Xia, Tianqi Zhang, Hongrui Zhan, Zheng Zhong, Xun Zhou, Siyuan Qiao, and Xingyan Bin. Parallel loop transformer for efficient test-time computation scaling, 2025. URL https://arxiv.org/abs/2510.24824.
- [21] Sangmin Bae, Yujin Kim, Reza Bayat, Sungnyun Kim, Jiyoun Ha, Tal Schuster, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Aaron Courville, and Se-Young Yun. Mixture-of-recursions: Learning dynamic recursive depths for adaptive token-level computation, 2025. URL https://arxiv.org/abs/2507.10524.
- [22] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation, 2018. URL https://arxiv.org/abs/1710.10196.
- [23] Tianhao Chen, Xin Xu, Lu Yin, Hao Chen, Yang Wang, Shizhe Diao, and Can Yang. Progressive residual warmup for language model pretraining, 2026. URL https://arxiv.org/abs/2603.05369.
- [24] Zhizhong Li and Derek Hoiem. Learning without forgetting, 2017. URL https://arxiv.org/abs/1606.09282.
- [25] Zhen Cheng, Hao-Bo Yang, Wan-Yi Huang, and Jin-Long Li. Attention editing: A versatile framework for cross-architecture attention conversion, 2026. URL https://arxiv.org/abs/2604.05688.
- [26] Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, and Neil Houlsby. Sparse upcycling: Training mixture-of-experts from dense checkpoints, 2023. URL https://arxiv.org/abs/2212.05055.
- [27] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, March 2015. URL http://arxiv.org/abs/1503.02531.
- [28] Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. Knowledge distillation from internal representations, January 2020. URL http://arxiv.org/abs/1910.03723.
- [29] Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, and Chun Chen. Cross-layer distillation with semantic calibration, August 2021. URL http://arxiv.org/abs/2012.03236.
- [30] Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, and Pavlo Molchanov. Compact language models via pruning and knowledge distillation, November 2024. URL http://arxiv.org/abs/2407.14679.
- [31] Jitai Hao, Qiang Huang, Hao Liu, Xinyan Xiao, Zhaochun Ren, and Jun Yu. A token is worth over 1,000 tokens: Efficient knowledge distillation through low-rank clone, December 2025. URL http://arxiv.org/abs/2505.12781.
- [32] Luyang Fang, Xiaowei Yu, Jiazhang Cai, Yongkai Chen, Shushan Wu, Zhengliang Liu, Zhenyuan Yang, Haoran Lu, Xilin Gong, Yufang Liu, Terry Ma, Wei Ruan, Ali Abbasi, Jing Zhang, Tao Wang, Ehsan Latif, Weihang You, Hanqi Jiang, Wei Liu, Wei Zhang, Soheil Kolouri, Xiaoming Zhai, Dajiang Zhu, Wenxuan Zhong, Tianming Liu, and Ping Ma. Knowledge distillation and ...
- [33] Zihan Liu, Zhuolin Yang, Yang Chen, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. AceReason-Nemotron 1.1: Advancing math and code reasoning through SFT and RL synergy, June 2025. URL http://arxiv.org/abs/2506.13284.
- [34] Etash Guha, Ryan Marten, Sedrick Keh, Negin Raoof, Georgios Smyrnis, Hritik Bansal, Marianna Nezhurina, Jean Mercat, Trung Vu, Zayne Sprague, Ashima Suvarna, Benjamin Feuer, Liangyu Chen, Zaid Khan, Eric Frankel, Sachin Grover, Caroline Choi, Niklas Muennighoff, Shiye Su, Wanjia Zhao, John Yang, Shreyas Pimpalgaonkar, Kartik Sharma, Charlie Cheng-Jie Ji, et al. OpenThoughts: Data recipes for reasoning models. arXiv, 2025.
- [35] Mathematical Association of America. American Invitational Mathematics Examination (AIME) 2024, 2024. URL https://maa.org/. Problems I and II.
- [36] Mathematical Association of America. American Invitational Mathematics Examination (AIME) 2025, 2025. URL https://maa.org/. Problems I and II.
- [37] Mathematical Association of America. American Invitational Mathematics Examination (AIME) 2026, 2026. URL https://maa.org/. Problems I and II.
- [38] Mathematical Association of America. American Mathematics Competitions (AMC) 10/12, 2023.
- [39] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023. URL https://arxiv.org/abs/2305.20050.
- [40] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Zhou, Lei Hou, Juanzi Li, and Maosong Sun. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. In Proceedings of the 62nd Annual Meeting of the Association for Co..., 2024.
- [41] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A graduate-level Google-proof Q&A benchmark, 2023. URL https://arxiv.org/abs/2311.12022.
- [42] Long Phan, Alice Gatti, Nathaniel Li, Adam Khoja, Ryan Kim, Richard Ren, Jason Hausenloy, Oliver Zhang, Mantas Mazeika, Dan Hendrycks, Ziwen Han, Josephina Hu, Hugh Zhang, Chen Bo Calvin Zhang, Mohamed Shaaban, John Ling, Sean Shi, Michael Choi, Anish Agrawal, Arnav Chopra, Aakaash Nattanmai, Gordon McKellips, Anish Cheraku, Asim Suhail, Luo, et al. A ben... doi:10.1038/s41586-025-09962-4, 2026.
- [43] Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, Claire Barale, Robert McHardy, Joshua Harris, Jean Kaddour, Emile van Krieken, and Pasquale Minervini. Are we done with MMLU?, 2024.
- [45] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, ... 2021.
- [46] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, et al. Qwen3 technical report, 2025. URL https://arxiv.org/abs/2505.09388.
- [47] Google. Gemma open models, 2024. URL https://ai.google.dev/gemma.
- [48] Qwen Team. Qwen3.5: Accelerating productivity with native multimodal agents, February.
- [49] URL https://qwen.ai/blog?id=qwen3.5 (continuation of [48]).
- [50] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Peiyi Wang, Qihao Zhu, Runxin Xu, Ruoyu Zhang, Shirong Ma, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Wu, et al. DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning. Nature, 645(8081):633–...
- [51] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
- [52] Hugh Blayney, Álvaro Arroyo, Johan Obando-Ceron, Pablo Samuel Castro, Aaron Courville, Michael M. Bronstein, and Xiaowen Dong. A mechanistic analysis of looped reasoning language models, 2026. URL https://arxiv.org/abs/2604.11791.
- [53] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021. URL https://arxiv.org/abs/2009.03300.
- [54] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, ... 2021.
- [55] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-..., 2020.
- [56] Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. TRL: Transformers Reinforcement Learning, 2020. URL https://github.com/huggingface/trl.
- [57] Nathan Habib, Clémentine Fourrier, Hynek Kydlíček, Thomas Wolf, and Lewis Tunstall. LightEval: A lightweight framework for LLM evaluation, 2023. URL https://github.com/huggingface/lighteval.
- [58]/[59] (misplaced excerpt from the paper's appendix on the gate Jacobian, not references): Term 2: the derivative of the sigmoid, $\sigma'(u) = \sigma(u)(1 - \sigma(u))$, vanishes as $z_t \to 1$, so $\partial z_t / \partial h_{t-1} \to 0$. Term 3: the factor $(1 - z_t)$ approaches 0, nullifying the contribution of the recurrent weight matrix in $\partial \tilde{h}_t / \partial h_{t-1}$. Consequently, $\lim_{z \to 1} J_t = I + 0 + 0$, so $J_t \approx I$; since the eigenvalues of the identity matrix are all 1, the spectral radius is $\rho(J_t) = 1$. Proposition E.1 gives more insight into the role of the gate $z_t$: rather than simply selecting information, it ...
discussion (0)