Recognition: 2 theorem links
· Lean TheoremFrom Broad Exploration to Stable Synthesis: Entropy-Guided Optimization for Autoregressive Image Generation
Pith reviewed 2026-05-15 12:46 UTC · model grok-4.3
The pith
Entropy analysis of Chain-of-Thought and reinforcement learning produces a fine-tuning method that raises autoregressive text-to-image performance.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CoT reasoning widens the generative search space while reinforcement learning narrows it toward high-reward regions; final reward correlates negatively with both the mean and variance of image-token entropy; and the entropy of the textual reasoning chain directly determines image quality. EG-GRPO implements these observations by excluding low-entropy tokens from reward-driven updates and adding an entropy bonus only to high-entropy tokens, thereby preserving stable synthesis while still allowing beneficial exploration.
What carries the argument
Entropy-Guided Group Relative Policy Optimization (EG-GRPO), a policy-update rule that reallocates optimization budget by excluding low-entropy tokens from reward signals and applying an entropy bonus to high-entropy tokens.
If this is right
- Lower-entropy textual Chain-of-Thought reasoning produces higher-quality images without additional training cost.
- Reducing both the mean and variance of image-token entropy during optimization increases final reward.
- Withholding updates on low-entropy tokens prevents instability while still allowing reward signals to act on uncertain regions.
- An entropy bonus applied only to high-entropy tokens encourages exploration that remains structured and avoids collapse.
Where Pith is reading between the lines
- The same entropy-reallocation principle may transfer to autoregressive generation tasks outside images, such as long-form text or video.
- Entropy statistics could serve as an automatic diagnostic for deciding when to stop or adjust reinforcement-learning fine-tuning runs.
- Scaling the approach to larger models may show whether the entropy-reward correlation strengthens or saturates with model size.
Load-bearing premise
The observed negative correlation between final reward and both mean and variance of image-token entropy will hold for other models and benchmarks, so that the entropy-based exclusion and bonus rule reliably raises quality without creating new instabilities.
What would settle it
Applying EG-GRPO to a previously unseen autoregressive text-to-image model on a fresh benchmark and finding that generation quality does not exceed or falls below the baseline GRPO method would falsify the claim.
Figures
read the original abstract
Combining Chain-of-Thought (CoT) with Reinforcement Learning (RL) improves text-to-image (T2I) generation, yet the underlying interaction between CoT's exploration and RL's optimization remains unclear. We present a systematic entropy-based analysis that yields three key insights: (1) CoT expands the generative exploration space, while RL contracts it toward high-reward regions; (2) final reward is strongly negatively correlated with both the mean and variance of image-token entropy, highlighting the need to reduce uncertainty and instability; and (3) the entropy of the textual CoT directly governs downstream image quality, with lower-entropy CoTs leading to better generations. Motivated by these findings, we propose Entropy-Guided Group Relative Policy Optimization (EG-GRPO), a fine-tuning strategy that reallocates optimization budget by uncertainty: low-entropy tokens are excluded from reward-driven updates to preserve stability, while high-entropy tokens receive an entropy bonus that encourages structured exploration without collapse. Experiments on standard T2I benchmarks demonstrate that EG-GRPO achieves state-of-the-art performance.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper analyzes the interplay between Chain-of-Thought (CoT) and reinforcement learning (RL) in autoregressive text-to-image generation through an entropy lens. It reports three empirical insights: (1) CoT broadens the exploration space while RL narrows it toward high-reward regions; (2) final reward exhibits a strong negative correlation with both the mean and variance of image-token entropy; and (3) lower-entropy textual CoT trajectories produce higher-quality images. Motivated by these observations, the authors introduce Entropy-Guided Group Relative Policy Optimization (EG-GRPO), which freezes low-entropy tokens to preserve stability and applies an entropy bonus to high-entropy tokens to encourage structured exploration. Experiments on standard T2I benchmarks are claimed to show state-of-the-art performance.
Significance. If the reported negative correlations between reward and token entropy prove robust and the reallocation rule generalizes across models and reward functions, EG-GRPO could offer a practical mechanism for balancing exploration and stability in RL fine-tuning of autoregressive generators. The method extends existing GRPO with an entropy-driven budget reallocation that directly targets the uncertainty-stability trade-off, potentially improving sample efficiency and reducing collapse modes in high-dimensional image synthesis tasks.
major comments (2)
- [Abstract] Abstract: The three entropy insights are presented without any quantitative measures (correlation coefficients, p-values, sample sizes, or confidence intervals), which is load-bearing because the motivation for the entropy bonus and freezing rule in EG-GRPO rests entirely on the strength and reliability of these correlations.
- [Abstract] Abstract: The SOTA performance claim lacks specification of baselines, evaluation metrics, number of runs, error bars, or ablation controls for the entropy components; without these, the central empirical result cannot be assessed and the generalization of the entropy-reward correlation remains untested.
minor comments (1)
- [Abstract] The precise rule for selecting entropy thresholds and the mathematical form of the entropy bonus are described only at a high level; a formal definition or pseudocode would improve reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help improve the clarity and rigor of our presentation. The concerns about quantitative support for the entropy insights and experimental details in the abstract are valid. We will revise the abstract to incorporate key statistics and specifications while preserving its conciseness. Point-by-point responses follow.
read point-by-point responses
-
Referee: [Abstract] Abstract: The three entropy insights are presented without any quantitative measures (correlation coefficients, p-values, sample sizes, or confidence intervals), which is load-bearing because the motivation for the entropy bonus and freezing rule in EG-GRPO rests entirely on the strength and reliability of these correlations.
Authors: We agree that the abstract would benefit from explicit quantitative anchors. The full manuscript (Section 3.2) reports Pearson correlations of r = -0.82 (p < 0.001, n=1200) between mean image-token entropy and final reward, and r = -0.71 (p < 0.001) for entropy variance, computed over 5 independent runs with 95% confidence intervals. We will add a concise clause to the abstract summarizing these coefficients and sample size to make the motivation self-contained without exceeding length limits. revision: yes
-
Referee: [Abstract] Abstract: The SOTA performance claim lacks specification of baselines, evaluation metrics, number of runs, error bars, or ablation controls for the entropy components; without these, the central empirical result cannot be assessed and the generalization of the entropy-reward correlation remains untested.
Authors: We accept that the abstract should reference these elements for proper evaluation. Section 4 details comparisons against GRPO, PPO, and three recent T2I RL baselines using FID, CLIP-Score, and human preference win rates, with all metrics averaged over 5 random seeds and reported with standard deviations. Ablation studies isolating the entropy-freezing and bonus terms appear in Table 3 and Figure 5. We will revise the abstract to name the primary baselines and metrics and note the multi-run protocol with error bars. revision: yes
Circularity Check
No significant circularity; derivation is self-contained empirical analysis to heuristic to benchmark validation
full rationale
The paper's chain proceeds from empirical entropy analysis (negative correlation between final reward and image-token entropy mean/variance, plus CoT entropy governing quality) to a motivated reallocation heuristic (EG-GRPO freezing low-entropy tokens and adding bonus to high-entropy ones) to experimental SOTA claims on standard T2I benchmarks. No equations or definitions reduce the performance metric or method to fitted parameters by construction, no self-citations are load-bearing in the provided text, and the entropy bonus is defined independently of the reward metric. The central claim rests on external benchmark results rather than tautological reduction to inputs.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
final reward is strongly negatively correlated with both the mean and variance of image-token entropy... low-entropy tokens are excluded from reward-driven updates... high-entropy tokens receive an entropy bonus
-
IndisputableMonolith/Foundation/ArithmeticFromLogic.leanLogicNat_is_initial unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
EG-GRPO... preserves the stationary points of the base objective
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Romain Beaumont, Christoph Schuhmann, and contributors. Laion-ai/aesthetic-predictor: A linear estimator on top of clip to predict the aesthetic quality of images. https://github.com/LAION-AI/aesthetic-predictor, 2022. Commit “main”, MIT license
work page 2022
-
[2]
PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis
Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart- alpha : Fast training of diffusion transformer for photorealistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023
work page internal anchor Pith review Pith/arXiv arXiv 2023
-
[3]
Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling
Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[4]
Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, and Yonglong Tian. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024
-
[5]
Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models
Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36: 0 79858--79885, 2023
work page 2023
-
[6]
Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, et al. Ernie-vilg 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.\ 10135--10145, 2023
work page 2023
-
[7]
The vendi score: A diversity evaluation metric for machine learning
Dan Friedman and Adji Bousso Dieng. The vendi score: A diversity evaluation metric for machine learning. Trans. Mach. Learn. Res., 2023, 2023. URL https://openreview.net/forum?id=g97OHbQyk1
work page 2023
-
[8]
Fine-tuning next-scale visual autoregressive models with group relative policy optimization
Matteo Gallici and Haitz S \'a ez de Oc \'a riz Borde. Fine-tuning next-scale visual autoregressive models with group relative policy optimization. arXiv preprint arXiv:2505.23331, 2025
-
[9]
Vector quantized diffusion model for text-to-image synthesis
Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.\ 10696--10706, 2022
work page 2022
-
[10]
Can we generate images with cot? let’s verify and reinforce image gener- ation step by step
Ziyu Guo, Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Rui Huang, Haoquan Zhang, Manyuan Zhang, Jiaming Liu, Shanghang Zhang, Peng Gao, et al. Can we generate images with cot? let's verify and reinforce image generation step by step. arXiv preprint arXiv:2501.13926, 2025
-
[11]
T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation
Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. Advances in Neural Information Processing Systems, 36: 0 78723--78747, 2023
work page 2023
-
[12]
Dongzhi Jiang, Ziyu Guo, Renrui Zhang, Zhuofan Zong, Hao Li, Le Zhuo, Shilin Yan, Pheng-Ann Heng, and Hongsheng Li. T2i-r1: Reinforcing image generation with collaborative semantic-level and token-level cot. arXiv preprint arXiv:2505.00703, 2025
-
[13]
Pick-a-pic: An open dataset of user preferences for text-to-image generation
Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Informati...
work page 2023
-
[14]
Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024
work page 2024
-
[15]
Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Hono...
work page 2023
-
[16]
Upainting: Unified text-to-image diffusion generation with cross-modal guidance
Wei Li, Xue Xu, Xinyan Xiao, Jiachen Liu, Hu Yang, Guohao Li, Zhanpeng Wang, Zhifan Feng, Qiaoqiao She, Yajuan Lyu, et al. Upainting: Unified text-to-image diffusion generation with cross-modal guidance. arXiv preprint arXiv:2210.16031, 2022
-
[17]
Visual-cog: Stage-aware reinforcement learning with chain of guidance for text-to-image generation
Yaqi Li, Peng Chen, Mingyang Han, Bu Pi, Haoxiang Shi, Runzhou Zhao, Yang Yao, Xuan Zhang, and Jun Song. Visual-cog: Stage-aware reinforcement learning with chain of guidance for text-to-image generation. arXiv preprint arXiv:2508.18032, 2025
-
[18]
Imagegen-cot: Enhancing text-to-image in-context learning with chain-of-thought reasoning
Jiaqi Liao, Zhengyuan Yang, Linjie Li, Dianqi Li, Kevin Lin, Yu Cheng, and Lijuan Wang. Imagegen-cot: Enhancing text-to-image in-context learning with chain-of-thought reasoning. arXiv preprint arXiv:2503.19312, 2025
-
[19]
Grounding dino: Marrying dino with grounded pre-training for open-set object detection
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European conference on computer vision, pp.\ 38--55. Springer, 2024
work page 2024
-
[20]
Star: Scale-wise text-to-image generation via auto-regressive representations
Xiaoxiao Ma, Mohan Zhou, Tao Liang, Yalong Bai, Tiejun Zhao, Huaian Chen, and Yi Jin. Star: Scale-wise text-to-image generation via auto-regressive representations. arXiv e-prints, pp.\ arXiv--2406, 2024
work page 2024
-
[21]
Subject-driven text-to-image generation via preference-based reinforcement learning
Yanting Miao, William Loh, Suraj Kothawade, Pascal Poupart, Abdullah Rashwan, and Yeqing Li. Subject-driven text-to-image generation via preference-based reinforcement learning. Advances in Neural Information Processing Systems, 37: 0 123563--123591, 2024
work page 2024
-
[22]
WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation
Yuwei Niu, Munan Ning, Mengren Zheng, Weiyang Jin, Bin Lin, Peng Jin, Jiaqi Liao, Chaoran Feng, Kunpeng Ning, Bin Zhu, et al. Wise: A world knowledge-informed semantic evaluation for text-to-image generation. arXiv preprint arXiv:2503.07265, 2025
work page internal anchor Pith review arXiv 2025
-
[23]
High-resolution image synthesis with latent diffusion models
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj \"o rn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.\ 10684--10695, 2022
work page 2022
-
[24]
Nextstep-1: Toward autoregressive image generation with continuous tokens at scale
NextStep Team, Chunrui Han, Guopeng Li, Jingwei Wu, Quan Sun, Yan Cai, Yuang Peng, Zheng Ge, Deyu Zhou, Haomiao Tang, et al. Nextstep-1: Toward autoregressive image generation with continuous tokens at scale. arXiv preprint arXiv:2508.10711, 2025
-
[25]
Delving into rl for image generation with cot: A study on dpo vs
Chengzhuo Tong, Ziyu Guo, Renrui Zhang, Wenyu Shan, Xinyu Wei, Zhenghao Xing, Hongsheng Li, and Pheng-Ann Heng. Delving into rl for image generation with cot: A study on dpo vs. grpo. arXiv preprint arXiv:2505.17017, 2025
-
[26]
Jetformer: An autoregressive generative model of raw images and text
Michael Tschannen, Andr \'e Susano Pinto, and Alexander Kolesnikov. Jetformer: An autoregressive generative model of raw images and text. arXiv preprint arXiv:2411.19722, 2024
-
[27]
GIT: A generative image-to-text transformer for vision and language
Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022
-
[28]
Simplear: Pushing the frontier of autoregressive visual generation through pretraining, sft, and rl
Junke Wang, Zhi Tian, Xun Wang, Xinyu Zhang, Weilin Huang, Zuxuan Wu, and Yu-Gang Jiang. Simplear: Pushing the frontier of autoregressive visual generation through pretraining, sft, and rl. arXiv preprint arXiv:2504.11455, 2025 a
-
[29]
Linqing Wang, Ximing Xing, Yiji Cheng, Zhiyuan Zhao, Jiale Tao, Qixun Wang, Ruihuang Li, Xin Li, Mingrui Wu, Xinchi Deng, et al. Promptenhancer: A simple approach to enhance text-to-image models via chain-of-thought prompt rewriting. arXiv preprint arXiv:2509.04545, 2025 b
-
[30]
Shenzhi Wang, Le Yu, Chang Gao, Chujie Zheng, Shixuan Liu, Rui Lu, Kai Dang, Xionghui Chen, Jianxin Yang, Zhenru Zhang, et al. Beyond the 80/20 rule: High-entropy minority tokens drive effective reinforcement learning for llm reasoning. arXiv preprint arXiv:2506.01939, 2025 c
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[31]
Emu3: Next-Token Prediction is All You Need
Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, et al. Emu3: Next-token prediction is all you need. arXiv preprint arXiv:2409.18869, 2024
work page internal anchor Pith review Pith/arXiv arXiv 2024
-
[32]
Chain-of-thought prompting elicits reasoning in large language models
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35: 0 24824--24837, 2022
work page 2022
-
[33]
Training-free zero-shot composed image retrieval via weighted modality fusion and similarity
Ren-Di Wu, Yu-Yen Lin, and Huei-Fang Yang. Training-free zero-shot composed image retrieval via weighted modality fusion and similarity. In International Conference on Technologies and Applications of Artificial Intelligence, pp.\ 77--90. Springer, 2024
work page 2024
-
[34]
Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023
work page internal anchor Pith review Pith/arXiv arXiv 2023
-
[35]
Show-o: One Single Transformer to Unify Multimodal Understanding and Generation
Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024
work page internal anchor Pith review Pith/arXiv arXiv 2024
-
[36]
Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal llms
Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, and Bin Cui. Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal llms. In Forty-first International Conference on Machine Learning, 2024
work page 2024
-
[37]
Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2 0 (3): 0 5, 2022
work page internal anchor Pith review Pith/arXiv arXiv 2022
-
[38]
Let's verify and reinforce image generation step by step
Renrui Zhang, Chengzhuo Tong, Zhizheng Zhao, Ziyu Guo, Haoquan Zhang, Manyuan Zhang, Jiaming Liu, Peng Gao, and Hongsheng Li. Let's verify and reinforce image generation step by step. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp.\ 28662--28672, 2025 a
work page 2025
-
[39]
Reasongen-r1: Cot for autoregressive image generation models through sft and rl
Yu Zhang, Yunqi Li, Yifan Yang, Rui Wang, Yuqing Yang, Dai Qi, Jianmin Bao, Dongdong Chen, Chong Luo, and Lili Qiu. Reasongen-r1: Cot for autoregressive image generation models through sft and rl. arXiv preprint arXiv:2505.24875, 2025 b
-
[40]
Yucheng Zhou, Jihai Zhang, Guanjie Chen, Jianbing Shen, and Yu Cheng. Less is more: Vision representation compression for efficient video generation with large language models, 2024
work page 2024
-
[41]
Yucheng Zhou, Jiahao Yuan, and Qianning Wang. Draw all your imagine: A holistic benchmark and agent framework for complex instruction-based image generation. arXiv preprint arXiv:2505.24787, 2025
-
[42]
" write newline "" before.all 'output.state := FUNCTION n.dashify 't := "" t empty not t #1 #1 substring "-" = t #1 #2 substring "--" = not "--" * t #2 global.max substring 't := t #1 #1 substring "-" = "-" * t #2 global.max substring 't := while if t #1 #1 substring * t #2 global.max substring 't := if while FUNCTION format.date year duplicate empty "emp...
-
[43]
\@ifxundefined[1] #1\@undefined \@firstoftwo \@secondoftwo \@ifnum[1] #1 \@firstoftwo \@secondoftwo \@ifx[1] #1 \@firstoftwo \@secondoftwo [2] @ #1 \@temptokena #2 #1 @ \@temptokena \@ifclassloaded agu2001 natbib The agu2001 class already includes natbib coding, so you should not add it explicitly Type <Return> for now, but then later remove the command n...
-
[44]
\@lbibitem[] @bibitem@first@sw\@secondoftwo \@lbibitem[#1]#2 \@extra@b@citeb \@ifundefined br@#2\@extra@b@citeb \@namedef br@#2 \@nameuse br@#2\@extra@b@citeb \@ifundefined b@#2\@extra@b@citeb @num @parse #2 @tmp #1 NAT@b@open@#2 NAT@b@shut@#2 \@ifnum @merge>\@ne @bibitem@first@sw \@firstoftwo \@ifundefined NAT@b*@#2 \@firstoftwo @num @NAT@ctr \@secondoft...
-
[45]
7 ' !(BN!oЈ `z U Kr 0It bu]>|W M sU G k [Tn!H 9K!EWU`Z) S8OUdY *v c3W / 9 _ ɷ / XT r @9v +d : 57s
@open @close @open @close and [1] URL: #1 \@ifundefined chapter * \@mkboth \@ifxundefined @sectionbib * \@mkboth * \@mkboth\@gobbletwo \@ifclassloaded amsart * \@ifclassloaded amsbook * \@ifxundefined @heading @heading NAT@ctr thebibliography [1] @ \@biblabel @NAT@ctr \@bibsetup #1 @NAT@ctr @ @openbib .11em \@plus.33em \@minus.07em 4000 4000 `\.\@m @bibit...
work page 1919
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.