Transition-Matrix Regularization for Next Dialogue Act Prediction in Counselling Conversations
Pith reviewed 2026-05-10 04:02 UTC · model grok-4.3
The pith
A KL regularization term based on corpus transition patterns improves next dialogue act prediction in counselling conversations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that a Kullback-Leibler regularization term aligning a model's predicted dialogue-act distribution with a transition matrix derived from corpus statistics raises macro-F1 by a relative 9 to 42 percent over unregularized baselines, depending on the encoder, while also increasing measured alignment between predicted and observed dialogue flows. The gains hold across five-fold cross-validation on the German counselling dataset and appear to transfer when the same regularized models are evaluated on the HOPE dataset. Systematic ablations show the benefit is largest for weaker encoders and remains positive even when stronger pretrained models are used.
What carries the argument
A KL regularization term that penalizes divergence between the model's softmax distribution over the 60 acts and a fixed transition matrix precomputed from training-corpus act sequences.
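The regularizer described above can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation: the KL direction (target-versus-model), the conditioning on the previous gold act, and the function name are all assumptions, since the exact formulation is not reproduced in this review.

```python
import numpy as np

def kl_transition_loss(logits, prev_acts, transition, eps=1e-8):
    """Mean KL(T[prev] || softmax(logits)) over a batch.

    logits:     (batch, n_acts) unnormalized model scores
    prev_acts:  (batch,) integer index of each previous dialogue act
    transition: (n_acts, n_acts) row-stochastic matrix from corpus counts
    """
    # numerically stable softmax over the act vocabulary
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    q = transition[prev_acts]  # target rows, shape (batch, n_acts)
    kl = np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=1)
    return float(kl.mean())
```

In training this term would be added to the cross-entropy loss, scaled by the KL regularization weight that the ledger below lists as the method's single free parameter.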
If this is right
- Macro-F1 gains occur consistently across multiple pretrained encoders and model architectures.
- Weaker baseline models receive the largest relative improvement from the regularization.
- Dialogue-flow alignment metrics improve alongside classification accuracy.
- The same regularized models show positive transfer when tested on an independent counselling dataset in another language.
Where Pith is reading between the lines
- The method could be tested on non-counselling dialogue domains to determine whether the transition priors must be domain-specific or can be drawn from broader conversation data.
- Because the regularization is lightweight, it offers a route to improve fine-grained dialogue tasks without requiring larger labelled datasets or more expensive model scaling.
- If the transition matrix is recomputed from each new domain, the approach might serve as a general way to inject discourse structure into any sequence prediction model.
Load-bearing premise
The corpus-derived transition patterns supply useful, unbiased priors that remain valid outside the original training data and counselling domain.
What would settle it
No performance gain or outright degradation when the same regularization is applied to a dialogue corpus whose act-transition statistics differ substantially from those of the counselling training set.
Figures
Original abstract
This paper studies how empirical dialogue-flow statistics can be incorporated into Next Dialogue Act Prediction (NDAP). A KL regularization term is proposed that aligns predicted act distributions with corpus-derived transition patterns. Evaluated on a 60-class German counselling taxonomy using 5-fold cross-validation, this improves macro-F1 by 9--42% relative depending on encoder and substantially improves dialogue-flow alignment. Cross-dataset validation on HOPE suggests that improvements transfer across languages and counselling domains. In systematic ablations across pretrained encoders and architectures, the findings indicate that transition regularization provides consistent gains and disproportionately benefits weaker baseline models. The results suggest that lightweight discourse-flow priors complement pretrained encoders, especially in fine-grained, data-sparse dialogue tasks.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a KL-divergence regularization term that aligns a model's predicted dialogue-act distribution with empirical transition probabilities derived from a counselling corpus. It evaluates this on a 60-class German counselling taxonomy using 5-fold cross-validation, reporting relative macro-F1 gains of 9-42% over baselines depending on the encoder, improved dialogue-flow alignment, and positive transfer to the HOPE dataset.
Significance. If the gains prove robust under leakage-free cross-validation and include statistical testing, the result would show that lightweight, corpus-derived discourse priors can usefully complement pretrained encoders in fine-grained, data-sparse dialogue tasks. The systematic encoder ablations and cross-dataset experiment are strengths that would support broader applicability to counselling and similar domains.
major comments (2)
- [Experimental setup and 5-fold cross-validation description] The experimental setup does not state whether the transition matrix is recomputed exclusively on each training fold or derived once from the full corpus. If the latter, every regularization term during training and validation incorporates test-fold transition statistics, violating the independence assumption required for the claimed prior and rendering the 9-42% macro-F1 improvements an upper bound that may not hold under proper per-fold estimation.
- [Results and ablations] The Results section reports only relative macro-F1 gains, without absolute scores, exact baseline implementations, hyperparameter search details, or statistical significance tests (e.g., a paired t-test across folds). This leaves the practical magnitude and reliability of the improvements difficult to assess.
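The leakage-free protocol the first major comment asks for can be sketched as follows: the transition matrix for each held-out fold is estimated only on the remaining training folds. Function names and the Laplace smoothing constant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def transition_matrix(act_sequences, n_acts, smoothing=1.0):
    """Row-stochastic transition matrix from labelled act sequences.
    Additive (Laplace) smoothing keeps unseen transitions off zero."""
    counts = np.full((n_acts, n_acts), smoothing)
    for seq in act_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def per_fold_matrices(folds, n_acts):
    """One matrix per held-out fold, estimated on the other folds only,
    so test-fold transition statistics never enter the regularizer."""
    return [
        transition_matrix(
            [seq for j, fold in enumerate(folds) if j != i for seq in fold],
            n_acts,
        )
        for i in range(len(folds))
    ]
```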
minor comments (2)
- [Method] Clarify the exact form of the KL term (including the regularization weight schedule) and how the 60-class taxonomy is mapped to transition counts.
- [Results] Add a table of absolute F1 scores per encoder and fold to complement the relative gains.
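For concreteness, the metric that the requested per-encoder, per-fold table would report can be sketched as below, assuming the common macro-F1 convention; whether the paper handles absent classes the same way is not stated.

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Unweighted mean of per-class F1. Classes absent from both the
    gold labels and the predictions contribute 0 under this convention."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))
```

With 60 fine-grained classes, macro averaging weights rare acts equally with frequent ones, which is why absolute scores matter for judging the reported relative gains.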
Simulated Author's Rebuttal
We thank the referee for the thorough review and constructive feedback. The two major comments identify important issues of experimental rigor and reporting clarity. We address each below and will revise the manuscript to incorporate the necessary clarifications and additional results.
Point-by-point responses
-
Referee: [Experimental setup and 5-fold cross-validation description] The experimental setup does not state whether the transition matrix is recomputed exclusively on each training fold or derived once from the full corpus. If the latter, every regularization term during training and validation incorporates test-fold transition statistics, violating the independence assumption required for the claimed prior and rendering the 9-42% macro-F1 improvements an upper bound that may not hold under proper per-fold estimation.
Authors: We agree that the manuscript does not explicitly describe the computation of the transition matrix within the 5-fold cross-validation protocol. To ensure a leakage-free setup, the transition matrix must be derived exclusively from the training folds. We will revise the Experimental Setup section to state this procedure clearly and will re-run the experiments (if the original runs used the full corpus) to confirm that the reported relative gains hold under per-fold estimation. The revised results will be presented with the same encoder ablations. revision: yes
-
Referee: [Results and ablations] The Results section reports only relative macro-F1 gains, without absolute scores, exact baseline implementations, hyperparameter search details, or statistical significance tests (e.g., a paired t-test across folds). This leaves the practical magnitude and reliability of the improvements difficult to assess.
Authors: We accept that absolute performance numbers, precise baseline descriptions, hyperparameter details, and statistical tests are needed for full assessment. The revised manuscript will report absolute macro-F1 scores for all models, provide exact specifications of the baseline encoders and training procedures, document the hyperparameter search ranges and selection criteria, and include paired t-test results across the five folds to establish statistical significance of the observed gains. revision: yes
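The paired test the authors commit to can be sketched in pure Python as below. With only five folds the test has low power, and the critical value shown assumes a two-sided test at alpha = 0.05; the per-fold scores used in any real report would of course come from the experiments, not from placeholders.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(baseline, regularized):
    """Paired t statistic over matched per-fold macro-F1 scores."""
    diffs = [r - b for b, r in zip(baseline, regularized)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Two-sided 5% critical value for df = 4, i.e. five paired folds
T_CRIT_DF4 = 2.776
```

If the statistic exceeds 2.776 in magnitude, the per-fold gain is significant at the 5% level under the usual normality assumption on fold differences.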
Circularity Check
No circularity: transition matrix is an independent empirical prior
Full rationale
The paper proposes a KL regularization term that aligns predicted dialogue-act distributions to transition patterns precomputed from the corpus. This prior is derived externally from aggregate statistics and does not reduce to any model parameter, fitted quantity, or self-referential definition within the training objective. No equations or steps equate the regularization target to the model's own outputs by construction, nor do they rely on self-citation chains or imported uniqueness results. The 5-fold CV evaluation and cross-dataset validation on HOPE are presented as external checks; the method remains self-contained against these benchmarks without any load-bearing step that collapses the claimed improvement into a tautology.
Axiom & Free-Parameter Ledger
free parameters (1)
- KL regularization weight
axioms (1)
- domain assumption: the 60-class German counselling taxonomy provides a meaningful and consistent categorization of dialogue acts.