pith. machine review for the scientific record.

arxiv: 2605.14527 · v1 · submitted 2026-05-14 · 💻 cs.LG · cond-mat.mtrl-sci · physics.comp-ph

Recognition: 2 theorem links · Lean Theorem

Lang2MLIP: End-to-End Language-to-Machine Learning Interatomic Potential Development with Autonomous Agentic Workflows

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 02:13 UTC · model grok-4.3

classification 💻 cs.LG · cond-mat.mtrl-sci · physics.comp-ph
keywords machine learning interatomic potentials · multi-agent systems · large language models · autonomous workflows · solid electrolyte interphase · active learning · materials modeling

The pith

A multi-agent LLM framework automates end-to-end development of machine learning interatomic potentials from natural language input.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents Lang2MLIP as a way to remove the need for domain expertise and fixed pipelines when building machine learning interatomic potentials. It casts the entire workflow as a sequential decision problem in which LLM agents watch the current dataset, model state, evaluation scores, and logs, then pick the next action to improve results. This setup lets the system revisit earlier steps and self-correct when problems appear in complex materials such as solid electrolyte interphases. The approach is positioned as a step toward making MLIP creation practical for non-experts who lack atomistic simulation or active-learning experience.

Core claim

Lang2MLIP is a multi-agent framework that accepts natural-language descriptions and formulates MLIP development as a sequential decision-making task solved by large language models, with each decision-making agent observing the dataset, model, evaluation results, and execution log to choose corrective actions and revisit subsystems when failures occur.

What carries the argument

Multi-agent LLM decision system that observes dataset-model-evaluation states and selects actions without a predefined pipeline.
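
The observe-then-act loop described above can be made concrete with a minimal sketch. Everything below is illustrative: the paper does not publish its state schema or action space, so the field names, the ACTIONS list, and the fallback rule are assumptions, not Lang2MLIP's actual interface.

```python
from dataclasses import dataclass

@dataclass
class WorkflowState:
    """Snapshot of what the decision-making agent observes each step."""
    dataset_summary: str   # e.g. composition counts, sampled temperatures
    model_summary: str     # architecture, checkpoint, training status
    eval_results: dict     # e.g. {"energy_rmse": ..., "force_rmse": ...}
    execution_log: str     # tail of the run log, including errors

# Hypothetical action space; the paper does not enumerate its actions.
ACTIONS = [
    "generate_structures",   # call preparation agents for new configurations
    "label_with_reference",  # run reference calculations on new structures
    "train_model",           # (re)train the MLIP on the current dataset
    "evaluate_model",        # run MD / RDF / density checks
    "revisit_subsystem",     # return to an earlier component on failure
    "finish",
]

def decision_step(llm, state: WorkflowState) -> str:
    """One observation-action cycle: serialize state, ask the LLM to act."""
    prompt = (
        "You are managing MLIP development.\n"
        f"Dataset: {state.dataset_summary}\n"
        f"Model: {state.model_summary}\n"
        f"Evaluation: {state.eval_results}\n"
        f"Log tail: {state.execution_log}\n"
        f"Choose one action from {ACTIONS} and reply with its name."
    )
    action = llm(prompt).strip()
    # Guard against free-form LLM output: fall back to a safe action.
    return action if action in ACTIONS else "evaluate_model"
```

The key design point, as the abstract describes it, is that no stage ordering is fixed in advance: any action, including a return to an earlier subsystem, is available at every step.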

If this is right

  • MLIP workflows no longer require a fixed sequence of stages chosen in advance.
  • Agents can return to earlier subsystems when new failures are detected in multi-component systems.
  • The method is demonstrated on a solid electrolyte interphase containing multiple interfaces.
  • Non-experts can supply natural-language goals instead of designing active-learning loops.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same state-observation and action-selection pattern could be applied to related tasks such as tuning density-functional parameters or force-field parameterization.
  • Reliability will depend on how well the LLM interprets quantitative evaluation outputs in materials systems beyond the tested SEI example.
  • Adding direct links to experimental feedback could further close the loop between simulation and measurement.

Load-bearing premise

LLM agents can reliably read dataset, model, and evaluation states and choose useful corrective actions without expert oversight or fixed stage rules, even for heterogeneous materials.

What would settle it

A run on the SEI system in which the agent repeatedly selects ineffective actions, leaving model error metrics unchanged or worse after several observation-action cycles.
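
That falsification test amounts to a stagnation check over observation-action cycles. A minimal sketch, with an illustrative window and improvement threshold (the paper defines neither):

```python
def is_stagnant(rmse_history, window=5, min_rel_improvement=0.02):
    """Return True if the error metric failed to improve by at least
    min_rel_improvement over the last `window` observation-action cycles.

    rmse_history: one error value (e.g. force RMSE) per completed cycle.
    """
    if len(rmse_history) < window + 1:
        return False  # not enough cycles yet to judge
    before = rmse_history[-window - 1]
    after = min(rmse_history[-window:])
    return (before - after) / before < min_rel_improvement
```

If such a monitor fired repeatedly on the SEI run, it would show the agent selecting ineffective actions; if it never fired before convergence, the self-correction claim would survive this particular test.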

Figures

Figures reproduced from arXiv: 2605.14527 by Nontawat Charoenphakdee, Wenwen Li, Yuki Orimo.

Figure 1. Overview of the Lang2MLIP framework. The robot icon indicates a single agent, while the group-of-people icon indicates multiple agents within a submodule.
Figure 2. Schematic of the battery SEI system used in this study: a porous, mechanically soft outer layer of amorphous lithium ethylene dicarbonate (LEDC) over a denser inorganic inner layer dominated by amorphous Li2CO3.
Figure 3. Radial distribution functions (RDFs) for selected atom pairs in the multilayer SEI system at 300 K. Solid lines show the Li–C pair in Li2CO3; dashed lines show the O–F pair in the electrolyte.
Figure 4. Time evolution of the simulation-box density for the multilayer SEI system at 300 K.
Figure 5. Mean squared displacement (MSD) curves of all elements in the multilayer SEI simulation box at 300 K.
Figure 6. Radial distribution functions (RDFs) of the anode region (graphite) in the multilayer SEI system at 300 K.
Figure 7. Radial distribution functions (RDFs) of the inorganic SEI (Li2CO3) region in the multilayer SEI system at 300 K.
Figure 8. Radial distribution functions (RDFs) of the organic SEI (LEDC) region in the multilayer SEI system at 300 K.
Figure 9. Radial distribution functions (RDFs) of the electrolyte (LiPF6/EC/DMC) region in the multilayer SEI system at 300 K.
Figure 10. Initial structures of basic SEI components generated by preparation agents.
Figure 11. Initial structures of SEI interfaces generated by preparation agents.
Figure 12. Initial multilayer SEI structures generated by preparation agents.
Original abstract

Developing machine learning interatomic potentials (MLIPs) for complex materials systems remains challenging because it requires expertise in atomistic simulations, machine learning, and workflow design, as well as iterative active learning procedures. Existing automated pipelines typically assume a fixed sequence of stages or depend on domain experts, which limits their adaptability to heterogeneous materials systems where the optimal curriculum is not known in advance. To lower the barrier to developing MLIPs for non-experts, we propose Lang2MLIP, a multi-agent framework that takes natural-language input and formulates end-to-end MLIP development as a sequential decision-making problem solved by large language models (LLMs). At each step, a decision-making agent observes the current dataset, model, evaluation results, and execution log, and then automatically selects an appropriate action to improve the model. This removes the need for a predefined pipeline and enables the agent to self-correct by revisiting earlier subsystems when new failures arise. We evaluate this approach on a solid electrolyte interphase (SEI) system with multiple components and interfaces. These results suggest that LLM-based multi-agent systems are a promising direction for automating MLIP development and making it more accessible to non-experts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes Lang2MLIP, a multi-agent LLM framework that formulates MLIP development as an end-to-end sequential decision-making task. Given natural-language input, a decision-making agent observes the current dataset, model state, evaluation results, and execution log at each step and autonomously selects actions to improve the potential, enabling self-correction by revisiting subsystems without a fixed pipeline. The approach is evaluated on a solid-electrolyte interphase (SEI) system containing multiple components and interfaces; the authors conclude that LLM-based agentic systems are a promising route to making MLIP development accessible to non-experts.

Significance. If the central claim holds, the work would meaningfully lower the expertise barrier for generating MLIPs on heterogeneous materials systems where optimal curricula are unknown a priori. The removal of rigid stage sequences and the explicit self-correction loop address a recognized limitation of existing automated pipelines. The manuscript does not yet supply the quantitative evidence needed to substantiate these advantages.

major comments (2)
  1. [Evaluation] Evaluation section: The manuscript reports results on a single SEI system but supplies no quantitative metrics (energy/force RMSE, active-learning iteration counts, agent failure rates, or success/failure logs), no baseline comparisons (expert-designed workflows or scripted pipelines), and no error analysis. Because the central claim is that the agents reliably observe states and choose corrective actions without predefined pipelines, the absence of these data leaves the reliability of autonomous correction untested.
  2. [Method] Method section: The observation representation (how dataset, model, and log states are encoded for the LLM), the action space, and the prompting strategy used for decision-making are not specified. These elements are load-bearing for the claim that the framework operates without domain-expert oversight or fixed pipelines.
minor comments (1)
  1. [Abstract] Abstract: The sentence 'These results suggest...' does not indicate what concrete outcomes were observed on the SEI system; adding one or two quantitative highlights would strengthen the summary.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the constructive feedback on our manuscript. We agree that the evaluation and method sections require strengthening to better support the claims regarding autonomous agentic workflows for MLIP development. We address each major comment below and will incorporate revisions in the next version of the manuscript.

read point-by-point responses
  1. Referee: [Evaluation] Evaluation section: The manuscript reports results on a single SEI system but supplies no quantitative metrics (energy/force RMSE, active-learning iteration counts, agent failure rates, or success/failure logs), no baseline comparisons (expert-designed workflows or scripted pipelines), and no error analysis. Because the central claim is that the agents reliably observe states and choose corrective actions without predefined pipelines, the absence of these data leaves the reliability of autonomous correction untested.

    Authors: We agree that the current evaluation is insufficient to substantiate the central claim. The manuscript does not include the requested quantitative metrics, baselines, or error analysis. In the revised version, we will add energy and force RMSE values, active-learning iteration counts, agent failure rates with success/failure logs, comparisons against expert-designed workflows and scripted pipelines, and a detailed error analysis of agent decisions. These additions will directly test the reliability of the autonomous self-correction mechanism. revision: yes

  2. Referee: [Method] Method section: The observation representation (how dataset, model, and log states are encoded for the LLM), the action space, and the prompting strategy used for decision-making are not specified. These elements are load-bearing for the claim that the framework operates without domain-expert oversight or fixed pipelines.

    Authors: We acknowledge that the Method section provides only a high-level overview and omits the load-bearing technical specifications. In the revision, we will expand this section to detail the observation representation (text encoding of dataset statistics, model state, evaluation results, and execution logs), the full discrete action space available to the decision-making agent, and the exact prompting strategies (including templates and few-shot examples) used to enable state observation and action selection without predefined pipelines or domain-expert intervention. revision: yes
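
A plausible shape for such an observation encoding, assuming JSON serialization with log truncation; the field names and the 2000-character limit are illustrative, not taken from the paper:

```python
import json

def encode_observation(dataset_stats, model_state, eval_results, log_tail,
                       max_log_chars=2000):
    """Serialize workflow state into a compact text block for the LLM.

    Field names and the truncation limit are hypothetical stand-ins for
    whatever encoding the revised Method section will specify.
    """
    return json.dumps({
        "dataset": dataset_stats,      # e.g. {"n_structures": 1200, ...}
        "model": model_state,          # e.g. {"epochs": 50, "converged": True}
        "evaluation": eval_results,    # e.g. {"force_rmse_eV_per_A": 0.08}
        "log_tail": log_tail[-max_log_chars:],  # keep only the recent log
    }, indent=2)
```

Whatever the actual encoding, the referee's point stands: it is load-bearing, because the agent's action choices can only be as reliable as the state text it reads.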

Circularity Check

0 steps flagged

No circularity: framework proposal with no derivations or fitted parameters

full rationale

The paper presents Lang2MLIP as a multi-agent LLM framework for end-to-end MLIP development. No equations, parameters, predictions, or derivation steps appear in the abstract or description. The central claim is a methodological proposal for autonomous decision-making by agents observing states, without any reduction to prior fitted values, self-citations, or ansatzes. Evaluation is described at a high level on one SEI system but contains no quantitative self-referential claims that could be circular. This is a standard non-circular framework description with independent content.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The proposal rests on the domain assumption that LLMs possess sufficient reasoning to manage iterative MLIP workflows for complex materials without human intervention or fixed curricula.

axioms (1)
  • domain assumption Large language models can observe current dataset, model, evaluation results, and execution logs and then select appropriate actions to improve the model in an open-ended workflow.
    This is the core mechanism stated in the abstract for removing predefined pipelines.
invented entities (1)
  • Lang2MLIP multi-agent framework (no independent evidence)
    purpose: To formulate and solve end-to-end MLIP development as a sequential decision-making problem from natural language input
    Newly introduced system whose effectiveness is only suggested by evaluation on one SEI system.

pith-pipeline@v0.9.0 · 5531 in / 1267 out tokens · 41242 ms · 2026-05-15T02:13:51.429578+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

134 extracted references · 134 canonical work pages · 4 internal anchors
