pith. machine review for the scientific record. sign in

arxiv: 2605.14179 · v1 · submitted 2026-05-13 · ❄️ cond-mat.mtrl-sci

Recognition: no theorem link

A Neural-Network Framework to Learn History-Dependent Constitutive Laws and Identifiability of Internal Variables

Authors on Pith no claims yet

Pith reviewed 2026-05-15 01:48 UTC · model grok-4.3

classification ❄️ cond-mat.mtrl-sci
keywords constitutive lawsneural networksinternal variablesthermodynamicspolycrystalline materialshistory-dependent modelsidentifiabilitymagnesium
0
0 comments X

The pith

Neural networks can learn history-dependent constitutive laws for materials while guaranteeing consistency with the second law of thermodynamics and stability under extreme strain.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a neural-network framework that learns constitutive laws depending on deformation history by embedding causality and energetic structure directly into the model architecture. This ensures the learned laws respect the second law of thermodynamics, remain stable at large strains, and support the existence of solutions to the governing equations. The formulation also establishes that the internal variables recovered from data are unique up to a linear transformation, clarifying the equivalence classes of possible surrogate models. When tested on the Taylor-averaged response of a polycrystalline magnesium unit cell, the approach reaches 2 percent relative error in predictions.

Core claim

A causal and energetic formulation of a neural network is used to learn history-dependent constitutive laws that are consistent with the second law of thermodynamics, material stability under extreme applied strain, and the mathematical conditions for existence of solutions. The internal variables learned from data are shown to be unique up to a linear transform. The framework is applied to the Taylor-averaged response of a polycrystalline magnesium unit cell and achieves 2 percent relative error.

What carries the argument

Causal and energetic neural-network formulation of history-dependent constitutive laws, with internal variables identifiable uniquely up to linear transformation.

If this is right

  • Learned models can be inserted directly into finite-element codes without violating thermodynamic consistency or stability.
  • Equivalent surrogate models differ only by linear redefinitions of their internal variables, reducing representational ambiguity.
  • The framework enables reliable acceleration of multiscale simulations such as FE2 by replacing expensive microscale solves.
  • The identifiability result applies to any data-driven history-dependent model trained under the same causal-energetic constraints.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The linear uniqueness could be exploited by adding simple regularizers to select a canonical set of internal variables during training.
  • The same formulation might apply directly to other history-dependent phenomena such as viscoelasticity or rate-dependent plasticity.
  • Testing the framework on noisy experimental stress-strain curves rather than simulated unit-cell data would reveal its robustness to measurement error.

Load-bearing premise

A neural network can be formulated in a causal and energetic manner to guarantee consistency with the second law of thermodynamics, stability under extreme strain, and existence of solutions to the governing equations while retaining sufficient expressiveness for real material data.

What would settle it

Independent trainings on the same data set produce internal variables that cannot be related by any linear transformation, or the trained model produces negative dissipation in some strain paths.

Figures

Figures reproduced from arXiv: 2605.14179 by Andrew Stuart, Kaushik Bhattacharya, Lianghao Cao, Mayank Raj.

Figure 3.1
Figure 3.1. Figure 3.1: Visualization of G1 for different parametric values of xm. 6 [PITH_FULL_IMAGE:figures/full_fig_p006_3_1.png] view at source ↗
Figure 3.2
Figure 3.2. Figure 3.2: Effect of different parameters used in G2 on its gorwth. G2 is used to enforce the growth of the potential outside the support of the data. Instead of adding the Gi , i = 1, 2, to the NNs, it is computationally beneficial and more accurate to add their derivatives to the appropriate stresses derived from the NNs; this enables us to bypass the need for automatic differentiation. To this end, the derivativ… view at source ↗
Figure 4.1
Figure 4.1. Figure 4.1: A sample from the dataset. There are six components of the deformation gradient tensor [PITH_FULL_IMAGE:figures/full_fig_p010_4_1.png] view at source ↗
Figure 4.2
Figure 4.2. Figure 4.2: Distribution of different components of F, P† and P † d . There is more disparity in distributions of components of P† as compared to those of dev Pe† . 4.2 Polyconvex Model In this section, we present results of the NN model described before in Section 3.3 that, by design, is polyconvex. Section 4.2.1 shows that six internal variables are sufficient to capture the history dependency of stress on strain.… view at source ↗
Figure 4.3
Figure 4.3. Figure 4.3: The use of six internal variables is the optimal choice for maximum prediction accuracy. [PITH_FULL_IMAGE:figures/full_fig_p012_4_3.png] view at source ↗
Figure 4.4
Figure 4.4. Figure 4.4: Relative error in stress. (Left) Training without decomposition of [PITH_FULL_IMAGE:figures/full_fig_p012_4_4.png] view at source ↗
Figure 4.5
Figure 4.5. Figure 4.5: Visual comparison of different components of true and predicted stress. The sample corresponds [PITH_FULL_IMAGE:figures/full_fig_p013_4_5.png] view at source ↗
Figure 4.6
Figure 4.6. Figure 4.6: The hydrostatic stress with and without the growth imposed on the NN model for a fixed value [PITH_FULL_IMAGE:figures/full_fig_p013_4_6.png] view at source ↗
Figure 4.7
Figure 4.7. Figure 4.7: Increase in stress as shear strain is applied in the 1-2 plane, along the pure shear direction and [PITH_FULL_IMAGE:figures/full_fig_p014_4_7.png] view at source ↗
Figure 4.8
Figure 4.8. Figure 4.8: Increase in stress as shear strain is applied in the 1-2 plane, along the simple shear direction and [PITH_FULL_IMAGE:figures/full_fig_p015_4_8.png] view at source ↗
Figure 4.9
Figure 4.9. Figure 4.9: Visualization of the linear fit between two sets of internal variables. [PITH_FULL_IMAGE:figures/full_fig_p019_4_9.png] view at source ↗
Figure 4.10
Figure 4.10. Figure 4.10: Error in the linear fit between two sets of internal variables obtained by independently training [PITH_FULL_IMAGE:figures/full_fig_p020_4_10.png] view at source ↗
read the original abstract

The identification of constitutive laws is ubiquitous in engineering: in modeling of materials where experimental data are fitted to mathematical models or learning surrogate models to beat the FE\textsuperscript{2} computational cost of multiscale numerical simulations. However, these models of constitutive laws, unless equipped with a potential formulation, are not necessarily consistent with (a) the second law of thermodynamics; (b) stability of the material under extreme applied strain; and (c) the mathematical theory underpinning the existence of solutions of the governing equation. In this work, we present a causal and energetic formulation, consistent with aforementioned properties, of learning a history-dependent constitutive law. This characterization of the class of internal variables sheds light on the equivalence class of equivalent surrogate models for the constitutive law. We show that the internal variables that are learned from the data are unique up to a linear transform. The framework is deployed to learn the Taylor-averaged response of a polycrystalline magnesium unit cell. We achieve 2\% relative error in the prediction of the Taylor-averaged response.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript presents a neural-network framework for learning history-dependent constitutive laws formulated in a causal and energetic manner. This ensures consistency with the second law of thermodynamics, stability under extreme strains, and existence of solutions to the governing equations. The authors show that learned internal variables are unique up to a linear transformation and demonstrate the method on the Taylor-averaged response of a polycrystalline magnesium unit cell, reporting 2% relative error.

Significance. If the architectural constraints preserve expressiveness while enforcing physical properties by construction, the work offers a principled route to thermodynamically consistent surrogate models for multiscale simulations. The identifiability result clarifies equivalence classes of such models, and the magnesium application indicates practical utility. Strengths include the by-construction guarantees and the explicit characterization of internal-variable equivalence.

major comments (3)
  1. [Abstract and §3] Abstract and §3: The claim that internal variables are unique up to linear transform is central but lacks a derivation or proof sketch showing that the causal/energetic constraints do not collapse the representable function class; without this, the identifiability statement cannot be assessed as general rather than architecture-specific.
  2. [§4, Eq. (12)] §4, Eq. (12) or equivalent: The energetic formulation is asserted to satisfy the second law and stability by construction, yet no explicit mechanism (e.g., how the network parametrizes the dissipation potential or enforces the inequality for arbitrary histories) is provided, leaving open whether the restrictions retain sufficient capacity for real material data beyond the reported magnesium case.
  3. [§5, Table 1] §5, Table 1 or results section: The 2% relative error on the Taylor-averaged magnesium response is given without error bars, cross-validation across multiple strain paths, or comparison to an unconstrained baseline network, so it is unclear whether the physical constraints degrade accuracy on more complex history-dependent responses.
minor comments (2)
  1. [Abstract] The abstract references 'FE² computational cost' but the superscript formatting is inconsistent with standard LaTeX usage in the text.
  2. [Methods] Notation for internal variables and the linear transform should be introduced with a clear definition early in the methods section to aid readability.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and will incorporate revisions to strengthen the manuscript.

read point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3: The claim that internal variables are unique up to linear transform is central but lacks a derivation or proof sketch showing that the causal/energetic constraints do not collapse the representable function class; without this, the identifiability statement cannot be assessed as general rather than architecture-specific.

    Authors: We agree that a proof sketch is needed for clarity. In the revision we will add a concise derivation in §3 showing that the causal energetic structure preserves the full representable class of history-dependent maps while enforcing uniqueness of internal variables up to linear transformation. The argument relies on the variational structure and does not impose additional restrictions beyond thermodynamic consistency. revision: yes

  2. Referee: [§4, Eq. (12)] §4, Eq. (12) or equivalent: The energetic formulation is asserted to satisfy the second law and stability by construction, yet no explicit mechanism (e.g., how the network parametrizes the dissipation potential or enforces the inequality for arbitrary histories) is provided, leaving open whether the restrictions retain sufficient capacity for real material data beyond the reported magnesium case.

    Authors: We will expand §4 to detail the parametrization: the dissipation potential is output by a feed-forward network whose final activation is softplus (ensuring non-negativity), and the dissipation inequality is satisfied identically by the variational inequality of the energetic formulation for any input history. We will also add a short discussion confirming that this construction retains sufficient expressivity, as evidenced by the magnesium results and the universal-approximation properties of the chosen architecture. revision: yes

  3. Referee: [§5, Table 1] §5, Table 1 or results section: The 2% relative error on the Taylor-averaged magnesium response is given without error bars, cross-validation across multiple strain paths, or comparison to an unconstrained baseline network, so it is unclear whether the physical constraints degrade accuracy on more complex history-dependent responses.

    Authors: We accept this point. The revised results section will report error bars obtained from five independent training runs, include cross-validation on additional strain paths (e.g., non-proportional loading), and add a direct comparison against an unconstrained network of identical depth and width. These additions will quantify any accuracy trade-off introduced by the thermodynamic constraints. revision: yes

Circularity Check

0 steps flagged

No circularity; derivation chain is self-contained

full rationale

The paper formulates a causal and energetic neural-network architecture to enforce thermodynamic consistency, stability, and existence of solutions by construction, then derives that learned internal variables are unique up to linear transformation from the structure of that class. The 2% error on the magnesium Taylor-averaged response is reported as an empirical validation of retained expressiveness rather than a quantity forced by the fit itself. No load-bearing step reduces by the paper's own equations to a fitted parameter renamed as prediction, a self-citation chain, or an ansatz imported without independent justification. The uniqueness statement is presented as following from the mathematical characterization of the internal-variable equivalence class, independent of the specific data set used for training.

Axiom & Free-Parameter Ledger

0 free parameters · 3 axioms · 0 invented entities

The framework rests on standard domain assumptions from continuum mechanics without introducing new physical entities or free parameters beyond typical neural network training.

axioms (3)
  • domain assumption The constitutive law must satisfy the second law of thermodynamics
    Invoked explicitly as a required consistency property of the learned model.
  • domain assumption Stability of the material under extreme applied strain
    Stated as a necessary condition for the formulation.
  • domain assumption Existence of solutions to the governing equations
    Listed as a mathematical property the model must respect.

pith-pipeline@v0.9.0 · 5491 in / 1301 out tokens · 51097 ms · 2026-05-15T01:48:32.783266+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

37 extracted references · 37 canonical work pages · 1 internal anchor

  1. [1]

    Zico Kolter

    Brandon Amos, Lei Xu, and J. Zico Kolter. Input convex neural networks, 2017

  2. [2]

    A mechanics-informed artificial neural network approach in data-driven constitutive modeling.International Journal for Numerical Methods in Engineering, 123 20 (12):2738–2759, 2022

    Faisal As’ad, Philip Avery, and Charbel Farhat. A mechanics-informed artificial neural network approach in data-driven constitutive modeling.International Journal for Numerical Methods in Engineering, 123 20 (12):2738–2759, 2022. doi: https://doi.org/10.1002/nme.6957. URLhttps://onlinelibrary.wiley. com/doi/abs/10.1002/nme.6957

  3. [3]

    A mechanics-informed deep learning framework for data-driven non- linear viscoelasticity.Computer Methods in Applied Mechanics and Engineering, 417:116463, 2023

    Faisal As’ad and Charbel Farhat. A mechanics-informed deep learning framework for data-driven non- linear viscoelasticity.Computer Methods in Applied Mechanics and Engineering, 417:116463, 2023. ISSN 0045-7825. doi: https://doi.org/10.1016/j.cma.2023.116463. URLhttps://www.sciencedirect.com/ science/article/pii/S004578252300587X

  4. [4]

    Faisal As’ad and Charbel Farhat. A staggered training framework for mechanics-informed neu- ral networks in tractable multiscale homogenization with application to woven fabrics.Computer Methods in Applied Mechanics and Engineering, 452:118666, 2026. ISSN 0045-7825. doi: https:// doi.org/10.1016/j.cma.2025.118666. URLhttps://www.sciencedirect.com/science/...

  5. [5]

    J. M. Ball. Constitutive inequalities and existence theorems in nonlinear elastostat- ics.Nonlinear Analysis and Mechanics, Heriot-Watt Symposium Vol. 1, 1977. URL https://people.maths.ox.ac.uk/~ball/Articles%20in%20Conference%20Proceedings%20and% 20Books/Ball%201977%20Heriot-Watt%20Symposium.pdf

  6. [6]

    John M. Ball. Convexity conditions and existence theorems in nonlinear elasticity.Archive for Rational Mechanics and Analysis, 63(4):337–403, Dec 1976. ISSN 1432-0673. doi: 10.1007/BF00279992. URL https://doi.org/10.1007/BF00279992

  7. [7]

    American Mathematical Soc., 2011

    Alain Bensoussan, Jacques-Louis Lions, and George Papanicolaou.Asymptotic analysis for periodic structures, volume 374. American Mathematical Soc., 2011

  8. [8]

    Learning markovian homogenized models in viscoelasticity.Multiscale Modeling & Simulation, 21(2):641–679, 2023

    Kaushik Bhattacharya, Burigede Liu, Andrew Stuart, and Margaret Trautner. Learning markovian homogenized models in viscoelasticity.Multiscale Modeling & Simulation, 21(2):641–679, 2023. doi: 10.1137/22M1499200. URLhttps://doi.org/10.1137/22M1499200

  9. [9]

    Brenner and P

    R. Brenner and P. Suquet. Overall response of viscoelastic composites and polycrystals: exact asymptotic relations and approximate estimates.International Journal of Solids and Structures, 50 (10):1824–1838, 2013. ISSN 0020-7683. doi: https://doi.org/10.1016/j.ijsolstr.2013.02.011. URL https://www.sciencedirect.com/science/article/pii/S0020768313000760

  10. [10]

    Coleman and Morton E

    Bernard D. Coleman and Morton E. Gurtin. Thermodynamics with internal state variables.The Journal of Chemical Physics, 47(2):597–613, 07 1967. ISSN 0021-9606. doi: 10.1063/1.1711937. URL https://doi.org/10.1063/1.1711937

  11. [11]

    Convex neural networks learn generalized standard material models.Journal of the Mechanics and Physics of Solids, 200:106103, 2025

    Moritz Flaschel, Paul Steinmann, Laura De Lorenzis, and Ellen Kuhl. Convex neural networks learn generalized standard material models.Journal of the Mechanics and Physics of Solids, 200:106103, 2025. ISSN 0022-5096. doi: https://doi.org/10.1016/j.jmps.2025.106103. URLhttps://www.sciencedirect. com/science/article/pii/S0022509625000791

  12. [12]

    Existence results for a class of rate-independent material models with nonconvex elastic energies.Journal f¨ ur die reine und angewandte Mathematik, 2006(595):55–91,

    Gilles Francfort and Alexander Mielke. Existence results for a class of rate-independent material models with nonconvex elastic energies.Journal f¨ ur die reine und angewandte Mathematik, 2006(595):55–91,

  13. [13]

    URLhttps://doi.org/10.1515/CRELLE.2006.044

    doi: doi:10.1515/CRELLE.2006.044. URLhttps://doi.org/10.1515/CRELLE.2006.044

  14. [14]

    Francfort and Pierre M

    Gilles A. Francfort and Pierre M. Suquet. Homogenization and mechanical dissipation in thermovis- coelasticity.Archive for Rational Mechanics and Analysis, 96(3):265–293, Sep 1986. ISSN 1432-0673. doi: 10.1007/BF00251909. URLhttps://doi.org/10.1007/BF00251909

  15. [15]

    Horstemeyer and Douglas J

    Mark F. Horstemeyer and Douglas J. Bammann. Historical review of internal state variable theory for inelasticity.International Journal of Plasticity, 26(9):1310–1334, 2010. ISSN 0749-6419. doi: https:// doi.org/10.1016/j.ijplas.2010.06.005. URLhttps://www.sciencedirect.com/science/article/pii/ S0749641910000847. Special Issue In Honor of David L. McDowell

  16. [16]

    T. J. R. Hughes J. C. Simo.Computational Inelasticity. Springer, May 2006. doi: 10.1007/b98904. URLhttps://doi.org/10.1007/b98904. 21

  17. [17]

    A learning-based multiscale model for reactive flow in porous media.Water Resources Research, 60(9):e2023WR036303, 2024

    Mina Karimi and Kaushik Bhattacharya. A learning-based multiscale model for reactive flow in porous media.Water Resources Research, 60(9):e2023WR036303, 2024. doi: https://doi. org/10.1029/2023WR036303. URLhttps://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/ 2023WR036303. e2023WR036303 2023WR036303

  18. [18]

    Adam: A Method for Stochastic Optimization

    Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017. URLhttps: //arxiv.org/abs/1412.6980

  19. [19]

    Klein, Mauricio Fern´ andez, Robert J

    Dominik K. Klein, Mauricio Fern´ andez, Robert J. Martin, Patrizio Neff, and Oliver Weeger. Polyconvex anisotropic hyperelasticity with neural networks.Journal of the Mechanics and Physics of Solids, 159:104703, 2022. ISSN 0022-5096. doi: https://doi.org/10.1016/j.jmps.2021.104703. URLhttps: //www.sciencedirect.com/science/article/pii/S0022509621003215

  20. [20]

    Cambridge university press, 2000

    U Fred Kocks, Carlos Norberto Tom´ e, and H-R Wenk.Texture and anisotropy: preferred orientations in polycrystals and their effect on materials properties. Cambridge university press, 2000

  21. [21]

    Multiscale modeling of materials: Computing, data science, uncertainty and goal- oriented optimization.Mechanics of Materials, 165:104156, 2022

    Nikola Kovachki, Burigede Liu, Xingsheng Sun, Hao Zhou, Kaushik Bhattacharya, Michael Ortiz, and Andrew Stuart. Multiscale modeling of materials: Computing, data science, uncertainty and goal- oriented optimization.Mechanics of Materials, 165:104156, 2022. ISSN 0167-6636. doi: https://doi. org/10.1016/j.mechmat.2021.104156. URLhttps://www.sciencedirect.co...

  22. [22]

    Learning chaotic dynamics in dissipative systems.Advances in Neural Information Processing Systems, 35:16768–16781, 2022

    Zongyi Li, Miguel Liu-Schiaffini, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Learning chaotic dynamics in dissipative systems.Advances in Neural Information Processing Systems, 35:16768–16781, 2022

  23. [23]

    Stuart, and Kaushik Bhattacharya

    Burigede Liu, Nikola Kovachki, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Andrew M. Stuart, and Kaushik Bhattacharya. A learning-based multiscale method and its application to inelastic impact problems.Journal of the Mechanics and Physics of Solids, 158:104668, 2022. ISSN 0022-5096. doi: https://doi.org/10.1016/j.jmps.2021.104668. URLhttps://www...

  24. [24]

    Stuart, and Kaushik Bhattacharya

    Burigede Liu, Eric Ocegueda, Margaret Trautner, Andrew M. Stuart, and Kaushik Bhattacharya. Learn- ing macroscopic internal variables and history dependence from microscopic models.Journal of the Me- chanics and Physics of Solids, 178:105329, 2023. ISSN 0022-5096. doi: https://doi.org/10.1016/j.jmps. 2023.105329. URLhttps://www.sciencedirect.com/science/a...

  25. [25]

    Existence results for energetic models for rate-independent systems.Calculus of Variations and Partial Differential Equations, 22(1):73–99, Jan 2005

    Andreas Mainik and Alexander Mielke. Existence results for energetic models for rate-independent systems.Calculus of Variations and Partial Differential Equations, 22(1):73–99, Jan 2005. ISSN 1432-

  26. [26]

    URLhttps://doi.org/10.1007/s00526-004-0267-8

    doi: 10.1007/s00526-004-0267-8. URLhttps://doi.org/10.1007/s00526-004-0267-8

  27. [27]

    A. Mielke. Energetic formulation of multiplicative elasto-plasticity using dissipation distances.Con- tinuum Mechanics and Thermodynamics, 15(4):351–382, Aug 2003. ISSN 1432-0959. doi: 10.1007/ s00161-003-0120-x. URLhttps://doi.org/10.1007/s00161-003-0120-x

  28. [28]

    Existence of minimizers in incremental elasto-plasticity with finite strains.SIAM Journal on Mathematical Analysis, 36(2):384–404, 2004

    Alexander Mielke. Existence of minimizers in incremental elasto-plasticity with finite strains.SIAM Journal on Mathematical Analysis, 36(2):384–404, 2004. doi: 10.1137/S0036141003429906. URLhttps: //doi.org/10.1137/S0036141003429906

  29. [29]

    Global existence results for viscoplasticity at finite strain.Archive for Rational Mechanics and Analysis, 227(1):423–475, Jan 2018

    Alexander Mielke, Riccarda Rossi, and Giuseppe Savar´ e. Global existence results for viscoplasticity at finite strain.Archive for Rational Mechanics and Analysis, 227(1):423–475, Jan 2018. ISSN 1432-0673. doi: 10.1007/s00205-017-1164-6. URLhttps://doi.org/10.1007/s00205-017-1164-6

  30. [30]

    Ortiz and L

    M. Ortiz and L. Stainier. The variational formulation of viscoplastic constitutive updates.Computer Methods in Applied Mechanics and Engineering, 171(3):419–444, 1999. ISSN 0045-7825. doi: https:// doi.org/10.1016/S0045-7825(98)00219-9. URLhttps://www.sciencedirect.com/science/article/ pii/S0045782598002199. 22

  31. [31]

    Pytorch: An imperative style, high-performance deep learning library

    Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-perfo...

  32. [32]

    Springer Science & Business Media, 2008

    Grigorios A Pavliotis and Andrew Stuart.Multiscale methods: averaging and homogenization, volume 53. Springer Science & Business Media, 2008

  33. [33]

    Approximation theory of the mlp model in neural networks.Acta numerica, 8:143–195, 1999

    Allan Pinkus. Approximation theory of the mlp model in neural networks.Acta numerica, 8:143–195, 1999

  34. [34]

    J.R. Rice. Inelastic constitutive relations for solids: An internal-variable theory and its application to metal plasticity.Journal of the Mechanics and Physics of Solids, 19(6):433–455, 1971. ISSN 0022-

  35. [35]

    URLhttps://www.sciencedirect.com/ science/article/pii/002250967190010X

    doi: https://doi.org/10.1016/0022-5096(71)90010-X. URLhttps://www.sciencedirect.com/ science/article/pii/002250967190010X

  36. [36]

    Polyconvex energies for trigonal, tetragonal and cubic symmetry groups

    J¨ org Schr¨ oder, Patrizio Neff, and Vera Ebbing. Polyconvex energies for trigonal, tetragonal and cubic symmetry groups. In Klaus Hackl, editor,IUTAM Symposium on Variational Concepts with Applications to the Mechanics of Materials, pages 221–232, Dordrecht, 2010. Springer Netherlands. ISBN 978-90- 481-9195-6

  37. [37]

    Springer, 1998

    Juan C Simo and Thomas JR Hughes.Computational inelasticity. Springer, 1998. Appendix A Proofs Theorem 4.6.Given a pair of functions (W, bD) satisfying Assumption 4.2. For anyF∈L ∞, consider S(t)andξ(t)given by(4.5). Then, for everyA∈GL(n,R), there exist functions fW:R m ×R n →R +,(f,ep)7→ fW(f,ep); eD:R n →R +,eq7→ eD(eq) with eS:R m ×R n →R m and eF:R m...