pith. machine review for the scientific record.

arxiv: 2605.05416 · v1 · submitted 2026-05-06 · 💻 cs.CY

Recognition: unknown

From Cradle to Cloud: A Life Cycle Review of AI's Environmental Footprint

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 15:39 UTC · model grok-4.3

classification 💻 cs.CY
keywords AI environmental footprint · life cycle assessment · carbon emissions · literature review · sustainable AI · water usage · embodied emissions · environmental reporting

The pith

A life cycle review of AI finds inconsistent stage definitions and reporting limited to coarse CO2e metrics

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reviews scientific studies of the environmental effects of AI systems across their entire existence. It applies a structured eight-stage model covering everything from building hardware to disposing of equipment. The review shows that discussions of sustainable AI use the life cycle idea inconsistently: some papers look only at training and running models, while others include data collection and physical infrastructure. Most reports estimate carbon emissions from rough proxy calculations and rarely measure water consumption, material use, or other kinds of environmental harm. As a result, findings are hard to compare or combine, and decisions about reducing AI's overall footprint rest on incomplete evidence, leading the authors to recommend clearer measurement and reporting standards.

Core claim

Using an eight-stage life cycle framework that includes hardware manufacturing, infrastructure construction, data gathering and preprocessing, model experimentation, training, post-training adaptation, deployment, inference, and end-of-life, we map the coverage in existing literature. We find that life cycle language in AI is common but ill-defined, with narrow focuses on training and inference in some cases versus broader inclusion of data and embodied emissions in others. Reporting depends mainly on CO2e estimates from coarse proxies, with scant attention to water usage, materials manufacturing, and full multi-impact assessments. This makes comparison and aggregation difficult, prompting a proposal of measurement and reporting approaches to support more comprehensive, comparable, and policy-relevant assessments.

What carries the argument

An eight-stage life cycle framework for AI systems, which organizes impacts from hardware manufacturing through to end-of-life and is used to systematically review what stages, metrics, and methods are covered in the literature
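The framework is, in effect, a coding scheme applied to each reviewed study. A minimal sketch of how such stage-coverage tagging might work (the stage names come from the paper; the `Stage` enum and `coverage` helper are illustrative inventions, not the authors' tooling):

```python
# Illustrative sketch, not the authors' code: the eight life cycle stages
# as an ordered vocabulary for tagging which stages a reviewed study covers.
from enum import Enum

class Stage(Enum):
    HARDWARE_MANUFACTURING = 1
    INFRASTRUCTURE_CONSTRUCTION = 2
    DATA_GATHERING_PREPROCESSING = 3
    MODEL_EXPERIMENTATION = 4
    TRAINING = 5
    POST_TRAINING_ADAPTATION = 6
    DEPLOYMENT_INFERENCE = 7
    END_OF_LIFE = 8

def coverage(stages_covered):
    """Fraction of the eight stages a study reports on."""
    return len(set(stages_covered)) / len(Stage)

# A study that measures only training and inference covers 2 of 8 stages.
narrow_study = [Stage.TRAINING, Stage.DEPLOYMENT_INFERENCE]
print(coverage(narrow_study))  # 0.25
```

Tagging every reviewed paper this way is what lets the authors quantify how narrow most "life cycle" claims really are.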

If this is right

  • Studies using different definitions of the AI life cycle cannot be easily compared or combined.
  • Predominant use of approximate CO2e calculations overlooks significant factors like water consumption and material extraction.
  • Without multi-impact assessments, the true environmental cost of AI remains underestimated.
  • Standardized measurement approaches would allow for more accurate and policy-relevant evaluations of AI systems.
  • Adopting the proposed reporting practices could improve transparency in the AI industry.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Adopting a standardized framework might encourage AI developers to account for impacts earlier in the design process.
  • Extending this review to include industry reports could reveal gaps between academic and practical assessments.
  • Policymakers could use such structured reviews to set requirements for environmental disclosures in AI projects.
  • Similar life cycle analyses applied to other technologies could provide benchmarks for AI's relative impact.

Load-bearing premise

That the literature search captured a representative sample of relevant studies and that the chosen eight-stage framework provides a complete and unbiased structure for mapping AI environmental impacts

What would settle it

Identifying a substantial body of studies that provide detailed assessments of water usage, material manufacturing, and multiple environmental impacts using consistent life cycle definitions would contradict the reported limitations in current practices

Figures

Figures reproduced from arXiv: 2605.05416 by Katherine Lambert, Sasha Luccioni.

Figure 1: The 8 AI life cycle stages identified in our analysis.
Figure 2: Number of papers included in the review, by publication year.
read the original abstract

The rapid growth in the deployment and scale of modern artificial intelligence (AI) systems has intensified concerns regarding their environmental impacts, yet we still lack a comprehensive view of where and how these impacts arise across the AI life cycle. In order to shed more light on this question, we conduct a structured, comprehensive literature review of scientific papers and technical reports that examine different aspects of AI's environmental footprint. Using an eight-stage life cycle framework, spanning hardware manufacturing, infrastructure construction, data gathering and preprocessing, model experimentation, training, post-training adaptation, deployment, inference, and end-of-life, we systematically map which stages are covered, the metrics reported at each stage, and the methodological choices made. We then draw conclusions about the information we gathered, finding that although life cycle language is increasingly common in discussions of "green" or "sustainable" AI, its definition remains unclear -- while some studies focus solely on model training and inference, others encompass broader measurements such as data collection, infrastructure, and embodied emissions. We also find that reporting practices rely predominantly on CO2e estimates derived from coarse proxies, with limited attention dedicated to water usage, materials manufacturing, and multi-impact life cycle assessment, making it difficult to compare and aggregate true results. Building on these findings, we propose measurement and reporting approaches to support more comprehensive, comparable and policy-relevant assessments of AI's environmental impacts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper conducts a structured literature review of AI's environmental footprint using a custom eight-stage life cycle framework (hardware manufacturing, infrastructure construction, data gathering/preprocessing, model experimentation, training, post-training adaptation, deployment/inference, and end-of-life). It maps stage coverage, reported metrics, and methodological choices across reviewed studies, concluding that 'life cycle' definitions are inconsistent (some studies limit to training/inference while others include data collection and embodied emissions), reporting relies predominantly on coarse CO2e proxies, and there is limited attention to water usage, materials manufacturing, and multi-impact LCA. The authors propose improved measurement and reporting approaches for more comprehensive assessments.

Significance. If the mapping is representative, the work is significant for synthesizing current practices in AI sustainability research and identifying actionable gaps in definitional clarity and impact coverage. It could help standardize reporting, improve comparability of results, and support policy-relevant assessments of AI's full environmental costs, particularly as AI deployment scales.

major comments (2)
  1. [Methods] Methods section: The manuscript provides no details on the literature search strategy (databases, search strings, date ranges), inclusion/exclusion criteria, total number of papers screened or included, or how the eight-stage framework was applied (e.g., assignment rules or inter-rater checks). This directly undermines the central claims about field-wide patterns in proxy usage and limited multi-impact coverage, as the observed gaps could be artifacts of sampling or categorization rather than representative findings.
  2. [Framework] Framework description (likely §2 or §3): The eight-stage partitioning is presented without justification for its completeness or neutrality, nor discussion of potential overlaps or omissions (e.g., how embodied emissions in hardware are distinguished from infrastructure). This is load-bearing for the gap analysis, as an ad-hoc taxonomy could systematically under-count water or materials impacts.
minor comments (2)
  1. [Abstract] Abstract: While it summarizes findings, it does not mention the number of papers reviewed or key methodological parameters, which would strengthen the claim of a 'comprehensive' review.
  2. [Results] Notation and terminology: 'CO2e estimates derived from coarse proxies' is used repeatedly but without a table or section explicitly listing the proxies encountered in the literature (e.g., energy consumption estimates vs. direct measurements).
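The "coarse proxies" at issue typically chain an energy estimate (hardware power × time × data-center overhead) with a grid carbon-intensity factor. A hedged illustration of that proxy chain, with invented placeholder numbers rather than figures from any reviewed study:

```python
# Illustrative only: the coarse proxy chain many studies use for CO2e.
# All numeric inputs below are made-up placeholders, not paper data.
def co2e_from_proxies(gpu_power_kw, gpu_hours, pue, grid_kgco2e_per_kwh):
    """Estimate operational emissions: IT energy x overhead x grid intensity."""
    energy_kwh = gpu_power_kw * gpu_hours * pue   # PUE scales IT energy to facility energy
    return energy_kwh * grid_kgco2e_per_kwh       # operational CO2e only

# e.g. 0.7 kW per GPU, 10,000 GPU-hours, PUE 1.2, grid at 0.4 kgCO2e/kWh:
est = co2e_from_proxies(0.7, 10_000, 1.2, 0.4)
print(round(est))  # 3360 (kg CO2e)
```

Note what never enters this formula: water, materials, and embodied emissions, which is precisely the gap the review documents.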

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments, which identify key areas where additional transparency will strengthen the manuscript. We address each major comment below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Methods] Methods section: The manuscript provides no details on the literature search strategy (databases, search strings, date ranges), inclusion/exclusion criteria, total number of papers screened or included, or how the eight-stage framework was applied (e.g., assignment rules or inter-rater checks). This directly undermines the central claims about field-wide patterns in proxy usage and limited multi-impact coverage, as the observed gaps could be artifacts of sampling or categorization rather than representative findings.

    Authors: We agree that the current manuscript omits these methodological details, which are necessary to substantiate the representativeness of the mapped patterns. In the revised version we will add a dedicated Methods subsection that specifies: the databases and repositories searched, the precise search strings and Boolean combinations employed, the date range of the search, explicit inclusion/exclusion criteria, the total number of records screened and the final number included, and a PRISMA-style flow diagram. We will also describe the procedure used to assign studies to the eight stages, including any double-coding or inter-rater checks performed. These additions will allow readers to evaluate whether the reported inconsistencies and coverage gaps reflect field-wide practices rather than sampling artifacts. revision: yes
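A PRISMA-style flow of the kind promised here is simple bookkeeping over screening counts. A sketch with placeholder numbers (the manuscript reports no counts, so every value below is invented):

```python
# Hypothetical PRISMA-style flow tally; all counts are invented placeholders.
def prisma_flow(identified, duplicates, title_abstract_excluded, fulltext_excluded):
    """Derive the standard PRISMA counts from exclusion totals at each step."""
    screened = identified - duplicates            # after de-duplication
    fulltext = screened - title_abstract_excluded # assessed in full text
    included = fulltext - fulltext_excluded       # final review corpus
    return {"identified": identified, "screened": screened,
            "full_text_assessed": fulltext, "included": included}

flow = prisma_flow(identified=500, duplicates=120,
                   title_abstract_excluded=250, fulltext_excluded=30)
print(flow["included"])  # 100
```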

  2. Referee: [Framework] Framework description (likely §2 or §3): The eight-stage partitioning is presented without justification for its completeness or neutrality, nor discussion of potential overlaps or omissions (e.g., how embodied emissions in hardware are distinguished from infrastructure). This is load-bearing for the gap analysis, as an ad-hoc taxonomy could systematically under-count water or materials impacts.

    Authors: The eight-stage framework was derived by extending established ICT life-cycle assessment boundaries to capture AI-specific activities (model experimentation, post-training adaptation) while retaining core stages such as hardware manufacturing and end-of-life. We acknowledge that the manuscript does not supply explicit justification for stage completeness, neutrality, or handling of overlaps. In revision we will expand the framework section to: (i) cite prior LCA literature that informed each stage boundary, (ii) clarify the distinction between hardware manufacturing (chip fabrication and component assembly) and infrastructure construction (data-center building and power systems), (iii) discuss potential overlaps and the rules used to avoid double-counting, and (iv) note that any under-representation of water or materials impacts is observed in the reviewed studies themselves rather than imposed by the taxonomy. This elaboration will reduce the risk that the gap analysis is an artifact of an unexamined partitioning. revision: yes

Circularity Check

0 steps flagged

No circularity: qualitative literature synthesis with independent external inputs

full rationale

This paper performs a structured review of external scientific papers and technical reports on AI environmental impacts, mapping them onto an eight-stage framework the authors introduce for organizational purposes. All core claims (inconsistent life-cycle definitions, predominant use of coarse CO2e proxies, limited coverage of water/materials/multi-impact LCA) are presented as direct observations from the reviewed literature rather than derived quantities, fitted parameters, or predictions. No equations, self-referential definitions, or load-bearing self-citations appear; the framework is an explicit taxonomy applied to outside sources, and the findings remain falsifiable by independent replication of the literature search. The absence of any mathematical derivation chain or internal fitting eliminates the patterns required for a positive circularity finding.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The review depends on the validity of the eight-stage categorization as a comprehensive model and on standard practices for conducting literature reviews; no free parameters or new entities are introduced.

axioms (2)
  • domain assumption An eight-stage life cycle framework spanning hardware manufacturing through end-of-life adequately captures all relevant AI environmental impacts.
    Invoked to systematically map coverage across studies.
  • domain assumption The selected scientific papers and technical reports form a representative sample of the literature on AI environmental footprints.
    Required for the conclusions about common practices and gaps.

pith-pipeline@v0.9.0 · 5545 in / 1300 out tokens · 29230 ms · 2026-05-08T15:39:06.620342+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Towards Resource-Efficient LLMs: End-to-End Energy Accounting of Distillation Pipelines

    cs.LG 2026-05 unverdicted novelty 6.0

    An end-to-end energy measurement framework for LLM distillation pipelines reveals hidden teacher-side costs and yields selection guidelines plus an open-source harness.

Reference graph

Works this paper leans on

100 extracted references · 35 canonical work pages · cited by 1 Pith paper · 8 internal anchors

  1. [1]

    Husam Alissa, Teresa Nick, Ashish Raniwala, Alberto Arribas Herranz, Kali Frost, Ioannis Manousakis, Kari Lio, Brijesh Warrier, Vaidehi Oruganti, TJ DiCaprio, et al. 2025. Using life cycle assessment to drive innovation for sustainable cool clouds.Nature(2025), 1–8

  2. [2]

    Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2022. Machine bias. InEthics of data and analytics. Auerbach Publications, 254–264

  3. [3]

    Lasse F Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models.arXiv preprint arXiv:2007.03051(2020)

  4. [4]

    Sergio Aquino-Brítez, Pablo García-Sánchez, Andrés Ortiz, and Diego Aquino-Brítez. 2025. Towards an Energy Consumption Index for Deep Learning Models: A Comparative Analysis of Architectures, GPUs, and Measurement Tools.Sensors25, 3 (2025), 846

  5. [5]

    Mauricio Fadel Argerich and Marta Patiño-Martínez. 2024. Measuring and improving the energy efficiency of large language models inference. IEEE Access12 (2024), 80194–80207

  6. [6]

    Yevgeniya Arushanyan, Elisabeth Ekener-Petersen, and Göran Finnveden. 2014. Lessons learned–Review of LCAs for ICT products and services. Computers in industry65, 2 (2014), 211–234

  7. [7]

    Enrico Barbierato and Alice Gatti. 2024. Toward green AI: A methodological survey of the scientific literature.IEEE Access12 (2024), 23989–24013

  8. [8]

    Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. InProceedings of the 2021 ACM conference on fairness, accountability, and transparency. 610–623

  9. [9]

    Giulia Bertazzini, Chiara Albisani, Daniele Baracchi, Dasara Shullani, and Roberto Verdecchia. 2025. The Hidden Cost of an Image: Quantifying the Energy Consumption of AI Image Generation.arXiv preprint arXiv:2506.17016(2025)

  10. [10]

    Adrien Berthelot, Eddy Caron, Mathilde Jay, and Laurent Lefèvre. 2024. Estimating the environmental impact of Generative-AI services using an LCA-based methodology.Procedia CIRP122 (2024), 707–712

  11. [11]

    Su Lin Blodgett and Michael Madaio. 2021. Risks of AI Foundation Models in Education. arXiv:2110.10024 [cs.CY] https://arxiv.org/abs/2110.10024

  12. [12]

    Rishi Bommasani. 2021. On the opportunities and risks of foundation models.arXiv preprint arXiv:2108.07258(2021)

  13. [13]

    Lucía Bouza, Aurélie Bugeau, and Loïc Lannelongue. 2023. How to estimate carbon footprint when training deep learning models? A guide and review.Environmental Research Communications5, 11 (2023), 115014

  14. [14]

    Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. InConference on fairness, accountability and transparency. PMLR, 77–91

  15. [15]

    Andrew A Chien, Liuzixuan Lin, Hai Nguyen, Varsha Rao, Tristan Sharma, and Rajini Wijayawardana. 2023. Reducing the Carbon Impact of Generative AI Inference (today and in 2035). InProceedings of the 2nd workshop on sustainable computer systems. 1–7

  16. [16]

    Shih-Kai Chou, Jernej Hribar, Vid Hanžel, Mihael Mohorčič, and Carolina Fortuna. 2024. The Energy Cost of Artificial Intelligence Lifecycle in Communication Networks.arXiv preprint arXiv:2408.00540(2024)

  17. [17]

    Jae-Won Chung, Jeff J Ma, Ruofan Wu, Jiachen Liu, Oh Jun Kweon, Yuxuan Xia, Zhiyu Wu, and Mosharaf Chowdhury. 2025. The ML. ENERGY benchmark: Toward automated inference energy measurement and optimization.arXiv preprint arXiv:2505.06371(2025)

  18. [18]

    Joseph Cook, Romain Jacob, Jo Lindsay Walton, Adrien Berthelot, Asim Hussain, and Daniel Schien. 2025. Beyond Counting Carbon: AI Environmental Assessments Struggle to Inform Net Impact Decisions. (2025)

  19. [19]

    Ben Cottier, Robi Rahman, Loredana Fattorini, Nestor Maslej, Tamay Besiroglu, and David Owen. 2024. The rising costs of training frontier AI models.arXiv preprint arXiv:2405.21015(2024)

  20. [20]

Kate Crawford. 2021. The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press

  21. [21]

    Eduardo Cueto-Mendoza and John Kelleher. 2024. A framework for measuring the training efficiency of a neural architecture.Artificial Intelligence Review57, 12 (2024), 349

  22. [22]

    Daswin De Silva and Damminda Alahakoon. 2022. An artificial intelligence life cycle: From conception to production.Patterns3, 6 (2022)

  23. [23]

DeepSeek-AI. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv:2501.12948 [cs.CL] https://arxiv.org/abs/2501.12948

  24. [24]

    Paul Delanoë, Dieudonné Tchuente, and Guillaume Colin. 2023. Method and evaluations of the effective gain of artificial intelligence models for reducing CO2 emissions.Journal of environmental management331 (2023), 117261

  25. [25]

    Radosvet Desislavov, Fernando Martínez-Plumed, and José Hernández-Orallo. 2023. Trends in AI inference energy consumption: Beyond the performance-vs-parameter laws of deep learning.Sustainable Computing: Informatics and Systems38 (2023), 100857

  26. [26]

    Pandu Devarakota, Nicolas Tsesmetzis, Faruk O Alpak, Apurva Gala, and Detlef Hohl. 2025. AI and the Net-Zero Journey: Energy Demand, Emissions, and the Potential for Transition.arXiv preprint arXiv:2507.10750(2025)

  27. [27]

Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the carbon intensity of AI in cloud instances. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 1877–1894

  28. [28]

    Alexandre d’ORGEVAL, Edi ASSOUMOU, Valentina SESSA, Ilknur COLAK, Stuart SHEEHAN, and Quentin AVENAS. 2024. Carbon Footprint of AI Data Centers: A Life Cycle Approach. InInternational Conference on Applied Energy

  29. [29]

    Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. 2024. GPTs are GPTs: Labor market impact potential of LLMs.Science384, 6702 (2024), 1306–1308

  30. [30]

    Cooper Elsworth, Keguo Huang, David Patterson, Ian Schneider, Robert Sedivy, Savannah Goodman, Ben Townsend, Parthasarathy Ranganathan, Jeff Dean, Amin Vahdat, et al. 2025. Measuring the environmental impact of delivering AI at Google Scale.arXiv preprint arXiv:2508.15734(2025)

  31. [31]

    Sophia Falk, Nicholas Kluge Corrêa, Sasha Luccioni, Lisa Biber-Freudenberger, and Aimee van Wynsberghe. 2025. From FLOPs to Footprints: The Resource Cost of Artificial Intelligence. arXiv:2512.04142 [cs.CY] https://arxiv.org/abs/2512.04142

  32. [34]

    Jared Fernandez, Clara Na, Yonatan Bisk, and Emma Strubell. [n. d.]. Evaluating the Environmental Impact of Language Models with Life Cycle Assessment. ([n. d.])

  33. [35]

    Jared Fernandez, Clara Na, Vashisth Tiwari, Yonatan Bisk, Sasha Luccioni, and Emma Strubell. 2025. Energy considerations of large language model inference and efficiency optimizations.arXiv preprint arXiv:2504.17674(2025)

  34. [36]

    Matthias Finkbeiner, Atsushi Inaba, Reginald Tan, Kim Christiansen, and Hans-Jürgen Klüppel. 2006. The new international standards for life cycle assessment: ISO 14040 and ISO 14044.The international journal of life cycle assessment11, 2 (2006), 80–85

  35. [37]

    Raphael Fischer. 2025. Ground-Truthing AI Energy Consumption: Validating CodeCarbon Against External Measurements.arXiv preprint arXiv:2509.22092(2025)

  36. [38]

Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani S. Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Ale... Datacomp: In search of the next generation of multimodal datasets

  37. [39]

Sachin Goyal, Pratyush Maini, Zachary Chase Lipton, Aditi Raghunathan, and J. Zico Kolter. 2024. Scaling Laws for Data Filtering—Data Curation Cannot be Compute Agnostic. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), 22702–22711. https://api.semanticscholar.org/CorpusID:269033049

  38. [40]

    Fredrik Guldbrandsson and Pernilla Bergmark. 2012. Opportunities and limitations of using life cycle assessment methodology in the ICT sector. In2012 Electronics Goes Green 2012+. IEEE, 1–6

  39. [41]

    Philipp Hacker, Andreas Engel, and Marco Mauer. 2023. Regulating ChatGPT and other large generative AI models. InProceedings of the 2023 ACM conference on fairness, accountability, and transparency. 1112–1123

  40. [42]

    Reinout Heijungs, Gjalt Huppes, and Jeroen B Guinée. 2010. Life cycle assessment and sustainability analysis of products, materials and technologies. Toward a scientific framework for sustainability life cycle analysis.Polymer degradation and stability95, 3 (2010), 422–428

  41. [43]

    Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. Towards the systematic reporting of the energy and carbon footprints of machine learning.Journal of Machine Learning Research21, 248 (2020), 1–43

  42. [44]

    Manuel Herrera, Xiang Xie, Andrea Menapace, Ariele Zanfei, and Bruno Melo Brentan. 2025. Sustainable AI infrastructure: A scenario-based forecast of water footprint under uncertainty. (2025)

  43. [45]

    Ralph Hintemann and Simon Hinterholzer. 2022. Cloud computing drives the growth of the data center industry and its energy consumption. Data centers(2022)

  44. [46]

    Asli Isler-Kaya and Filiz Karaosmanoglu. 2023. Life cycle assessment of a climate-friendly data center cooling device.Energy and Buildings288 (2023), 113006

  45. [47]

ISO/IEC. 2023. ISO/IEC 5338:2023 Information technology — Artificial intelligence — AI system life cycle processes. https://www.iso.org/standard/81118.html Accessed: 2026-03-22

  46. [48]

    Nidhal Jegham, Marwan Abdelatti, Chan Young Koh, Lassad Elmoubarki, and Abdeltawab Hendawi. 2025. How hungry is ai? benchmarking energy, water, and carbon footprint of llm inference.arXiv preprint arXiv:2505.09598(2025)

  47. [49]

    Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning.arXiv preprint arXiv:1910.09700(2019)

  48. [50]

    Imran Latif, Alex C Newkirk, Matthew R Carbone, Arslan Munir, Yuewei Lin, Jonathan Koomey, Xi Yu, and Zhihua Dong. 2025. Single-Node Power Demand During AI Training: Measurements on an 8-GPU NVIDIA H100 System.IEEE Access(2025)

  49. [51]

    Nuoa Lei, Jun Lu, Arman Shehabi, and Eric Masanet. 2025. The water use of data center workloads: A review and assessment of key determinants. Resources, Conservation and Recycling219 (2025), 108310

  50. [52]

Pengfei Li, Jianyi Yang, Mohammad A Islam, and Shaolei Ren. 2023. Making AI less "thirsty": Uncovering and addressing the secret water footprint of AI models. arXiv preprint arXiv:2304.03271 (2023)

  51. [53]

    Pengfei Li, Jianyi Yang, Mohammad A Islam, and Shaolei Ren. 2025. Making ai less’ thirsty’.Commun. ACM68, 7 (2025), 54–61

  52. [54]

    Anne-Laure Ligozat, Julien Lefèvre, Aurélie Bugeau, and Jacques Combaz. 2022. Unraveling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions.Sustainability14, 9 (2022), 5172

  53. [55]

    Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. 2024. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437 (2024)

  55. [57]

    R Lorenzini. 2021. Digital & environment: How to evaluate server manufacturing footprint, beyond greenhouse gas emissions?

  56. [58]

    Alexandra Sasha Luccioni and Alex Hernandez-Garcia. 2023. Counting carbon: A survey of factors influencing the emissions of machine learning. arXiv preprint arXiv:2302.08476(2023)

  57. [59]

    Alexandra Sasha Luccioni, Giada Pistilli, Raesetje Sefala, and Nyalleng Moorosi. 2025. Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice.arXiv preprint arXiv:2504.00797(2025)

  58. [60]

    Alexandra Sasha Luccioni, Emma Strubell, and Kate Crawford. 2025. From efficiency gains to rebound effects: The problem of Jevons’ paradox in AI’s polarized environmental debate. InProceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. 76–88

  59. [61]

    Alexandra Sasha Luccioni, Sylvain Viguier, and Anne-Laure Ligozat. 2022. Estimating the carbon footprint of BLOOM, a 176B parameter language model.arXiv preprint arXiv:2211.02001(2022)

  60. [62]

    Alexandra Sasha Luccioni, Sylvain Viguier, and Anne-Laure Ligozat. 2023. Estimating the carbon footprint of bloom, a 176b parameter language model.Journal of machine learning research24, 253 (2023), 1–15

  61. [63]

    Sasha Luccioni, Boris Gamazaychikov, Theo Alves da Costa, and Emma Strubell. 2025. Misinformation by Omission: The Need for More Environmental Transparency in AI.arXiv preprint arXiv:2506.15572(2025)

  62. [64]

    Sasha Luccioni, Boris Gamazaychikov, Sara Hooker, Régis Pierrard, Emma Strubell, Yacine Jernite, and Carole-Jean Wu. 2024. Light bulbs have energy ratings—so why can’t AI chatbots?Nature632, 8026 (2024), 736–738

  63. [65]

    Sasha Luccioni, Yacine Jernite, and Emma Strubell. 2024. Power Hungry Processing: Watts Driving the Cost of AI Deployment?. InThe 2024 ACM Conference on Fairness Accountability and Transparency (FAccT ’24). ACM, 85–99. doi:10.1145/3630106.3658542

  64. [66]

    Sasha Luccioni, Yacine Jernite, and Emma Strubell. 2024. Power hungry processing: Watts driving the cost of AI deployment?. InProceedings of the 2024 ACM conference on fairness, accountability, and transparency. 85–99

  65. [67]

    Giulio Malenza, Francesco Targa, Adriano Marques Garcia, Marco Aldinucci, and Robert Birke. 2025. Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU.arXiv preprint arXiv:2504.03774(2025)

  66. [68]

    Nicolás Martínez-Ramón, Fernando Calvo-Rodríguez, Diego Iribarren, and Javier Dufour. 2024. Frameworks for the application of machine learning in life cycle assessment for process modeling.Cleaner Environmental Systems14 (2024), 100221

  67. [69]

    Ioannis Mavromatis, Kostas Katsaros, and Aftab Khan. 2024. Computing Within Limits: An Empirical Study of Energy Consumption in ML Training and Inference.arXiv preprint arXiv:2406.14328(2024)

  68. [70]

    Jacob Morrison, Clara Na, Jared Fernandez, Tim Dettmers, Emma Strubell, and Jesse Dodge. 2025. Holistically evaluating the environmental impact of creating language models.arXiv preprint arXiv:2503.05804(2025)

  69. [71]

    Risang Faiz Muhammad and Muhammad Edo Syahputra. 2024. Comparative Study of GPU Performance and Energy Efficiency Across Generational Architectures: A Systematic Literature. In2024 IEEE International Conference on Control & Automation, Electronics, Robotics, Internet of Things, and Artificial Intelligence (CERIA). IEEE, 1–7

  70. [72]

NVIDIA. 2025. Product Carbon Footprint Summary for NVIDIA HGX H100. https://images.nvidia.com/aem-dam/Solutions/documents/HGX-H100-PCF-Summary.pdf

  71. [73]

    OECD. 2025. Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 Accessed: 2026-03-22

  72. [74]

    David Patterson, Joseph Gonzalez, Urs Hölzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R So, Maud Texier, and Jeff Dean. 2022. The carbon footprint of machine learning training will plateau, then shrink.Computer55, 7 (2022), 18–28

  73. [75]

    David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training.arXiv preprint arXiv:2104.10350(2021)

  74. [76]

    Christiane Plociennik, Ponnapat Watjanatepin, Karel Van Acker, and Martin Ruskowski. 2025. Life Cycle Assessment of Artificial Intelligence Applications: Research Gaps and Opportunities.Procedia CIRP135 (2025), 924–929

  75. [77]

    Soham Poddar, Paramita Koley, Janardan Misra, Niloy Ganguly, and Saptarshi Ghosh. 2025. Towards sustainable nlp: Insights from benchmarking inference energy in large language models. InProceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Pape...

  76. [78]

    K Pronk and Q Zhao. 2025. Benchmarking Energy Efficiency of Large Language Models Using vLLM.arXiv preprint arXiv:2509.08867(2025)

  77. [79]

    Shaolei Ren, Bill Tomlinson, Rebecca W Black, and Andrew W Torrance. 2024. Reconciling the contrasting narratives on the environmental impact of large language models.Scientific Reports14, 1 (2024), 26310

  78. [80]

    Samuel Rincé and Adrien Banse. 2025. Ecologits: Evaluating the environmental impacts of generative AI.Journal of Open Source Software10, 111 (2025), 7471

  79. [81]

Rafał Różycki, Dorota Agnieszka Solarska, and Grzegorz Waligóra. 2025. Energy-Aware Machine Learning Models—A Review of Recent Techniques and Perspectives. Energies 18, 11 (2025), 2810

  80. [82]

    Serenella Sala, Francesca Reale, J Cristobal-Garcia, Luisa Marelli, and Rana Pant. 2016. Life cycle assessment for the impact assessment of policies. Report EUR28380 (2016)

Showing first 80 references.