Recognition: unknown
From Cradle to Cloud: A Life Cycle Review of AI's Environmental Footprint
Pith reviewed 2026-05-08 15:39 UTC · model grok-4.3
The pith
A structured review of AI's environmental-footprint literature finds inconsistent life cycle definitions and narrow, CO2e-dominated metrics
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Using an eight-stage life cycle framework that spans hardware manufacturing, infrastructure construction, data gathering and preprocessing, model experimentation, training, post-training adaptation, deployment and inference, and end-of-life, we map the coverage of existing literature. We find that life cycle language in AI is common but ill-defined: some studies focus narrowly on training and inference, while others include data collection, infrastructure, and embodied emissions. Reporting relies mainly on CO2e estimates derived from coarse proxies, with scant attention to water usage, materials manufacturing, and full multi-impact assessments. This makes comparison and aggregation difficult, prompting a proposal for measurement and reporting practices that support more comprehensive, comparable, and policy-relevant assessments.
What carries the argument
An eight-stage life cycle framework for AI systems, which organizes impacts from hardware manufacturing through to end-of-life and is used to systematically review what stages, metrics, and methods are covered in the literature
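As a concrete illustration of how such a framework supports the mapping exercise, below is a minimal Python sketch of a stage-coverage map. The stage names follow the paper's framework; the example studies and their stage assignments are hypothetical placeholders, not data from the review.

# Minimal sketch of a stage-coverage map over the paper's eight life cycle stages.
# Stage names follow the paper; the studies and their assignments are hypothetical.

STAGES = [
    "hardware_manufacturing",
    "infrastructure_construction",
    "data_gathering_preprocessing",
    "model_experimentation",
    "training",
    "post_training_adaptation",
    "deployment_inference",
    "end_of_life",
]

# Hypothetical coding of which stages each reviewed study reports on.
coverage = {
    "study_A": {"training", "deployment_inference"},
    "study_B": {"hardware_manufacturing", "data_gathering_preprocessing", "training"},
}

# Count how many studies touch each stage, exposing gaps such as end-of-life.
counts = {stage: sum(stage in covered for covered in coverage.values()) for stage in STAGES}
for stage in STAGES:
    print(f"{stage:30s} covered by {counts[stage]} of {len(coverage)} studies")

Tallies of this kind are what make definitional inconsistencies visible: two studies can both claim a "life cycle" scope while covering disjoint subsets of the eight stages.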
If this is right
- Studies using different definitions of the AI life cycle cannot be easily compared or combined.
- Predominant use of approximate CO2e calculations overlooks significant factors like water consumption and material extraction.
- Without multi-impact assessments, the true environmental cost of AI remains underestimated.
- Standardized measurement approaches would allow for more accurate and policy-relevant evaluations of AI systems.
- Adopting the proposed reporting practices could improve transparency in the AI industry.
Where Pith is reading between the lines
- Adopting a standardized framework might encourage AI developers to account for impacts earlier in the design process.
- Extending this review to include industry reports could reveal gaps between academic and practical assessments.
- Policymakers could use such structured reviews to set requirements for environmental disclosures in AI projects.
- Similar life cycle analyses applied to other technologies could provide benchmarks for AI's relative impact.
Load-bearing premise
That the literature search captured a representative sample of relevant studies and that the chosen eight-stage framework provides a complete and unbiased structure for mapping AI environmental impacts
What would settle it
Identifying a substantial body of studies that provide detailed assessments of water usage, materials manufacturing, and multiple environmental impacts under consistent life cycle definitions would contradict the reported limitations of current practice
Original abstract
The rapid growth in the deployment and scale of modern artificial intelligence (AI) systems has intensified concerns regarding their environmental impacts, yet we still lack a comprehensive view of where and how these impacts arise across the AI life cycle. In order to shed more light on this question, we conduct a structured, comprehensive literature review of scientific papers and technical reports that examine different aspects of AI's environmental footprint. Using an eight-stage life cycle framework, spanning hardware manufacturing, infrastructure construction, data gathering and preprocessing, model experimentation, training, post-training adaptation, deployment, inference, and end-of-life, we systematically map which stages are covered, the metrics reported at each stage, and the methodological choices made. We then draw conclusions about the information we gathered, finding that although life cycle language is increasingly common in discussions of "green" or "sustainable" AI, its definition remains unclear -- while some studies focus solely on model training and inference, others encompass broader measurements such as data collection, infrastructure, and embodied emissions. We also find that reporting practices rely predominantly on CO2e estimates derived from coarse proxies, with limited attention dedicated to water usage, materials manufacturing, and multi-impact life cycle assessment, making it difficult to compare and aggregate true results. Building on these findings, we propose measurement and reporting approaches to support more comprehensive, comparable and policy-relevant assessments of AI's environmental impacts.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper conducts a structured literature review of AI's environmental footprint using a custom eight-stage life cycle framework (hardware manufacturing, infrastructure construction, data gathering/preprocessing, model experimentation, training, post-training adaptation, deployment/inference, and end-of-life). It maps stage coverage, reported metrics, and methodological choices across reviewed studies, concluding that 'life cycle' definitions are inconsistent (some studies limit to training/inference while others include data collection and embodied emissions), reporting relies predominantly on coarse CO2e proxies, and there is limited attention to water usage, materials manufacturing, and multi-impact LCA. The authors propose improved measurement and reporting approaches for more comprehensive assessments.
Significance. If the mapping is representative, the work is significant for synthesizing current practices in AI sustainability research and identifying actionable gaps in definitional clarity and impact coverage. It could help standardize reporting, improve comparability of results, and support policy-relevant assessments of AI's full environmental costs, particularly as AI deployment scales.
major comments (2)
- [Methods] Methods section: The manuscript provides no details on the literature search strategy (databases, search strings, date ranges), inclusion/exclusion criteria, total number of papers screened or included, or how the eight-stage framework was applied (e.g., assignment rules or inter-rater checks). This directly undermines the central claims about field-wide patterns in proxy usage and limited multi-impact coverage, as the observed gaps could be artifacts of sampling or categorization rather than representative findings.
- [Framework] Framework description (likely §2 or §3): The eight-stage partitioning is presented without justification for its completeness or neutrality, nor discussion of potential overlaps or omissions (e.g., how embodied emissions in hardware are distinguished from infrastructure). This is load-bearing for the gap analysis, as an ad-hoc taxonomy could systematically under-count water or materials impacts.
minor comments (2)
- [Abstract] Abstract: While it summarizes findings, it does not mention the number of papers reviewed or key methodological parameters, which would strengthen the claim of a 'comprehensive' review.
- [Results] Notation and terminology: the phrase 'CO2e estimates derived from coarse proxies' is used repeatedly, but no table or section explicitly lists the proxies encountered in the literature (e.g., energy consumption estimates vs. direct measurements).
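To make the "coarse proxy" pattern concrete, here is a minimal sketch of the kind of estimate that dominates the reviewed literature: hardware runtime converted to energy, then scaled by a facility overhead factor and a grid carbon intensity. All numeric values are illustrative assumptions, not figures from the paper or from any cited study.

# Illustrative coarse CO2e proxy: runtime-derived energy scaled by facility
# overhead (PUE) and grid carbon intensity. All values below are assumed.

def operational_co2e_kg(gpu_hours: float,
                        avg_power_kw: float = 0.4,          # assumed average draw per GPU
                        pue: float = 1.2,                    # assumed facility overhead
                        grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """Estimate operational emissions in kg CO2e from accelerator runtime."""
    energy_kwh = gpu_hours * avg_power_kw * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical 10,000 GPU-hour training run.
print(f"{operational_co2e_kg(10_000):.0f} kg CO2e")  # prints 1920 kg CO2e

Estimates of this form capture only operational carbon; embodied hardware emissions, water usage, and other impact categories fall outside them, which is exactly the gap the review identifies.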
Simulated Author's Rebuttal
We thank the referee for their constructive comments, which identify key areas where additional transparency will strengthen the manuscript. We address each major comment below and will revise the manuscript accordingly.
Point-by-point responses
Referee: [Methods] Methods section: The manuscript provides no details on the literature search strategy (databases, search strings, date ranges), inclusion/exclusion criteria, total number of papers screened or included, or how the eight-stage framework was applied (e.g., assignment rules or inter-rater checks). This directly undermines the central claims about field-wide patterns in proxy usage and limited multi-impact coverage, as the observed gaps could be artifacts of sampling or categorization rather than representative findings.
Authors: We agree that the current manuscript omits these methodological details, which are necessary to substantiate the representativeness of the mapped patterns. In the revised version we will add a dedicated Methods subsection that specifies: the databases and repositories searched, the precise search strings and Boolean combinations employed, the date range of the search, explicit inclusion/exclusion criteria, the total number of records screened and the final number included, and a PRISMA-style flow diagram. We will also describe the procedure used to assign studies to the eight stages, including any double-coding or inter-rater checks performed. These additions will allow readers to evaluate whether the reported inconsistencies and coverage gaps reflect field-wide practices rather than sampling artifacts. revision: yes
Referee: [Framework] Framework description (likely §2 or §3): The eight-stage partitioning is presented without justification for its completeness or neutrality, nor discussion of potential overlaps or omissions (e.g., how embodied emissions in hardware are distinguished from infrastructure). This is load-bearing for the gap analysis, as an ad-hoc taxonomy could systematically under-count water or materials impacts.
Authors: The eight-stage framework was derived by extending established ICT life-cycle assessment boundaries to capture AI-specific activities (model experimentation, post-training adaptation) while retaining core stages such as hardware manufacturing and end-of-life. We acknowledge that the manuscript does not supply explicit justification for stage completeness, neutrality, or handling of overlaps. In revision we will expand the framework section to: (i) cite prior LCA literature that informed each stage boundary, (ii) clarify the distinction between hardware manufacturing (chip fabrication and component assembly) and infrastructure construction (data-center building and power systems), (iii) discuss potential overlaps and the rules used to avoid double-counting, and (iv) note that any under-representation of water or materials impacts is observed in the reviewed studies themselves rather than imposed by the taxonomy. This elaboration will reduce the risk that the gap analysis is an artifact of an unexamined partitioning. revision: yes
Circularity Check
No circularity: qualitative literature synthesis with independent external inputs
Full rationale
This paper performs a structured review of external scientific papers and technical reports on AI environmental impacts, mapping them onto an eight-stage framework the authors introduce for organizational purposes. All core claims (inconsistent life-cycle definitions, predominant use of coarse CO2e proxies, limited coverage of water/materials/multi-impact LCA) are presented as direct observations from the reviewed literature rather than derived quantities, fitted parameters, or predictions. No equations, self-referential definitions, or load-bearing self-citations appear; the framework is an explicit taxonomy applied to outside sources, and the findings remain falsifiable by independent replication of the literature search. The absence of any mathematical derivation chain or internal fitting eliminates the patterns required for a positive circularity finding.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: An eight-stage life cycle framework spanning hardware manufacturing through end-of-life adequately captures all relevant AI environmental impacts.
- Domain assumption: The selected scientific papers and technical reports form a representative sample of the literature on AI environmental footprints.
Forward citations
Cited by 1 Pith paper
- Towards Resource-Efficient LLMs: End-to-End Energy Accounting of Distillation Pipelines
An end-to-end energy measurement framework for LLM distillation pipelines reveals hidden teacher-side costs and yields selection guidelines plus an open-source harness.