pith. machine review for the scientific record.

arxiv: 2605.05519 · v1 · submitted 2026-05-06 · 💻 cs.LG · cs.DC

Recognition: unknown

OpenG2G: A Simulation Platform for AI Datacenter-Grid Runtime Coordination

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 16:25 UTC · model grok-4.3

classification 💻 cs.LG cs.DC
keywords AI datacenter-grid coordination · simulation platform · power flexibility · controller design · AI workload · electricity grid

The pith

OpenG2G is a modular simulation platform that lets users test controllers coordinating AI datacenter power flexibility with the electricity grid.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents OpenG2G to address the grid capacity and reliability challenges created by rising AI compute demand, which datacenters can ease through rapid power adjustments. It shows that the platform supports a range of control paradigms, from classic to learning-based, and quantifies how AI model and deployment decisions influence flexibility and coordination outcomes. A sympathetic reader would care because it offers a way to explore remedies for the multi-year interconnection delays that currently limit AI growth. The work grounds its claims in real production AI measurements and high-fidelity grid models connected through a generic controller interface.

Core claim

OpenG2G is a simulation platform whose modular architecture combines a datacenter backend driven by real measurements of production-grade AI services, a grid backend built on high-fidelity simulators, and a generic controller interface that closes the loop, allowing users to implement and compare controllers while quantifying the effects of AI choices on datacenter flexibility and grid coordination outcomes.

What carries the argument

The modular and extensible architecture with datacenter backend, grid backend, and generic controller interface that enables closed-loop testing of control strategies.
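
To make that closed-loop shape concrete, here is a minimal Python sketch of what a generic controller interface and one simulation loop could look like. Every name here (GridState, DatacenterAction, Controller, run_closed_loop) and the droop example are illustrative assumptions, not the actual OpenG2G API.

```python
# Minimal sketch of a closed-loop controller interface, assuming a generic
# grid-observation -> datacenter-action structure. Names are illustrative,
# not the actual OpenG2G API.
from dataclasses import dataclass

@dataclass
class GridState:
    voltage_pu: float        # bus voltage at the datacenter's point of interconnection
    power_limit_mw: float    # current feasible datacenter power ceiling

@dataclass
class DatacenterAction:
    target_power_mw: float   # power setpoint the datacenter tracks (e.g., via batch sizing)

class Controller:
    """Anything that maps grid observations to datacenter actions."""
    def act(self, state: GridState) -> DatacenterAction:
        raise NotImplementedError

class DroopController(Controller):
    """Classic proportional (droop) response to voltage deviation."""
    def __init__(self, nominal_mw: float, gain_mw_per_pu: float):
        self.nominal_mw = nominal_mw
        self.gain = gain_mw_per_pu

    def act(self, state: GridState) -> DatacenterAction:
        # Shed power proportionally when voltage sags below 1.0 pu.
        adjustment = self.gain * (state.voltage_pu - 1.0)
        target = min(self.nominal_mw + adjustment, state.power_limit_mw)
        return DatacenterAction(target_power_mw=max(target, 0.0))

def run_closed_loop(controller: Controller, grid_step, dc_step, horizon: int):
    """Alternate grid and datacenter updates for `horizon` steps; grid_step and
    dc_step stand in for the grid backend and datacenter backend."""
    state = grid_step(None)  # initial grid solve with no datacenter action
    trace = []
    for _ in range(horizon):
        action = controller.act(state)
        power_drawn = dc_step(action)    # datacenter adapts workload, reports actual draw
        state = grid_step(power_drawn)   # grid backend re-solves power flow
        trace.append((state.voltage_pu, power_drawn))
    return trace
```

The point of such an interface is that a classic droop rule, an optimization-based controller, and a learned policy can all sit behind the same act() method and be compared under identical grid scenarios and workloads.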

If this is right

  • Researchers can directly compare classic, optimization, and learning-based controllers within realistic scenarios.
  • The platform reveals how specific AI model architectures and deployment sizes change the amount of power flexibility available to the grid.
  • Users can evaluate coordination outcomes across varied grid conditions without building physical testbeds.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Widespread use of the platform could shorten the time needed to validate coordination methods before real deployments.
  • If accuracy holds, the same interface might later support live controller deployment rather than only simulation.
  • The approach opens questions about scaling the platform to include additional energy storage or renewable integration factors.

Load-bearing premise

The datacenter and grid backends produce representations of runtime interactions accurate enough to support reliable controller design and comparison.

What would settle it

Running a controller designed in OpenG2G on a real AI datacenter and grid interconnection and finding that the simulated power adjustments and stability outcomes deviate substantially from measured results.
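
One way to operationalize that test is a simple deviation check between simulated and measured power traces sampled on a common time grid. The sketch below is illustrative only: the function name, the error summaries, and the 5% tolerance are assumptions, not quantities taken from the paper.

```python
# Sketch of a sim-vs-real fidelity check on power traces, assuming both are
# sampled on the same time grid. Thresholds are placeholders.
import numpy as np

def power_trace_deviation(simulated_mw: np.ndarray, measured_mw: np.ndarray) -> dict:
    """Summarize how far simulated power adjustments drift from measurement."""
    assert simulated_mw.shape == measured_mw.shape
    error = simulated_mw - measured_mw
    return {
        "mae_mw": float(np.mean(np.abs(error))),
        "max_abs_error_mw": float(np.max(np.abs(error))),
        "relative_rmse": float(np.sqrt(np.mean(error ** 2)) / (np.mean(measured_mw) + 1e-9)),
    }

if __name__ == "__main__":
    sim = np.array([2.4, 2.6, 2.9, 2.7])   # MW, hypothetical simulated setpoints
    real = np.array([2.5, 2.6, 2.8, 2.9])  # MW, hypothetical measured draw
    stats = power_trace_deviation(sim, real)
    # Flag a run whose relative RMSE exceeds an (assumed) 5% tolerance.
    print(stats, "deviates" if stats["relative_rmse"] > 0.05 else "agrees")
```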

Figures

Figures reproduced from arXiv: 2605.05519 by Jae-Won Chung, Jiasi Chen, Mosharaf Chowdhury, Vladimir Dvorkin, Yanyong Mao, Zhirui Liang.

Figure 1. Overview of OpenG2G's architecture. OpenG2G composes three pluggable components …
Figure 2. Controllers' voltage regulation performance diverges with feeder complexity. Bars show …
Figure 3. Voltage regulation and throughput tradeoffs of controllers. Orange dots show the five PPO …
Figure 4. Per-model batch-size responses under the same voltage-disturbance scenario. The datacenter …
Figure 5. Model size and architecture: five models served on B200 GPUs (1, 1, 1, 2, and 8 GPUs per …
Figure 6. Weight precision: Qwen 3 235B A22B on 8× H100 GPUs. Hatched bars are No Coordination; solid bars use OFO. BF16 has a wider feasible power range; FP8 reaches higher throughput.
Figure 7. Parallelism: (a) GPT-OSS 120B [35] and (b) Qwen 3 235B A22B [44]. Doubling expert …
Figure 8. GPU type (hardware generation): Qwen 3 8B and 32B [44] on H100 vs B200. In (a), …
Figure 9. Distribution system topologies used in experiments. Subfigures show the IEEE 13-bus, …
Figure 10. Diversity of the IEEE-13 training scenario library (235 scenarios). Faint traces show …
Figure 11. Per-scenario difficulty distribution (voltage-violation integral under the no-control baseline) …
Figure 12. Baseline datacenter electricity load on the IEEE 13-bus feeder induces voltage limit …
Figure 13. Per-episode reward breakdown during PPO training (rolling mean over 50 episodes) for …
Figure 14. Effect of throughput weight α_T on OFO control for IEEE-13. Higher α_T drives batch sizes toward the maximum in (a), increasing throughput but widening voltage violations below the 0.95 pu limit in (b). Reducing α_T from 0.01 to 0.0001 cuts integral violation by 21× at the cost of 44% lower average throughput.
Figure 15. Effect of voltage weight w_v on PPO training and evaluation for IEEE-13. (a) Per-episode squared voltage cost Σ_t P_V(t) during training (rolling mean over 100 episodes, with w_v divided out for cross-variant comparability). (b) Mean integral voltage violation on the 50-scenario test set, evaluated at each checkpoint; OFO and droop baselines shown as horizontal references. All variants share identical hyper…
Figure 16. Model size and architecture on H100: six models, ordered left-to-right by decreasing …
Figure 17. Parallelism on H100: Qwen 3 30B A3B Instruct [44] at 1 vs 2 GPU, match-peak sized.
Original abstract

AI's growing compute demand and new datacenter buildouts present major capacity and reliability challenges for the electricity grid, leading to multi-year interconnection delays for new datacenters and bottlenecking AI growth. To ease this strain, datacenters increasingly offer rapid power flexibility in response to grid signals, where the datacenter can increase or decrease its power consumption by adapting its workload in real time. In order to understand the impact of large datacenters on the grid and to facilitate the design of effective coordination strategies, we build OpenG2G, a simulation platform for AI datacenter-grid runtime coordination. We show that OpenG2G is capable of answering a wide range of coordination questions by allowing users to implement and compare various control paradigms (including classic, optimization, and learning-based controllers), and quantify how AI model and deployment choices affect datacenter flexibility and coordination outcomes. This versatility is enabled by OpenG2G's modular and extensible architecture: a datacenter backend driven by real measurements of production-grade AI services, a grid backend built on high-fidelity grid simulators, and a generic controller interface that closes the loop between them. We describe the design of OpenG2G and demonstrate its usefulness through realistic grid scenarios and AI workloads.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper introduces OpenG2G, an open simulation platform for studying runtime coordination between AI datacenters and the electricity grid. Its modular design includes a datacenter backend driven by real measurements from production AI services, a grid backend based on high-fidelity simulators, and a generic controller interface that supports classic, optimization-based, and learning-based controllers. The authors claim this architecture enables users to implement, compare, and quantify the effects of different control paradigms and AI model/deployment choices on datacenter power flexibility and grid coordination outcomes, as illustrated through realistic grid scenarios and AI workloads.

Significance. If the simulation fidelity holds, OpenG2G would provide a timely, extensible testbed for exploring datacenter flexibility mechanisms that could alleviate grid interconnection bottlenecks for AI infrastructure. The modular controller interface and use of real AI service traces are particular strengths, as they lower the barrier for comparing control strategies and assessing how model scale or deployment parameters affect coordination performance.

major comments (1)
  1. [Demonstration of usefulness] The central claim that OpenG2G supports reliable controller design, comparison, and quantification of coordination outcomes depends on the accuracy of the closed-loop datacenter-grid dynamics. However, the demonstration sections provide only scenario descriptions without quantitative validation (e.g., matching simulated vs. measured power traces, latency distributions, or stability metrics under identical grid signals and workload shifts).
minor comments (1)
  1. [Abstract] The abstract and introduction would benefit from a clearer statement of the platform's current limitations regarding simulation fidelity and the scope of validation performed.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive review and for recognizing the strengths of OpenG2G's modular architecture and use of real AI service traces. We address the major comment below and will revise the manuscript accordingly to better demonstrate the platform's reliability.

Point-by-point responses
  1. Referee: The central claim that OpenG2G supports reliable controller design, comparison, and quantification of coordination outcomes depends on the accuracy of the closed-loop datacenter-grid dynamics. However, the demonstration sections provide only scenario descriptions without quantitative validation (e.g., matching simulated vs. measured power traces, latency distributions, or stability metrics under identical grid signals and workload shifts).

    Authors: We agree that quantitative validation of the closed-loop dynamics is necessary to substantiate claims about reliable controller design, comparison, and outcome quantification. The current demonstrations illustrate the platform's extensibility through realistic scenarios, but do not include direct fidelity checks. In the revised manuscript we will add a dedicated validation subsection that reports (i) side-by-side comparisons of simulated versus measured power traces from the production AI services used to drive the datacenter backend, (ii) latency distributions under representative workload shifts, and (iii) stability metrics (e.g., frequency deviation and settling time) when the controller is exercised with identical grid signals. These additions will provide concrete evidence for the accuracy of the simulated dynamics. revision: yes
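
For concreteness, the fidelity checks the rebuttal commits to could be scored along the following lines. The function names, the tolerance argument, and the choice of a Kolmogorov–Smirnov statistic for comparing latency distributions are editorial assumptions, not anything the authors specify; scipy's ks_2samp is a real function used with its actual signature.

```python
# Sketch of two validation metrics the rebuttal mentions: a latency-distribution
# comparison and a settling-time stability metric. Names and tolerances are
# illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def latency_distribution_gap(sim_latencies_s, measured_latencies_s) -> float:
    """Kolmogorov-Smirnov statistic between simulated and measured latency samples."""
    return float(ks_2samp(sim_latencies_s, measured_latencies_s).statistic)

def settling_time(signal, times, target, tolerance):
    """First time after which |signal - target| stays within `tolerance`
    (in the signal's own units); returns None if the signal never settles."""
    signal, times = np.asarray(signal), np.asarray(times)
    within = np.abs(signal - target) <= tolerance
    for i in range(len(signal)):
        if within[i:].all():
            return float(times[i])
    return None
```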

Circularity Check

0 steps flagged

No circularity: platform description with no derivations or fitted predictions

Full rationale

The paper describes the architecture and use of a simulation platform (OpenG2G) built from real AI service measurements and high-fidelity grid simulators, plus a generic controller interface. No equations, parameter fitting, predictions, or uniqueness theorems are present in the provided text or abstract. The central claim is simply that the modular system enables users to implement and compare controllers and quantify effects; this is a statement about implemented functionality rather than a closed-form result derived from itself. No self-citation chains, ansatzes, or renamings of known results appear as load-bearing steps. The absence of any derivation chain means the circularity patterns do not apply.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The platform rests on the domain assumption that real production AI measurements and high-fidelity grid simulators are faithful enough proxies for live coordination; no free parameters, invented entities, or additional axioms are stated in the abstract.

axioms (2)
  • domain assumption Real measurements of production-grade AI services accurately capture power consumption dynamics under workload adaptation.
    Invoked to justify the datacenter backend fidelity.
  • domain assumption High-fidelity grid simulators produce representative voltage, frequency, and transmission behavior for coordination studies.
    Invoked to justify the grid backend.

pith-pipeline@v0.9.0 · 5543 in / 1412 out tokens · 32024 ms · 2026-05-08T16:25:10.493357+00:00 · methodology

discussion (0)

Reference graph

Works this paper leans on

51 extracted references · 13 canonical work pages · 5 internal anchors

  1. [1] Jeff Barr. Amazon EC2 update – inf1 instances with AWS Inferentia chips for high performance cost-effective inferencing, 2019.
  2. [2] Andrey Bernstein and Emiliano Dall'Anese. Real-time feedback-based optimization of distribution grids: A unified approach. IEEE Transactions on Control of Network Systems, 6(3):1197–1209, 2019.
  3. [3] MHJ Bollen and A Sannino. Voltage control with inverter-based distributed generation. IEEE Transactions on Power Delivery, 20(1):519–520, 2005.
  4. [4] CBRE. Global data center trends 2025. https://www.cbre.com/insights/reports/global-data-center-trends-2025, 2025.
  5. [5] Xin Chen, Xiaoyang Wang, Ana Colacelli, Matt Lee, and Le Xie. Electricity demand and grid impacts of AI data centers: Challenges and prospects. arXiv preprint arXiv:2509.07218, 2025.
  6. [6] Yize Chen and Baosen Zhang. Voltage regulation in distribution systems with data center loads. arXiv preprint arXiv:2507.06416, 2025.
  7. [7] Jae-Won Chung, Jeff J. Ma, Ruofan Wu, Jiachen Liu, Oh Jun Kweon, Yuxuan Xia, Zhiyu Wu, and Mosharaf Chowdhury. The ML.ENERGY benchmark: Toward automated inference energy measurement and optimization. In NeurIPS Datasets and Benchmarks, 2025.
  8. [8] Jae-Won Chung, Ruofan Wu, Jeff J. Ma, and Mosharaf Chowdhury. Where do the joules go? Diagnosing inference energy consumption. arXiv preprint arXiv:2601.22076, 2026.
  9. [9] Philip Colangelo, Ayse K. Coskun, Jack Megrue, Ciaran Roberts, Shayan Sengupta, Varun Sivaram, Ethan Tiao, Aroon Vijaykar, Chris Williams, Daniel C. Wilson, Brandon Records, Zack MacFarland, Daniel Dreiling, Nathan Morey, Anuja Ratnayake, and Baskar Vairamohan. AI data centres as grid-interactive assets. Nature Energy, 2025.
  10. [10] California Energy Commission. Electricity consumption. https://www.energy.ca.gov/data-reports/energy-almanac/california-electricity-data/california-energy-consumption-dashboards-0, 2026.
  11. [11] B. Donnot. Grid2op: A testbed platform to model sequential decision making in power systems, 2020.
  12. [12] Roger C. Dugan and Thomas E. McDermott. An open source platform for collaborating on smart grid research. In IEEE Power and Energy Society General Meeting, 2011.
  13. [13] Vladimir Dvorkin. Agent coordination via contextual regression (AgentCONCUR) for data center flexibility. IEEE Transactions on Power Systems, 2025.
  14. [14] Cooper Elsworth, Keguo Huang, David Patterson, Ian Schneider, Robert Sedivy, Savannah Goodman, Ben Townsend, Parthasarathy Ranganathan, Jeff Dean, Amin Vahdat, Ben Gomes, and James Manyika. Measuring the environmental impact of delivering AI at Google scale. arXiv preprint arXiv:2508.15734, 2025.
  15. [15] Hugging Face. AI Energy Score. https://huggingface.github.io/AIEnergyScore, 2025.
  16. [16] Ting-Han Fan, Xian Yeow Lee, and Yubo Wang. PowerGym: A reinforcement learning environment for volt-var control in power distribution systems. arXiv preprint arXiv:2109.03970, 2022.
  17. [17] Yangyang Fu, Xu Han, Kyri Baker, and Wangda Zuo. Assessments of data centers for provision of frequency regulation. Applied Energy, 277:115621, 2020.
  18. [18] Google Cloud. NextEra Energy and Google Cloud announce landmark strategic energy and technology partnership to accelerate AI growth and transform the energy industry. https://www.googlecloudpresscorner.com/2025-12-08-NextEra-Energy-and-Google-Cloud-Announce-Landmark-Strategic-Energy-and-Technology-Partnership-to-Accelerate-AI-Growth-and-Transform-the-En…
  19. [19] Adrian Hauswirth, Zhiyu He, Saverio Bolognani, Gabriela Hug, and Florian Dörfler. Optimization algorithms as robust feedback controllers. Annual Reviews in Control, 57:100941, 2024.
  20. [20] HPCwire. AWS to offer NVIDIA's T4 GPUs for AI inferencing, 2019.
  21. [21] IEEE PES Distribution System Analysis Subcommittee. IEEE PES test feeder. https://cmte.ieee.org/pes-testfeeders/resources/.
  22. [22] International Energy Agency. Energy demand from AI. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai, 2025.
  23. [23] Helen Kou and Nathalie Limandibhratha. Power for AI: Easier said than built. https://about.bnef.com/insights/commodities/power-for-ai-easier-said-than-built/, 2025.
  24. [24] Dheepak Krishnamurthy and Paulo Meira. OpenDSSDirect.py: A cross-platform Python package that implements a native/direct library interface to the alternative OpenDSS engine from dss-extensions.org, 2024.
  25. [25] Peishuai Li, Zaijun Wu, Ke Meng, Guo Chen, and Zhao Yang Dong. Decentralized optimal reactive power dispatch of optimally partitioned distribution networks. IEEE Access, 6:74051–74060, 2018.
  26. [26] Zhirui Liang, Jae-Won Chung, Mosharaf Chowdhury, Jiasi Chen, and Vladimir Dvorkin. GPU-to-Grid: Voltage regulation via GPU utilization control. In PowerUp, 2026.
  27. [27] Meta AI Llama Team. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
  28. [28] Enrico Marchesini, Benjamin Donnot, Constance Crozier, Ian Dytham, Christian Merz, Lars Schewe, Nico Westerbeck, Cathy Wu, Antoine Marot, and Priya L. Donti. RL2Grid: Benchmarking reinforcement learning in power grid operations. arXiv preprint arXiv:2503.23101, 2025.
  29. [29] Meta. The largest Meta data center yet brings big impact to Louisiana. https://datacenters.atmeta.com/richland-parish-data-center/.
  30. [30] Avisek Naug, Antonio Guillen, Ricardo Luna Gutierrez, Vineet Gundecha, Cullen Bash, Sahand Ghorbanpour, Sajad Mousavi, Ashwin Ramesh Babu, Dejan Markovikj, Lekhapriya Dheeraj Kashyap, Desik Rengarajan, and Soumyendu Sarkar. SustainDC: Benchmarking for sustainable data center control. In NeurIPS Datasets and Benchmarks Track, 2024.
  31. [31] Avisek Naug, Antonio Guillen, Ricardo Luna Gutiérrez, Vineet Gundecha, Sahand Ghorbanpour, Lekhapriya Dheeraj Kashyap, Dejan Markovikj, Lorenz Krause, Sajad Mousavi, Ashwin Ramesh Babu, and Soumyendu Sarkar. PyDCM: Custom data center models with reinforcement learning for sustainability. In ACM BuildSys, 2023.
  32. [32] NVIDIA. NVIDIA Hopper GPUs expand reach as demand for AI grows. https://nvidianews.nvidia.com/news/nvidia-hopper-gpus-expand-reach-as-demand-for-ai-grows, 2023.
  33. [33] NVIDIA. Thousands of NVIDIA Grace Blackwell GPUs now live at CoreWeave, propelling development for AI pioneers. https://blogs.nvidia.com/blog/coreweave-grace-blackwell-gb200-nvl72/, 2025.
  34. [34] NVIDIA Corporation. NVIDIA H100 Tensor Core GPU. https://www.nvidia.com/en-us/data-center/h100/, 2024.
  35. [35] OpenAI. gpt-oss-120b & gpt-oss-20b model card, 2025.
  36. [36] OpenAI. Stargate community. https://openai.com/index/stargate-community/, 2026.
  37. [37] Pacific Gas and Electric Company. Electric Rule No. 2: Description of Service. https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_RULES_2.pdf, 2023.
  38. [38] Pratyush Patel, Esha Choukse, Chaojie Zhang, Íñigo Goiri, Brijesh Warrier, Nithish Mahalingam, and Ricardo Bianchini. Characterizing power management opportunities for LLMs in the cloud. In ASPLOS, 2024.
  39. [39] David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.
  40. [40] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
  41. [41] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  42. [42] Arman Shehabi, Sarah J. Smith, Alex Hubbard, Alex Newkirk, Nuoa Lei, Md Abu Bakar Siddik, Billie Holecek, Jonathan Koomey, Eric Masanet, and Dale Sartor. 2024 United States data center energy usage report. Technical report, Lawrence Berkeley National Laboratory, 2024.
  43. [43] Tom Short. OpenDSS IEEE Test Cases. https://github.com/tshort/OpenDSS/tree/master/Distrib/IEEETestCases. GitHub repository, accessed April 25, 2026.
  44. [44] Qwen Team. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025.
  45. [45] Michael Terrell. How we're making data centers more flexible to benefit power grids. https://blog.google/innovation-and-ai/infrastructure-and-cloud/global-network/how-were-making-data-centers-more-flexible-to-benefit-power-grids/, 2025.
  46. [46] Michael Terrell. A new milestone for smart, affordable electricity growth. https://blog.google/innovation-and-ai/infrastructure-and-cloud/global-network/demand-response-data-center-milestone/, 2026.
  47. [47] Arya Tschand, Arun Tejusve Raghunath Rajan, Sachin Idgunji, Anirban Ghosh, Jeremy Holleman, Csaba Kiraly, Pawan Ambalkar, Ritika Borkar, Ramesh Chukka, Trevor Cockrell, Oliver Curtis, Grigori Fursin, Miro Hodak, Hiwot Kassa, Anton Lokhmotov, Dejan Miskovic, Yuechao Pan, Manu Prasad Manmathan, Liz Raymond, Tom St. John, Arjun Suresh, Rowan Taubitz, Sean Zhan, Scott Wasson, David Kanter, and Vijay Janapa Reddi. …
  48. [48] Thomas Wolgast and Astrid Nieße. Learning the optimal power flow: Environment design matters. Energy and AI, 17:100410, 2024.
  49. [49] Ruofan Wu, Jae-Won Chung, and Mosharaf Chowdhury. Kareus: Joint reduction of dynamic and static energy in large model training. arXiv preprint arXiv:2601.17654, 2026.
  50. [50] xAI. Colossus. https://x.ai/colossus, 2026.
  51. [51] Yiheng Xie, Wenqi Cui, and Adam Wierman. Enhancing data center low-voltage ride-through. arXiv preprint arXiv:2510.03867, 2025.