Is Energy Now the Main Bottleneck for AI and Cloud Growth?

Maryanne Baines is a preeminent authority in cloud technology and infrastructure strategy, specializing in the complex intersection of high-performance computing and energy systems. With extensive experience evaluating tech stacks and product applications for global industries, she has become a leading voice on how power availability is redefining the limits of digital expansion. As AI workloads transform the technical landscape, Maryanne provides critical insights into how the world’s largest cloud providers are bypassing traditional grid limitations to sustain the next generation of GPU-intensive clusters.

AI training and inference require significantly more energy than standard workloads, often leading to multi-year infrastructure delays. How are these grid constraints reshaping your deployment timelines, and what specific metrics do you use to evaluate if a site’s power capacity can sustain long-term GPU clusters?

The shift toward AI has fundamentally altered our project calendars because the energy density required for GPU clusters is vastly higher than what traditional servers demand. We are seeing a reality where the International Energy Agency notes that data center electricity use is rising faster than overall power demand, leading to grid connection delays that can span several years due to the need for new transmission lines and substation upgrades. To navigate this, we look beyond simple “available megawatts” and focus on long-term load stability and the timeline for local utility infrastructure refreshes. If a site cannot guarantee the massive, sustained load required for model training without waiting half a decade for a grid upgrade, we are forced to pivot toward on-site generation strategies like fuel cells to remain competitive.

Large-scale fuel cell deployments are emerging as a solution for on-site power, offering electrical efficiency between 54% and 60%. What are the operational trade-offs of using modular fuel cells compared to traditional gas plants, and how do you integrate these units into existing data center architectures?

Modular fuel cells, particularly solid oxide versions, offer a sophisticated alternative to traditional combustion because they generate power through an electrochemical process that avoids transmission losses. With an electrical efficiency of 54% to 60%, they rival large gas-fired plants while allowing us to add capacity incrementally as our GPU clusters grow, rather than building a massive, underutilized power plant on day one. Integrating these units requires a departure from traditional “grid-first” architecture; we treat the fuel cell farm as the primary heart of the facility, often placing it in close proximity to the server halls to maximize efficiency. This modularity means we can match our capital expenditure directly to our compute scaling, which is a massive operational advantage when deploying gigawatt-scale infrastructure.
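The incremental capacity-matching described above can be sketched as simple arithmetic. The module rating and headroom factor below are illustrative assumptions, not figures from the interview or any vendor's spec sheet:

```python
import math

# Sketch: matching modular fuel cell capacity to GPU cluster growth.
# MODULE_KW and RESERVE_FACTOR are hypothetical planning inputs.
MODULE_KW = 300.0        # assumed rating of one solid oxide fuel cell module
RESERVE_FACTOR = 1.15    # assumed 15% headroom for redundancy and load peaks

def modules_needed(cluster_mw: float) -> int:
    """Fuel cell modules required to carry a given cluster load with headroom."""
    required_kw = cluster_mw * 1000 * RESERVE_FACTOR
    return math.ceil(required_kw / MODULE_KW)

# Capacity is added in steps as the cluster scales, rather than all at once:
for mw in (10, 50, 250):
    print(f"{mw} MW cluster -> {modules_needed(mw)} modules")
```

Because capacity arrives in module-sized increments, capital spend tracks the compute roadmap instead of front-loading a full-size plant.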

Total efficiency for on-site power can exceed 80% when using combined heat and power configurations. Could you walk us through the step-by-step process of capturing and repurposing that excess heat, and what challenges arise when scaling these systems to support several gigawatts of capacity?

Achieving that 80% efficiency threshold involves a strategic capture of the thermal energy byproduct generated during the electrochemical reaction in the fuel cells. We route this high-grade heat through heat exchangers to support the data center’s cooling infrastructure or provide heating for nearby industrial processes, effectively turning a waste product into a resource. However, when you scale this to support the 2.8 gigawatts of capacity seen in major industry agreements, the sheer volume of heat becomes a logistical puzzle. You have to design massive thermal management systems that can handle that load without interfering with the delicate temperature requirements of the GPU racks themselves, often requiring specialized plumbing and heat-sink infrastructure that standard data centers aren’t built to house.
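The energy balance behind that 80% figure can be made concrete. The electrical efficiency below is the mid-point of the 54–60% range quoted earlier; the fuel input and the fraction of waste heat actually recovered are illustrative assumptions:

```python
# Sketch: combined heat and power (CHP) energy balance for a fuel cell plant.
# FUEL_INPUT_MW and HEAT_RECOVERY_EFF are hypothetical, for illustration only.
FUEL_INPUT_MW = 100.0      # assumed chemical energy in the incoming fuel stream
ELECTRICAL_EFF = 0.57      # mid-point of the 54-60% range quoted in the interview
HEAT_RECOVERY_EFF = 0.60   # assumed share of waste heat captured via exchangers

electric_mw = FUEL_INPUT_MW * ELECTRICAL_EFF      # power delivered to the racks
waste_heat_mw = FUEL_INPUT_MW - electric_mw       # thermal byproduct of the reaction
recovered_mw = waste_heat_mw * HEAT_RECOVERY_EFF  # heat routed to cooling/industry

total_eff = (electric_mw + recovered_mw) / FUEL_INPUT_MW
print(f"Total efficiency: {total_eff:.0%}")       # crosses the 80% threshold
```

Under these assumptions the plant delivers 57 MW of electricity plus roughly 26 MW of usable heat from 100 MW of fuel, which is how total efficiency climbs past 80% even though electrical efficiency alone tops out near 60%.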

Electricity access has become a competitive differentiator, often outweighing land and connectivity in site selection. How has this shifted the way you negotiate with local utilities, and what anecdotes can you share about markets where power scarcity has completely halted expansion plans?

We have entered an era where power is the new gold, often becoming a more important factor in site selection than land cost or fiber connectivity. In negotiations, we are no longer just customers; we are often partners in infrastructure, sometimes needing to discuss multi-year transmission projects just to get a foot in the door. We have seen markets in both the U.S. and Europe where utilities have issued formal warnings that they simply cannot meet new demand without massive upgrades, effectively putting a “hard stop” on expansion in those regions. This scarcity has created a fierce competitive environment where the ability to bring your own power—via on-site fuel cells or long-term energy agreements—is the only way to bypass these local gridlocks and bring AI capacity online.

While solar and wind are essential for offsetting emissions, their unpredictability necessitates a steady baseload from sources like fuel cells or nuclear power. How do you balance these intermittent and steady energy sources, and what role does hydrogen play in your future sustainability roadmap?

Balancing the grid requires a “belt and suspenders” approach where we use renewables like solar and wind to offset our total carbon footprint, while relying on fuel cells or nuclear for the steady, 24/7 baseload that AI inference demands. Hydrogen is the “holy grail” for our future roadmap because it allows fuel cells to operate with zero carbon emissions, but we face significant hurdles with current infrastructure and high costs. Most of our current deployments still rely on natural gas as the primary fuel source because the supply chains for large-scale hydrogen are still in the early stages of adoption. Our goal is a phased transition where we build the infrastructure today using gas-ready fuel cells that can be converted to hydrogen as the global supply and cost-efficiency of the fuel improve.

What is your forecast for AI-driven energy demand?

I anticipate that AI-driven energy demand will continue to decouple from traditional data center growth, necessitating a complete “off-grid” mindset for the world’s largest tech providers. As training models grow in complexity, we will see the emergence of “power-first” data centers—facilities built specifically where energy is abundant or where on-site generation can reach multi-gigawatt scales, such as the 2.8 GW targets we are currently seeing. This will likely lead to a surge in private energy infrastructure, where cloud companies act more like utility providers to ensure their GPUs never go dark. Ultimately, the winners in the AI race won’t just have the best algorithms; they will be the ones who secured the most reliable and efficient energy pipelines to power them.
