With the rise of artificial intelligence creating an insatiable appetite for computing power, the physical backbone of the digital world—the data center—is under unprecedented strain. To understand the dynamics of this critical market, we sat down with Maryanne Baines, a leading authority on cloud technology and infrastructure. We explored the immense operational pressures from AI’s power demands, the critical supply chain bottlenecks causing a capacity crunch, what happens when a region runs out of room, and the financial risks operators face if demand unexpectedly cools.
Goldman Sachs forecasts AI’s share of the data center market doubling to 30% soon, with overall power consumption jumping 175% by 2030. Can you walk me through the specific operational and infrastructural changes a data center operator must make to prepare for this dual challenge?
It’s a fundamental rewiring of the entire industry, not just an expansion. You can feel the urgency on the ground. For years, we focused on scaling out, but now it’s about density and power. Operators can’t just add more servers; they have to completely re-engineer their facilities for these new, power-hungry GPUs. That means ripping out old cooling systems and installing next-generation liquid cooling to handle the intense heat. It means working with utility companies years in advance to secure massive power upgrades. We’re not talking about adding a few more megawatts; a 175% jump means consumption rising to nearly three times today’s level, which requires new substations and grid-level planning. It’s a seismic shift from a predictable, steady growth model to something far more explosive and complex.
The base case scenario predicts occupancy will remain at peak levels through 2026 before supply constraints ease. From your perspective, what are the top three supply-side bottlenecks—like power, land, or hardware—causing this tightness, and what steps are companies taking to overcome them?
Absolutely, and that peak occupancy feels like a pressure cooker. The number one bottleneck, without a doubt, is power. You can have the land and the building, but if you can’t get the electricity, you have a very expensive, empty shell. The second is the supply chain for specialized hardware, especially the high-end GPUs and networking equipment needed for AI clusters. There are only a few manufacturers, and the wait times can be staggering. Finally, there’s the human element: skilled labor to build and operate these incredibly complex facilities is becoming scarce. To get around this, we’re seeing hyperscalers sign massive power purchase agreements years into the future, strike multibillion-dollar deals to lock in hardware pipelines, and even invest in their own training programs to build the workforce they need. It’s a frantic race to secure every component of the supply chain.
The report outlines a scenario where new GPUs and AI video content push occupancy rates over 100% in peak regions. Could you share a step-by-step example of what happens when a region’s capacity is fully tapped and how hyperscalers are forced to triage or delay demand?
Exceeding 100% capacity is a scenario that keeps executives up at night. It’s not a theoretical line; it’s a hard wall. The first thing that happens is a frantic scramble among customers for any remaining scraps of power or space, and prices go through the roof. Hyperscalers then enter a painful triage mode. Their largest, most strategic clients, the ones driving massive revenue, will be prioritized. New or smaller customers might be told their deployment is delayed indefinitely, or they could be offered capacity in a different, less desirable region, which can introduce latency issues. For a company trying to launch a new AI service, this is a disaster. The hyperscalers are filling capacity as fast as they can build it, but the demand wave, especially with a potential 17-point jump in occupancy, is simply too big to absorb, forcing them to make tough choices about who gets to innovate and who has to wait.
Goldman Sachs presents two scenarios for lower demand: a drop in AI adoption and a slowdown in traditional cloud spending. Which of these two possibilities poses a greater financial risk for data center operators, and what key metrics would you watch to see if we are heading that way?
This is a fantastic question because it gets to the heart of the business model. While a drop in AI demand would be a major blow to future growth projections, a slowdown in traditional cloud spending is the far greater immediate financial risk. Remember, run-of-the-mill cloud and traditional workloads still account for about 85% of data center demand. This is the bedrock of their revenue. A four-percentage-point drop in occupancy from that massive base, as one scenario suggests, would create a significant revenue shortfall and could lead to excess supply. The key metric I would watch is corporate IT spending. Are CFOs telling their teams to “be a little tighter with the usage of cloud services”? When you see a widespread trend of cloud optimization and cost-cutting, that’s the canary in the coal mine signaling that the foundational business is at risk.
What is your forecast for the data center market’s supply-demand balance over the next five years?
My forecast is for a prolonged period of tightness, likely even more so than the base case suggests. The next 18 months will be a severe crunch, with peak occupancy and intense competition for any available capacity. While I agree that new supply will start to come online after 2026, the fundamental demand curve has been permanently steepened by AI. The efficiency of new GPUs is improving, but as Jim Schneider noted, the overall power demand is outstripping what anyone thought possible. I believe the market will remain “tighter for longer,” and instead of occupancy falling back to 90%, it will likely stabilize in the mid-90s. The industry is now in a constant state of trying to build ahead of a tidal wave of demand, and the balance will remain precarious for the foreseeable future.
