The Growing Challenges of Building AI-Ready Data Centers

The rapid expansion of artificial intelligence has turned the spotlight onto the backbone of our digital world: the data center. While the public focuses on the capabilities of generative AI, the professionals responsible for the physical infrastructure are navigating an increasingly “ugly picture” of regulatory hurdles, power constraints, and community friction. Navigating this landscape requires more than engineering prowess; it demands a deep understanding of multi-jurisdictional permitting, the scarcity of specialized labor in key hubs, and the environmental “fatal flaws” that can sink a multimillion-dollar project before the first stone is laid. In this conversation, we explore the shifting dynamics of data center development, from the difficulty of securing air quality credits in Texas to the innovative cooling technologies reshaping facility design. We delve into the complexities of community engagement, the reality of labor shortages in rural markets, and why the “long pole in the tent” for modern infrastructure is often a matter of bureaucratic timing rather than technical capability.

Permitting often involves navigating multiple jurisdictions and environmental factors like migratory bird patterns or air pollutants. How do you manage these overlapping local and state requirements, and what specific steps can be taken to prevent these approvals from becoming the primary bottleneck in a project timeline?

The reality is that permitting has become the “long pole in the tent” for almost every major project we undertake, often stretching into a process that lasts months rather than the weeks many developers hope for. You aren’t just dealing with a single building department; you are navigating a labyrinth of local, county, and state jurisdictions, each with its own specific demands regarding zoning, water usage, and land easements. For example, a project might be delayed simply because the site sits along a path for migratory birds or requires specialized air pollution permits for backup generators, creating a highly variable and often frustrating timeline. To prevent these approvals from becoming total bottlenecks, we have to stop treating permitting as a linear step and instead engage in a “paint an ugly picture” exercise early on to identify every possible jurisdictional hurdle. It is about securing those easements and approvals well before the first shovel hits the ground, acknowledging that land is rarely just purchased and cleared without a complex dance of legal and environmental permissions.

In regions like Dallas-Fort Worth, finding emissions offsets is nearly impossible for facilities exceeding certain annual thresholds. How do you adjust project designs when air quality permits are unavailable, and what are the consequences of attempting to install high-uptime combustion equipment without those credits?

When we look at regions like Dallas-Fort Worth, we hit what I call a “fatal flaw” in the development plan: the major-source emissions threshold sits at just 25 tons per year, and credits to offset anything higher are virtually non-existent. Many clients come to us with a “pie in the sky” image of a massive AI data center that requires “five nines” of uptime, which necessitates a large fleet of diesel-powered backup generators and significant fuel storage. If the air permits aren’t available, we have to be the ones to bring them back to reality by scaling down the combustion equipment or drastically redesigning the power backup architecture to stay under that 25-ton limit. The consequences of ignoring this are severe; while some states might allow you to grade the land or pour a foundation, there is a hard line that says you cannot turn on a single piece of emissions-generating equipment without that permit in hand. Attempting to bypass this isn’t just a regulatory risk; it’s a project-ending mistake that can leave a finished facility sitting cold and useless because it cannot legally operate its own emergency systems.
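To make the math concrete, here is a minimal back-of-envelope sketch of why a large generator fleet collides with a 25-ton-per-year cap. All figures in the code (fleet size, per-generator NOx emission rate, annual runtime hours) are illustrative assumptions for this sketch, not regulatory values or actual equipment specifications; a real permit application would use manufacturer emission data and the applicable agency's calculation method.

```python
# Hypothetical screening check: does an assumed backup generator fleet
# stay under a 25-ton-per-year emissions cap? Every number below is an
# illustrative assumption, not a regulatory or manufacturer figure.

LB_PER_SHORT_TON = 2000.0

def annual_emissions_tons(num_generators: int,
                          lb_per_hour: float,
                          hours_per_year: float) -> float:
    """Estimated annual fleet emissions in short tons."""
    return num_generators * lb_per_hour * hours_per_year / LB_PER_SHORT_TON

THRESHOLD_TONS = 25.0  # assumed major-source cap for the region

# Assumed fleet: 40 diesel gensets, 35 lb of pollutant per hour each,
# 50 hours per year of testing plus emergency runtime.
fleet_tons = annual_emissions_tons(40, 35.0, 50.0)
status = "over" if fleet_tons > THRESHOLD_TONS else "under"
print(f"Estimated fleet emissions: {fleet_tons:.1f} tons/yr "
      f"({status} the {THRESHOLD_TONS:.0f}-ton cap)")
```

Even modest testing schedules push a large fleet past the cap in this sketch, which is why the answer above describes scaling down combustion equipment or redesigning the backup architecture rather than simply adding generators.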

Skilled tradesmen are increasingly difficult to secure in hubs like Georgia and Northern Virginia, especially for projects in rural areas. How do you source specialized labor in these thin markets, and what logistical shifts are necessary when local regulatory meetings happen less frequently than in urban centers?

Even if you have a massive national general contractor with a presence in every major city, finding the actual subcontractors—the electricians and mechanical specialists—is a major hurdle in hotspots like Northern Virginia and Georgia. As we push further into rural areas to find land and power, the labor pool thins out significantly, forcing us to bring in specialized teams from outside the region, which adds layers of logistical cost and complexity. In these rural settings, you also run into a different kind of scheduling bottleneck, because the local regulatory boards might meet only once a month or even less frequently compared to the constant activity in a major urban hub. This means that if you miss a single deadline for a meeting agenda, your project timeline could slide by 30 days instantly, requiring a much more rigid and proactive approach to scheduling. We have to coordinate our construction milestones with these infrequent local calendars while simultaneously managing the housing and mobilization of a workforce that might not live anywhere near the job site.

Public concern regarding water and power usage is growing, yet many modern facilities actually use less water than typical local leisure centers. How should operators better coordinate with communities to reduce daily disruptions, and what specific metrics help prove a project is being a good environmental steward?

There is a significant maturity issue in how the industry handles community relations; too many operators are running at full speed to deliver systems and failing to consider how their project disrupts the daily lives of local residents. We need to do a much better job of educating the public, pointing to data like the techUK findings which show that an average data center can actually use less water than a typical neighborhood leisure center or gym. The key to being a good steward is coordination—thinking about how we can execute infrastructure projects in concert so we aren’t digging up the same road three different times for power, fiber, and water. By using clear metrics on water reciprocity and showing the community that these facilities run closed-loop or direct-to-chip systems that minimize waste, we can move away from the image of the “resource-hungry elephant” and toward being a responsible industrial neighbor. It’s about taking a moment to breathe and plan for the long term rather than just rushing to meet an immediate delivery target.

Balancing aggressive construction targets with the adoption of advanced liquid-to-chip or immersion cooling requires significant design flexibility. How do you integrate these newer cooling technologies into existing project blueprints, and what trade-offs must be made to meet both sustainability goals and strict delivery deadlines?

Integrating advanced liquid-to-chip or immersion cooling into a project that is already under a tight deadline is like trying to change the engines on a plane while it’s in flight. These systems are essential for the high-density heat produced by generative AI, but they require a completely different infrastructure than traditional air-cooled floors, including specialized piping and closed-loop water systems. Often, the trade-off comes down to the “fluidity” of the timeline; we have to decide if we are willing to risk a delay to implement a more sustainable, high-efficiency cooling solution or if we stick to traditional designs to hit an immediate delivery window. As the market becomes more dynamic, we are building more flexibility into our initial blueprints so that we can pivot to liquid cooling as the hardware arrives, but this requires a massive amount of up-front planning and a per-region strategy to ensure the local grid and water supply can handle the specific demands of these new technologies. It is a constant balancing act between the urgent need for compute and the long-term necessity of building a facility that won’t be obsolete or environmentally unsustainable in five years.

What is your forecast for data center construction?

My forecast for the industry is one of forced evolution, where the “wild west” era of rapid, uncoordinated growth is replaced by a much more disciplined, regionalized approach to infrastructure. We are going to see a major shift toward standardized, modular designs that allow for liquid cooling to be “plugged in” rather than custom-built, which will help mitigate some of the labor shortages we see in places like Georgia and Northern Virginia. However, the regulatory environment is only going to get tougher, and the “fatal flaws” regarding air emissions and power grid constraints will likely push developers toward even more remote locations, necessitating a new brand of self-sustaining, “off-grid” data center designs. Ultimately, the winners in this space will be the operators who prioritize “good stewardship” and community education, as public sentiment will become just as much a “long pole in the tent” as permitting and power availability are today.
