The tech industry is spending like it’s 1999. Except this time, the zeros are bigger, the stakes are higher, and the promises sound remarkably familiar.
Between 2025 and 2030, companies are projected to pour between $3.7 trillion and $7.9 trillion into AI data center infrastructure. Microsoft alone plans to spend $80 billion this year. Amazon is north of $100 billion. The AI data center market is expected to grow from $236 billion in 2025 to $934 billion by 2030. Those aren’t typos—they’re gambles dressed up as forecasts.
But here’s the uncomfortable truth hiding in plain sight: we’re building infrastructure for demand that hasn’t materialized yet, using energy we don’t have, in a grid that can’t handle the load, with seven-year wait times for connection requests. The disconnect between aspiration and reality is widening.
The Grid Can’t Cash the Checks
Let’s talk about the elephant in the server room: power.
AI workloads demand 40-250 kilowatts per rack, compared to traditional data centers operating at 10-15 kW. A single ChatGPT query consumes nearly 10 times the electricity of a Google search. Goldman Sachs projects data center power demand will surge 160% by 2030. Morgan Stanley forecasts data center emissions hitting 2.5 billion metric tons of CO2 equivalent.
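The rack-level gap is easier to feel in dollars. A rough sketch, using the rack draws above and an assumed $0.10/kWh industrial electricity rate (the rate and the continuous-utilization assumption are illustrative placeholders, not sourced figures):

```python
# Back-of-envelope annual electricity cost per rack.
# Assumptions (not sourced): $0.10/kWh industrial rate, 24/7 utilization.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # USD, assumed

def annual_energy_cost(rack_kw: float) -> float:
    """Electricity cost of one rack running continuously for a year."""
    return rack_kw * HOURS_PER_YEAR * PRICE_PER_KWH

traditional = annual_energy_cost(12)   # mid-range of the 10-15 kW figure
ai_low = annual_energy_cost(40)
ai_high = annual_energy_cost(250)

print(f"Traditional rack: ${traditional:,.0f}/yr")
print(f"AI rack (low):    ${ai_low:,.0f}/yr")
print(f"AI rack (high):   ${ai_high:,.0f}/yr")
print(f"High-end multiplier: {ai_high / traditional:.0f}x")
```

Even under these toy assumptions, a high-density AI rack costs roughly twenty times a traditional rack to power. Multiply that across a facility and the grid stress stops being abstract.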
The energy math gets worse when you look at supply. According to Deloitte’s survey of 120 US-based power company and data center executives, 79% said AI will increase power demand through 2035. The leading challenge for data center infrastructure development? Grid stress. Some connection requests currently face seven-year waiting periods.
This isn’t a short-term bottleneck. This is structural.
When Google pulled its 450-acre data center proposal from Franklin, Indiana in September 2024, residents cheered. They understood what the company didn’t want to emphasize: these facilities consume enormous amounts of water and electricity while delivering minimal local benefits. Similar pushback is occurring in West Virginia, Northern Virginia, and other “Data Center Alley” locations.
The industry’s solution? Build more facilities in areas with cheap electricity and renewable power plants. The problem? You’re just relocating the constraint, not solving it.
The Dotcom Comparison We’re All Avoiding
“It’s not like the 1990s,” everyone says. Except... it kind of is.
The patterns are eerily similar: inflated infrastructure spending, uncertain ROI timelines, and the assumption that if you build it, revenue will come. The dotcom bubble wasn’t wrong about the internet’s importance—it was wrong about the timeline and who would capture the value.
Today’s AI infrastructure boom rests on similar assumptions. Companies like OpenAI, Anthropic, and Elon Musk’s xAI are building what will be the world’s largest supercomputing facilities. The spending is justified by projections that 30% of new drugs will be discovered using AI by 2025, that AI will reduce drug discovery timelines by 25-50%, and that enterprises across every sector will need this computational power.
Maybe. But “maybe” is expensive.
The S&P 500 spent $554 billion on capital expenditures in the first half of 2025. AI-related spending is driving this to levels not seen since the dotcom era. Companies are collectively betting that better models, better chips, and better infrastructure will unlock trillion-dollar AI applications.
If they’re right, they’re geniuses. If they’re wrong about timing or adoption, they’re building the 2020s equivalent of fiber optic cable that sat dark for a decade.
What Happens When the Music Stops
Here’s where it gets uncomfortable for marketers and advertisers.
When infrastructure spend outpaces revenue generation for too long, companies start cutting. They don’t cut infrastructure—that’s sunk cost. They cut services, staffing, and marketing budgets. The pattern is predictable: overspend on technology, then slash the go-to-market functions when Wall Street starts asking questions about returns.
We’ve seen this movie before. After 2000, ad spending collapsed not because brands didn’t believe in the internet, but because the companies building internet infrastructure ran out of runway. When AWS had its service disruption in December 2021, it affected hundreds of businesses and cost an estimated $150 million in lost business per hour. Now multiply that risk by a market that’s expected to be 10 times larger with more concentrated dependencies.
The marketing implications are stark. If you’re building campaigns assuming unlimited compute power at predictable costs, you’re planning for a world that might not exist. If you’re betting on AI-powered personalization at scale, you’d better have backup plans for when compute costs spike or availability becomes constrained.
The Rebalancing That’s Coming
So what does a more realistic forecast look like?
First, consolidation. Not every company needs its own data center empire. The hyperscalers—AWS, Azure, Google Cloud—will continue to dominate because they have the scale to manage power, cooling, and efficiency better than anyone else. Smaller players will partner or be acquired.
Second, distributed computing will make a comeback. Edge computing deployments can reduce cloud data transfers by up to 70%, according to recent analysis. When centralized infrastructure becomes constrained, businesses will push processing closer to users. This shift matters for advertisers running real-time bidding, personalization engines, or location-based campaigns.
Third, energy efficiency becomes a competitive advantage. Liquid cooling systems, custom chips optimized for inference rather than training, and modular data center designs that reduce construction timelines from 24 months to 12 months will differentiate winners from losers.
Fourth, pricing models will evolve. The current subscription approach to compute will give way to more dynamic pricing that reflects actual supply and demand. Smart CMOs are already budgeting for variable compute costs in their media plans.
What This Means for Your Business
If you’re in marketing, advertising, or media, here’s your homework:
Audit your infrastructure dependencies. Where are your critical systems hosted? What happens if your primary cloud provider has capacity constraints or price increases? Do you have alternatives?
Model your compute costs. Stop treating AI tools as fixed-cost line items. Build scenarios for 2x, 3x, and 5x current pricing. Can your business model withstand that?
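The scenario exercise above can be sketched in a few lines. All figures here are hypothetical placeholders — a monthly AI bill, the revenue that spend supports, and an arbitrary "rethink" threshold — the point is the shape of the stress test, not the numbers:

```python
# Stress test: how does the AI cost-to-revenue ratio look if pricing
# jumps? All inputs are hypothetical placeholders for illustration.
MONTHLY_AI_SPEND = 50_000             # USD, assumed current bill
MONTHLY_REVENUE_SUPPORTED = 400_000   # USD attributed to AI-driven work

def stress_test(multipliers=(2, 3, 5)):
    """Return cost-to-revenue ratio under each pricing scenario."""
    return {m: (MONTHLY_AI_SPEND * m) / MONTHLY_REVENUE_SUPPORTED
            for m in multipliers}

for mult, ratio in stress_test().items():
    flag = "OK" if ratio < 0.5 else "RETHINK"  # 50% threshold is arbitrary
    print(f"{mult}x pricing: costs hit {ratio:.0%} of supported revenue -> {flag}")
```

With these placeholder numbers, the model survives 2x and 3x pricing but breaks at 5x — which is exactly the kind of boundary worth knowing before a contract renewal, not after.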
Develop contingency plans. What if the AI services you rely on aren’t available 24/7? What if response times slow down during peak demand? Do you have manual processes that can keep operating?
Watch the energy markets. Data center locations are increasingly determined by power availability. If you’re planning physical retail, events, or regional campaigns, understand where the infrastructure is actually being built and where it’s getting blocked.
Question the AI-first mandates. Not every problem needs an AI solution. Sometimes a well-designed rule-based system is faster, cheaper, and more reliable. The companies that will thrive aren’t the ones using the most AI—they’re the ones using the right tools for each job.
The Bottom Line
The AI infrastructure boom is real, necessary, and probably overextended. Between $3.7 trillion and $7.9 trillion is a $4.2 trillion margin of error. That spread tells you everything you need to know about forecast confidence.
Smart businesses aren’t avoiding AI. They’re avoiding dependency on infrastructure that may not exist at prices that may not hold. They’re building flexibility into their systems, maintaining hybrid approaches, and staying ruthlessly focused on ROI measured in quarters, not decades.
The companies that will look smart in 2028 aren’t the ones spending the most on infrastructure today. They’re the ones who know which bets to make, which constraints are real, and when to let someone else pay for the overhead.
The party isn’t over. But last call is closer than the projections admit.

