The Electrify Everything Show

Podcast 25: The Interconnection Queue Is the New Oil

Nigel Broomhall Season 2 Episode 2


AI isn’t being throttled by chips. It’s being throttled by electricity.

In this episode of The Electrify Everything Show, we unpack the emerging power land-grab behind the AI boom—and why the real competitive advantage is no longer just compute, capital, or real estate… it’s deliverable megawatts.

The U.S. Energy Information Administration expects U.S. electricity consumption to hit record highs again—4,199 billion kWh in 2025 and 4,267 billion kWh in 2026—with growth driven in part by large customers like data centers, alongside broader electrification. Meanwhile, U.S. data centers already consumed about 183 TWh in 2024 (just over 4% of U.S. electricity use), with projections rising sharply by 2030. Globally, the IEA projects data-center electricity demand could roughly double to ~945 TWh by 2030 in its base case.

But here’s the part most people miss: you don’t “just add power.” The bottleneck is often interconnection—the studies, upgrades, substations, transformers, timelines, and politics required to connect new massive load safely. Grid operators are reacting in real time. PJM, for example, has forecast 32 GW of peak load growth from 2024 to 2030, with the bulk driven by data centers, and has launched accelerated processes to address the reliability and affordability implications of large load additions.

In plain English: the interconnection queue is the new oil—and the next decade’s winners will be the ones who can secure power, prove flexibility, and fund upgrades without dumping private costs onto public ratepayers.

In this briefing-style episode, you’ll learn:

  • Why AI turns electricity into the limiting factor (and why “4%” is already a big deal)
  • What “interconnection” really means—and why it’s where projects get stuck
  • Why transmission, substations, and transformers can matter more than new generation
  • The new hierarchy of winners: flexible load, phased ramps, on-site capability, smart siting
  • What policymakers, utilities, developers, and normal businesses can do right now

If you work anywhere near energy, AI, real estate, infrastructure, or fleet electrification—send this episode to one person who needs to understand what’s coming. One share beats a thousand likes.


You’ve been told the AI revolution is about chips, models, and software.

That’s adorable.

The real constraint—the thing that decides who wins and who stalls—is electricity. Not abstract “energy.” Actual megawatts, delivered at the right place, on time, reliably.

If you want to understand the next decade of tech, real estate, and even your utility bill… you need to understand a single concept:

The interconnection queue is the new oil.

The U.S. Energy Information Administration expects U.S. power consumption to hit record highs again: 4,199 billion kWh in 2025 and 4,267 billion kWh in 2026, with growth driven in part by data centers for AI and crypto, plus broader electrification.

That’s the headline. But the story underneath it is more aggressive.

Data centers in the U.S. consumed about 183 terawatt-hours in 2024—a bit over 4% of total U.S. electricity use—and projections show that could rise to 426 TWh by 2030.

That’s not a rounding error. That’s an industry the size of a medium-sized country… showing up, basically, overnight.
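Those two numbers are easy to sanity-check. A rough sketch, noting that 1 billion kWh equals 1 TWh (so EIA's 4,199 billion kWh is 4,199 TWh), and that dividing 2024 data-center use by the 2025 forecast total is a deliberate approximation:

```python
# Back-of-envelope check on the US figures quoted above.
us_total_2025_twh = 4199  # EIA forecast total, 2025 (4,199 billion kWh)
dc_2024_twh = 183         # US data centers, 2024
dc_2030_twh = 426         # projection for 2030

share = dc_2024_twh / us_total_2025_twh            # data-center share of total
cagr = (dc_2030_twh / dc_2024_twh) ** (1 / 6) - 1  # implied annual growth, 2024->2030

print(f"data-center share: {share:.1%}")    # just over 4%
print(f"implied growth:    {cagr:.1%}/yr")  # roughly 15%/yr
```

So "just over 4%" and "183 to 426 TWh" are the same story told two ways: a mid-single-digit share today, compounding at roughly 15% a year.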

Today I’m going to do three things, in plain English:

  1. Show you the real scale of AI-driven electricity demand.
  2. Explain why the grid can’t just “build more power” quickly—because that’s not how the bottleneck works.
  3. Tell you who gets served first, who pays, and what the smart players are doing right now.

And I’ll give you a practical playbook at the end.

Welcome to The Electrify Everything Show. I’m Nigel, and this episode is your briefing on the emerging electricity war behind AI.

One request: if you know someone building a data center, working at a utility, doing site selection, financing infrastructure, or trying to electrify a fleet—send them this episode.

Because the next decade won’t be defined by who has the best pitch deck. It’ll be defined by who has the best queue position.

First, zoom out.

The International Energy Agency projects global electricity demand from data centers will double to around 945 TWh by 2030 in its base case—just under 3% of global electricity consumption by then. 

A useful mental image: that’s roughly “Japan-scale” electricity consumption, but dedicated to data centers. 

And the IEA points out the growth rate here is exceptionally fast—data center electricity demand growing around 15% per year from 2024 to 2030 in the base case. 

That’s not normal. That’s “a new industrial sector arriving at speed.”
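"Roughly double" and "~15% per year" are both quoted above, and a quick check shows why the IEA hedges with "roughly"—the two figures are approximately, not exactly, consistent (a sketch on the quoted numbers, not the IEA's own math):

```python
# Two ways to read the IEA base case quoted above.
dc_2030_twh = 945

# "Roughly double by 2030" implies a 2024 base near half of that...
base_2024 = dc_2030_twh / 2       # ~473 TWh
# ...and a clean doubling over six years implies this compound rate:
doubling_cagr = 2 ** (1 / 6) - 1  # ~12.2%/yr

# Six years at the quoted ~15%/yr would be a bit more than a double:
growth_at_15pct = 1.15 ** 6       # ~2.3x

print(f"implied 2024 base:  ~{base_2024:.0f} TWh")
print(f"doubling CAGR:      {doubling_cagr:.1%}/yr")
print(f"6 years at 15%/yr:  {growth_at_15pct:.1f}x")
```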

Now the U.S.

Pew summarizes IEA estimates that U.S. data centers used 183 TWh in 2024, just over 4% of U.S. electricity—rising to 426 TWh by 2030 under their projection.

Here’s why that matters:

Electric systems are built for peaks and constraints. If you add a large, steady load, you don’t just add “a bit more demand.” You add a need for more wires, more substations, more transformers, more dispatchable capacity, and more planning margin—often in the exact places that are already stressed.

People hear “4%” and relax.

Don’t.

Four percent is already a huge share in a system this large, and it is geographically concentrated. Data centers don’t spread out evenly like Christmas lights. They cluster where fiber, tax incentives, land, and power access align.

That clustering is why certain regions are suddenly acting like they discovered oil under the parking lot.

Now let’s talk about AI specifically.

Classic cloud compute is “big,” but AI training and AI inference can be… spiky, dense, and relentless. Even when efficiency improves, the industry tends to consume it right back—because cheaper compute creates more use cases.

So, yes, chips will get more efficient. Cooling will get better. But the economic flywheel is brutal: more capability creates more demand.

And that brings us to the punchline:
AI demand does not politely wait for 10-year grid planning cycles. It shows up, raises its hand, and says, “I’d like 300 megawatts by Tuesday.”

This is happening at the same moment the U.S. is leaving a long period of relatively flat electricity growth.

EIA’s analysis notes that, after more than a decade of little change, it forecasts U.S. electricity consumption to increase in 2025 and 2026, surpassing the 2024 high.

And Reuters’ summary of EIA’s forecast explicitly points to data centers and broader electrification as contributors.

So: the system that spent years thinking “flat demand” is now forced to think “growth,” and growth that is concentrated and time-sensitive.

Okay.

Now that we’ve established the demand shock, here’s the next step:

If you think the solution is simply “build more power plants,” you’re thinking about the wrong choke point.

The bottleneck is not just generation.

The bottleneck is interconnection + grid infrastructure + timelines + who pays.

Let’s talk about why.

Interconnection is the process of connecting a new generator—or a new big load—to the grid safely.

It includes studies:

  • Can the local substation handle it?
  • Do the lines overload?
  • Do you need a new transformer?
  • If something fails, does it cascade?

Then it includes upgrades:

  • New equipment
  • New lines
  • Sometimes entirely new substations

And then it includes the fight:

  • Who pays for the upgrades?
  • What happens if the load never shows up?
  • What happens if it shows up faster than upgrades can be built?

This process is why people with money and urgency don’t say “we need power.”

They say: “We need a deliverable interconnection plan with dates.”

[11:15–13:40] PJM as the canary in the coal mine

Let’s use PJM as an example because they’re being unusually direct about what’s happening.

PJM has said its 2025 Long-Term Load Forecast projects 32 GW of peak load growth from 2024 to 2030, with all but 2 GW coming from data centers. In other words: almost all of the forecast increase is data centers.

This is why PJM initiated an accelerated stakeholder process—what they call a Critical Issue Fast Path—specifically focused on reliably serving large load additions. 

When a grid operator fast-tracks governance, it’s not because they’re bored. It’s because they can see the wall coming.

Here’s what the interconnection queue feels like in practice:

You’re a developer. You want to build a data center. You apply to interconnect at a certain location. You get in line.

But the line isn’t first-come, first-served in a simple way. The system needs to study interactions with everyone else in line, plus the physical network constraints.

So the queue becomes this strange mix of engineering and game theory.

It’s like the DMV, except:

  • everyone’s in a hurry,
  • the forms change mid-queue,
  • and the price tag is nine figures.

And yes, some queue positions are “speculative”—people reserving a spot without certainty they’ll build. That creates real policy tension.

Let me make this very concrete:

You can sign all the clean-energy pledges you want. None of that magically produces a high-voltage transformer, a substation expansion, or a transmission upgrade with a 12-month lead time.

Grid build-out is civil work, heavy equipment, permits, and manufacturing.

It runs on:

  • engineering capacity
  • procurement capacity
  • permitting capacity
  • and political capacity

This is why the “AI meets grid” story rapidly becomes a “state and local politics” story.

If you want evidence that this is real, just watch what the biggest buyers are doing.

Meta announced multiple nuclear-related agreements—long-term power purchases and development collaborations—designed to support AI data centers, targeting up to 6.6 GW of nuclear power by 2035 in public descriptions of the deals. 

AP also reports Meta’s “Prometheus” data center cluster in Ohio is expected to be 1 GW and targeted for 2026.

That’s the size of a large power plant… but on the demand side.
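To put "1 GW of demand" in annual energy terms, here's a quick sketch. The 80% utilization figure is an illustrative assumption (data centers typically run at high, steady load factors), not a reported number for the Prometheus cluster:

```python
# What a 1 GW load means per year (illustrative; utilization is assumed).
capacity_gw = 1.0
hours_per_year = 8760
utilization = 0.80  # assumed steady load factor, not a reported figure

annual_twh = capacity_gw * hours_per_year * utilization / 1000
print(f"~{annual_twh:.1f} TWh/yr")  # ~7 TWh/yr
```

Roughly 7 TWh a year—a few percent of the entire 2024 U.S. data-center total of 183 TWh, from a single cluster.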

When the biggest tech firms start securing generation like this, they are telling you something:

They do not trust the queue. They do not trust the timeline. And they will not bet their AI roadmaps on “maybe the grid can handle it.”

Now, there’s a second-order effect: when power gets tight, the system leans on what’s dispatchable.

That can mean gas.

And that creates a political and public backlash. You’re already seeing coverage framing AI’s energy demand and associated pollution risks as a serious climate concern. 

Whether you agree with the framing or not, the takeaway is straightforward:

AI’s power appetite is no longer just an engineering conversation. It’s a social license conversation.

So now we’ve got the ingredients:

  • Massive new demand
  • Long infrastructure timelines
  • Concentrated regions getting hit hardest
  • Political scrutiny rising
  • Big players going around the queue with direct power deals

Which leads to the next question:

Who gets power first, who waits, and who pays for the upgrades?

That’s Act 3.

In the old world, being a “good customer” meant creditworthiness and a stable load.

In the new world, the premium trait is flexibility.

If you can:

  • curtail demand when the grid is stressed,
  • ramp in phases,
  • run some onsite generation,
  • or shift workloads to different times…

you move up the priority list, because you’re not a pure reliability liability. You’re an asset.

This is already embedded in how grid operators are thinking about large loads.

PJM’s Critical Issue Fast Path process exists because the system is trying to integrate large load additions without blowing up reliability or affordability. 

And the language around it is not subtle. PJM has publicly described large load growth creating upward pricing pressure and raising resource adequacy concerns. 

So: the system is in triage mode.

And triage has rules.

The easiest megawatt is the one that doesn’t require new lines and new substations.

So sites near:

  • robust transmission
  • existing substations with headroom
  • retired industrial loads
  • or areas with surplus generation

…move faster.

This is why brownfield sites and “boring” locations are suddenly sexy.

The power system is regulated, permitted, and socially governed.

If a project can credibly say:

  • jobs
  • tax base
  • local investment
  • grid support (like demand response capability)

…they get traction.

If a project looks like:

  • private profit
  • public cost
  • higher rates
  • and more emissions

…expect friction.

This isn’t moral judgment. It’s how democratic infrastructure works.

Now the most combustible part: who pays for grid upgrades.

There’s a growing view—especially from state leaders—that it’s unreasonable to broadly socialize certain costs driven by new large loads.

For example, PJM has public materials where governors’ principles explicitly include language along the lines of avoiding “socializing exogenous costs” onto families and small businesses. (PJM)

Translation: if a new mega-load shows up and requires expensive upgrades, the politics will increasingly demand that the mega-load bears more of the cost and the risk.

This is a big shift. It affects:

  • where data centers locate
  • how interconnection deposits are structured
  • and how quickly projects can proceed

Another reason this gets messy: not every request in the queue becomes a real facility.

Some are speculative: a developer wants an option. Some are “maybe” projects. Some are frankly fantasy.

So grid planners are stuck between:

  • taking requests seriously to avoid shortages
  • and not gold-plating upgrades for projects that never arrive

This is why deposits, milestones, and enforceable schedules are becoming more important.

Here’s my view:

Over the next 24 months, we will see a clear “power aristocracy” emerge.

The winners will be the loads that can do at least three of the following:

  1. Ramp in phases rather than demand everything day one
  2. Curtail under contract
  3. Bring generation or storage to the table
  4. Locate smart—near existing grid strength
  5. Pay real money upfront for certainty

If you can’t do any of those, you’ll still build… but you’ll build later. And you’ll pay more.

Alright—enough doom.

What do you actually do with this?

I’m going to give four playbooks:

  • for policymakers,
  • for utilities/RTOs,
  • for developers and hyperscalers,
  • and for normal businesses and households.

Playbook A: cities and states

If you’re a city or state trying to attract AI and data centers without wrecking your grid:

  1. Require phased load plans (e.g., 25 MW now, 50 MW later, etc.).
  2. Demand flexibility commitments: curtailment capability, backup, and operational constraints.
  3. Prioritize grid-adjacent sites: where transmission and substations are strong.
  4. Tie incentives to grid-positive behavior, not just “jobs promised.”

The future winners are regions that can say:
“Yes, we welcome load growth—but we run an adult process.”

Playbook B: utilities and grid operators

If you’re operating the grid:

  1. Make interconnection more real by using milestones + deposits so the queue reflects reality.
  2. Create standardized large-load tariffs that reward flexibility.
  3. Fast-track the upgrades that are obviously needed, but avoid building the Taj Mahal for vapor.

PJM’s move to accelerate processes around large loads is a signal that these governance changes are happening now, not in five years. (insidelines.pjm.com)

Playbook C: data center developers and hyperscalers

If you’re building big load:

  1. Treat “power strategy” as a first-class product requirement.
  2. Design for interruptibility and onsite capability—because it buys you speed.
  3. Expect to pay for certainty. That’s the price of the new era.
  4. Stop pitching “300 MW tomorrow.” Start pitching “here’s our staged plan, here’s our curtailment contract, here’s our contribution to upgrades.”

And learn from what the biggest players are doing: long-term supply arrangements and direct partnerships to lock in capacity. 

Playbook D: normal businesses and households

If you’re not building a data center, you’re not powerless—pun fully intended.

  1. Expect more time-based pricing and demand-response incentives over time.
  2. If you electrify—EVs, heat pumps—do it smartly with timers and load management.
  3. If you’re a business with flexible operations, look for demand-response programs. Flexibility is becoming monetizable.

And yes: this all reinforces why “electrify everything” needs to be paired with “operate everything intelligently.”

So here’s the summary.

The EIA expects record U.S. electricity consumption in 2025 and 2026, and it explicitly cites data centers as part of the demand growth story. (Reuters)
U.S. data centers were already about 4% of U.S. electricity use in 2024, with projections rising sharply by 2030.
Globally, data center electricity demand is projected to roughly double to ~945 TWh by 2030 in the IEA base case.
And grid operators like PJM are explicitly forecasting major peak load growth driven overwhelmingly by data centers—and they’re fast-tracking process changes because reliability and affordability are on the line. 

In short: the new scarcity is not compute. It’s deliverable megawatts.

Next episode, we’ll bring it down to street level: charging, standards, reliability, and why the connector war was just the warm-up.

If this clarified the madness, send it to one person who touches energy, tech, real estate, or infrastructure. One share beats a thousand likes.

I’m Nigel. This is The Electrify Everything Show. See you next time.