
IoT FEATURE NEWS

What are the Hyperscalers' Goals Working the Power Play with Telcos?

By Carl Ford

Generative AI and large language models (LLMs) require unprecedented computational power, leading to a massive increase in server rack density and overall power consumption. GPUs, especially, are far more power-hungry than traditional CPUs. Meanwhile, the hyperscalers have ambitious renewable energy and net-zero goals, which are becoming harder to meet as their total energy consumption skyrockets. Sourcing enough clean, firm power (24/7 renewable energy) is a significant hurdle, which is why Meta signed a 20-year power agreement with Constellation Energy.

Besides sourcing the power, the transmission of the power to the hyperscalers’ data centers is subject to the weak links in the grid. The physical infrastructure (e.g., transmission lines, substations, etc.) may not be sufficient to deliver that power to concentrated data center campuses. Existing electrical grids in many regions are struggling to provide enough power to meet the demand from new hyperscale data center builds, which leads to delays in bringing new capacity online and can strain local grids.

These constraints are compounded in disaster scenarios, like hurricanes, when consumers' and citizens' needs must come first and corporate use cases get put on the back burner.

So, the question becomes: where can hyperscalers find ready access to backup power? After the power companies themselves, I will submit that telcos are among the industries best positioned with redundant power systems, and probably the only other industry that transmits power (low power, to be sure, but unlike hospitals, the goal is not to keep the lights on, it is to keep the network working).

While the hyperscalers invite the telcos to be on the edge of edge compute, the side benefit of redundant backup power systems may be more substantial. What we have, then, is a way for the hyperscalers to offload some processing and gain some power.

Let’s take a look at what assets telcos can put into the mix.

Telephone companies and data centers in central offices

  • Evolution of central offices: Traditional central offices, once housing circuit-switched voice equipment, are being modernized and repurposed. With the shift to fiber optics and IP-based services, these locations are ideal for housing data center equipment, especially for edge computing. Edge data centers bring computing resources closer to end-users, reducing latency and improving performance for applications like 5G, IoT, and content delivery.
  • Colocation services: Many telecom companies, like Lumen (formerly CenturyLink), offer colocation services in their data centers, which can include space within or adjacent to their central offices. This allows other businesses to house their IT infrastructure in secure, connected environments.
  • Network infrastructure: Even if not explicitly marketed as a "data center," a central office houses critical network infrastructure that requires continuous power. This includes equipment for fiber optic networks (like OLTs for GPON), traditional voice switches, and data routing equipment.

Backup generators and power redundancy

Telecom central offices are built with multiple layers of power redundancy to ensure continuous service. This typically includes:

  • UPS (Uninterruptible Power Supply) systems: Large battery banks provide immediate, short-term power (minutes to hours) to allow for the seamless transition to generator power. These are essential for preventing service interruptions from momentary power fluctuations or outages.
  • Diesel generators: For extended power outages, central offices have large diesel generators that automatically kick on when grid power is lost. These generators are designed to run for many hours or even days, as long as fuel supplies are maintained.
  • Fuel storage: Telecom companies maintain significant fuel storage capacity at their central offices to keep generators running during prolonged outages.
  • Portable generator hookups: Some smaller remote terminals or aggregation hubs may have battery backups for a limited time (e.g., 24 hours), with provisions for easily plugging in portable generators if an outage extends.
  • Robustness and importance: The telecommunications industry, by its nature, is highly reliant on continuous power. Maintaining communications networks, especially for emergency services (911), is a critical function. Therefore, telcos invest heavily in ensuring their infrastructure, including central offices, can withstand grid failures.
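The layered redundancy described above (UPS bridge, then diesel generator, then refueling) can be sketched as a simple failover model. The capacities and timings below are illustrative assumptions, not specifications for any real central office.

```python
from dataclasses import dataclass

# Illustrative failover model for a central office power chain:
# grid -> UPS battery bridge -> diesel generator -> refueling.
# All numbers are hypothetical, not actual telco specifications.

@dataclass
class PowerPlant:
    ups_minutes: float = 30.0        # battery bridge capacity
    fuel_hours: float = 72.0         # on-site diesel storage
    gen_start_minutes: float = 0.5   # automatic transfer switch delay

    def ride_through(self, outage_hours: float) -> str:
        """Return which redundancy layer carries the load for a given outage."""
        if outage_hours * 60 <= self.ups_minutes:
            return "ups"                 # batteries alone bridge the gap
        if outage_hours <= self.fuel_hours:
            return "generator"           # diesel carries the long haul
        return "refuel-required"         # outage outlasts stored fuel

plant = PowerPlant()
print(plant.ride_through(0.25))   # 15-minute blip -> "ups"
print(plant.ride_through(48))     # two-day outage -> "generator"
print(plant.ride_through(100))    # beyond stored fuel -> "refuel-required"
```

The point of the sketch is the ordering: batteries exist only to bridge the transfer-switch delay, while generators and fuel logistics determine how long a site truly rides through an outage.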

Telcos also have a distributed solution with multiple locations. The choice of edge location depends heavily on the specific application's latency requirements, bandwidth needs, security considerations, and cost-effectiveness.
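The trade-off just described, weighing latency, bandwidth, security, and cost-effectiveness, can be sketched as a weighted scoring function. The sites, scores, and weights below are invented purely for illustration.

```python
# Hypothetical weighted scoring for choosing an edge location.
# Criteria mirror the factors named in the text: latency, bandwidth,
# security, and cost-effectiveness. All values are made up; higher is
# better on every criterion.

WEIGHTS = {"latency": 0.4, "bandwidth": 0.25, "security": 0.2, "cost": 0.15}

sites = {
    "central_office": {"latency": 9, "bandwidth": 7, "security": 8, "cost": 6},
    "regional_dc":    {"latency": 6, "bandwidth": 9, "security": 9, "cost": 7},
    "hyperscale_dc":  {"latency": 3, "bandwidth": 9, "security": 9, "cost": 9},
}

def score(site: dict) -> float:
    """Weighted sum across the four criteria."""
    return sum(WEIGHTS[k] * site[k] for k in WEIGHTS)

for name in sites:
    print(f"{name}: {score(sites[name]):.2f}")

best = max(sites, key=lambda name: score(sites[name]))
print("best:", best)   # with latency weighted heaviest, the CO wins
```

With a latency-heavy weighting, the central office comes out on top; shift the weights toward raw cost-effectiveness and the hyperscale data center wins, which is exactly the application-dependent choice the text describes.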

Now that we have looked at the architecture of telco assets, let’s look at the benefits of the hyperscalers working with telcos.

Distributed Workloads, Reduced Backhaul

  • What it does: By placing edge compute resources closer to the data source and end users (in COs), edge computing reduces the need to send all data back to the massive, centralized hyperscale data centers for processing.
  • Power impact: Less data transmitted over long distances means less energy consumed by the network infrastructure (routers, switches, fiber optics) over those distances. While this is a smaller piece of the pie, compared to server power, it contributes to overall network energy efficiency.
  • Hyperscaler benefit: It lessens the demand for some workloads to be processed at the central cloud, potentially allowing hyperscalers to optimize their centralized data center capacity for the most complex, non-latency-sensitive tasks (like AI model training, large-scale batch processing).
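The backhaul savings above can be put in back-of-envelope terms. The energy-per-gigabyte figure below is a rough, illustrative assumption, not a measured value for any real network.

```python
# Back-of-envelope estimate of network transit energy avoided by
# processing data at the edge (in a CO) instead of backhauling it to a
# central cloud. The ~0.1 Wh/GB long-haul transit figure is an
# illustrative assumption.

JOULES_PER_GB_TRANSIT = 0.1 * 3600   # 0.1 Wh/GB expressed in joules

def backhaul_energy_kwh(gb_per_day: float, local_fraction: float) -> float:
    """kWh/day of transit energy avoided if `local_fraction` of traffic stays local."""
    avoided_gb = gb_per_day * local_fraction
    return avoided_gb * JOULES_PER_GB_TRANSIT / 3.6e6   # joules -> kWh

# Example: a CO ingesting 50 TB/day that keeps 80% of it local.
print(f"{backhaul_energy_kwh(50_000, 0.8):.1f} kWh/day avoided")   # 4.0 kWh/day
```

As the text notes, this is a small slice compared to server power draw, which the arithmetic bears out: single-digit kWh per day per site, meaningful only in aggregate across thousands of sites.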

Leveraging Existing Infrastructure and Redundancy

  • What it does: Telco COs already have robust power infrastructure (UPS, generators, fuel storage) and established connections to the electrical grid. They are designed for continuous 24/7 operation and have been built out over decades.
  • Power impact: Hyperscalers don't have to build entirely new, massive data centers from scratch just for edge needs. They can rent or partner to use existing, power-ready facilities. This avoids duplicating costly and time-consuming infrastructure development at every edge location.
  • Hyperscaler benefit: It defers or reduces the need for hyperscalers to sink enormous capital and effort into building out new power infrastructure (substations, generators) at every single edge location, which would only exacerbate their current power sourcing challenges.

Optimization for Localized Workloads

  • What it does: Edge data centers in COs are optimized for specific, low-latency, high-bandwidth applications that benefit from proximity. These workloads might involve real-time analytics, local AI inferencing, or rapid data ingestion.
  • Power impact: By running these specific workloads at the edge, you can potentially avoid the energy consumption associated with the round trip to a distant cloud and back. The compute at the edge can be right-sized for local demand, rather than relying on an over-provisioned centralized data center.
  • Hyperscaler benefit: Allows them to focus their large, centralized facilities on the truly hyperscale, computationally intensive tasks (like AI model training) that genuinely require their enormous, specialized infrastructure, rather than being burdened by less demanding, latency-sensitive workloads that are better served at the edge.

Diversifying Power Demand

  • What it does: Instead of having all data center power demand concentrated in a few data center alleys or zones (which strain local grids), edge computing distributes this demand across potentially hundreds or thousands of COs.
  • Power impact: This can help prevent localized grid overloads and makes it easier for utilities to plan for power delivery, as the demand is more dispersed.
  • Hyperscaler benefit: It supports a more resilient and sustainable overall digital infrastructure by not putting all the power eggs in one basket.

Despite the apparent benefits, there are some important caveats to consider in these scenarios where hyperscalers might partner with telcos.

  • Total power consumption: Edge computing doesn't reduce the total amount of computing being done, so it doesn't fundamentally reduce global electricity consumption. It shifts and optimizes where that computing happens.
  • Smaller, but more numerous: While individual edge sites in COs are smaller than hyperscale data centers, their sheer number could still add up to significant power demand. Each site still needs power, cooling, and backup.
  • Cooling challenges: Edge sites, especially smaller ones, might have different cooling challenges than massive data centers. Some older COs may not be designed for the high heat density of modern AI servers, requiring upgrades.
  • Hyperscalers still need massive data centers: Edge computing offloads certain types of workloads, but the core business of hyperscalers (large-scale storage, complex AI training, global applications) still requires their huge, centralized data centers, and the power challenges there remain significant.

That said, there are many examples of telcos working with hyperscalers.

  • AT&T: Is actively repurposing central offices for edge deployments and colocation services, especially for AI inferencing and low-latency applications.
  • Bell Canada: Partnered with both AWS and Google to run core network functions on its distributed cloud edge.
  • Lumen Technologies (formerly CenturyLink): Is utilizing its COs for enterprise colocation, emphasizing the need for deeper presence in markets and low-latency connectivity.
  • Telefónica: A major global operator, Telefónica is actively demonstrating how its multi-cloud environment can bring applications to the edge, leveraging its telco infrastructure.
  • Verizon: Is known for having some of the more mature edge deployments, often in collaboration with hyperscalers like AWS (through AWS Wavelength).
  • Vodafone, China Mobile, Comcast: These are also major players whose strategies often involve modernizing their central office infrastructure for edge computing.
  • Ziply Fiber: Is also noted for turning old central offices into data centers.

A friend who recently left a hyperscaler pointed out that most enterprises are reluctant to consume edge compute from a telco and would prefer to run the edge themselves, which may be accurate for hybrid private/public networks. If that is, indeed, the case, we may see the model change, with the hyperscalers becoming the customers rather than the partners. Time will tell but, for today, the telcos are the best place to gather redundant power reserves.




Edited by Erik Linask

Partner, Crossfire Media

What are the Hyperscalers' Goals Working the Power Play with Telcos?

By: Carl Ford    6/6/2025

Are telcos in prime position to support hyperscalers as AI drives up energy and compute needs?