From CPUs to GPUs, TPUs, and FPGAs, Industrial Edge Computing is Following in the Cloud's Heterogeneous Footsteps: New Economics at the Edge

By Arti Loftus October 12, 2021

With smaller form factors and more powerful silicon coming together to support the rapidly growing “Industry 4.0 Edge,” Industrial IoT expert Charles (Chuck) Byers, Associate Chief Technical Officer at the Industry IoT Consortium, has been focusing on complexity at the edge for several years and recently published a paper in the IIC Journal of Innovation with his insights.

“Heterogeneous computing is the technique where different types of processors with different data path architectures are applied together to optimize the execution of specific computational workloads,” Byers writes in the introduction. Edge computing moves a subset of the computation, network, and storage tasks traditionally done in cloud data centers to edge nodes located deeper in the network and therefore closer to IoT devices. “Traditional CPUs are often inefficient for the types of computational workloads we will run on edge computing nodes.”

Byers studied the evolution of cloud data centers as a precursor to supporting an equally heterogeneous edge, with similar requirements to partition, optimize, and build efficient hardware & software architectures that ensure edge nodes can predictably and efficiently support a wide range of use cases.

While there are myriad definitions of edge computing, Byers defines it in his paper as “a technique through which the computational, storage and networking functions of an IoT network are distributed to a layer or layers of edge nodes arranged between the bottom of the cloud and the top of IoT devices.”

“There are many tradeoffs to consider when deciding how to partition workloads between cloud data centers and edge computing nodes,” Byers said, noting that it is also important to design processor data path architectures optimized at each layer for different applications. 

Alphabet Soup at the Edge

Byers notes that computing resources in the cloud consist of traditional Complex Instruction Set Computing / Reduced Instruction Set Computing (CISC/RISC) servers, but also include Graphics Processing Unit (GPU) accelerators, Tensor Processing Units (TPUs), Field Programmable Gate Array (FPGA) farms, and a few other processor types that accelerate certain types of workloads.

“Many of the capabilities of the cloud data center are mirrored in the heterogeneous computing architecture of the edge computer node,” Byers writes. “It includes modules for multiple processor types, including CISC/RISC CPUs, GPUs, TPUs, and FPGAs. The compute workloads can not only be partitioned between edge and cloud (see the companion article in this issue of the IIC Journal of Innovation) but also partitioned between the various heterogeneous processing resources on both levels.”
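As a thought experiment, the partitioning Byers describes can be pictured as a mapping from workload types to the processor best suited to run them. The sketch below is purely illustrative: the workload names, affinity table, and fallback rule are invented for this article, and real orchestration systems are far more sophisticated.

```python
# Hypothetical sketch of placing workloads on heterogeneous edge processors.
# The affinity table and names below are invented for illustration only.

WORKLOAD_AFFINITY = {
    "control_logic":     "cisc_risc_cpu",  # branchy, single-threaded code
    "video_analytics":   "gpu",            # massively parallel pixel math
    "ml_inference":      "tpu",            # dense tensor operations
    "packet_processing": "fpga",           # line-rate custom pipelines
}

def place(workload_type, available):
    """Pick the preferred processor if the node has one; else fall back to the CPU."""
    preferred = WORKLOAD_AFFINITY.get(workload_type, "cisc_risc_cpu")
    return preferred if preferred in available else "cisc_risc_cpu"

# A small edge node without a TPU sends inference to the CPU instead:
print(place("ml_inference", {"cisc_risc_cpu", "gpu", "fpga"}))    # cisc_risc_cpu
print(place("video_analytics", {"cisc_risc_cpu", "gpu", "fpga"}))  # gpu
```

The same lookup could run at the cloud layer with a different affinity table, which is the two-level partitioning Byers alludes to.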

He explained that edge computing nodes today generally leverage a homogeneous computing base, with all the processing running on the same type of CPU. “CISC / RISC processors are by far the most popular solutions, with smaller edge nodes using a single-core CPU chip providing their processing power, often using X86 or ARM architectures. Larger edge nodes include multicore processors, with between two and about 32 X86, ARM, or RISC-V cores, or include multiple CPU chips of the same type.”

Byers also explores edge software platforms, including Microsoft Azure Edge, Amazon Greengrass, VMware Edge, and open source edge software projects from the Eclipse Foundation, EdgeX Foundry, and the Linux Foundation, describing an expansive and maturing range of options.

“These software packages manage the operating system infrastructure, configuration, security, orchestration, management, etc. of CISC / RISC processors in edge nodes,” Byers writes, and “Once one of these software infrastructure packages is up and running on the processor chip of an edge node, algorithms, protocol stacks, and application software can be loaded on top to complete the functionality of the edge system.”

Byers explained that heterogeneous processors, capable of supporting both single-threaded architectures and massively parallel computational tasks, may define the future of advanced edge computing use cases.

Why Move So Much Processing to the Edge?

“There are several key performance attributes that can be used to judge the suitability of a certain processing architecture to a specific set of applications,” Byers explained. “These attributes relate to performance, efficiency, scalability, density, cost, and many similar areas, with throughput a high priority attribute. Throughput could be quantified using measures like sessions/users/devices supported, link bandwidth processed, latency, model complexity evaluated, transactions/inferences/operations per second, and similar measures.” Similarly, the cost of a computing solution could be analyzed as the purchase price of the hardware, its energy consumption, or its physical volume or weight properties.

When it comes to optimizing economic efficiency, Byers looks to the performance ratio as the primary calculation of value: some throughput measure divided by some cost measure.
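The ratio itself is simple arithmetic; what matters is which throughput and cost measures you plug in. The figures below are invented for illustration only, showing how the same throughput number yields different rankings depending on whether the denominator is purchase price or power draw.

```python
# Byers' performance ratio: some throughput measure / some cost measure.
# All processor figures below are invented for illustration only.

def performance_ratio(throughput, cost):
    """Return throughput units delivered per unit of cost."""
    return throughput / cost

# Hypothetical edge processors: inferences/sec, purchase price (USD), power (W).
processors = {
    "cpu": {"inferences_per_sec": 2_000,  "price_usd": 500,   "watts": 65},
    "gpu": {"inferences_per_sec": 40_000, "price_usd": 2_000, "watts": 250},
    "tpu": {"inferences_per_sec": 90_000, "price_usd": 3_000, "watts": 200},
}

for name, p in processors.items():
    per_dollar = performance_ratio(p["inferences_per_sec"], p["price_usd"])
    per_watt = performance_ratio(p["inferences_per_sec"], p["watts"])
    print(f"{name}: {per_dollar:.1f} inf/s per $, {per_watt:.1f} inf/s per W")
```

With these made-up numbers, the TPU wins on both measures, but a different workload mix or price point could easily flip the ranking, which is exactly why Byers frames the choice as workload-dependent.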

With more open software and hardware ecosystems, costs to purchase the processor hardware are positioned to decrease, making larger and multi-edge implementations more feasible and scalable for the end customer. “Getting more throughput per dollar of the system purchase price is one important way of optimizing the total lifecycle cost of ownership of an edge system. The amount of throughput a dollar will buy is highly dependent upon system architecture, the capabilities of the processor chips, the efficiencies of the hardware and software infrastructure, and the requirements of the software and algorithms,” Byers said.

The electrical power needed to continuously operate edge nodes is another extremely important part of the economics. “Energy consumption is usually the largest component of its ongoing operational expense,” Byers said, “whether this power is supplied by batteries (at least during the times when AC power is unavailable) or direct from the power line. Cooling is another consideration, as the electrical energy that enters a processor chip is almost completely converted to heat that must be removed from the system, and the necessary cooling infrastructure is a strong contributor to the purchase and operational costs. 

“Power and cooling can create absolute limits on the throughput of edge computers,” Byers explained.

Space is another consideration, whether edge nodes are located at the base of cell towers, roadside cabinets, micro data centers the size of shipping containers, in vehicles, or even carried by humans. “As processors get physically larger, the cost associated with providing that space grows rapidly,” Byers said.

Weight is another often overlooked consideration when it comes to calculating the true costs associated with edge compute node implementations. “In certain deployment situations, especially aerospace, maritime or human portable deployments, there are strong constraints to the maximum weight of an edge node. The choices of processor technologies can have a strong influence on the overall weight of the system.”

In his latest paper, developed with and for the IIC, Byers explores in great detail these aspects and concludes, “The performance, cost, and efficiency of edge computing can be optimized through careful selection of various types of heterogeneous processors to run various aspects of the edge workloads.  Different processor types, including CISC/RISC CPUs, GPUs, TPUs, and FPGAs, can be combined into modular implementations that are optimized for the offered workloads.  Modular orchestration systems dynamically adapt the heterogeneous processor infrastructure to match the requirements of the offered load.  Heterogeneous processor techniques are especially valuable in edge computing, as they can greatly improve the throughput cost measures and economic efficiency in resource-constrained edge nodes.”

Byers will be speaking at the upcoming Edge Computing World virtual gathering October 10 – 12 (you can register for this free event here).

Arti Loftus is an experienced Information Technology specialist with a demonstrated history of working in the research, writing, and editing industry with many published articles under her belt.

Edited by Luke Bellos
