
It was quite the sight at MWC Barcelona in February. Everywhere you looked, companies seemed desperate to outdo each other by working the trendiest IT term du jour into their marketing taglines. “AI to Edge.” “Edge to Cloud.” “Cloud to IoT Edge.” “Edge to Edge.”
There’s no mistaking that “edge” is the hottest word in technology these days (not to be confused with an earlier, equally popular usage referring to endpoint cybersecurity solutions). It’s also no surprise that so many tech enterprises want to capitalize on it and showcase how they’re harnessing fledgling technologies to solve big business problems. However, the “edge” is sometimes misrepresented and stretched into things it’s not – much like the “AI washing” of a couple of years back.
As a result, it’s becoming difficult to identify – much less explain – the potential of how the edge and IoT can work together. But that’s the problem with buzzwords. Still, market intelligence firm IDC predicts that edge spend will reach up to 18 percent of the total IoT infrastructure spend by 2020.
So, let’s keep the buzz going and clarify the promise behind the words with a primer on what the edge is and what it isn’t. First, for clarification, here is a working definition:
Edge computing is the practice of processing data from IoT devices where it is generated – instead of in a centralized data-processing warehouse or in a public cloud. Because the data is handled at its source, it enables real-time analytics without the lag of a network round trip, lets the smart device perform as it was designed to, and reduces the internet bandwidth consumed.
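To make that definition concrete, here is a minimal sketch in Python of the pattern it describes: raw sensor readings are aggregated on the device itself, and only a compact summary is sent upstream to the data center or cloud. The sensor, threshold and window size are illustrative assumptions, not part of any particular product.

```python
import random
import statistics

def read_sensor():
    """Simulated vibration reading from a machine-mounted sensor (stand-in for real hardware I/O)."""
    return random.gauss(5.0, 1.5)

def summarize_window(readings):
    """Reduce a window of raw readings to a small summary that is cheap to send upstream."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
        "alerts": sum(1 for r in readings if r > 9.0),  # illustrative threshold
    }

def run_edge_loop(windows=3, window_size=100):
    """Process data where it is generated: aggregate locally, forward only the summary."""
    for _ in range(windows):
        window = [read_sensor() for _ in range(window_size)]
        summary = summarize_window(window)
        # A real deployment would publish this to a gateway or broker (for example, over MQTT);
        # printing it here simply shows how little data needs to leave the device.
        print(summary)

if __name__ == "__main__":
    run_edge_loop()
```

The point is the shape of the flow: hundreds of raw readings stay local, and only a handful of numbers travel over the network.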
Two more points:
- While edge computing has many different purposes, it’s popularly associated with IoT because it focuses on the devices and technologies attached to the “things” in the IoT. For example, the edge connects previously unconnected industrial machines so that actionable data can be captured from them.
- Edge computing complements the on-premises, cloud storage and networking solutions organizations already use.
Now, here are three facts to dispel other common misconceptions about the edge:
Edge computing has many applications, and it’s constantly evolving
The concept of edge computing isn’t particularly new. In fact, it goes back to the late 1990s, when Akamai’s content delivery network (CDN) developed technologies that served content closer to users. Amazon’s Elastic Compute Cloud elevated the term to “buzz” status around 2006.
Edge computing solutions can take quite different forms, which many organizations are only now realizing. They can be mobile, such as data collected in real time from a connected or autonomous car. They can be static, such as information collected slowly from a building management system or an offshore oil rig. And they can be hybrid, as in hospitals: electronic medical record (EMR) data is relatively static, but mobile data collected from patients in real time can be shared and distributed for monitoring and evaluation.
The purpose of the edge is to help data flow, not translate it
Still, confusion continues to reign regarding the edge and its role in translating data. Remember how “experts” talked about edge in its early stages? “Oh, the edge just collects and transmits data and deposits it into an analytics program.” Although data does flow through and into the edge, the edge itself does not translate the data.
The true value of the edge in IoT solutions is that, with the rapid increase in the use of sensors – particularly in industrial environments such as manufacturing facilities and offshore oil rigs – data is captured right at the source. It is then up to the infrastructure and architecture the organization has established to translate and analyze what has been captured, determine what it means and decide where it should be stored.
In reality, the edge is a decentralized extension of data center networks and the cloud. Edge computing spend is increasing largely because of additional deployments of converged IT and operational technology (OT) systems – both of which reduce the time to value of the data that connected devices collect.
Edge computing complements and enriches on-premises solutions
That means we need to – and now are able to – incorporate models that move decision-making to the edge. The goal is to leverage predictive analytics that, for example, can help an autonomous vehicle determine when to stop or what to avoid, depending on what comes into its path.
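As a rough, hypothetical illustration of decision-making at the edge, the Python sketch below evaluates a locally deployed rule – a stand-in for a predictive model trained centrally and pushed to the device – so the decision is made on the device in well under a millisecond rather than after a round trip to a data center. The braking distances and speeds are made up for illustration.

```python
import time

# Stand-in for a model trained centrally and deployed to the vehicle;
# a real system would load serialized model weights rather than hard-code a rule.
BASE_BRAKE_DISTANCE_M = 12.0  # illustrative value only

def should_stop(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Local decision: stop if the obstacle is inside the stopping envelope for the current speed."""
    stopping_envelope = BASE_BRAKE_DISTANCE_M + 0.5 * speed_mps
    return obstacle_distance_m < stopping_envelope

start = time.perf_counter()
decision = should_stop(obstacle_distance_m=14.0, speed_mps=8.0)
elapsed_ms = (time.perf_counter() - start) * 1000
# The decision happens on the device itself – no network round trip required.
print(f"stop={decision}, decided locally in {elapsed_ms:.3f} ms")
```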
Additionally, because adding capacity to a central storage network may not be the best economic or policy decision, edge computing creates a competitive advantage for organizations that need additional ingestion capacity for an ever-growing network of connected devices.
As organizations continue to embrace IoT technologies and build them into their products and services, the link between IoT and edge computing will become clearer and better defined, particularly with regard to where, how – and how fast – data is collected and analyzed. As that happens, there will be little misunderstanding of the edge’s impact and importance.
About the author: Dave Shuman is the Managing Director of Connected Industries and Smart Cities at Cloudera, working with customers to leverage explicit and implicit data to derive actionable insights. Previously, Shuman held a number of roles at Vision Chain, a leading demand signal repository provider enabling retailer and manufacturer collaboration, including COO and VP of field operations. He also served at top consumer goods companies such as Kraft Foods, PepsiCo, and General Mills. In addition, Shuman was VP of operations for enews, an ecommerce company acquired by Barnes and Noble, and served as Executive VP of Management Information Systems, where he managed software development, operations, and retail analytics; developed e-commerce applications and business processes used by Barnesandnoble.com, Yahoo, and Excite; and pioneered an innovative process for affiliate commerce.
Edited by Ken Briodagh