EDGE FEATURE NEWS

Is Now the Time to Apply Fog Computing to the Internet of Things?

By Special Guest
Dr. Vladimir Krylov, Big Data and Machine Learning Technologies Consultant for Artezio
September 13, 2016

Fog computing, a term originally introduced by Cisco, at first seemed like the exotic idea of a company in constant search of new product applications. But the term has since become the basis for network solutions, including in the Internet of Things. The main idea of the fog computing architecture is to distribute data processing and operational procedures across the devices that connect to the cloud (i.e., a virtualized computing environment) via the Internet, as well as across many other devices located in the network.

One of the key features of fog computing is a vertical distribution of functions across layers, extending from the sensors, through the fog, and finally to the cloud, according to the processing latency each function can tolerate. In this architecture, high-latency enterprise operations (days to months) are implemented in the cloud, whereas low-latency technical operations (milliseconds to hours), from high-speed analytics to transactional analytics, are realized in fog nodes. Patterns and rules for machine learning algorithms are formed in the cloud and then moved to the fog for fast execution.
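As a rough sketch of this split (not taken from the article), a fog node might hold a lightweight rule or model pushed down from the cloud, apply it to sensor readings within milliseconds, and escalate only when it fires; the rule, field names, and thresholds below are purely illustrative assumptions, shown here in Python.

    import time
    from typing import Optional

    # Hypothetical rule learned in the cloud and pushed down to this fog node;
    # in a real platform it would arrive over the device-management channel.
    cloud_rule = {"metric": "temperature", "max": 85.0}

    def handle_reading(reading: dict) -> Optional[dict]:
        """Low-latency path executed on the fog node: apply the cloud-trained
        rule locally and escalate to the cloud only when it fires."""
        if reading["metric"] == cloud_rule["metric"] and reading["value"] > cloud_rule["max"]:
            return {"alert": "threshold_exceeded", "reading": reading, "ts": time.time()}
        return None  # nothing to escalate; the raw reading never leaves the fog

    # Example reading handled entirely at the edge.
    print(handle_reading({"metric": "temperature", "value": 90.2, "sensor": "s-17"}))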

The proposed architecture has not been fully realized yet. Looking at the best-known IoT platforms (MS Azure IoT Suite, IBM Watson IoT Platform, and ThingWorx IoT Platform), we can see that their architecture still connects devices through gateways to a high-performance, centralized dataflow processing platform. Probably only Cisco offers a tool, known as Fog Director, that is capable of managing large production deployments based on its own platform of IOx-enabled fog applications.

In this article, we will analyze several key factors of fog architecture and its benefits for end users.

Factor #1 - the computing capacity of a fully distributed system.
Each function in an IoT system determines the minimum computing capacity required to implement it. In the generic architecture, the overall latency is distributed so that devices generate data and transmit it to the gateways of the processing system, where the required computing capacity is provisioned. Such systems are usually built on scalable Big Data technologies, such as the Hadoop Distributed File System and the Apache Kafka message broker, and rely on platforms such as Apache Storm and Apache Spark for complex, efficient dataflow processing. Users are normally offered this kind of solution as an IoT cloud service.
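To illustrate the cloud-centric pipeline (this sketch is not from the article), a gateway typically publishes device readings to a broker topic for downstream Storm or Spark jobs to consume; here is a minimal Python example using the kafka-python client, where the broker address, topic name, and payload fields are assumptions.

    import json
    from kafka import KafkaProducer  # pip install kafka-python

    # Hypothetical broker address and topic; in practice these come from the
    # IoT platform's configuration.
    producer = KafkaProducer(
        bootstrap_servers="broker.example.com:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # The gateway forwards each sensor reading to the cloud-side stream processors.
    producer.send("iot-readings", {"sensor": "s-17", "metric": "temperature", "value": 72.4})
    producer.flush()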

With fog computing, latency is minimized by analyzing data at the fog nodes instead of sending it to the cloud. In that case, all event aggregation has to be performed by the distributed architecture deployed in the network where the devices (sensors) and fog nodes are located. Thus, the fog architecture moves the capacity question from the cloud to the network implementation.
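To make the contrast concrete, here is a minimal, hypothetical sketch of the fog-side alternative: the node collapses a short window of raw events into a summary, and only that summary would travel to the cloud (the window contents and field names are illustrative assumptions).

    from collections import defaultdict
    from statistics import mean

    def aggregate_window(readings: list) -> dict:
        """Collapse one window of raw sensor readings into the compact summary
        that is all the fog node sends upstream."""
        by_sensor = defaultdict(list)
        for r in readings:
            by_sensor[r["sensor"]].append(r["value"])
        return {s: {"count": len(v), "mean": mean(v), "max": max(v)}
                for s, v in by_sensor.items()}

    window = [  # a short window of raw readings held locally on the fog node
        {"sensor": "s-17", "value": 71.9},
        {"sensor": "s-17", "value": 72.4},
        {"sensor": "s-18", "value": 68.0},
    ]
    print(aggregate_window(window))  # only this summary would be sent to the cloud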

If decision-making requires cooperative processing of data from all sensors, we need to build a cluster with the same throughput as the centralized solution, but assembled from the computing resources available in the network, including the sensor network itself. If the bandwidth remains the same, the capacity of each node is low, and more nodes are therefore required, then the overall latency of the distributed system will be governed by Amdahl's law. This leads to an important result: fog computing can improve the performance of an IoT system only if the processing can be distributed across a large number of slow computers in a low-speed network.
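For reference (this worked example is not from the article), Amdahl's law bounds the speedup of a job whose parallelizable fraction is p when the work is spread over n workers; the node counts and the 10x per-node slowdown below are purely illustrative assumptions.

    def amdahl_speedup(p: float, n: int) -> float:
        """Best-case speedup: the serial fraction (1 - p) is never sped up,
        while the parallel fraction p is divided across n workers."""
        return 1.0 / ((1.0 - p) + p / n)

    # Hypothetical comparison: a few fast cloud servers versus many fog nodes,
    # each assumed to be about 10x slower than a cloud server.
    cloud = amdahl_speedup(p=0.99, n=8)        # small, fast cluster
    fog = amdahl_speedup(p=0.99, n=200) / 10   # many nodes, each ~10x slower
    print(f"cloud: {cloud:.1f}x, fog: {fog:.1f}x (relative to one cloud server)")
    # Under these assumptions the fog cluster only comes out ahead when p is very
    # close to 1, i.e. when the work really can be spread over many slow machines.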

Factor #2 - cost.
The advantage of the fog architecture can be evaluated by comparing the deployment cost of a high-performance centralized processing platform with the cost of implementing fog nodes. The total cost also includes the software platform that implements distributed computation across such a cluster. The use of virtualization assumes that hypervisors will make effective use of the computing resources of all fog nodes. It is worth noting that the deployment cost of such heterogeneous distributed systems interconnected via the Internet can be high and can therefore make the use of the fog architecture ineffective.

Factor #3 - system security.

In the classic IoT architecture, the IoT gateway is the critical point for malicious data injection. Gateways can be protected in the same way as conventional websites, and these protection methods, including fast hopping of IP addresses (the IP Fast Hopping protocol), manage the vulnerability by allowing interaction only between the legitimate participants of an exchange. In the fog architecture, by contrast, the processing layer is exposed to the network through the interfaces of every node. If data processing involves thousands of devices, all of them need connections that are protected against malicious data; in other words, fog nodes have to be integrated by a secure overlay network rather than by plain TCP/IP links. Given the requirement to provide guaranteed low latency, a solution to this problem has not been found yet, and until it is, the security of systems built on the fog architecture will remain doubtful.
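As one illustration of the kind of building block such a secure overlay would need (and emphatically not a solution to the low-latency problem the article notes is still open), node-to-node links can at least be mutually authenticated and encrypted; below is a minimal Python sketch of a fog node that accepts only peers presenting a valid client certificate, where the certificate file names and port are assumptions.

    import socket
    import ssl

    # Hypothetical certificates issued by the fog operator's own CA.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="fog-node.crt", keyfile="fog-node.key")
    ctx.load_verify_locations(cafile="fog-ca.crt")
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid client cert

    with socket.create_server(("0.0.0.0", 9443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()  # only authenticated fog peers get this far
            data = conn.recv(4096)         # a raw reading or aggregate from a peer node
            conn.close()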

About the Author: Dr. Vladimir Krylov is the Big Data and Machine Learning Technologies Consultant for Artezio and the former Head of the Big Data Technologies Lab at Nizhny Novgorod State Technical University. Krylov is a member of the Russian Academy of Engineering, IEEE, ACM, the Communications Society, and ITU-T Study Group SG17.

Edited by Ken Briodagh