When Will AIoT Outsmart Humans, and How Will We Know?

By Carl Ford | January 06, 2025

It seems everywhere we turn, AI is making headlines. When it comes to the Internet of Things (IoT) and the Industrial Internet of Things (IIoT), all signs point to AI continuing to unlock astonishing capabilities in predictive and prescriptive maintenance in the near (and far) future.

However, will you really trust it?

As we ponder this question, let's think about the butterfly effect as it relates to weather forecasts.

For those unfamiliar, the term is closely associated with mathematician and meteorologist Edward Norton Lorenz. The "butterfly effect" refers to the idea that a tornado's formation and path can be influenced by minor perturbations, such as a distant butterfly flapping its wings.

My concern here (in terms of IoT/IIoT/AIoT) is that people may be overlooking initial-condition data that at first appears inconsequential: AI plays the butterfly, and the ongoing developments in IoT play the tornado. Even a very small change in those conditions, after all, has the potential to create a significantly different outcome. (There's your tie-in.)
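For the curious, here's a minimal sketch of that sensitivity in Python. It integrates the standard Lorenz equations (with their textbook parameters) for two starting points that differ by one part in a million; the crude Euler integrator and the specific perturbation size are illustrative choices on my part, not anything from Lorenz's original work. The takeaway: the two trajectories begin indistinguishable and end up nowhere near each other.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the classic Lorenz system by one (crude) Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two starting conditions that differ by one part in a million -- the "butterfly."
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-6, 1.0, 1.0])

for step in range(6001):
    if step % 1000 == 0:
        print(f"t = {step * 0.005:5.1f}   separation = {np.linalg.norm(a - b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)
```

Run it and you'll see the separation between the two runs grow from 0.000001 to the full width of the attractor within a few dozen simulated seconds.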

So, what if people, conditioned as they are to look for direct causes and effects, wait for telltale triggers and overlook AI's minor and major impacts on IoT over time, when ultimately those impacts may not be readily apparent to us in our everyday experience?

If this were to end up being the case, how are we going to react when (to humor another example, if you will) an AI model suggests an industrial maintenance shutdown while we, as humans, are intent on maintaining full-speed-ahead production? Do we listen?

Let's say the AI is right; let's say it detected a legitimate maintenance problem that will take a negative toll on production. In that case, will human management actually verify the issue and take action, or will they keep production running until the damage becomes severe? (Or until an audit reveals that, had we listened to the AI, we'd already be back in operation.) What may have been missed or overlooked? How would we even know to look if we're trusting AI implicitly?

Surely, this thought exercise can take many turns. At what point is AI to be fully trusted? Does a technician listen when the AI suggests an equipment replacement? Is further non-AI analysis required? Will AI be smart enough to recognize evolving needs without further human training or intervention? And in that vein, what would spur humans to realize that further training is even necessary?

To play more with these ideas (i.e., both AI's and humans' impacts on real-time solutions, and the consequences of acting on them or ignoring them), consider registering for and attending IoT Evolution Expo 2025. I expect this exact topic will come up between Matt Hatton and me at the show.

Edited by Alex Passett

