
IoT FEATURE NEWS

Nothreat Fights AI Fire with AI in Firewalls

By

Lev Zabudko is co-founder and Chief Product Officer of Nothreat, an AI-native cybersecurity company launched in 2023. In just over a year, he has led the company from early-stage development to a valuation of £40 million (approximately $46.9 million), backed by investors and recognized for its ability to respond to the rapidly evolving cybersecurity landscape with real-time, adaptive solutions.

Under Lev’s product leadership, Nothreat has secured strategic partnerships with global organizations, including Lenovo and Pafos FC, and contributed to cybersecurity infrastructure efforts for international events such as the COP29 conference and congress. With more than 15 years of experience in applied AI, product strategy, and critical systems architecture, Lev focuses on developing security technologies that are designed for the scale, speed, and complexity of modern digital threats.

I had the chance to discuss with Lev the current security landscape, how it's being impacted by AI, and, specifically, what it means for the IoT space, especially with countless new connected devices being deployed. Here's what he had to say.

Carl Ford:  In your Threatscape 2025 report, you share that AI is enabling well-funded cybercriminals. Is this where organized crime has moved, or are these cybercriminals from a new breed and are they connected somehow to nation-states?

Lev Zabudko:  What we’re seeing isn’t just a change in who is attacking – it’s a change in how they’re doing it. AI has made things that used to take time, money, and technical skill much faster and easier. Groups that once relied on manual hacks can now run AI-generated attacks that are fast, adaptive, and surprisingly effective.

In our Threatscape 2025 report, we saw a 178% spike in country-specific attacks. That tells us these aren’t random – they’re planned and intentional. While it’s often hard to say for sure whether a group is state-backed, part of a larger network, or working alone, AI has made those lines much blurrier. Today, it’s less about labels and more about what an attacker can actually do with the tools at hand.

For example, large language models are already being used to write highly tailored phishing messages, using local slang or cultural references. They can also guess likely password combinations by analyzing how people in certain regions tend to think or write. When it comes to leaked data – huge dumps of emails, messages, and credentials on the dark web – AI can process and sort through all of that in minutes. That kind of scale used to be impossible without a big team. Now, it’s automated.

That’s why defenders need to stay just as sharp. It’s not enough to just monitor for leaks; we have to get creative, too. We can place traps in leaked data, like fake credentials or files that alert us when someone tries to use them. It’s all about staying one step ahead.
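The trap idea Lev describes can be sketched in a few lines. This is an illustrative toy, not Nothreat's implementation; the credential format and monitor class are hypothetical.

```python
import secrets

def make_canary_credentials(n=3):
    """Generate fake credentials to seed into a leaked dump; any
    attempt to use one is a high-confidence intrusion signal."""
    return {f"svc-{secrets.token_hex(4)}@example.com": secrets.token_urlsafe(12)
            for _ in range(n)}

class CanaryMonitor:
    def __init__(self, canaries):
        self.canaries = canaries   # username -> fake password
        self.alerts = []

    def check_login(self, username, source_ip):
        # A canary username appearing in any login attempt means the
        # leaked dataset is being actively exploited.
        if username in self.canaries:
            self.alerts.append((username, source_ip))
            return "ALERT"
        return "OK"

canaries = make_canary_credentials()
monitor = CanaryMonitor(canaries)
planted = next(iter(canaries))
print(monitor.check_login(planted, "203.0.113.7"))                    # ALERT
print(monitor.check_login("real.user@example.com", "198.51.100.2"))   # OK
```

Because the canary accounts exist nowhere legitimate, a single hit on one tells defenders both that the dump is circulating and where the attempt came from.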

At the end of the day, it’s not just about whether someone’s a hacker or a part of a criminal group. What matters is how well they use the tools. As with any powerful technology, it always comes down to whose hands it’s in.

CF:  In the report, you indicate that currently deployed systems move too slowly to deal with the onslaught of attacks, which penetrate so quickly that they are already inside before your systems have reacted. Is that because of a failure to update to current standards, or because of errors made in the past?

LZ:  It's not about past errors; it's about architecture. Traditional systems rely on static detection – signatures, rules, manual analysis. But AI-enhanced attacks evolve on the fly, employing zero-day exploits and polymorphic malware. In 2024 alone, organizations saw over 600,000 novel malware variants daily, many of them never seen before and unrecognizable to signature-based tools.

A key limitation is the lack of systems that can learn incrementally – that is, continuously evolve without needing to be retrained from scratch or forgetting what they previously learned. This challenge, known as catastrophic forgetting, is a major hurdle in applying machine learning to dynamic environments like cybersecurity.

At Nothreat, we’ve specifically addressed this by building a platform that applies incremental learning in a way that avoids forgetting past threats while adapting to new ones. This lets us preserve defensive memory and adapt simultaneously. The result is a defense system that not only reacts faster but evolves alongside the threat landscape, without resetting the clock each time a new technique appears.
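Nothreat's exact method isn't public, but the generic "rehearsal" technique behind this kind of incremental learning can be sketched: keep a small replay buffer of past exemplars and re-apply them alongside new data, so what the model learned about old threat families isn't overwritten. All names and the centroid model here are illustrative.

```python
import random

class IncrementalDetector:
    """Toy continual-learning detector: per-class running centroids plus
    a bounded replay buffer of past exemplars, so updating on new threat
    families does not erase what was learned about old ones."""
    def __init__(self, replay_size=50):
        self.centroids = {}          # label -> (count, mean vector)
        self.replay = []             # reservoir of (features, label) pairs
        self.replay_size = replay_size
        self.seen = 0

    def _update(self, x, label):
        count, mean = self.centroids.get(label, (0, [0.0] * len(x)))
        count += 1
        mean = [m + (xi - m) / count for m, xi in zip(mean, x)]
        self.centroids[label] = (count, mean)

    def learn(self, x, label):
        self._update(x, label)
        # Reservoir sampling keeps a bounded, unbiased memory of the past.
        self.seen += 1
        if len(self.replay) < self.replay_size:
            self.replay.append((x, label))
        elif random.random() < self.replay_size / self.seen:
            self.replay[random.randrange(self.replay_size)] = (x, label)

    def rehearse(self):
        # Re-apply stored exemplars after a batch of new data -- the
        # classic rehearsal defense against catastrophic forgetting.
        for x, label in self.replay:
            self._update(x, label)

    def classify(self, x):
        dist = lambda m: sum((xi - mi) ** 2 for xi, mi in zip(x, m))
        return min(self.centroids, key=lambda l: dist(self.centroids[l][1]))
```

After training on a new malware family and calling `rehearse()`, the detector still classifies samples from families it saw earlier, which is the "preserve defensive memory while adapting" property Lev describes.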

CF:  The report mentions VPN credential stuffing as one method of bombardment. Do we need a new generation of VPNs to emerge? How does one fight these attacks?

LZ:  Credential stuffing is really just the surface symptom of a much deeper issue – poor password hygiene and outdated trust models. What’s changed now is the speed and sophistication of these attacks. AI can generate and test massive combinations of credentials across services, mimic human behavior to avoid detection, and even use deepfake voice calls to impersonate employees and reset passwords. We’ve already seen these tactics used in credential recovery scams.

The rise in VPN credential stuffing, as highlighted in our Threatscape 2025 report, reflects this shift. Many companies still treat VPNs as a central gate, but that gate is often protected by nothing more than static credentials. That’s the problem. The solution isn’t just building a “new generation” of VPNs – it’s changing the way we think about access.

It’s a bit like in that series Prime Target, where the breakthrough isn’t about breaking through the front gate. Rather, it’s about discovering the hidden mathematical key that opens everything. That’s what modern attackers are doing. They don’t brute-force their way in; they find forgotten credentials, leftover access tokens, unmonitored sessions. AI makes it fast and scalable.

So, what we need is adaptive defense. Relying on passwords or even basic multi-factor authentication isn’t enough. We need real-time session analysis, AI-driven behavioral verification, and deception-based defenses – things like honey credentials or decoy environments – to stop abuse before it escalates into a breach. It’s not about hardening the walls anymore. It’s about knowing which keys are still out there and making sure you’re not the one leaving them behind.
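The real-time session analysis Lev mentions often takes the form of additive risk scoring over behavioral signals. The signals, weights, and thresholds below are purely illustrative, not Nothreat's.

```python
from dataclasses import dataclass

@dataclass
class Session:
    new_device: bool
    geo_velocity_kmh: float   # implied travel speed since last login
    failed_mfa: int
    credential_in_leak: bool

def risk_score(s: Session) -> int:
    """Sum weighted risk signals for a live session (weights illustrative)."""
    score = 0
    if s.new_device:
        score += 25
    if s.geo_velocity_kmh > 900:   # faster than an airliner -> impossible travel
        score += 35
    score += min(s.failed_mfa, 3) * 10
    if s.credential_in_leak:       # password appears in a known dump
        score += 30
    return score

def decide(s: Session) -> str:
    score = risk_score(s)
    if score >= 70:
        return "terminate"
    if score >= 40:
        return "step-up-auth"
    return "allow"
```

The point of the thresholds is graduated response: a mildly suspicious session gets a step-up challenge rather than a hard block, while a session that trips several signals at once is killed before it can escalate into a breach.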

CF:  The report also talks about the 3x increase in attacks based on countries. Are these attacks aimed at government facilities, or is there a cultural aspect to how the Internet is secured from country to country?

LZ:  Some of the increase certainly involves public infrastructure, but much of it relates to varying levels of digital maturity and operational posture. Attackers study national trends, like patching speed, procurement policies, software versions, and deployment habits. In many cases, they adapt their tooling accordingly.

Rather than a purely cultural factor, it’s often about structural exposure. Where security practices differ or enforcement is uneven, attackers find more accessible entry points. With AI, they can now tailor those entry attempts far more quickly and precisely than before.

CF:  Some IoT equipment is known to have back doors, perhaps mandated by the country of origin. Given that it’s rare a company is willing to do a rip-and-replace, what strategies can be used to stop these backdoors from being used as a gateway?

LZ:  Backdoors in IoT devices, whether from manufacturing oversights or insecure software practices, present significant cybersecurity risks. However, since most companies prefer not to undergo expensive rip-and-replace processes, mitigation must occur at multiple levels.

At the hardware level, auditing devices helps identify and neutralize unauthorized or risky components. On the software and network side, we emphasize restricting device communication and continuously monitoring behavior. Our approach employs solutions like AIoT Defender, integrating AI at the network perimeter to prevent activation of latent backdoors. Additionally, deception techniques trap and identify malicious attempts swiftly.
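AIoT Defender's internals aren't public, but the "restrict device communication" idea can be sketched as a per-device egress allowlist: a device with a latent backdoor can't phone home if every unexpected outbound connection is blocked and flagged. Device names and destinations below are hypothetical.

```python
# Hypothetical egress policy: an IoT camera should only ever talk to its
# firmware server and an NTP pool; anything else may be a latent backdoor
# activating.
EGRESS_ALLOWLIST = {
    "camera-07": {("firmware.vendor.example", 443), ("pool.ntp.org", 123)},
}

def check_egress(device: str, dest_host: str, dest_port: int) -> str:
    """Permit only allowlisted destinations; unknown devices get no egress."""
    allowed = EGRESS_ALLOWLIST.get(device, set())
    if (dest_host, dest_port) in allowed:
        return "permit"
    # Unknown destination: contain the connection and raise a SOC alert.
    return "block-and-alert"
```

This is the containment posture Lev describes: the backdoor may still exist in firmware, but its outbound channel is cut off and its activation attempt becomes a detection event.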

The key isn't assuming trust in every device, but ensuring that, if a compromise occurs, it remains contained, controlled, and harmless.

CF:  The latest groupthink in the industry is the Zero-Trust solution, but I suspect that's not as successful if the systems have already been hacked. Am I right, or does this represent a strong strategy in all cases?

LZ:  Zero Trust works, but only if it's implemented as a philosophy, not a product. Its real power is in limiting movement and forcing constant revalidation. On its own, it doesn't stop AI-generated deepfakes or insider credential abuse.

Modern Zero Trust must be AI-augmented. That means detecting subtle deviations in behavior, identifying synthetic access attempts, and correlating anomalies across systems. Behavioral analytics, session scoring, and contextual access policies are where Zero Trust becomes powerful, even in breached environments.

CF:  If I understand your solutions correctly, they are using AI to communicate with all the perimeter attack surfaces so, when new threats occur, your system broadcasts the antidote to the entire enterprise network. Does this mean that one company’s implementation may be totally different from another’s, or do you keep a repository and broadcast the new threat to all your subscribers?

LZ:  The core implementation of our platform is consistent but, thanks to continuous learning AI, each deployment becomes uniquely adapted to the customer’s environment. There’s no need for manual adjustments – the system learns from real activity, builds behavioral baselines, and responds to anomalies in context.

At the same time, it contributes to a shared intelligence layer. Behavioral patterns and threat indicators are anonymized and abstracted, then used to strengthen protections across all other deployments. So, while each client benefits from a system tailored to their own surface, they also gain from a kind of collective immunity where one detection helps prevent another, without ever exposing sensitive data.
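One simple way to share indicators without exposing customer data, illustrative here and not necessarily Nothreat's scheme, is to exchange only one-way hashes of the indicators: every deployment can check its local observations against the collective set, but no deployment's raw data ever leaves it.

```python
import hashlib

def anonymize(indicator: str) -> str:
    # Deployments contribute only a one-way hash, so the shared layer
    # never sees raw payloads, hostnames, or customer identifiers.
    return hashlib.sha256(indicator.encode()).hexdigest()

class SharedIntelligence:
    """Collective-immunity hub: hashed indicators in, lookups out."""
    def __init__(self):
        self.known = set()

    def contribute(self, indicator: str):
        self.known.add(anonymize(indicator))

    def is_known_threat(self, indicator: str) -> bool:
        return anonymize(indicator) in self.known

# Deployment A observes a beacon and contributes it (hypothetical value);
# deployment B, seeing the same behavior, gets an instant match.
hub = SharedIntelligence()
hub.contribute("beacon:198.51.100.23:4444")
```

A real system would also abstract behavioral patterns, not just exact indicators, but the privacy property is the same: one detection helps prevent another without sensitive data crossing tenant boundaries.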

CF:  Is your software designed to work with many vendors' firewalls? Where else is your software deployed?

LZ:  Absolutely. Interoperability is essential. Our platform is vendor-agnostic and designed to integrate seamlessly with existing firewalls, SIEMs, EDRs, and cloud APIs. Rather than replace existing tools, we act as an AI-native layer that enhances them, filling in the visibility and response gaps that traditional solutions often miss.

You’ll find Nothreat’s platform securing enterprise networks, critical infrastructure, and IoT environments, particularly in high-risk or complex systems. Our focus is on deep visibility, intelligent deception, and automated response at scale.

We’re proud to partner with organizations like Lenovo, Azerconnect, ISD Dubai, and Pafos FC – teams that recognize modern digital risk spans everything from industrial IoT to executive impersonation. In recent deployments, Nothreat’s system has achieved 99% detection accuracy while reducing false positives to under 1%.

CF:  With the continuing innovation of AI and the increased capacity being put into IoT devices, has the attack surface of IoT been increased and, therefore, become more vulnerable?

LZ:  No doubt. Smart devices are processing more data, running complex models, and being integrated into critical workflows. That makes them both more valuable and more exploitable.

We’ve seen polymorphic malware written by AI, AI vision models being manipulated in real time, and deepfake audio used to issue fraudulent voice commands. The AIoT ecosystem needs AI-native defense capable of recognizing anomalous behavior, spoofed interfaces, and embedded exploits.

Our approach is to go beyond detection and into active containment, deception, and autonomous remediation. That’s the only way to fight AI with AI.

At the same time, this doesn’t replace the need for SOC teams or internal cybersecurity functions – far from it, in fact. These teams are essential. While our system protects the perimeter and detects threats autonomously, we don’t control the software running on the devices themselves. Patching vulnerabilities, managing configurations, and responding to the specifics of each business environment still require human oversight.

That’s why we built tools like our AI Analyzer. It’s meant to help SOC analysts stay ahead. It breaks down thousands of attack signals in seconds, visualizes statistical patterns, and provides clear, human-readable summaries of what’s happening and why. The goal is not to remove people from the equation, but to free them from noise so they can focus on what truly matters.




Edited by Erik Linask

Partner, Crossfire Media
