AI in cybersecurity 101: The economics of bot vs bot battles

If you run an online image search for “cyber hacker,” you’ll likely find countless pictures of shadowy, hooded figures hunched over a laptop. There’s just one problem with those search results: the lone human hacker is in the minority these days.

Instead, almost half (47 percent) of internet traffic is from bots, with around 30 percent involved in malicious activities. Because these threats are automated, it’s possible to launch them at scale, for little cost and with low levels of expertise.
It’s a form of AI-powered democratization, but instead of empowering non-technical employees to take on new tasks, it’s lowering the barrier to entry for deploying malicious bots. Build-A-Botnet and other off-the-shelf kits are available to rent or buy online, ready for use with multiple attack vectors.

Of course, AI-powered bots can also be used to harden systems and security posture – from analyzing log data and enforcing behavioral rules to detecting incidents and triaging responses. Attempting to defend manually against evolving, AI-led bot attacks would soon burn out many IT teams. That’s where AI supports defense at scale – with new capabilities and in a way that makes economic sense.

The attacker’s economy of scale

Bots allow attackers to use methods that wouldn’t be cost-effective with human labor. For example, credential stuffing has a reported success rate of 0.1–2 percent. The capability of bots to launch these attacks in the thousands makes this a worthwhile investment, despite the minimal payoff percentages.
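To see why such a low hit rate can still pay off, here’s a back-of-the-envelope sketch in Python. Only the 0.1–2 percent success rate comes from the figures above; the attempt volume is an illustrative assumption:

```python
# Rough expected-value sketch for an automated credential stuffing campaign.
# The 1,000,000 attempt count is an illustrative assumption.

def expected_compromises(attempts: int, success_rate: float) -> float:
    """Expected number of accounts compromised at a given success rate."""
    return attempts * success_rate

attempts = 1_000_000                    # stolen credential pairs replayed by bots
for rate in (0.001, 0.02):              # the reported 0.1-2 percent range
    hits = expected_compromises(attempts, rate)
    print(f"{rate:.1%} success rate -> ~{hits:,.0f} compromised accounts")
```

Even at the bottom of the reported range, a million automated attempts yields on the order of a thousand compromised accounts – at near-zero marginal cost to the attacker.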

Low cost, high reward

The ROI is also high when you look at prices for bot kits, which can start as low as $99. In contrast, the financial impact of a bot attack is steep, costing businesses an average of $85.6 million – the equivalent of around 50 average ransomware payouts.

It’s a numbers game offering attackers plenty of chances to win big. Plus, there’s little need to learn complex technical processes and protocols, even when operating distributed botnets.

Distributed botnets

The rise in distributed botnets means that attacks usually can’t be stopped by isolating compromised machines. That’s because IT leaders are dealing with networks built on decentralized, peer-to-peer (P2P) principles: each node acts as both server and client, removing any single point of failure.

That kind of resiliency is good news in defensive architectures, but here it benefits the attackers: large volumes of bots can’t be easily shut down. And alongside the increase in volume come rises in sophistication and mutation.

More advanced bot scripts can now mimic basic human behaviors on websites, including keystrokes and mouse movements. And eight years on from Mirai, the botnet that infected millions of vulnerable IoT devices, variants of its source code continue to be found in modern botnets – for example, NoaBot, which has been targeting SSH servers for crypto mining since 2023.

Financial incentives

Whether the goal is unauthorized use of computing resources or collecting ransoms, cybercriminals only have to be lucky once to gain access, whereas the business must maintain security 24/7.

It’s no wonder attackers can profit from economies of scale by launching attacks at unprecedented volumes. The Phorpiex botnet was used to deliver what was widely reported as ‘millions’ of messages carrying LockBit Black (3.0). This is a form of Ransomware-as-a-Service, with affiliate users receiving commissions when victims pay out.

Attackers also know that businesses don’t just lose out by making a payment. There’s the knock-on impact to brand reputation, where costs can be less tangible but long-lasting. Combined with the costs of recovery and downtime, the total can amount to up to ten times the ransom.

So it’s not only about whether companies can afford to pay. It’s whether they can afford not to pay. Especially with the rise of malicious bots that are becoming shape-shifters.

Evasion tactics

The rise in AI-driven polymorphic malware means bots have new ways to automatically evade payload detection. It’s an approach seen in proof-of-concepts such as BlackMamba, where generative AI dynamically changes the code structure and signature without affecting functionality.

By shape-shifting in this way, the AI allows malicious actors to play a long game, with bots remaining undetected and establishing long-term footholds after infiltrating networks.

The defender’s economy of scale

AI may be evolving for threat actors, but it’s evolving for cybersecurity too. That means harnessing bot-based defense, for a bot vs bot scenario that plays out in real time. It starts with laying the foundations across four economically viable and scalable areas.

Centralized security operations

Because defensive bots can adapt dynamically, it’s possible to deploy adaptive authentication – such as dynamic MFA – to secure operations at scale.

For example, new logins, or logins with a higher risk score, can automatically be prompted for additional verification. This form of bot-based security helps solve some of the challenges of managing large and often fragmented networks.
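As a rough illustration, risk-based step-up authentication can be sketched like this. The signals, weights and threshold are all illustrative assumptions, not a production scoring model:

```python
# Minimal sketch of risk-based (adaptive) authentication.
# Signals, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool
    country_matches_history: bool
    failed_attempts_last_hour: int

def risk_score(attempt: LoginAttempt) -> int:
    score = 0
    if not attempt.known_device:
        score += 40                                  # new device: strong signal
    if not attempt.country_matches_history:
        score += 30                                  # unusual geolocation
    score += min(attempt.failed_attempts_last_hour, 5) * 6  # brute-force signal
    return score

def requires_mfa(attempt: LoginAttempt, threshold: int = 50) -> bool:
    """Step up to additional verification when risk crosses the threshold."""
    return risk_score(attempt) >= threshold

# A familiar login passes quietly; a risky one is challenged.
print(requires_mfa(LoginAttempt(True, True, 0)))    # False
print(requires_mfa(LoginAttempt(False, False, 3)))  # True
```

Real deployments would feed many more signals into the score, but the principle is the same: friction is added only where the risk justifies it.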

Threat intelligence sharing

Bridging the silos means useful data starts flowing in for mitigating threats. Instead of garbage in, garbage out, teams get data that helps uncover patterns from bot-based behavioral signals.

Added value comes from combining this with machine learning and AI to leverage User and Entity Behavior Analytics (UEBA), building up profiles for real-time detection.
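A minimal sketch of the UEBA idea – compare an observed value against an entity’s historical baseline – might look like this. The feature (daily resources accessed) and the 3-sigma cutoff are assumptions for illustration:

```python
# Toy UEBA-style check: flag behavior that deviates from an entity's
# historical baseline. Feature choice and the 3-sigma cutoff are assumptions.
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, sigmas: float = 3.0) -> bool:
    """True if `observed` sits more than `sigmas` standard deviations
    from the entity's historical mean."""
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return observed != mu
    return abs(observed - mu) / sd > sigmas

# e.g. daily count of resources a user accesses
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(baseline, 13))    # a typical day: not flagged
print(is_anomalous(baseline, 450))   # sudden spike: worth investigating
```

Production UEBA builds multi-dimensional profiles per user and entity, but even this single-feature version shows how a baseline turns raw telemetry into a detection signal.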

Automated defenses

As profiles are improved and augmented, defensive bots can detect attacks more accurately and minimize false positives. The data becomes a source of business intelligence, providing actionable insights for hardening the security posture.

Meanwhile, cybersecurity professionals can be freed to focus on more complex issues. With more time available for strategic planning rather than day-to-day incident responses, they can add greater value without exhausting expensive resources.

Cost savings

Further cost savings come from implementing bot scanning, to mitigate bot-related traffic spikes and bandwidth impact, reducing IT infrastructure costs.
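One common mechanism for blunting bot-driven traffic spikes is a token-bucket rate limiter applied per client or IP. A minimal sketch, with illustrative capacity and refill values:

```python
# Sketch of a token-bucket rate limiter, one way to blunt bot-driven
# traffic spikes. Capacity and refill rate are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False          # over budget: drop or challenge the request

bucket = TokenBucket(capacity=5, refill_per_sec=1)
results = [bucket.allow() for _ in range(7)]   # burst of 7 rapid requests
print(results)   # the first 5 pass; the rest are throttled
```

Legitimate users rarely hit such limits, while bot bursts are absorbed before they reach application servers – which is where the bandwidth and infrastructure savings come from.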

There are also significant benefits to user and customer experience. For example, an ecommerce store with overloaded servers may see its website latency increase – and Amazon famously found that every 100 milliseconds of latency cost it 1 percent in sales. For ecommerce, the balance of power is with consumers. As for AI in cybersecurity, the bot-led balance is more delicate.
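Applying the Amazon figure as simple arithmetic makes the stakes concrete (the annual revenue here is a made-up example, not a figure from the article):

```python
# Cost of bot-induced latency, using the "100 ms of latency = 1 percent
# of sales" figure. The $50M annual revenue is an illustrative assumption.

def latency_cost(annual_revenue: float, added_latency_ms: float) -> float:
    """Estimated annual sales lost to added latency."""
    return annual_revenue * 0.01 * (added_latency_ms / 100)

# A site slowed by 300 ms under bot-driven load
print(f"${latency_cost(50_000_000, 300):,.0f} lost per year")
```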

The balance of power

The rise in remote and distributed work means more applications, APIs and systems. These endpoints are all constant targets for bad bots, making them an advanced persistent threat.

An ongoing arms race

Bots might be prevented from gaining access, but the economics mean they can repeatedly attack without much cost to attackers.

It’s a different story for defense. These AI-based variations on traditional brute force attacks impact websites in a similar way to distributed denial-of-service (DDoS) attacks. Servers slow down, while response times lengthen.

What’s more, the traditional CAPTCHA is no longer an effective deterrent. AI-powered bots now use APIs to relay challenges to humans working at CAPTCHA farms, who solve these challenge-response tests in real time.

The need for continuous innovation

Take the example of credential stuffing. It’s not just automation that’s driving the attacks. Many day-to-day systems and apps still use passwords, and these passwords are often weak, shared online and reused (in up to 16 out of 22 accounts), widening the potential attack surface even further. That’s why organizations need to look beyond patching and toward passwordless and multi-layered methods that bots can’t circumvent or spoof as easily.

Weaponizing AI: An ongoing battle

As the attack surface expands, bots are increasingly deployed across endpoints and hybrid architectures. With the rise in advanced botnets and the minimal expertise required to deploy them, bot-based attacks make high economic sense for threat actors.

Those tasked with repelling the attacks must find similarly scalable economic benefits: centralizing operations to defend the business and build insights for predictive strategies, and sharing information between agencies to improve defenses collectively. Just as AI bots offer attackers more options, they offer defenders more ways to mitigate, defend and even find cost savings – with automation freeing human knowledge and skills to develop.

Conclusion

In a constantly shifting landscape, one thing is clear: the bot vs bot AI race is set to continue at pace. To ensure your business stays ahead, bookmark this page and keep coming back for more AI cybersecurity insights.
