Artificial Intelligence: Friend and foe for IT security

Bharat Mistry, Principal Security Strategist, Trend Micro, outlines how Artificial Intelligence and Machine Learning are shaking up the threatscape for both attackers and defenders.

Everyone’s talking about AI and Machine Learning (ML). At the Infosecurity Europe event this year it was hard to spot a vendor not touting its own AI capabilities as the latest and greatest to hit the market. Seen in those terms, it’s tempting to view the technologies as merely the latest industry buzzwords to add to a lengthy list that includes UTM, IDS, EDR, sandboxing and more. But they’re far more than that. AI and ML are not only transforming the cybersecurity industry but also the threat landscape.

There’s a storm coming and we’ll need the best that AI can offer to fend off an increasingly sophisticated and effective range of attacks.

Speed and skill

AI is a classic double-edged sword, a technology that can be used by both attacker and defender to improve success rates. Research from last year revealed that 87% of US cybersecurity professionals were already using some form of AI, but that 91% were concerned hackers would turn the technology against them. At Trend Micro we’ve been using Machine Learning in our products for over a decade, to improve spam detection, calculate web reputation and more.

So what can AI offer the white hats?

Fundamentally, it’s the ability to learn normal behaviour and then spot patterns in network data and threat intelligence feeds that human eyes might miss, enabling analysts to take action faster or automating threat detection and response outright. This is particularly important given the security skills shortages facing firms. It was claimed last year that the UK is heading for a skills ‘cliff edge’ as older professionals retire without new talent coming through to replace them. Globally, the shortfall is predicted to reach 1.8 million professionals by 2021.
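
As a minimal sketch of that idea (an illustration of the concept, not Trend Micro’s implementation), an unsupervised model such as an isolation forest can be trained on traffic believed to be benign and then used to flag outliers for an analyst or an automated playbook. The feature names, values and thresholds below are assumptions for illustration only.

```python
# Minimal sketch: learn 'normal' network behaviour, then flag outliers.
# Assumes flow records have already been reduced to numeric features
# (bytes out, duration, destination-port entropy) - names are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical flows believed to be benign: one row per flow.
baseline = np.array([
    [5_000, 1.2, 0.3],   # [bytes_out, duration_s, dst_port_entropy]
    [7_200, 0.9, 0.4],
    [6_100, 1.5, 0.2],
    # ... thousands more rows in practice
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New traffic to score; -1 means 'looks anomalous', 1 means 'looks normal'.
new_flows = np.array([[6_500, 1.1, 0.3],      # an ordinary flow
                      [950_000, 40.0, 0.9]])  # a large, long, scattered transfer
for flow, verdict in zip(new_flows, model.predict(new_flows)):
    if verdict == -1:
        print("Alert an analyst / trigger an automated response for flow:", flow)
```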

Security analysts are expensive and hard to come by, so automating the discovery of threats with AI frees up their time to focus on more strategic tasks, whilst improving the effectiveness of your cybersecurity posture. Speed is also of the essence when it comes to threat detection: the longer a threat actor is left inside the network, the more data they can exfiltrate and the more expensive the resulting breach. In the UK the average cost of a breach is estimated at £2.7m, and the mean time to identify one stands at an unacceptable 163 days. AI can shorten this dwell time significantly.

Speed also matters in spotting ransomware, which races to encrypt an organisation’s most mission-critical files. Machine Learning can spot inconsistencies and subtle changes in the way the malware encrypts those files, signals that would otherwise be lost in the noise.
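
As a rough illustration of the kind of signal such detection builds on (a simple heuristic rather than a full Machine Learning model), a sudden burst of newly written files with near-random content is a classic ransomware indicator. The thresholds below are illustrative assumptions.

```python
# Minimal sketch: flag a process that suddenly writes many high-entropy
# (encrypted-looking) files in a short window. Thresholds are illustrative.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data tends towards 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ransomware(recent_writes: list[bytes],
                          entropy_threshold: float = 7.5,
                          burst_threshold: int = 20) -> bool:
    """True if a burst of recent writes is almost entirely near-random data."""
    high_entropy = [w for w in recent_writes if shannon_entropy(w) > entropy_threshold]
    return len(high_entropy) >= burst_threshold
```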

Pre-execution Machine Learning can even help firms to block malicious files before they’ve had a chance to infect the organisation. False positives are sometimes a challenge, which is why such tools are often run in combination with run-time analysis to ensure that what you’re blocking is definitely unwanted.
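
A minimal sketch of that layered decision, assuming a hypothetical pre-execution classifier that outputs a confidence score and a sandbox that returns a run-time verdict, might look like this; the cut-off values are illustrative, not tuned.

```python
from typing import Optional

# Minimal sketch of layering pre-execution ML with run-time analysis:
# block outright only on high static confidence, and defer grey-zone files
# to run-time analysis to keep false positives down.
def verdict(static_score: float, runtime_is_malicious: Optional[bool] = None) -> str:
    """static_score is in [0, 1], from a pre-execution classifier."""
    if static_score >= 0.95:        # high confidence: block before execution
        return "block"
    if static_score >= 0.50:        # grey zone: defer to run-time analysis
        if runtime_is_malicious is None:
            return "detonate in sandbox / observe at run time"
        return "block" if runtime_is_malicious else "allow"
    return "allow"                  # low score: allow, keep monitoring
```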

The goal is a cybersecurity system whose AI tools learn over time, much as a child does as it grows and matures: building up patterns and incorporating feedback from threat analysis in a virtuous loop that ensures continuous improvement.
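
A minimal sketch of such a loop, assuming alerts have been reduced to numeric features and analysts supply benign/malicious labels (both assumptions for illustration), might use an incrementally trained classifier:

```python
# Minimal sketch of the feedback loop: analyst verdicts on alerts are fed
# back as labels so the detector keeps improving over time.
from sklearn.linear_model import SGDClassifier

detector = SGDClassifier()      # a linear model that supports incremental updates
CLASSES = [0, 1]                # 0 = benign, 1 = malicious

def incorporate_feedback(alert_features, analyst_label: int) -> None:
    """Update the model with a single analyst-confirmed verdict."""
    detector.partial_fit([alert_features], [analyst_label], classes=CLASSES)
```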

The dark side

But on the flipside, there’s huge potential for malicious use of AI. In fact, it could have made historic cyberattacks and breaches far more impactful than they were. Take WannaCry – it made headlines around the world and disrupted a third of the NHS, but as a piece of malware it failed: it was too noisy, attracted the attention of security researchers soon after launch and never provided its masters with a decent ROI.

AI could correct this. By installing learning tools on a target’s network, attackers could listen in and baseline user behaviour, understand network traffic and communications protocols and map the enterprise. This would make it child’s play to move laterally inside the organisation to the targeted data or user – all without raising the alarm.

Social engineering is also much easier if you use AI tools to understand users’ writing style and the context of their communications. Just think about a document review process. A hacker could monitor communications between remote employees and then insert a malware-laden document at just the right time, in an email with exactly the right tone and language to convince a user to open it. This is spear-phishing like you’ve never seen it – attacks that even the experts would have a hard time spotting.

More importantly, it could be done at scale, in a highly automated fashion – as could the use of AI to help cybercriminals engineer malware to outwit current tools.

Should we be worried?

Off-the-shelf technology that could be used for malicious purposes is already available. Think of Google’s speech-to-text AI tools being used in a cybersurveillance attack on a boardroom, with a secondary AI then combing through the transcript to pick out the sections of most interest to the attackers. We’re not yet seeing bespoke AI hacking tools – although this will change in time. Expect the as-a-service model to democratise such tools on the dark web when they do finally appear.

AI is in many ways the cyberarms race writ small. The only way we can manage the inevitable wave of black hat tools designed to circumvent security filters and increase the sophistication of phishing is to fight back in kind. It’s going to be a bumpy ride.
