How is Generative AI facilitating more sophisticated cyberattacks and how can this be prevented?

Acronis, a global leader in cyber protection, has released the findings of its mid-year cyberthreats report, From Innovation to Risk: Managing the Implications of AI-driven Cyberattacks. The comprehensive study, based on data captured from more than 1 million global endpoints, provides insight into the evolving cybersecurity landscape and uncovers the growing utilisation of Generative Artificial Intelligence (AI) systems, such as ChatGPT, by cybercriminals to craft malicious content and execute sophisticated attacks.

“The volume of threats in 2023 has surged relative to last year, a sign that criminals are scaling and enhancing how they compromise systems and execute attacks,” said Candid Wüest, Acronis VP of Research. “To address the dynamic threat landscape, organisations need agile, comprehensive, unified security solutions that provide the necessary visibility to understand attacks, simplify context and provide efficient remediation of any threat, whether it be malware, a system vulnerability or anything in between.”

According to the report, phishing remains the primary method criminals use to steal login credentials. In the first half of 2023 alone, the number of email-based phishing attacks surged 464% compared with the same period in 2022. Over the same time frame, there was also a 24% increase in attacks per organisation.

In the first half of 2023, Acronis-monitored endpoints observed a 15% increase in the number of files and URLs per scanned email. Cybercriminals have also tapped into the burgeoning large language model (LLM)-based AI market, using platforms to create, automate, scale and improve new attacks through active learning. 

Cybercriminals are becoming more sophisticated in their attacks, using AI and existing ransomware code to drill deeper into victims’ systems and extract sensitive information. AI-created malware is adept at evading detection by traditional antivirus models, and publicly reported ransomware cases have exploded relative to last year. Acronis-monitored endpoints are capturing valuable data on how these cybercriminals operate, revealing how some attacks have become more intelligent, sophisticated and difficult to detect.

Drawing from extensive research and analysis, key findings from the report include:

  • Acronis blocked almost 50 million URLs at the endpoint in Q1 2023, a 15% increase over Q4 2022. 
  • There were 809 publicly mentioned ransomware cases in Q1 2023, with a 62% spike in March over the monthly average of 270 cases.
  • In Q1 2023, 30.3% of all received emails were spam and 1.3% contained malware or phishing links.  
  • Public AI models are proving an unwitting accomplice for criminals, who use them to find source code vulnerabilities, create attacks and develop attacks designed to thwart fraud prevention, such as deepfakes.

The report also finds that traditional cybersecurity methods and a lack of action are letting attackers in:

  • There is a lack of strong security solutions in place capable of detecting zero-day vulnerability exploitation.
  • Organisations often fail to update vulnerable software in a timely manner, leaving systems exposed long after a fix becomes available.
  • Linux servers face inadequate protection against the cybercriminals who are increasingly going after them.  
  • Not all organisations follow proper data backup protocol, such as the 3-2-1 rule: three copies of data, on two different media types, with one copy offsite (illustrated in the sketch after this list).
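
As a rough illustration of what checking that rule can look like, the Python sketch below validates a hypothetical backup inventory against the 3-2-1 criteria: at least three copies, on at least two different media types, with at least one copy offsite. The data structure and field names are assumptions made for this example, not part of any Acronis product.

```python
# Minimal sketch: checking a backup inventory against the 3-2-1 rule.
# The BackupCopy structure and its fields are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class BackupCopy:
    location: str      # e.g. "primary datacentre", "AWS S3"
    media_type: str    # e.g. "disk", "tape", "cloud object storage"
    offsite: bool      # True if the copy is stored outside the primary site


def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Return True if the set of backup copies meets the 3-2-1 rule."""
    enough_copies = len(copies) >= 3                          # 3 copies
    enough_media = len({c.media_type for c in copies}) >= 2   # 2 media types
    has_offsite = any(c.offsite for c in copies)              # 1 offsite copy
    return enough_copies and enough_media and has_offsite


if __name__ == "__main__":
    inventory = [
        BackupCopy("primary datacentre", "disk", offsite=False),
        BackupCopy("secondary datacentre", "tape", offsite=True),
        BackupCopy("cloud provider", "cloud object storage", offsite=True),
    ]
    print("3-2-1 compliant:", satisfies_3_2_1(inventory))
```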

Acronis emphasises the need for proactive cyber protection measures. Leveraging an advanced solution that combines AI, ML and behavioural analysis can help mitigate the risks posed by ransomware and data stealers.
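
To make the combination of ML and behavioural analysis more concrete, here is a minimal, hedged sketch of unsupervised anomaly detection over endpoint telemetry using scikit-learn's IsolationForest. The feature set and numbers are invented for illustration and are not drawn from the Acronis report; the pattern to note is baselining normal behaviour and escalating outliers.

```python
# Illustrative sketch only: flag anomalous endpoint behaviour with an
# unsupervised model. The features and telemetry values are invented.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [files_modified_per_min, outbound_MB_per_min, new_processes_per_min]
baseline_telemetry = np.array([
    [3, 2.0, 3],
    [5, 1.5, 4],
    [2, 3.0, 2],
    [6, 2.5, 5],
    [4, 2.2, 3],
])

# Learn what "normal" looks like from the baseline window.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_telemetry)

# A ransomware-like burst: mass file modification plus heavy outbound traffic.
suspicious = np.array([[350, 120.0, 40]])
label = model.predict(suspicious)  # -1 => anomaly, 1 => normal
print("anomalous" if label[0] == -1 else "normal")
```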

We asked industry experts about their thoughts on the cyber-risks of Generative AI and how they can be prevented.

Okey Obudulu, CISO at Skillsoft

While Generative AI holds immense potential for positive applications, it has also opened new avenues for attackers, causing substantial cybersecurity challenges. One great concern is that the technology has flattened, or at least significantly lowered, the barriers to entry for less technically savvy criminals looking to get into hacking. It has the potential to become a one-stop shop that gives criminals not only the techniques and tools to carry out an attack, but also helps them craft compelling and persuasive phishing messages or write malicious code to facilitate an attack.

Generative AI can be used to create all sorts of materials needed to support the ruse in a social engineering phishing scenario. For example, scammers can generate fake voice notes, video recordings or text that closely mimics authentic communication from trusted sources. Attackers can craft highly personalised messages that increase the likelihood of successfully tricking victims into divulging sensitive information or clicking on malicious links. AI-generated personas or fake social media profiles that appear genuine and interact with real users can further manipulate people into revealing confidential information or engaging in actions that compromise security.

Generative AI makes it increasingly difficult to distinguish what is real from what is fake. However, there are remedies that organisations can implement to address this risk, starting with publishing company-wide policies and guidance on Generative AI use among employees.

Ensuring regular updates and patches are applied to systems, software and security tools helps organisations safeguard against known vulnerabilities that attackers may exploit using Generative AI techniques. Organisations can also implement robust threat detection technologies that leverage advanced Machine Learning algorithms, aiding in identifying anomalies such as AI-generated content. By scrutinising patterns and behaviours to flag suspicious activities or communications, organisations can minimise the likelihood of successful attacks. Implementing Multi-Factor Authentication (MFA) across systems and applications adds an additional layer of security, helping thwart unauthorised access, even if attackers manage to obtain certain credentials through phishing attacks.
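
As a rough illustration of the MFA layer he mentions, the sketch below shows a time-based one-time password (TOTP) second factor using the open-source pyotp library (pip install pyotp). It covers only the second-factor step, not a complete authentication flow, and the account and issuer names are placeholders.

```python
# Minimal TOTP sketch using pyotp; not a complete authentication flow.

import pyotp

# 1. Enrolment: generate a per-user secret and share it with the user's
#    authenticator app (typically via a QR code built from this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# 2. Login: after the password check, prompt for the current 6-digit code.
submitted_code = totp.now()  # in practice this comes from the user's device

# 3. Verification: check the code against the current time window.
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```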

Alongside these solutions, organisations must provide comprehensive training to educate employees about identifying and mitigating risks associated with Generative AI-based attacks. This includes imparting knowledge about the latest phishing techniques, raising awareness about the risks of engaging with unknown entities and promoting vigilant behaviour online. Above all, cultivating a training and security awareness culture across all areas of the business, and constantly updating this to address new threats such as Generative AI-based attacks, is crucial.

Peter Klimek, Director of Technology at Imperva

There are a number of ways in which Generative AI is rapidly advancing cyberthreats and the way online criminals operate. To start with, Generative AI and Large Language Models (LLMs) can dramatically improve the sophistication of threat actors. For example, AI will greatly accelerate the discovery of vulnerabilities in existing software (both commercial off-the-shelf products and open-source libraries). The MOVEit vulnerability, for instance, showed a fairly high level of sophistication from the attackers in discovering and chaining together multiple vulnerabilities. While we don’t know whether they were assisted by AI tools in discovering these vulnerabilities, we can safely predict that such tools will be used by attackers in similar attacks in the future.

Secondly, there is the impact on bad bots, which now account for almost a third of all web traffic. Using Generative AI tools, hackers are able to iteratively develop more sophisticated bots faster than ever before, putting businesses at risk of mass disruption through account compromise, data theft, spam, degraded online services and reputational damage.
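
The simplest building block behind bot mitigation is rate-based detection, sketched below as a per-client sliding-window check. The 50-requests-per-10-seconds threshold and the client identifier are arbitrary assumptions for illustration, and this is not how Imperva's bot management works; real systems add fingerprinting, behavioural and reputation signals.

```python
# Simplified sketch: flag bot-like clients by request rate over a sliding
# window. Thresholds are arbitrary assumptions for illustration.

import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

WINDOW_SECONDS = 10.0
MAX_REQUESTS_PER_WINDOW = 50

_requests: Dict[str, Deque[float]] = defaultdict(deque)


def looks_like_bot(client_id: str, now: Optional[float] = None) -> bool:
    """Record one request for client_id; return True if the rate limit is exceeded."""
    now = time.monotonic() if now is None else now
    history = _requests[client_id]
    history.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    return len(history) > MAX_REQUESTS_PER_WINDOW


if __name__ == "__main__":
    # Simulate 60 requests arriving within roughly one second from one client.
    flagged = any(looks_like_bot("203.0.113.7", now=i * 0.016) for i in range(60))
    print("bot-like traffic detected:", flagged)
```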

However, it’s important to note that LLMs don’t just pose an external threat. Given how many blissfully oblivious employees are using third-party Generative AI chatbots and other tools to complete tasks like writing code, there is already a huge insider threat for companies: these tools end up ingesting, and potentially exposing, backend code and other sensitive information.

There’s no malicious intent from these employees, but that doesn’t make ‘shadow AI’ any less dangerous. The genie isn’t going back in the bottle – outright bans simply won’t work – so businesses are going to need to come up with strategies to deal with the data security implications of Generative AI. Yet currently, only 18% of businesses have an insider risk management strategy in place, meaning in the majority of cases, both employees and the business are completely ignorant about what’s at risk.

In order to get a handle on the issue, businesses need to focus on identifying, classifying, managing and protecting their data. Just controlling how data is accessed or shared would go a long way in making things safer. Here are some key steps every business should be taking:

  • Visibility: Organisations must have visibility over every data repository in their environment so that important information stored in shadow databases isn’t forgotten or abused.
  • Classification: The next step is classifying every data asset according to type, sensitivity and value to the organisation. Effective data classification helps an organisation understand the value of its information assets, whether the data is at risk and which risk mitigation controls should be implemented (a minimal classification sketch follows this list).
  • Monitoring and analytics: Finally, data monitoring and analytics capabilities are essential to detect threats such as anomalous behaviour, data exfiltration, privilege escalation, or suspicious account creation.
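
As a rough illustration of the classification step, the sketch below tags stored text by sensitivity using simple regular-expression rules. The patterns and tier names are assumptions made for this example; production classification tooling typically combines pattern matching, dictionaries and Machine Learning.

```python
# Illustrative rule-based data classification; patterns and sensitivity
# tiers are assumptions for this example only.

import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}


def classify(text: str) -> str:
    """Return a coarse sensitivity label for a piece of stored text."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if "payment_card" in hits or "uk_national_insurance" in hits:
        return "restricted"
    if hits:
        return "confidential"
    return "internal"


if __name__ == "__main__":
    samples = [
        "Contact alice@example.com about the Q3 roadmap",
        "Card on file: 4111 1111 1111 1111",
        "Meeting notes: migrate the staging cluster on Friday",
    ]
    for sample in samples:
        print(classify(sample), "-", sample)
```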

Dr Jason Nurse, Director of Science and Research at CybSafe

Generative AI poses threats to organisational cybersecurity in more ways than one. Research from CybSafe found that 52% of UK and 64% of US office workers have already entered work information into a Generative AI tool. Furthermore, the technology is not going anywhere, with 32% of Generative AI users in the UK and 33% in the US saying they’d probably continue using AI tools even if their company banned them. As with the development of any new technology, organisations must learn to adapt to the rapidly developing cyber landscape in a way that genuinely alters specific employee security behaviours for the better.

In terms of threats, there have been instances of Generative AI applications sharing sensitive or proprietary information that workers have previously entered. While tech organisations are undoubtedly working to improve this, it is a good example of the new types of data breaches we can expect to see with the introduction of new technology. Our behaviour at work is shifting and we increasingly rely on Generative AI tools. Understanding and managing this change is crucial.

Another vector to consider is how Generative AI has reduced the barriers to entry for cybercriminals trying to take advantage of businesses. Not only is it helping to create more convincing phishing messages, but as workers increasingly adopt and familiarise themselves with AI-generated content, the gap between what is perceived as real and what is fake will narrow significantly.

There is no denying the benefits that Generative AI gives workers, hence its explosive uptake across the board. It is the responsibility of business leaders, however, to back this up with responsible governance that includes both technological and cultural guardrails. While cybersecurity isn’t at the forefront of most workers’ minds, people want to be part of the solution and it is up to organisations to give them the tools to be effective. The aforementioned research highlighted that more than half of UK respondents said their companies had not taken steps to educate them about the emerging cybersecurity threats associated with Generative AI. This statistic has to change if we want to curtail the expected rise in cyber incidents.
