With Artificial Intelligence gaining ground in the Middle East owing to its diverse applications across a range of industry verticals and its potential to transform business operations and processes, Emad Fahmy, Systems Engineering Manager, Middle East, NETSCOUT, explains how AI malware will change the IT security landscape.
Artificial Intelligence (AI) has been gaining ground in the Middle East owing to its diverse applications across a range of industries and its potential to transform business operations and processes. Recognising the benefits of AI, such as automation and efficiency, many Gulf countries are incorporating AI strategies into their long-term visions – the UAE with its UAE Artificial Intelligence Strategy 2031 and Saudi Arabia with its Vision 2030. AI is also increasingly used in enterprises, with about 58% of Middle East businesses implementing or seeking to initiate AI deployment plans, according to research by BCG and MIT.
The use of AI in cybersecurity, however, while often overhyped, is not a new concept. Hackers have built countermeasures into malware since its inception to detect runtime environments or sense detection attempts. Early efforts were primitive compared with what we see today, but they laid the groundwork for more sophisticated adaptive and evasive techniques and situational awareness. This lethal combination of research and deep targeting is likely the future of malware as adversaries attempt to outsmart the companies and researchers trying to thwart them.
Adaptive and evasive technology
Malware has always adapted to its environment, and the inclusion of AI has given authors a range of new opportunities. The days of relying on code analysis alone to expose malicious behaviour are coming to an end. The combination of a network card and a specific geolocation is enough for malware to begin reconnaissance. Why risk getting caught during resource development when those resources likely would not produce the results the malware author desires?
Suppose the intended target is the corporate campus in Milwaukee and the malware senses it is in Berlin, Wisconsin. The 82-mile difference could mean the infected host is nearby but not exactly where the author intends to carry out the next move. In this case, the malware would sleep for 15 days and try again. If that attempt fails, the malware will adapt to its host environment and employ other resource development behaviours. This chess game is often complicated, but readily available models and actions make it easier for malware authors.
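The gating logic described above can be sketched in a few lines. This is a purely illustrative decision function, not working malware: the geolocation lookup is a hard-coded stand-in, `TARGET_CITY` and the function names are hypothetical, and the 15-day dormancy is collapsed to a no-op for the demo.

```python
import time

# Hypothetical target the author cares about (names are illustrative).
TARGET_CITY = "Milwaukee"

def resolve_geolocation() -> str:
    """Stand-in for an IP-based geolocation lookup; hard-coded here."""
    return "Berlin, Wisconsin"

def should_proceed(max_attempts: int = 3) -> bool:
    """Sketch of the gate: act only when the host is in the target city,
    otherwise go dormant and re-check later."""
    for _ in range(max_attempts):
        if TARGET_CITY in resolve_geolocation():
            return True       # right environment: carry on
        time.sleep(0)         # stands in for sleeping 15 days before retrying
    return False              # wrong environment: stay inert

print(should_proceed())  # → False: Berlin, Wisconsin is not Milwaukee
```

The point of the pattern is that nothing incriminating runs until the environment matches, which is exactly what makes sandbox analysis of such samples so slow.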
Marketers have used this tactic for decades now. Learning the behaviours, habits, and values of individuals or groups is the staple of successful advertising campaigns. It stands to reason that this type of research and technology would bleed over to bad actors and malware authors.
There are numerous ways to use deep targeting against (often) unsuspecting victims. User behavioural analytics (UBA) is a product category or tagline for many cybersecurity companies because professionals who spot deviations from a baseline can trigger alerts.
Cybercriminals also use this technique to learn user behaviours and blend into their targets’ surroundings. The era when malware woke up at 3:00 AM to execute its attack is long gone. Now the attack comes at 3:00 PM, when the user is busiest, so a few illicit data packets slipped in here and there are much more challenging to detect.
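The defender's side of this cat-and-mouse game can be illustrated with a toy baseline check. This minimal sketch assumes a history of login hours for one user (the data is invented) and flags events more than three standard deviations from the baseline, a simple z-score test, not any vendor's actual UBA product:

```python
from statistics import mean, stdev

# Hypothetical login hours (0-23) observed for one user over two weeks.
baseline_hours = [9, 10, 11, 13, 14, 15, 16, 9, 10, 12, 14, 15, 16, 11]

def is_anomalous(hour: int, history: list, threshold: float = 3.0) -> bool:
    """Flag an event whose hour-of-day deviates from the user's baseline
    by more than `threshold` standard deviations (a toy z-score check)."""
    mu, sigma = mean(history), stdev(history)
    return abs(hour - mu) / sigma > threshold

print(is_anomalous(3, baseline_hours))   # True: 3 AM is far outside the baseline
print(is_anomalous(15, baseline_hours))  # False: 3 PM blends into the workday
```

This is exactly why attackers moved from 3:00 AM to 3:00 PM: the mid-afternoon event sits inside the statistical noise of normal activity, so a naive time-based detector never fires.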
Another consideration with deep targeting is where to launch the initial hack. Social media is a treasure trove of useful metadata and behaviours. Getting users to click on a topic they feel passionate about is much easier than getting someone to interact with an off-topic post. Blending into social, political and economic conversations takes little effort and branching those topics opens many potential pathways to target victims. Now it is just a matter of prepping the audience and biding one’s time to launch the real objective: a nefarious campaign.
Spear phishing via a service
Mining social media (both personal and professional) is child’s play. Mapping an executive’s profile to a favourite charity gives a clear roadmap into content that has a high probability of ensnaring a new victim. Spear phishing via a service employs third parties rather than targeting enterprise email channels directly.
Determining physical location
Information about a target’s physical location can include various details, such as the position of critical resources and infrastructure. This data may also indicate whether the victim operates within a particular legal jurisdiction or authority. Knowing those specifics allows adversaries to act quickly.
Environmental keying
Hackers may environmentally key payloads or other malware features to evade defences. This method uses cryptography to constrain execution or actions based on specific adversary-supplied conditions present in the target environment.
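The idea behind environmental keying can be shown with a toy sketch. The key is never shipped with the code; it is re-derived from an attribute of the environment (a hostname here, purely as an illustration), so analysis tooling that lacks that attribute can never recover the protected content. The XOR "cipher" below stands in for real symmetric encryption, and all names and values are invented:

```python
import hashlib

def derive_key(environment_value: str) -> bytes:
    """The key exists only implicitly: it is re-derived from an
    environment attribute rather than stored alongside the payload."""
    return hashlib.sha256(environment_value.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real symmetric encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Content is encrypted against the intended target's hostname.
secret = b"configuration only meaningful on the target"
ciphertext = xor_bytes(secret, derive_key("target-host.example.com"))

# On the right host the content comes back; anywhere else it is noise.
print(xor_bytes(ciphertext, derive_key("target-host.example.com")) == secret)  # True
print(xor_bytes(ciphertext, derive_key("sandbox-vm")) == secret)               # False
```

A sandbox or researcher without the exact keying attribute sees only ciphertext, which is what makes this technique such an effective defence-evasion measure.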
Data destruction
This tactic focuses on destroying files on specific network systems and interrupting the availability of services and resources. By overwriting content on local and remote drives, cybercriminals render stored data unrecoverable by forensic techniques.
Many organisations use AI as part of a defence-in-depth security strategy, but hackers now have easy access to the same sophisticated techniques to achieve their malicious goals. As malware becomes stealthier, smarter and harder to detect, are enterprise teams and tools ready for the challenge?
Thankfully, the answer is more likely ‘yes’ than ‘no.’ The fundamentals still apply; the challenges will simply come faster and with more variety. The past months have shown that our strategies and biases are tested continuously. As noted, the vast majority of the time, the fundamentals win. When they fail, however, they can fail massively. Mapping AI-based malware and campaigns through tried-and-true frameworks is a great way to remain informed as adversaries change tactics, employ new techniques and attempt to use norms against their targets. Stay vigilant – stay safe.