The security tightrope of generative AI technology

Jon Pratt, CIO at 11:11 Systems, on balancing the workplace opportunities presented by advancing generative AI with the complex security challenges it poses.

There was a time when the idea of organizations embracing Artificial Intelligence (AI) in the workplace was about as likely as the mass adoption of remote work.

Sure, it’s a nice thought, but let’s not kid ourselves. It’s not an “office” unless folks have braved daily commutes and are glued to their chairs for eight hours a day.

How times have changed. 

Much like the explosion in use of communication tools at the onset of the COVID-19 pandemic, generative AI platforms are spreading like wildfire. OpenAI’s ChatGPT, for example, set a record for the fastest-growing user base, reaching 100 million monthly active users just two months after launch.

But it’s not just individuals who are reaping the rewards of generative AI; businesses are as well.

These tools can increase productivity and efficiency by automating repetitive tasks and letting employees focus on higher-value work. They can facilitate rapid content generation for marketing, advertising and customer engagement. They can even foster enhanced creativity and innovation by assisting in brainstorming and ideation processes and generating novel solutions to complex problems.

The sky’s the limit for what generative AI can help accomplish in the workplace. This is why it’s unsurprising that these tools are poised to revolutionise how businesses operate. However, with new technologies come new risks — and generative AI is no different.

Security concerns in generative AI adoption

While generative AI tools bring a world of possibilities, they also open the door to some complex security concerns.

Imagine receiving a phishing email or smishing message that looks and sounds like it came from a real person.

Scary, right?

Generative AI can create such deceptively realistic content, making phishing and social engineering attacks more sophisticated and difficult to detect.

Meta detected roughly 10 new malware strains this year alone — some even impersonating generative AI browser extensions.

Fake news, fabricated reviews and social media manipulation are just a few examples of how generative AI can be weaponised.

With AI-generated content, bad actors can spread misinformation, amplify disinformation, and influence public opinion through propaganda campaigns. And it’s not hard to imagine how badly that can go.

Generative AI often requires access to vast amounts of sensitive data, which poses significant data privacy and protection challenges. Mishandling of or unauthorized access to these datasets can lead to breaches, regulatory penalties and damaged reputations.

Another serious concern is algorithmic bias and discrimination. After all, AI algorithms are only as unbiased as the data they’re trained on. If the training data is biased, discriminatory outcomes are likely to follow.

Insider threats and unauthorized access pose risks as well. Employees may misuse or exploit generative AI tools, leading to unauthorized access to AI models and intellectual property theft. Competitors or malicious actors could even try to snatch and misuse your AI models or proprietary algorithms.

Protecting your intellectual property becomes more critical than ever.

Worryingly, the velocity of AI means attackers can generate phishing emails and other attacks that not only appear more authentic but can also be produced far faster and in far greater volumes.

So as people marvel at — and cybersecurity pros worry about — the potential of generative AI, checks and balances are essential to ensure the technology does not become a threat.

Take a proactive approach to generative AI security

Although governments are taking measures to promote safe AI adoption, these steps will take time to implement and won’t necessarily cover every scenario.

Businesses must be proactive to mitigate potential security risks.

Implement robust data governance and privacy measures. Securely store and encrypt sensitive data, and regularly audit and assess your data handling processes to identify any vulnerabilities and ensure compliance with data privacy regulations. Never enter proprietary or customer data into a public AI platform that you cannot control or without legal assurances from those who do control it.
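One way to operationalise that last rule is to screen outbound text before it ever reaches an external AI API. The sketch below is illustrative only: the patterns, the `screen_prompt` function and the block/allow decision are all assumptions, and a real deployment would rely on a proper data loss prevention (DLP) tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a production system would use a real
# DLP engine. These three are common stand-ins for sensitive content.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                # email addresses
    re.compile(r"\b\d{13,16}\b"),                          # possible card numbers
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\b"),  # credential keywords
]

def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for text bound for a public AI platform.

    allowed is False when any sensitive pattern matches; findings lists
    the patterns that fired, so the attempt can be logged and reviewed.
    """
    findings = [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]
    return (not findings, findings)

# A prompt containing a customer email address would be blocked:
allowed, findings = screen_prompt("Summarise Q3 results for jane@corp.com")
# while a harmless prompt passes through:
ok, _ = screen_prompt("Write a haiku about autumn")
```

The design point is that the check sits in front of the API call, not after it: once data has left for a platform you do not control, no contract clause can pull it back.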

Conduct thorough risk assessments and vulnerability testing, and foster cybersecurity awareness and training. Transparency is key, so ensure your AI algorithms are interpretable and provide clear reasoning for their decisions. Implement access controls and authentication mechanisms like multi-factor authentication (MFA) to reduce risk, regularly update and patch AI systems and encourage collaboration between your IT and cybersecurity teams.

To navigate the ever-changing landscape of AI security, adopt best practices that promote responsible AI use. Conduct extensive due diligence in vendor selection, regularly monitor and audit your AI systems and engage in industry collaborations and knowledge sharing. Stay current on emerging threats in the AI space and embrace a mindset of continuous improvement and adaptation.

None of these recommendations will surprise anyone involved in cybersecurity, but they will help you stay resilient in the face of changing security challenges.

Generative AI is just getting started — use it responsibly and safely

Generative AI tools are powerful and offer businesses unprecedented benefits and opportunities. However, the security threats accompanying them are real and can’t be ignored.

While governments around the world are working hard to address these concerns, they will never completely solve the problem, so it’s up to businesses to take a proactive and comprehensive approach to AI security.

By prioritising security and implementing robust measures, you can harness the power of generative AI while safeguarding your data, privacy, and reputation.

Intelligent CIO APAC
