Australian IT and security leaders ‘struggle’ with Generative AI threats

ExtraHop’s new research report, The Generative AI Tipping Point, finds that Australian enterprises struggle to understand and address the security concerns that come with employee use of generative AI.

According to the findings, 74% of IT and security leaders admit their employees use generative AI tools or Large Language Models (LLMs) sometimes or frequently at work – but they aren’t sure how to appropriately address the security risks.

The report also exposes generative AI bans as ineffective, with organisations wanting more guidance on generative AI issues – especially from the government.

Four in five respondents (80%) are very or somewhat confident their current security stack can protect against threats from generative AI tools – yet less than half have invested in technology that helps their organisation monitor its use.

Only 45% have policies in place governing acceptable use, and just 34% train users on the safe use of these tools.

“There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace,” said Raja Mukerji, Co-founder and Chief Scientist, ExtraHop. “However, as with all emerging technologies we’ve seen become a staple of modern businesses, leaders need more guidance and education to understand how generative AI can be applied across their organisation and the potential risks associated with it. By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”


Intelligent CIO APAC

View Magazine Archive