Why African enterprises should not over-trust AI
Anna Collard, SVP Content Strategy and Evangelist, KnowBe4 AFRICA

Generative AI models are trained on data from various sources that are not all verified, often lack sufficient context and are not regulated. This poses risks that can have long-term consequences if users are unaware of them, says Anna Collard at KnowBe4 AFRICA.

The market for Artificial Intelligence in South Africa is projected to reach $2.4 billion by the end of this year, with an annual growth rate of 21% between now and 2030. Locally, the technology has the potential to mitigate security risks, enhance decision-making, address legacy challenges, and have a significantly positive societal impact. Despite these impressive applications and implications, the associated risks need to be considered.

Generative AI models are trained on data from various sources, and these sources are not all verified, lack sufficient context, and are not regulated. AI is incredibly helpful in handling mundane administrative tasks associated with spreadsheets and statistics. However, it becomes concerning when we rely on it to make decisions that have the potential to influence people’s lives.

AI is an algorithmic construct built on the bones of human creative endeavours and on data that is often flawed and biased. As Kate Crawford, Professor at the University of Southern California and Microsoft researcher, points out, AI is neither truly artificial nor intelligent. This poses risks that can have long-term consequences if users are unaware of them.

Crawford writes: “AI is made from vast amounts of natural resources, fuel, and human labour. And it is not intelligent in any kind of human intelligence way. It is not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made.

“Since the very beginning of AI back in 1956, we have made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analogue to human intelligence, and nothing could be further from the truth.”

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness: appraising whether there is enough evidence to conclude that these agents deserve to be trusted.

Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching over-trust are missing.

Here are some of the most concerning risks associated with AI:

AI hallucinations

Earlier this year, a New York attorney used a conversational chatbot for legal research. The AI fabricated six precedents that ended up in his filing, falsely attributed to prominent legal databases. This is a textbook example of an AI hallucination, where the output is fabricated or nonsensical. These incidents happen when a prompt falls outside the AI’s training data, so the model hallucinates or contradicts itself in order to produce a response.

Deepfakes

The implications of fake images extend to various areas. With the rise of fake identities, revenge porn, and fabricated employees, the range of potential misuse for AI-generated photographs is expanding. One technology in particular, called a Generative Adversarial Network, or GAN, is a type of deep neural network that pits a generator against a discriminator, producing new data and generating highly realistic images from random input.

This technology opens up the realm of deepfakes, where sophisticated generative techniques manipulate facial features and can be applied to images, audio, and video. This form of digital puppetry carries significant consequences in political persuasion, misinformation or polarisation campaigns.
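To make the adversarial idea concrete, here is a minimal, hypothetical sketch in Python using PyTorch. It illustrates the GAN concept only, with toy sizes, random stand-in data and assumed hyperparameters; it is not a deepfake tool or production code. A generator learns to turn random noise into synthetic samples while a discriminator learns to tell them apart from real ones.

  # Illustrative GAN sketch (PyTorch): generator versus discriminator.
  import torch
  import torch.nn as nn

  latent_dim = 64          # size of the random noise vector
  image_dim = 28 * 28      # flattened toy "image"

  # Generator: random noise in -> synthetic image out
  G = nn.Sequential(
      nn.Linear(latent_dim, 256), nn.ReLU(),
      nn.Linear(256, image_dim), nn.Tanh(),
  )

  # Discriminator: image in -> probability the image is real
  D = nn.Sequential(
      nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
      nn.Linear(256, 1), nn.Sigmoid(),
  )

  opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
  opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
  loss_fn = nn.BCELoss()

  real_images = torch.rand(32, image_dim)   # stand-in for a batch of real data

  for step in range(100):
      # 1) Train the discriminator to separate real from generated samples
      noise = torch.randn(32, latent_dim)
      fake_images = G(noise).detach()
      d_loss = loss_fn(D(real_images), torch.ones(32, 1)) + \
               loss_fn(D(fake_images), torch.zeros(32, 1))
      opt_d.zero_grad()
      d_loss.backward()
      opt_d.step()

      # 2) Train the generator to fool the discriminator
      noise = torch.randn(32, latent_dim)
      g_loss = loss_fn(D(G(noise)), torch.ones(32, 1))
      opt_g.zero_grad()
      g_loss.backward()
      opt_g.step()

The same adversarial loop, scaled up and trained on photographs, voices or video frames, is what makes convincing deepfakes possible.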

Automated attacks

This taps directly into the potential of GANs, as cybercriminals make use of deepfakes in more sophisticated attacks. Deepfakes are used in impersonation attacks, where a fake voice or even a fake video version of someone can manipulate victims into paying or following other fraudulent instructions.

Cybercriminals also benefit from jailbroken generative AI models that help them automate or simplify their attack methods, for example by automating the creation of phishing emails.

Over-trust effect

The over-trust effect refers to the tendency of human beings to attribute human characteristics to machines and to develop feelings of empathy towards them. This tendency becomes even stronger when interactions with machines seem intelligent. Although this can positively impact user engagement and support in the service sector, it also carries a risk.

People become more vulnerable to manipulation, persuasion, and social engineering because of this over-trust effect. They tend to believe and follow machines more than they should. Research has shown that people are likely to alter their responses to queries in order to comply with suggestions made by robots.

Manipulation

AI, through the use of natural language processing, machine learning, and algorithmic analyses, can both respond to and simulate emotions. By gathering information from various sources, agenda-driven AI chatbots, for example, can promptly react to sensory input in real time and use it to accomplish specific objectives, such as persuasion or manipulation. These capabilities create opportunities for the dissemination of predatory content, misinformation, disinformation, and fraud.

Ethical issues

The presence of bias in the data and the current absence of regulations regarding AI development, data usage, and AI application all raise ethical concerns. Global efforts are underway to tackle the challenge of ethics in AI and to reduce the risk of AI poisoning, which entails manipulating training data to introduce vulnerabilities or biases.

However, South Africa currently lacks momentum in addressing these issues. This must change, as managing and detecting the risk of polluted AI data before it causes long-term harm is essential.
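As a simple illustration of what polluted training data can do, the hypothetical Python sketch below uses scikit-learn on synthetic data, not any real dataset or incident, and flips a portion of the training labels before a classifier is fitted. The model trained on the poisoned labels typically scores noticeably worse on clean test data than the one trained on the original labels.

  # Illustrative data-poisoning sketch: label flipping on synthetic data.
  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Baseline: model trained on clean labels
  clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  print("clean accuracy:", clean_model.score(X_test, y_test))

  # Poisoned copy: an "attacker" silently flips 30% of the training labels
  rng = np.random.default_rng(0)
  poisoned = y_train.copy()
  flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
  poisoned[flip] = 1 - poisoned[flip]

  poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
  print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

In a real enterprise pipeline the manipulation would be far subtler than wholesale label flipping, which is exactly why detecting polluted data before a model is deployed matters.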

It is important to be mindful of the information we share with AI chatbots and virtual personal assistants. We should always question how our data is being used and by whom, as sensitive personal and business information shared with these tools can end up in model training data.

While AI is a valuable tool, it is crucial to use it with critical thinking and mindfulness, and to rely on it only in situations where it provides the most value and its output has been fact-checked.


Multiple ways in which AI can fail

  • AI is helpful in mundane administrative tasks associated with spreadsheets and statistics.
  • AI is an algorithmic construct built on human creative endeavours and data that is often flawed and biased.
  • AI is not truly artificial or intelligent.
  • AI is made from vast amounts of natural resources, fuel and human labour, and is not intelligent in the way humans are.
  • AI is not able to discern things without extensive human training, and it has a different statistical logic for how meaning is made.
  • We have made the terrible error of believing that minds are like computers and vice versa.
  • We assume these systems are an analogue to human intelligence, and nothing could be further from the truth.
  • AI hallucination is where the output is either fake or nonsense, and happens when prompts are outside of training data.
  • The AI model hallucinates or contradicts itself in order to respond.
  • Human beings tend to attribute human characteristics to machines and develop feelings of empathy towards them.
  • People become vulnerable to manipulation, persuasion and social engineering because of the over-trust effect.
  • People tend to believe and follow machines more than they should.
  • People are likely to alter their responses to queries in order to comply with suggestions made by robots.
  • The presence of bias in the data and the current absence of regulations regarding AI development raise ethical concerns.
  • It is important to be mindful of the information we share with AI chatbots and virtual personal assistants.
  • We should always question how our data is being used and by whom.
  • There is a risk of sharing personal and business information with data training models.