Managing the risks of AI

Hannah Pettit, commercial and data protection lawyer, Ashfords, says when investing in AI it is vital to assess and mitigate the associated risks.

AI is everywhere. Whether it is already integrated into your company systems or you are considering how to utilise AI within your business, there is a universal nervousness around being left behind in the race for innovation.

Before deploying any third-party AI products, a CIO should complete an AI risk assessment to document and mitigate the associated risks. This article will explore some of the key things that an AI risk assessment should cover.

What is your intended use for the AI?

Firstly, it is important to address how you intend to use the AI product and how it will be integrated across the business. You should implement an internal AI policy that sets clear parameters around how the AI can be used and what data can be input into it.

Understanding your AI supplier

It is important to carry out due diligence on any AI supplier, as well as on the AI technology they are providing. Most businesses will be reliant on the supplier to ensure that they properly understand how the technology operates. You should therefore satisfy yourself as to the supplier's expertise and the level of documentation and information they make available.

Transparency, transparency, transparency

You need to understand what is happening to the data you input into any third-party AI system. You should explore whether it is possible to opt out of your organisation’s data being used to train the AI model. You may also want to consider whether it is possible to deploy a private instance of the AI system, although there will be increased costs involved with this.

Where you are responsible as a data controller for any personal data used with AI technology, you will need to provide data subjects with clear information about how their personal data will be processed. To do this you will need to understand how the AI model works and how it is processing any personal data you provide to it. This is why the transparency of your chosen AI supplier is important.

Implementing robust security measures

An important part of your overall risk assessment will be to assess the relevant security measures implemented by your chosen AI provider. Consider whether the supplier has any security accreditations and whether they carry out appropriate vulnerability testing and independent third-party security audits. You should also ensure that the contractual terms you have in place with the supplier impose appropriate security requirements.

International personal data transfers

You should also assess where personal data will be processed. Consider whether the provider offers UK or EEA data residency. Also, consider whether their IT infrastructure involves the processing of personal data in other jurisdictions.

You must ensure that any cross-border data transfers comply with data protection laws, which may involve implementing additional safeguards such as standard approved data transfer clauses.

Data subject rights

The use of personal data with AI can make it difficult to comply with data subject requests, for example, subject access requests or data deletion or rectification requests. As just one example, when an AI model is trained on personal data that you provide, it may memorise that data as part of the learning process. This can make it difficult to fully erase the data in response to a data deletion request.

However, where you are the data controller of personal data processed by the AI model, you are still required to comply with these requests when you receive them. The challenges of the AI technology you may be using do not remove your obligations to comply. Your AI risk assessment should therefore evaluate how you will respond to data subject requests and ensure that appropriate measures are in place to enable you to do so.

The human eye

AI can significantly reduce workload and streamline internal processes, but it is still important to manually check the accuracy of AI-generated output. Your risk assessment should therefore consider the level of human input required, and at what stages. To help ensure all output is accurate, the final stage of sign-off should include human oversight.

Bias and discrimination

Bias and discrimination are additional factors that need to be assessed and protected against.

If the dataset that an AI system is trained on is biased, then the consequence will be a biased AI system. Where an AI system is responsible for biased decision-making, this poses a significant risk of harm for individuals, such as excluding certain demographics when processing CVs, as well as a risk of reputational damage and liability for the relevant organisation.

Early adoption of AI can create many business opportunities but it is vital not to rush into any decisions and to assess and mitigate the associated risks before investing in new AI.

Intelligent CIO Europe