Can AI solve the problems it creates?

John Costello, CTO – Australia, Publicis Sapient, asks if AI could eventually fix its own problems.

Businesses, organisations and governments are rushing to understand, embrace, deploy and regulate AI and GenAI.

It’s a technology revolution as significant as anything we’ve seen, comparable to the birth of the Internet and the invention of mobile phones.

But change is not without pain. As much as AI will create new jobs and ways of working, it will make other roles and jobs redundant. It’s already creating a slew of legal conundrums, from deepfakes to AI-generated art and music.

There are also environmental challenges: how sustainable is the vast amount of extra processing power that AI and large language models (LLMs) need? And what are the implications of the quantum era?

While humans grapple with these issues, perhaps we should consider whether AI, with all its superhuman brilliance, could eventually fix its own problems.

1. Environmental issues

Training and running AI systems require vast amounts of computing power and energy. It’s estimated that Nvidia’s high-end chips will consume as much energy as a small nation in 2024, and some forecasts suggest AI will account for 80% of data center power within 15 years.

The solution may lie in getting AI to make itself more efficient. AI algorithms can be designed to optimise their own energy consumption, using techniques such as pruning, quantisation and efficient neural network architectures to reduce the computational resources needed for training and inference. AI can also help reduce the energy use and footprint of data centers by optimising cooling and power usage.
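To make this concrete, here is a minimal sketch of two of those techniques, pruning and dynamic quantisation, using PyTorch’s built-in utilities. The toy model and the 30% sparsity level are illustrative assumptions, not recommendations for any real system.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy model standing in for a much larger network.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Pruning: zero out the 30% of weights with the smallest L1 magnitude,
# shrinking the effective compute needed at inference time.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantisation: convert Linear layers to int8 arithmetic on the fly,
# trading a little accuracy for lower memory and energy use.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantised)
```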

While AI is already being used in the renewables sector to accelerate research, there are also hopes that AI could play a crucial role in advancing research into cold fusion. This – currently a speculative concept – would be an easy and safe form of nuclear fusion at room temperature. AI could help model complex plasma dynamics, optimise the design of fusion reactors and predict the outcomes of fusion experiments.

2. Security issues

AI presents several new security issues, from interference with LLMs to the use of AI in phishing attacks. GenAI tools are built on LLMs, many of which already suffer from bias and misinformation. For example, asking a GenAI image tool for ‘a CEO’ might produce solely images of men, simply because the training data contains more photos of male CEOs. There are also concerns that bad actors could deliberately poison LLMs, intentionally manipulating the training data or inputs to produce biased, incorrect or harmful outputs.

GenAI is also being used in sophisticated phishing attacks. It can rapidly collect and curate sensitive information and use it to craft targeted, convincing messages. Its ability to clone voices is also being exploited for vishing, or voice phishing, attacks.

In turn, cyber security teams are deploying AI that continuously learns and adapts to new threats. AI algorithms can analyse patterns of attacks and predict vulnerabilities before they are exploited, offering proactive protection against adversarial attacks. The same approach can be taken to protect LLMs.
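As one illustration of the anomaly-detection side of this, here is a small sketch using scikit-learn’s IsolationForest. The synthetic features are hypothetical stand-ins for whatever telemetry a real security team would monitor, such as request rates or login patterns.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # normal activity

# Fit on known-good traffic; assume roughly 1% of future events are attacks.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# Events that drift far from the baseline distribution get flagged as -1.
suspicious = rng.normal(loc=5.0, scale=1.0, size=(5, 4))
print(detector.predict(suspicious))  # expected: mostly -1 (anomalous)
```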

AI-driven systems can be implemented to track data provenance and perform integrity checks, ensuring that data is from trusted sources and has not been tampered with. Advanced monitoring and anomaly detection algorithms could identify unusual patterns or biases in the model’s outputs, which might indicate that it had been compromised. AI systems can then be designed to initiate automated re-training processes if poisoning or corruption is detected.
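A simple way to picture the provenance and integrity-checking piece is a hashed manifest of the training data. This sketch assumes a JSON manifest of trusted SHA-256 digests, a hypothetical format rather than any standard; a real pipeline would also sign the manifest itself.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: str, manifest_path: str) -> list[str]:
    """Return the files whose hashes no longer match the trusted manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    tampered = []
    for name, expected in manifest.items():
        if sha256_of(Path(data_dir) / name) != expected:
            tampered.append(name)
    return tampered

# Any mismatch could trigger the automated re-training described above.
# print(verify_dataset("training_data", "manifest.json"))  # paths are hypothetical
```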

3. Intellectual property and copyright issues

From Getty to the New York Times, everyone is starting to sue over GenAI. Artists claim their images have been used without their knowledge or consent to train image-generation engines. Text articles, music, images and video are all potentially being scraped. There are warnings this could lead to a ‘legal doomsday’.

But if western democracies simply slam the brakes on GenAI, will states with less observance of intellectual property rights simply continue with the technology?

Ensuring that the sourced data respects copyright and IP rights is crucial for ethical and legal compliance. AI itself can assist in developing sophisticated content attribution and source tracking mechanisms. Privacy is another issue: federated learning and differential privacy can allow AI systems to learn from data without directly accessing or exposing individual data points.
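Differential privacy, for instance, works by adding calibrated noise to query results so that no individual record can be inferred. Below is a minimal sketch of the Laplace mechanism on a counting query; the epsilon value and the salary data are purely illustrative assumptions.

```python
import numpy as np

def private_count(values: np.ndarray, threshold: float, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1."""
    true_count = float(np.sum(values > threshold))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
salaries = rng.normal(100_000, 20_000, size=10_000)

# Smaller epsilon means more noise and stronger privacy for any individual row.
print(private_count(salaries, threshold=120_000, epsilon=0.5))
```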

Paraphrasing algorithms could also generate new training data that captures the essence of source material without infringing copyright. AI systems could also automate licensing and rights management, matching content with its copyright status and negotiating or executing licensing agreements.

4. Skills gap

The rise of AI has created demand for new roles and skill sets. The World Economic Forum estimates that tens of millions of jobs will be created, changed and destroyed. Given the existing skills shortage, organisations are racing to catch up with demand and reskill and upskill workers to cope in this new environment.

But AI can also play a pivotal role in bridging this gap. From AI-powered personalised learning platforms to GenAI simulations and learning environments, AI can create learning programs as well as monitor results and provide feedback and mentoring. It can also make learning more accessible through natural language processing and translation services.

As a new era of technology dawns, the call for ethical considerations, sustainable practices and a balanced approach to AI deployment is both timely and imperative. The journey ahead will require collaboration between industry, government and society to ensure that AI’s potential benefits are harnessed and that it develops itself as a force for good.
