Regulating artificial intelligence


Concerns over artificial intelligence (AI) have been mounting, with growing debate around its impact on jobs, security and data privacy.

In the US, President Biden has met with Google, Microsoft and OpenAI, the company behind ChatGPT, to discuss the technology and its impact, arguing that companies developing it have a “fundamental responsibility to make sure their products are safe before they are deployed or made public”.

Chief executives at the meeting were told that they had a “legal responsibility” to ensure the safety of their AI products and that the administration was “open” to advancing new regulations and supporting new legislation on artificial intelligence.

Here in the UK, the competition watchdog has announced a review of the AI sector. The Competition and Markets Authority said it would look at the underlying systems, or foundation models, behind AI tools such as ChatGPT.

One legal expert described the review as a “pre-warning” to the sector; its findings will be published in September.

In the US, the Federal Trade Commission, which oversees competition, has also said that its staff are “focusing intensely” on how companies might choose to use AI technology.

These moves come as AI has started to have a real impact on some sectors of the economy. For example, IBM is set to pause hiring in roles that could be replaced by AI in the coming years, affecting almost 8,000 positions. Meanwhile, the share prices of leading education companies took a hammering after it was revealed that students were using ChatGPT rather than more traditional online tools.

AI could be as transformative as the Industrial Revolution, according to the government’s outgoing chief scientific adviser Sir Patrick Vallance, who warned that government should “get ahead” of the profound social and economic changes that ChatGPT-style generative AI could usher in.

However, worries over AI should not cause us to overlook its benefits, and the talk in some parts of the media of an AI ‘apocalypse’ is somewhat overblown. But as Sir Patrick said, comparing the impact of AI to the first Industrial Revolution: “While the initial effect was a decrease in economic output as people realigned in terms of what the jobs were – there were then significant benefits.”

Consequently, he added, “We need to get ahead of that.”

The CMA’s review is timely. Vallance has in fact called for a national review of which sectors would be most significantly affected, so that plans could be drawn up “to retrain and give people their time back to do [their jobs] differently”.

He added that there was also a broader question of managing the risk of “what happens with these things when they start to do things that you really didn’t expect”. That, he suggested, has to be the biggest risk.

But the opportunities and benefits of AI are truly immense.

The CMA chief executive, Sarah Cardell, said of AI, “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.” Too true!

The CMA review will, according to Cardell, look at how the markets for foundation models could evolve, what opportunities and risks there are for consumers and competition, and it will then formulate “guiding principles” to support competition and protect consumers.

Whatever the conclusions of the review, there is a strong argument for regulation, especially if AI becomes central to every aspect of human existence. We have a long way to go before reaching that point – if we ever do – but as Sir Patrick suggested, getting ahead of and understanding the social and economic impact of AI is critical.