A new AI industry body

Four leading developers in artificial intelligence are forming a new industry body to oversee the safe development of advanced AI models.

The body, the Frontier Model Forum, has been established by the ChatGPT developer OpenAI, Anthropic, Microsoft and Google, which owns UK-based DeepMind.

According to those behind it, the Forum will focus on the ‘safe and responsible’ development of AI models, particularly the even more advanced AI technology currently under development.

Keeping AI technology safe and secure, and ensuring that it remains under human control, is at the heart of what the Forum is looking to deliver: advancing AI responsibly.

Members have said that their main objectives are to promote research in AI safety, such as developing standards for evaluating models; to encourage the responsible deployment of advanced AI models; to discuss trust and safety risks in AI with politicians and academics; and to help develop positive uses for AI.

The establishment of the new body comes as moves towards greater regulation of AI gather pace. The US is introducing new AI safeguards to make it easier to spot misleading material such as deepfakes and to independently test AI models, while the UK and the EU are both looking to regulate the industry more closely.

While the formation of the Forum could be seen as a positive step towards better regulation of the AI industry, critics have warned that the tech industry has a poor record of honouring pledges on self-regulation. There are also worries that allowing the industry to ‘self-regulate’ in this way will bypass national authorities and put the interests of business centre stage.

As we’ve seen in too many industries, poor regulatory oversight can lead to major problems and dysfunction in markets.

AI is far too important an issue to be left in the hands of business alone, and while bodies like the Frontier Model Forum should be welcomed, we need to ensure that oversight isn’t handed entirely to the industry.

It needs independent oversight, free from the AI industry itself.