EU takes a step closer to regulating AI

Europe has moved closer to adopting the world's first rules governing the development of artificial intelligence, with the European Parliament endorsing a provisional agreement.

The AI Act comes at a time when generative AI systems, while growing in popularity, are fuelling concerns about misinformation and fake news. The legislation is intended to regulate high-impact general-purpose AI models and high-risk AI systems.

AI developers will need to comply with specific transparency obligations and EU copyright law under what is the world's first comprehensive, binding framework for trustworthy AI.

Europe has now set a global standard for more trustworthy AI. The legislation is expected to enter into force early next year and to apply from 2026 onwards.

As a result of the European Union's decision, many other countries and regions are now expected to use the AI Act as a blueprint.

Critics, however, warn that the Act will mean more red tape for business, and that the need for extensive secondary legislation and guidelines raises concerns about ‘legal certainty’ and how the law will be interpreted in practice. Both, they argue, could have a significant impact on levels of investment and innovation.

The Act requires that AI tools be accurate, that they be subject to risk assessments with human oversight, and that their usage be logged.

While the legislation bans systems that pose an ‘unacceptable risk’ and will ‘closely observe’ what it defines as ‘high-risk’ systems, it exempts AI tools designed for military, defence or national security use and does not apply to systems designed for scientific research and innovation.

What could that mean in practice? There are fears that these exemptions could enable states to bypass and abuse the regulations.

For generative AI, the new law requires all model developers to comply with EU copyright law and to provide detailed summaries of the content used to train their models. What this means for already-trained models, however, is unclear.

For models that the law defines as posing a ‘systemic risk’, a designation based on an assessment of their more human-like ‘intelligence’, there is a requirement to report serious incidents caused by the models, such as a death or a breach of fundamental rights.

The legislation is a major step forward and has, in public at least, generally been welcomed. Many tech companies, however, have warned that they may simply move to the US to avoid the tougher restrictions, particularly the EU's proposed limits tied to the computing power used to train AI models.

Models trained using more than 10^25 floating-point operations (FLOPs) of compute will need to prove that they do not create systemic risks.
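
To give a sense of scale, here is a minimal back-of-envelope sketch, assuming the widely cited C ≈ 6·N·D heuristic for estimating dense-transformer training compute from parameter count (N) and training tokens (D); the two training runs shown are hypothetical illustrations, not figures from the Act or from any disclosed model.

```python
# Rough check of a training run against the AI Act's 10^25 FLOP threshold.
# Assumes the common C ~= 6 * N * D heuristic for dense-transformer training
# compute; the model sizes below are hypothetical, for illustration only.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

hypothetical_runs = [
    ("70B parameters, 2T tokens", 70e9, 2e12),       # ~8.4e23 FLOPs
    ("1.8T parameters, 13T tokens", 1.8e12, 13e12),  # ~1.4e26 FLOPs
]

for name, params, tokens in hypothetical_runs:
    flops = estimate_training_flops(params, tokens)
    side = "above" if flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 10^25 threshold)")
```

On this rough heuristic, only the very largest frontier-scale training runs would land above the line; most openly documented models to date fall well below it.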

Whatever the doubts about and weaknesses of this legislation, it should be welcomed as, at the very least, a first step towards better regulation of artificial intelligence.