Concerns expressed at new EU AI rules


The European Commission (EC) has announced a proposed legal framework for regulating the deployment of AI technologies within the EU.

The rules seek to differentiate between high-risk, limited-risk and minimal-risk AI applications. AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures
  • Educational or vocational training
  • Safety components of products (e.g. AI applications in robot-assisted surgery)
  • Employment, workers management and access to self-employment
  • Essential private and public services
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic process

Commenting on the proposed rules, the Commission's Executive Vice-President for digital policy, Margrethe Vestager, said that trust was essential. "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake."

Under the proposed rules, high-risk AI systems will be subject to strict obligations before they can be put on the market, including adequate risk assessment and mitigation systems; high-quality datasets feeding the system to minimise risks and discriminatory outcomes; logging of activity to ensure traceability of results; and detailed documentation.

The European Artificial Intelligence Board will facilitate the regulations' implementation and drive the development of standards for AI. Fines of up to 6% of revenue are foreseen for companies that don't comply with bans or data requirements.

In one reaction to the announcement, Soffos.ai CEO Nikolas Kairinos said that while the AI industry needed a strong system of checks and balances to win public trust, the European Commission's proposed regime would not sit well with the AI community.

"Loose definitions like 'high risk' are unhelpfully vague. AI today comes in many forms, and the risks and considerations vary across different domains. An ambiguous, tick-box approach to regulation that is overseen by individuals who may not have an in-depth understanding of AI technology will hardly inspire confidence within the industry," he said, adding: "I fear that without clear and fair definitions, ambitious AI developers will be left at the mercy of regulators and risk being barred from the EU market – one that desperately needs to push the needle forwards where innovation is concerned."

According to Kairinos, while the EC should be praised for its efforts to proactively raise standards for AI development, "introducing a list of rigid tests and benchmarks also risks impeding progress in R&D.

"Innovation, by its nature, involves an element of risk – and any attempts to over-regulate will result in high economic and human costs. That is not to say that we should do away with regulation altogether. Rather, we must be careful to avoid applying too broad a brush, and instead find an approach which meets the needs of all stakeholders."