How the EU’s New Artificial Intelligence Act Impacts the UK

The European Parliament has recently approved the Artificial Intelligence Act, groundbreaking legislation governing artificial intelligence (AI). Developed over five years, the Act aims to ensure human oversight of AI technology and its beneficial use for society.

Key Points:

  • Risk-Based Regulation: The Act employs a risk-based approach, categorising AI applications based on their potential impact. Low-risk applications, such as user recommendation systems, face less scrutiny, while high-risk applications like AI-powered medical devices undergo more rigorous evaluation.
  • Transparency Requirements: Companies must disclose when AI technology is used, provide clear information to users, and maintain high-quality data for higher-risk applications.
  • Banned Applications: Certain AI applications deemed “too risky” are prohibited, including AI-based police identification systems and predictive policing. Additionally, systems tracking individuals’ emotions in sensitive environments like schools or workplaces are not allowed.
  • Regulation of Deepfakes: Deepfake content must be labelled to prevent the spread of disinformation.
  • Compliance with Copyright Law: Companies developing AI models must adhere to EU copyright law and provide detailed summaries of data used in training.
  • Extra Scrutiny for Powerful AI Models: High-powered AI models face additional scrutiny due to concerns about potential accidents or misuse for cyberattacks.

Impact on the UK:

The EU’s AI regulations have global significance, with governments worldwide monitoring them closely for inspiration. This phenomenon, dubbed the “Brussels effect,” influences other jurisdictions to adopt similar frameworks and principles. To date, the UK has been reluctant to push for legal interventions for fear that regulation might stunt industry growth. Instead, it has issued AI guidelines that are not legally binding. This may be about to change, however: the Prime Minister has indicated that new legislation is currently being crafted, prompting discussions about aligning with the EU’s regulatory approach.

The tech industry is also adjusting its practices in line with the EU regulations, with major companies such as Meta and Google implementing similar rules voluntarily. The EU’s AI Act encourages international companies to adopt consistent governance and compliance standards, streamlining operations across different regions. Despite initial concerns from industry leaders, such as OpenAI, about compliance, companies are expected to adapt to the new regulations, which come into force in May 2025.

In conclusion:

The EU’s Artificial Intelligence Act represents a significant milestone in AI regulation, with far-reaching implications for the UK and beyond. Critics claim that it “has been finalised in a rush” and that it prioritises “industry interests, security services and law enforcement bodies”, providing limited avenues of redress for individuals. AI start-ups, meanwhile, have criticised the rules as overregulation that could hamper innovation. What cannot be ignored is that the EU appears to be paving the way by creating the world’s first legal framework for AI. There is no doubt this is only the start of the journey, and every other region, including the UK, will be watching to see how it plays out.
