Mar 20, 2024

The EU's New Artificial Intelligence Act

Setting the stage for global AI regulation standards, ensuring safety, rights, and ethical AI use, shaping the future of AI governance.


The European Union has taken a bold step into the future with the AI Act, setting a precedent for artificial intelligence regulation worldwide.

As the first legislation of its kind, the AI Act aims to establish a legal framework and set guardrails that ensure AI systems are trustworthy, respect fundamental rights, safety, and ethical principles, and address the risks posed by powerful AI models. It seeks to provide clear requirements for AI developers and deployers before their systems are released to the public.

What Has Happened

The AI Act introduces a risk-based approach to AI regulation, categorising applications by their risk level. This categorisation ensures that the regulatory response is proportionate to the potential risk posed by different AI systems.

The four risk categories are:

  1. Unacceptable Risk: AI systems posing clear threats to safety and rights, such as social scoring by governments or dangerous voice-assisted toys, will be banned.
  2. High Risk: AI systems in areas like critical infrastructure, education, employment, essential services, law enforcement, migration, and justice face strict obligations.
  3. Limited Risk: This category addresses transparency issues, requiring clear disclosure when humans are interacting with AI, such as chatbots or AI-generated content.
  4. Minimal or No Risk: Applications such as AI-enabled video games or spam filters can be used freely; these represent the majority of AI systems in the EU.

High-risk AI systems must undergo a rigorous process before market introduction, including further risk assessment, data quality management, traceability, detailed documentation, clear information for deployers, human oversight, and robustness and accuracy measures.

Unlike the fragmented approach by China or the sector-specific AI policies in the U.S., the EU's legislation offers a comprehensive framework, highlighting its leadership in tech governance.

Biometric Data: What You Can and Can't Do

In line with the new “risk level” approach, the use of biometric data is tightly controlled. Prohibited practices include indiscriminate scraping of facial images and emotion recognition in sensitive areas like schools and workplaces. By setting clear boundaries on what is permissible, particularly in sensitive contexts like law enforcement and public spaces, the Act reinforces its commitment to safeguarding individual rights while harnessing the benefits of AI.

What It Means for You (A Business Perspective)

For organisations globally, the EU's AI Act is a call to align with stringent new standards, reflecting the GDPR's influence on data privacy.

Organisations must ensure they have an AI policy and framework in place to rigorously vet their AI technologies for safety, transparency, and compliance, especially for high-risk applications. The Act's reach extends beyond EU borders, affecting any entity whose AI interacts with EU citizens. Non-compliance carries severe penalties, pressing businesses to adapt swiftly. This urgency is amplified if you work in government, law enforcement, or national infrastructure and plan to use AI.

This act will reshape the AI landscape, with similar legislation in the pipeline in multiple countries, and even Sam Altman (OpenAI's CEO) calling for further regulation in the US, warning that things could go 'horribly wrong'.

The EU’s AI Act is like a rulebook that ensures technology works in our favour, protecting our rights and safety as AI becomes a bigger part of our lives.

This law isn't just for the EU; it sends a message to the whole world about the importance of controlling AI technology wisely. It's designed to grow and adapt as technology evolves, making sure that as AI advances, it remains in line with what's good for society.

Essentially, this act is a big step toward a future where technology is developed with care and respect for everyone's rights, encouraging other countries to think about how they handle AI too. As the AI Act takes effect, its rules will be phased in gradually, allowing time for adjustment and ensuring that the technology benefits us while staying aligned with our values.


To navigate AI's complexities while ensuring ethical use, it's important for organisations to have a clear AI policy. We invite you to download our AI Policy & Checklist template, a practical tool to help you implement AI responsibly.

This template will help you:

- Set up guidelines to protect your organisation and its data.

- Address bias to ensure fairness in AI applications.

- Foster transparency in how AI decisions are made, building trust.

Use our template as a starting point for adopting AI in a way that's mindful of risks and committed to ethical practices. In the rapidly evolving AI landscape, having clear policies is key to leveraging AI's benefits while avoiding pitfalls.