The European Union’s landmark Artificial Intelligence (AI) Act officially enters into force on Thursday, marking a significant moment for the regulation of AI. This groundbreaking law, crafted by EU member states, lawmakers, and the European Commission, sets forth rigorous rules governing how companies develop, use, and apply AI. Its impact stretches beyond the EU, particularly affecting the American technology giants at the forefront of AI development.
What is the AI Act?
The AI Act is a comprehensive piece of EU legislation designed to govern the use of artificial intelligence. First proposed by the European Commission in 2020, the law aims to address the potential negative impacts of AI while fostering responsible technological innovation.
While its scope is broad, the AI Act is likely to have a significant influence on major US technology companies, who are currently leading the charge in developing the most advanced AI systems. However, the act’s provisions extend beyond tech giants, covering a wide range of businesses, including those that may not be directly involved in AI development.
The AI Act establishes a consistent regulatory framework for AI across the EU, taking a **risk-based approach** to managing this burgeoning technology.
Tanguy Van Overstraeten, the head of Linklaters’ technology, media, and telecommunications practice in Brussels, highlights the AI Act’s global significance: “It is likely to impact many businesses, especially those developing AI systems but also those deploying or merely using them in certain circumstances.”
This risk-based approach, central to the AI Act, means that different forms of AI are regulated differently depending on the level of risk they pose to society.
For example, AI applications categorized as “high-risk” face strict requirements under the AI Act. These obligations include robust risk assessment and mitigation strategies, high-quality training datasets to minimize bias, ongoing activity logging, and mandatory sharing of detailed model documentation with authorities for compliance evaluation.
Examples of AI systems designated as high-risk include:
- Autonomous vehicles
- Medical devices
- Loan decisioning systems
- Educational scoring systems
- Remote biometric identification systems
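To make this tiering concrete, here is a minimal, purely illustrative Python sketch of how a compliance team might tag systems by risk tier and look up the obligations summarized above. The tier names, class names, and obligation strings are assumptions for illustration only, not terms from the legal text.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical, simplified illustration of the Act's risk-based tiering.
# Tier names and obligation lists are paraphrased from this article, not
# the legal text; real classification turns on detailed legal criteria.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # mainly transparency duties
    MINIMAL = "minimal"             # largely unregulated

HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation",
    "high-quality training datasets to minimize bias",
    "ongoing activity logging",
    "detailed model documentation shared with authorities",
]

@dataclass
class AISystem:
    name: str
    tier: RiskTier

def applicable_obligations(system: AISystem) -> list[str]:
    """Return the (simplified) obligations attached to a system's tier."""
    if system.tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if system.tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not be placed on the EU market"]
    return ["transparency duties or voluntary codes, depending on the system"]

if __name__ == "__main__":
    for system in [AISystem("remote biometric identification", RiskTier.HIGH),
                   AISystem("spam filter", RiskTier.MINIMAL)]:
        print(system.name, "->", applicable_obligations(system))
```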
What does it mean for U.S. tech firms?
US tech giants such as Microsoft, Google, Amazon, Apple, and Meta have been actively investing billions of dollars in AI companies, driven by the global AI frenzy.
Cloud platforms like Microsoft Azure, Amazon Web Services, and Google Cloud play a pivotal role in supporting AI development, providing the substantial computing infrastructure needed to train and execute AI models.
Given their prominent position in the AI landscape, Big Tech firms are likely to face increased scrutiny under the new EU regulations.
“The AI Act has implications that go far beyond the EU. It applies to any organization with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you’re located,” explains Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian. “This will bring much more scrutiny on tech giants when it comes to their operations in the EU market and their use of EU citizen data,” he adds.
Meta has already taken pre-emptive action, restricting the availability of its AI models in Europe because of regulatory concerns. While not explicitly driven by the EU AI Act, these restrictions highlight the law’s influence.
Meta announced earlier this month that it wouldn’t make its LLaMa models available in the EU, citing uncertainty over compliance with the EU’s General Data Protection Regulation (GDPR). Previously, the company was ordered to stop training its models on Facebook and Instagram posts in the EU due to GDPR concerns.
The implications for Big Tech are significant: the Act’s broad reach and stringent requirements are poised to reshape the AI landscape, demanding greater transparency, accountability, and responsible development practices from tech giants operating within the EU.
How is generative AI treated?
The EU AI Act treats generative AI as an example of “general-purpose” AI: tools designed to accomplish a broad range of tasks at a level comparable to, or better than, a human.
General-purpose AI models, such as OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude, face specific regulations under the AI Act.
The Act imposes requirements on these systems including:
- Compliance with EU copyright law
- Transparency disclosures regarding model training data and processes
- Continuous testing and robust cybersecurity measures
However, not all AI models are treated equally under the Act. AI developers have voiced concerns regarding the potential overregulation of open-source models, which are freely available for public use and facilitate the development of tailored AI applications.
Examples of open-source AI models include:
- Meta’s LLaMa
- Stability AI’s Stable Diffusion
- Mistral’s 7B
The EU AI Act provides exceptions for open-source generative AI models, but these exemptions come with specific conditions.
To qualify for exemption, open-source providers must publicly release model parameters, including weights, architecture, and usage details, and enable open access, modification, and distribution of the model. However, open-source models identified as posing “systemic risks” are excluded from these exemptions.
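As a rough illustration of those exemption conditions, the sketch below encodes them as a simple checklist. The `OpenModelRelease` fields and the yes/no logic are hypothetical simplifications of the conditions described above; actual eligibility under the Act is a legal judgment, not a boolean test.

```python
from dataclasses import dataclass

# Hypothetical sketch of the open-source exemption conditions summarized
# above. Field names and logic are simplifications for illustration only.

@dataclass
class OpenModelRelease:
    weights_public: bool       # model parameters (weights) publicly released
    architecture_public: bool  # architecture and usage details publicly released
    free_to_modify: bool       # open access, modification, and distribution allowed
    systemic_risk: bool        # designated as posing "systemic risks"

def qualifies_for_exemption(release: OpenModelRelease) -> bool:
    """Rough check of the exemption conditions described in the text."""
    if release.systemic_risk:
        return False  # systemic-risk models are excluded from the exemption
    return (release.weights_public
            and release.architecture_public
            and release.free_to_modify)

# Example: a fully open release with no systemic-risk designation qualifies.
print(qualifies_for_exemption(
    OpenModelRelease(True, True, True, systemic_risk=False)))  # True
```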
This balanced approach to generative AI is a crucial part of the Act’s likely impact on future AI development, encouraging responsible practices while preserving the openness and innovation offered by open-source models.
What happens if a company breaches the rules?
Companies that violate the EU AI Act can face significant financial penalties.
Fines range from 35 million euros ($41 million) or 7% of global annual revenues, whichever is higher, down to 7.5 million euros or 1.5% of global annual revenues.
The specific fine levied will depend on the nature and severity of the violation and the size of the company.
These fines surpass those outlined in the GDPR, Europe’s comprehensive data privacy law, where companies face penalties of up to 20 million euros or 4% of annual global turnover for breaches.
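For a concrete sense of how the “whichever is higher” mechanics work, the short sketch below compares the flat caps and revenue-based caps cited above for a hypothetical company. The tier labels and the 50-billion-euro revenue figure are illustrative assumptions, not figures from the Act.

```python
# Illustrative sketch of the "whichever is higher" penalty arithmetic.
# The caps are the figures cited in the text; the real penalty bands in
# the AI Act and the GDPR include further tiers and mitigating factors.

PENALTY_CAPS = {
    # tier label: (flat cap in euros, share of global annual revenue)
    "ai_act_most_serious": (35_000_000, 0.07),
    "ai_act_least_serious": (7_500_000, 0.015),
    "gdpr_for_comparison": (20_000_000, 0.04),
}

def max_fine(tier: str, global_annual_revenue_eur: float) -> float:
    """Return the higher of the flat cap or the revenue-based cap."""
    flat_cap, revenue_share = PENALTY_CAPS[tier]
    return max(flat_cap, revenue_share * global_annual_revenue_eur)

# Hypothetical company with 50 billion euros in global annual revenue.
revenue = 50_000_000_000
for tier in PENALTY_CAPS:
    print(f"{tier}: up to {max_fine(tier, revenue):,.0f} EUR")
```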
The European AI Office, a regulatory body established by the Commission in February 2024, will oversee AI models falling under the purview of the AI Act, including general-purpose AI systems.
Jamil Jiva, global head of asset management at fintech firm Linedata, believes that the EU understands the need for substantial penalties: “They need to hit offending companies with significant fines if they want regulations to have an impact.”
Reflecting the approach used with GDPR, the EU aims to exert regulatory influence on AI best practices globally with the AI Act.
While the AI Act has officially entered into force, the majority of its provisions will not take effect until at least 2026.
Restrictions on general-purpose systems will not come into play for 12 months after the Act’s entry into force, and commercially available generative AI systems like OpenAI’s ChatGPT and Google’s Gemini have been granted a “transition period” of 36 months to achieve compliance.
Key Takeaways
- The European Union’s AI Act is a landmark law that will regulate the development and use of artificial intelligence.
- The act aims to minimize the risks associated with AI while fostering innovation.
- It applies a risk-based approach, categorizing AI applications based on the potential harm they pose.
- US technology giants, which dominate AI development, are likely to face significant compliance challenges under the Act.
- Generative AI models are considered “general-purpose” AI and face stringent oversight.
- Strict penalties are in place for companies violating the AI Act, potentially exceeding those outlined in the GDPR.
- While the Act is now in force, most of its provisions won’t be fully implemented until at least 2026.