EU’s Landmark AI Act Is Officially Published
The full and final text of the EU AI Act has been published, marking a landmark moment for the European Union’s approach to governing artificial intelligence. The regulation tiers its rules to the risk posed by different applications of AI and is set to change how the technology is developed and used across the bloc.
The law comes into force on August 1, 2024, with developers given until mid-2026 to fully comply with most of its provisions. The act takes a phased approach, however, with different deadlines arriving over the coming years, a staggered rollout intended to give developers time to adapt while ensuring compliance with the new standards.
Publication and Commencement
The final version of the act now appears in the bloc’s Official Journal, the publication step that makes the regulation law and starts the clock on its deadlines.
Entry into force follows on August 1, 2024, twenty days after publication. Most provisions apply from mid-2026, but the act’s various obligations kick in at different points over the coming years.
The Framework
The AI Act introduces a risk-based framework, assigning different levels of regulation depending on the use case. Most applications of AI are deemed low risk and will face little or no regulation; only a small set of high-risk uses comes under strict controls.
High-risk use cases include biometric identification, law enforcement, employment, education, and critical infrastructure. Developers deploying AI in these areas must meet strict requirements around data quality and anti-bias safeguards.
A third tier carries lighter transparency obligations and applies to tools such as chatbots. Makers of these tools must disclose that users are interacting with an AI system but otherwise face far fewer obligations than high-risk applications.
General-Purpose AI
For developers of general-purpose AI (GPAI) models, such as OpenAI’s GPT, the technology behind ChatGPT, the act’s requirements center on transparency. The most powerful GPAI models, those trained with more than 10^25 floating-point operations of compute, are presumed to pose systemic risk and must also conduct systemic risk assessments.
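That threshold is, at bottom, a simple compute test: the act presumes a GPAI model has systemic-risk-level capabilities once its cumulative training compute exceeds 10^25 FLOPs. A minimal sketch of the check, for illustration only (the function name and example figures are ours, not the regulation’s):

```python
# Sketch of the AI Act's systemic-risk presumption for GPAI models.
# The act presumes "high-impact capabilities" when cumulative training
# compute exceeds 10^25 FLOPs; the Commission can adjust this threshold later.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model's training compute triggers the act's presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example: a frontier model trained with ~2e25 FLOPs would be
# presumed systemic-risk and owe the stricter assessment obligations.
print(presumed_systemic_risk(2e25))   # True
print(presumed_systemic_risk(5e24))   # False
```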
Developers lobbied intensely against stringent rules here, fearing they would hinder Europe’s ability to compete globally. Even so, the law retains transparency requirements meant to keep these models from posing undue risks.
Phased Implementation
The phased implementation begins with a list of prohibited AI uses, which takes effect six months after the law comes into force. The bans cover social scoring, untargeted scraping of facial images to build recognition databases, and real-time remote biometric identification by law enforcement, unless specific exceptions apply.
Nine months after the law’s entry into force, codes of practice will begin to apply. The new EU AI Office is responsible for these codes, though concerns persist about potential industry influence over the drafting process.
From August 1, 2025, transparency rules for GPAI models will apply. Most high-risk AI systems must comply by 2026, while a subset of high-risk systems, those embedded in products already covered by existing EU safety rules, gets an extended deadline of 2027.
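All of these dates key off the August 1, 2024 entry into force. A rough sketch of the arithmetic, assuming the month offsets described above (the milestone labels are ours; the act’s final provisions fix the exact application dates):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act enters into force on August 1, 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here: every offset lands on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Approximate offsets, in months after entry into force, when each obligation begins.
MILESTONES = {
    "prohibited AI practices banned": 6,
    "codes of practice apply": 9,
    "GPAI transparency rules apply": 12,
    "most high-risk obligations apply": 24,
    "extended deadline for remaining high-risk systems": 36,
}

for label, months in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
    print(f"{add_months(ENTRY_INTO_FORCE, months)}  {label}")
```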
Industry Concerns and Lobbying
Parts of the AI industry, along with some member-state governments, lobbied hard against more stringent rules for GPAI models, concerned that heavy regulation might stifle innovation and prevent Europe from producing competitive homegrown AI giants.
The lobbying highlights the delicate balance the EU must strike between innovation and regulation; the AI Act tries to mitigate potential risks without overly burdening developers.
Codes of Practice and Compliance
Responsibility for the codes of practice lies with the EU AI Office, an oversight body established under the new law. However, questions remain about who will actually draft the guidelines.
Civil society groups fear that consultancy firms, possibly influenced by AI industry players, may have too much sway. The AI Office recently announced a call for stakeholders to help draft the codes, aiming for a more inclusive process.
Whatever form they take, the finished codes will shape how AI developers operate, setting the standards for compliance across the industry.
Transparency and Accountability
Central to the new regulation is ensuring transparency and accountability in AI development. High-risk AI systems face rigorous obligations around data handling and bias mitigation.
General-purpose AI models, such as OpenAI’s GPT, must meet specific transparency requirements, and the most powerful among them must also carry out systemic risk assessments.
These transparency measures are in place to foster trust and safety in AI applications used by the public.
The publication of the EU AI Act is a significant milestone for AI governance, setting a new standard for the responsible development of artificial intelligence across the European Union. Its phased implementation lets developers adapt gradually while the rules for high-risk uses take hold, an attempt to balance innovation with safety and to address public concern about the ethical use of AI.
The act reflects the EU’s commitment to trustworthy AI built on transparency, accountability, and the protection of fundamental rights. As its deadlines approach, the AI community will be watching closely for the law’s effect on innovation, and the outcome could serve as a model for AI regulation worldwide.