
The EU AI Act: A New Era of AI Regulation Begins

The EU AI Act, the world's first legislation to regulate AI based on risk levels, goes into effect. Companies must comply with varying rules by risk category.

Effective from August 1, 2024, the act mandates compliance for all AI systems, whether currently operational or in development. The legislation introduces a structured approach, categorizing AI systems into four levels of risk: no risk, minimal risk, high risk, and prohibited.


EU AI Act sets the stage for global AI regulation with risk-based guidelines and stringent compliance measures. Photo: Unsplash


Key Provisions and Timelines

The EU AI Act assigns rules according to the risk associated with each AI system. Systems categorized as "prohibited" will be banned entirely starting in February 2025. These include AI practices that manipulate users' decision-making processes or build facial recognition databases by scraping images from the internet. For high-risk systems—such as those involving biometric data, critical infrastructure, or employment decisions—stringent regulations require companies to disclose training datasets and demonstrate human oversight.

Approximately 85% of AI companies fall under the "minimal risk" category and face only light regulatory requirements. Even so, the transition to compliance is not without its challenges. According to Heather Dawe, head of responsible AI at UST, international clients generally accept the new rules, recognizing the necessity of AI regulation. Depending on their size and the extent of their AI applications, companies may need three to six months to align with the new requirements.



Implementation and Enforcement

To ensure adherence, companies are advised to establish internal AI governance boards comprising legal, technical, and security experts. These boards will audit the technologies in use and ensure compliance with the new laws. Non-compliance could result in fines of up to 7% of a company's global annual turnover, as stated by Thomas Regnier, a spokesperson for the European Commission.

The Commission has established an AI Office to oversee compliance with the new rules, including the regulation of general-purpose AI models. This office will initially employ 60 internal staff members, with plans to hire an additional 80 external experts over the next year. An AI Board, consisting of delegates from all 27 EU member states, met in June to lay the groundwork for harmonizing the Act's implementation across the EU.


Industry Response and Future Outlook

The AI industry has been proactive, with over 700 companies pledging early compliance through an AI Pact. Meanwhile, EU member states have until August 2025 to designate the national competent authorities responsible for overseeing the Act's implementation. In parallel, the Commission plans to boost AI investment, starting with a €1 billion injection in 2024 and rising to as much as €20 billion by 2030.

Contrary to concerns that stringent regulations might stifle innovation, the Commission asserts that the Act aims to foster a safe and competitive AI environment in the EU. As Regnier noted, "The legislation is not there to push companies back from launching their systems—it's the opposite. We want them to operate in the EU but want to protect our citizens and protect our businesses."


Areas for Improvement

Despite its comprehensive framework, experts like Risto Uuk, the EU Research Lead at the Future of Life Institute, suggest that the Act still needs further clarification, particularly regarding the categorization of specific technologies. For instance, the use of drones for inspecting infrastructure is currently classified as high-risk, which some argue may be overly cautious.

Moreover, there is a call for stricter regulations and higher penalties for Big Tech companies operating generative AI (GenAI) in the EU. Organizations such as European Digital Rights express concern over existing loopholes, especially concerning biometrics, policing, and national security. They urge lawmakers to address these gaps to safeguard human rights more effectively.


As the EU AI Act takes effect, it sets a precedent for AI governance worldwide, highlighting the importance of balancing innovation with ethical considerations and public safety. The Act's implementation will undoubtedly be a key area of focus in the coming years, as both regulators and companies navigate this new regulatory landscape.


Source: Euronews
