The European Union has officially adopted the AI Act, the world's first comprehensive legal framework for artificial intelligence.

The European Parliament approved the legislation by a wide margin on Wednesday, with 523 votes in favour, 46 against, and 49 abstentions.

Global tech giants brace for impact of new EU AI legislation

“Europe is NOW a global standard-setter in AI,” stated Thierry Breton, the EU Commissioner for the Internal Market, on X.

The sentiment echoed across the technology sector, though reactions were mixed. The act's ambition to mitigate the risks of artificial intelligence has been praised, while some fear the rules could stifle innovation.

Nonetheless, there is broad consensus that the law will have a significant impact. As the first legislation of its kind, the AI Act paves the way for governments around the world to follow suit.

Setting the global standard for AI regulation

Enza Iannopollo, an analyst at Forrester, described the law's adoption as the start of a new AI era. “Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI,” she told TNW, adding that other regions now have no choice but to catch up.

The EU brought the vote forward from its originally scheduled date next month, underscoring the urgency lawmakers feel as the technology evolves and spreads rapidly with no alternative framework in place.

Although the law is an EU regulation, its reach is global: it applies to any company operating within the EU.

Global impact and industry response

The act stipulates fines of up to 7% of a company's global turnover for non-compliance, a provision causing concern among large tech companies more accustomed to the regulatory environment in the US.

European companies have also voiced apprehensions, fearing a disadvantage against their counterparts in the US and China. Lobbying by the European startups Mistral AI and Aleph Alpha led to a relaxation of the rules for foundation models, the key technology underpinning systems such as OpenAI's ChatGPT.

The AI Act introduces a risk-based classification of AI applications, imposing the strictest requirements on "high-risk" systems in sectors ranging from automotive to law enforcement. "Unacceptable" uses, such as social credit scoring, are banned outright, with those prohibitions set to take effect this May. Iannopollo urged organisations to start preparing immediately: "There is a lot to do and little time to do it."
