Tech companies are adamant that the regulation of artificial intelligence in the E.U. is preventing its citizens from accessing the latest and greatest products. However, civil society groups BEUC and noyb feel otherwise, maintaining that AI developers need to produce products that uphold their customers’ safety and privacy.
Some of the tech giants’ delayed launches in the E.U.
There have been several instances where AI product launches in the E.U. have been delayed or cancelled as a result of regulation. This week, for instance, Meta’s Llama 4 series of AI models was released everywhere except Europe. The company’s AI chatbots integrated into WhatsApp, Messenger, and Instagram only reached the bloc 18 months after their U.S. debut.
Similarly, Google’s AI Overviews currently appear in only eight member states, having arrived nine months later than in the U.S., and both its Bard and Gemini models had delayed European releases. Apple Intelligence has only just become available in the E.U. with the release of iOS 18.4, after “regulatory uncertainties brought about by the Digital Markets Act” held up its launch in the region.
“If certain companies cannot guarantee that their AI products respect the law, then consumers are not missing out; these are products that are simply not safe to be released on the E.U. market yet,” Sébastien Pant, deputy head of communications at the European consumer organisation BEUC, told Euronews.
“It is not for legislation to bend to new features rolled out by tech companies. It is instead for companies to make sure that new features, products or technologies comply with existing laws before they hit the EU market.”
SEE: EU’s AI Act: Europe’s New Rules for Artificial Intelligence
EU regulations push companies to build more privacy-conscious tools
E.U. legislation hasn’t always excluded E.U. citizens from AI products; instead, it has often compelled tech companies to adapt and deliver better, more privacy-conscious solutions. For example, X agreed to permanently stop processing personal data from E.U. users’ public posts to train its AI model Grok after it was taken to court by the Irish Data Protection Commission.
Kleanthi Sardeli, a data protection lawyer working with the advocacy group noyb, told Euronews that users generally don’t anticipate their public posts being used to train AI models, yet that’s precisely what many tech companies are doing, often with little regard for transparency. “The right to data protection is a fundamental human right and it should be taken into account when designing and deploying AI tools.”
Google, Meta claim E.U. AI laws disadvantage citizens, but their revenue is also at stake
Google and Meta have openly criticised European regulation of AI, suggesting it will quash the region’s innovation potential.
Last year, Google published a report detailing how Europe lags behind other global superpowers in AI innovation. It found that only 34% of E.U. businesses used cloud computing technologies in 2022, a critical enabler of AI development, far behind the European Commission’s target of 75% by 2030. Europe also filed just 2% of global AI patents in 2022, while China and the U.S., the two largest producers, filed 61% and 21%, respectively.
The report placed much of the blame on E.U. regulations for the region’s struggles to innovate in advanced technologies. “Since 2019, the EU has introduced over 100 pieces of legislation that impact the digital economy and society. It’s not just the sheer number of regulations that’s the challenge — it’s the complexity,” said Matt Brittin, president of Google EMEA, in an accompanying blog post. “Moving from the regulatory-first approach can help to unlock the opportunity of AI.”
But Google, Meta, and the other tech giants do stand to suffer financially if the rules prevent them from launching products in the E.U., as the region represents a huge market of 448 million people. On the other hand, if they go ahead with launches but break the rules, they could face hefty fines: up to €35 million or 7% of global turnover under the AI Act.
Europe is currently embroiled in multiple regulatory battles with major U.S. tech firms, many of which have already led to substantial fines. In February, Meta declared it was prepared to escalate its concerns over what it saw as unfair regulation directly to the U.S. president.
U.S. President Donald Trump referred to the fines as “a form of taxation” at the World Economic Forum in January. In a speech at February’s Paris AI Action Summit, U.S. Vice President JD Vance disparaged Europe’s use of “excessive regulation” and said that the international approach should “foster the creation of AI technology rather than strangle it.”