Big Tech turns up the lobbying fire hose to water down Europe’s new artificial intelligence rules

“The EU legislation is ridiculous. It is structured around categorizing AI systems according to risk levels based on their use case. These new AI systems are not trained to do one specific thing. Even us – the people who created them – don’t actually know what they can and can’t do. I expect that it’s going to be, probably, years before we really know all the things that GPT-4 can and can’t do. And the EU wants to legislate?”

Helen Toner, a member of OpenAI’s board and the director of strategy at Georgetown’s Center for Security and Emerging Technology

24 APRIL 2023 (Paris, France) – As I noted last week, EU lawmakers were in the midst of creating the world’s first binding artificial intelligence rulebook when they were knocked off their butts by ChatGPT and forced to take an unexpected detour. They found their drafts woefully out of date and had to figure out how to fit ChatGPT and generative AI into their proposal to regulate AI.

But … confused as they were … the regulators soldiered on, determined to put the finishing touches on a set of wide-ranging rules designed to govern the use of artificial intelligence – rules that, if passed, would make the EU the first major jurisdiction outside of China to enact targeted AI regulation. That has made the forthcoming legislation the subject of fierce debate and lobbying, with opposing sides battling to ensure that its scope is either widened or narrowed.

The plan was to agree on a final “proposal draft” this week, allowing the law to progress to negotiations between the bloc’s member states and executive branch – a timeline estimated at “2-3 years” by one EU legislator. Mind you, it took the General Data Protection Regulation (GDPR) 5+ years to get through this process, and that was a far, far simpler subject matter with none of the technological complexities involved in artificial intelligence.

The initial drafts of the “EU Artificial Intelligence Act” contained bans on controversial uses of AI, like social scoring and facial recognition in public, as well as requirements that companies declare whether copyrighted material was used to train their AIs. Those provisions seem to be intact.

But one of the Act’s most contentious points is whether so-called “general purpose AI” – of the kind that ChatGPT is based on – should be considered high-risk, and thus subject to the strictest rules and penalties for misuse.

On one side of the debate are Big Tech companies and a conservative bloc of politicians, who argue that to label general purpose AIs as “high risk” would stifle innovation.

On the other is a group of progressive politicians and technologists, who argue that exempting powerful general purpose AI systems from the new rules would be akin to passing social media regulation that doesn’t apply to Facebook or TikTok.

Those calling for general purpose AI models to be regulated argue that only the developers of general purpose AI systems have real insights into how those models are trained, and therefore the biases and harms that can arise as a result. They say that the big tech companies behind artificial intelligence – the only ones with the power to change how these general purpose systems are built – would be let off the hook if the onus for ensuring AI safety were shifted onto smaller companies downstream.

Big Tech companies like Google and Microsoft, which have plowed billions of dollars into AI, are arguing against the proposals, according to a report by the Corporate Europe Observatory, a transparency group. Lobbyists have argued that it is only when general purpose AIs are applied to “high risk” use cases – often by smaller companies tapping into them to build more niche, downstream applications – that they become dangerous, the Observatory’s report states. Google submitted a position paper to the EU regulators, noting:

General-purpose AI systems are purpose neutral: they are versatile by design, and are not themselves high-risk because these systems are not intended for any specific purpose. Categorizing general-purpose AI systems as “high risk” could harm consumers and hamper innovation in Europe.

Microsoft, the biggest investor in OpenAI, made similar arguments through industry groups lobbying against the EU legislation. Microsoft noted:

There is no need for the AI Act to have a specific section on GPAI [general purpose AI]. It is not possible for providers of GPAI software to exhaustively guess and anticipate the AI solutions that will be built based on their software. Further, it means unduly burdening innovation. Our position is that any forthcoming regulations should be assigned to the user that may place the general purpose AI in a high-risk use case, rather than the developer of the general purpose system itself.

All of these issues were discussed in a 2-hour debate on French TV last night. And the overall feeling was that “Europe will once again be left by the side of the road”.

One commentator noted that it was only 3 years ago that the EU Commission was pushing the line that “the European Union will be a leading hub for tech innovation”, adding: “But it is increasingly clear this is wishful thinking. Heavy-handed regulation is making the bloc inhospitable to innovation, and investors are looking elsewhere. The EU’s recent pushback against ChatGPT is just the latest sign that this is not going to change”.

One of the French drafters of the EU’s Artificial Intelligence Act sought to take the bite out of the criticism, pointing out that AI was not nearly so advanced two years ago and was likely to develop further over the next two years – “so fast” that much of what is being proposed in this Act will no longer be appropriate when the law actually takes effect. He said:

For competitive reasons and because we are already behind, we actually need more optimism to deal with AI more intensively. But what is happening in the European Parliament is that most people are being guided by fear and concerns and trying to rule out everything.

He added that the EU member states’ data protection commissioners wanted AI to be monitored by an independent body, and that it would make sense to amend the existing data protection legislation. All this (he said with a wink) as the EU Commission braces for the end of its 5-year term and new elections in 2024.

A French technology industry spokesman said:

It is the usual mess, the European Commission and Parliament trying to strike a balance between consumer protection, regulation and the free development of the economy and research. AI offers immense potential in a digital society and economy, and France is going to miss the train.

It was only 2 years ago, when the bloc’s AI legislation was first presented, that everybody said the EU did not want to drive the developers of AI away but promote them and persuade them to settle in Europe. I get that we do not want to be dependent on foreign providers, and there is an argument that the data for AI should be stored and processed in the EU. But from what I see in the new leaked drafts, this is just heavy-handed regulation.

He argued that applying risk levels to AI applications was both insufficient and nonsensical: companies simply cannot predict today what their AI products might be able to do tomorrow and, as we have seen, they have sometimes been surprised by the results. He said:

If we are too complicated here, then companies will go elsewhere and develop their algorithms and systems there. Then they will come back and use us only as a consumer country, which seems to be the typical EU path.

That was a repeated argument: that the EU placed itself in a bind by structuring the AI Act in an outdated fashion, pretty much as Helen Toner stated in the opening to this post.

This regulatory pushback is symptomatic of Brussels’ tendency to regulate at all costs, which often clashes with its desire to be a technology trailblazer. The timing could hardly be worse. EU politicians and policymakers appear to realise Europe is lagging behind in the global technology race, and the bloc is about to pour billions of euros into quantum computing, microchips and cloud infrastructure to challenge the US and China’s global technological leadership.

Unfortunately, this may all come to nothing if the EU doesn’t also create an accommodating regulatory environment that enables business to thrive, rather than one that prevents European firms from leveraging revolutionary technologies. The recent pushback against ChatGPT is just the latest example of Brussels’ proclivity to regulate, which may be getting slightly out of hand.

The EU recently passed new legislation (the Digital Markets Act and the Digital Services Act) that imposes far-reaching transparency and compliance burdens on online services. This is particularly true for digital platforms that exceed certain user and revenue thresholds, like Amazon, Apple, Google, Microsoft and Meta. In addition, the upcoming Data Act will require companies to share their data with rivals. The draft AI Act, meanwhile, would force ‘high risk’ AIs to go through a market-authorisation regime similar to that currently applied to medical devices. Both the Data Act and the AI Act will delay the deployment of new AI tools.

All of this comes on top of existing legislation, such as the GDPR, which – as I showed in a research report last year – has discouraged venture-capital investment in the Old Continent. Even if these regulations are sound in isolation – a big “if” – they cumulatively place a massive regulatory burden on firms operating in Europe that may discourage investment.

Now all those chickens are coming home to roost.

Europe’s share of global venture-capital investments dropped from around 27% in the 2010s to 15% and 17% in 2020 and 2021 respectively; early data from 2022 suggests that it followed the same trend. This decline is not simply the result of more rapid growth in Asian economies, as the U.S. share of VC investment has remained stable.

At a more granular level, 2023 has seen the emergence not only of LLMs, but also of AI-driven image generators and text-to-video technology. None of the leading players in this field are European. OpenAI (ChatGPT and DALL-E), Midjourney, and Google are all based in the United States, while Stability AI and D-ID are from the UK and Israel respectively. In short, the EU seems to have missed the boat on yet another general-purpose technology.

Europe is also trailing on pivotal technologies like virtual reality and quantum computing. The picture is particularly bleak with regard to quantum computing, which attracts a great deal of the EU’s attention. A recent report by Boston Consulting Group (BCG) warned that, absent deep reforms, “quantum computing could sound the death knell for the EU’s competitiveness and technological independence”.

In short, the EU must decide whether to regulate or innovate. This will be a difficult choice for a union that takes pride in the so-called “Brussels Effect”: the notion that, by being the first and strictest regulator, the EU can impose its vision of regulation globally. Unfortunately, as evidenced by the GDPR, and the early stirrings of the Digital Markets Act and the Digital Services Act, heavy-handed legislation also drags Europe further away from the technological frontier.

I will cautiously venture that it is not too late for Europe to change course. But time is running out – and frankly I just do not see “innovation” in Europe’s DNA.
