Europe is building a huge international facial recognition system. Privacy issues? Eh.


An upcoming law will let police forces across the EU link photo databases and allow facial recognition to be used on an unprecedented scale.

PLUS: the European Union has agreed on another ambitious piece of legislation to police the online world. Get out the 🍿

BY:

Salvatore Nicci
Technology Analyst / Reporter
PROJECT COUNSEL MEDIA

25 April 2022 (Paris, France) – For the past 15 years, police forces searching for criminals in Europe have been able to share fingerprints, DNA data, and details of vehicle owners with each other. If officials in France suspect someone they are looking for is in Spain, they can ask Spanish authorities to check fingerprints against their database. Now European lawmakers are set to include millions of photos of people’s faces in this system – and allow facial recognition to be used on an unprecedented scale.

The expansion of facial recognition across Europe is included in wider plans to “modernize” policing across the continent, and it comes under the Prüm II data-sharing proposals. The details were first announced this past December, but criticism from European data regulators has gotten louder in recent weeks, as the full impact of the plans has been understood.

Why? Because what is being created is the most extensive biometric surveillance infrastructure in the world – yes, even greater than what the U.S. has at present. And what is interesting is that for Europe – a continent that prides itself on its data protection regimes – it reveals how many member states have pushed for facial recognition to be included in the international policing agreement.

The first iteration of Prüm was signed by seven European countries – Belgium, Germany, Spain, France, Luxembourg, the Netherlands, and Austria – back in 2005 and allows nations to share data to tackle international crime. Prüm II plans to significantly expand the amount of information that can be shared, potentially including photos and information from driving licenses. The proposals from the European Commission also say police will have greater “automated” access to information that’s shared. Lawmakers say this means police across Europe will be able to cooperate closely, and the European law enforcement agency Europol will have a “stronger role.”

The inclusion of facial images and the ability to run facial recognition algorithms against them are among the biggest planned changes in Prüm II. Facial recognition technology has faced significant pushback in recent years as police forces have increasingly adopted it, and it has misidentified people and derailed lives. Dozens of cities in the US have gone as far as banning police forces from using the technology (though over 2,500 U.S. law enforcement agencies and police departments ignore that ban and still use it). The EU is debating a ban on the police use of facial recognition in public places as part of its AI Act.

However, Prüm II still allows the use of retrospective facial recognition. This means police forces can compare still images from CCTV cameras, photos from social media, or those on a victim’s phone against mug shots held on a police database. The technology is different from live facial recognition systems, which are often connected to cameras in public spaces; these have faced the most criticism.

The European proposals allow a nation to compare a photo against the databases of other countries and find out if there are matches – essentially creating one of the largest facial recognition systems in existence. One leaked document notes the number of potential matches could range between 10 and 100 faces, although this figure needs to be finalized by politicians. A European Commission spokesperson says that a human will review the potential matches and decide if any of them are correct, before any further action is taken. “In a significant number of cases, a facial image of a suspect is available,” France’s interior ministry said in the documents. It claimed to have solved burglary and child sexual abuse cases using its facial recognition system.

The Prüm II documents, dated from April 2021, when the plans were first being discussed, show the huge number of face photos that countries hold. Hungary has 30 million photos, Italy 17 million, France 6 million, and Germany 5.5 million, the documents show. These images can include suspects, those convicted of crimes, asylum seekers, and “unidentified dead bodies,” and they come from multiple sources in each country.

Data privacy proponents note that while their criticism of facial recognition systems has mostly focused on real-time systems, those that identify people at a later date are still problematic. Why? When facial recognition is applied to footage or images retrospectively, the harms can sometimes be even greater, because of the capacity to look back at, say, a protest from three years ago, or to see whom someone met five years ago because they are now a political opponent.

And while the official proposal says that pictures of people’s faces won’t be combined in one giant central database, police forces will still be linked together through a “central router.” Allegedly, this router won’t store any data and will only act as a message broker between nations. But under the new infrastructure, countries will need only one connection to the central router, making it easier to add additional data categories to the system and obtain the information they need.

What is more interesting is that right-wing governments – Hungary, Slovenia, Poland – have been pushing for greater expansion, calling for people’s driving license data to be included, as one example.

Of course, there are significant concerns about the differences between police databases and who is included. Police databases are often poorly put together. In July 2021, police in the Netherlands deleted 218,000 photos that had been wrongly included in their facial recognition database. In the UK, more than a thousand young Black men were removed from a “gangs database” in February 2021. You could have databases with completely different backgrounds in terms of how the data was collected, where it was sourced, how it was exchanged, and who approved what. This could lead to misidentification.

But the biggest challenge for data privacy proponents is that Prüm II will simply “normalize” the use of facial recognition by police forces across Europe. Their concern is that the Prüm II proposal will incentivize the creation of facial image databases and the application of algorithms to these databases to perform facial recognition. 

The big picture? Sixty years after being invented (in 1962), facial recognition is really just getting started. Today, facial recognition has become a security feature of choice for phones, laptops, passports, and payment apps. It promises to revolutionize the business of targeted advertising and speed the diagnosis of certain illnesses. It makes tagging friends on Instagram a breeze.

Yet it is also, increasingly, a tool of state oppression and corporate surveillance. In China, the government uses facial recognition to identify and track members of the Uighur ethnic minority, hundreds of thousands of whom have been interned in “reeducation camps.” In the U.S., according to The Washington Post, Immigration and Customs Enforcement and the FBI have deployed the technology as a digital dragnet, searching for suspects among millions of faces in state driver’s license databases, sometimes without first seeking a court order. In 2020, an investigation by the Financial Times revealed that researchers at Microsoft and Stanford University had amassed, and then publicly shared, huge data sets of facial imagery without subjects’ knowledge or consent. (Stanford’s was called Brainwash, after the defunct café in which the footage was captured.) Both data sets were taken down … but not before researchers at tech startups and one of China’s military academies had a chance to mine all of it.

But what is different is that, unlike other world-changing technologies whose apocalyptic capabilities became apparent only after years in the wild (see: social media), the potential abuses of facial-recognition technology were apparent almost from its birth. Many of the biases we talk about today – the sample sets skewed almost entirely toward white men; the seemingly blithe trust in government authority; the temptation to use facial recognition to discriminate between races – were all discussed in the mid-1960s when facial recognition was first being developed, and they continue to dog the technology today.

The Digital Services Act: get out the 🍿!!

And so the EU has agreed on another ambitious piece of legislation to police the online world. Early Saturday morning after hours of negotiations, the bloc agreed on the broad terms of the Digital Services Act, or DSA, which will force tech companies to take greater responsibility for content that appears on their platforms. New obligations include removing illegal content and goods more quickly, explaining to users and researchers how their algorithms work, and taking stricter action on the spread of misinformation. Companies face fines of up to six percent of their annual turnover for non-compliance. “The DSA will upgrade the ground-rules for all online services in the EU,” said European Commission President Ursula von der Leyen in a statement. “It gives practical effect to the principle that what is illegal offline, should be illegal online. The greater the size, the greater the responsibilities of online platforms.” 

The DSA shouldn’t be confused with the DMA or Digital Markets Act, which was agreed upon in March. Both acts affect the tech world, but the DMA focuses on creating a level playing field between businesses, while the DSA deals with how companies police content on their platforms. The DSA will therefore likely have a more immediate impact on internet users.

The full, final text of the DSA has yet to be released, but the European Parliament and European Commission detailed a number of major obligations:

• Targeted advertising based on an individual’s religion, sexual orientation, or ethnicity is banned. Minors cannot be subject to targeted advertising either

• “Dark patterns” — confusing or deceptive user interfaces designed to steer users into making certain choices — will be prohibited. The EU says that, as a rule, cancelling subscriptions should be as easy as signing up for them

• Large online platforms like Facebook will have to make the working of their recommender algorithms (e.g. those used for sorting content on the News Feed or suggesting TV shows on Netflix) transparent to users. Users should also be offered a recommender system “not based on profiling.” In the case of Instagram, for example, this would mean a chronological feed (as it recently introduced)

• Hosting services and online platforms will have to explain clearly why they have removed illegal content, as well as give users the ability to appeal such takedowns. The DSA itself does not define what content is illegal, though, and leaves this up to individual countries

• The largest online platforms will have to provide key data to researchers to “provide more insight into how online risks evolve”

• Online marketplaces must keep basic information about traders on their platform to track down individuals selling illegal goods or services

• Large platforms will also have to introduce new strategies for dealing with misinformation during crises (a provision inspired by the recent invasion of Ukraine).

Both the DMA and the DSA need to be skewered because of the obvious fallacies and impossibilities of execution. I’ll do that in a subsequent post. But just a few points regarding the DSA:

• The targeted advertising ban will be the trickiest, and most likely a lawyer’s field day. How, say, do hairdressers specialising in ethnic hairstyles find clients without ethnic targeting? And does this mean gay night clubs cannot specifically target the gay population?

• And policing “dark patterns” will be fun. Tell me, dear regulator, who exactly decides it’s “dark”, and who exactly forces the change? Clearly they brought no technical experts into this thing to explain how the pipes + wires + tubes of the internet and platforms work.

• And the recommendation algorithm element? Hoo, boy, is that putting the cat among the pigeons. That needs a separate essay.

• What’s more, the DSA will likely come into force more quickly than the Digital Markets Act. That is bassackwards. 

Strap yourselves down. It’s 🍿 🍿 time!
