The run-up to LegalTech 2016: Safe Harbor 2.0 – the (im)possible dream?


[Image: a doomsday clock]
 

By: Gregory P. Bufithis, Founder/CEO

 

 

28 January 2016 – This week in Brussels we have the 9th edition of the International Conference on Computers, Privacy and Data Protection (#CPDP2016 for the Twitterverse) and the timing could not be better.  This morning the two lead negotiators in the Safe Harbor 2 saga … Julie Brill of the U.S. Federal Trade Commission and Paul Nemitz of the data protection unit of the European Commission … reconvened a year after their initial meeting here and chatted about the state of play of cross-border data flows between the world’s two largest trading blocs. And they gave away little (more on that in a bit) as they talked about how privacy interacts with national security, law enforcement and economic considerations.

 

Note: CPDP is a non-profit platform established in 2007 by research groups from the Vrije Universiteit Brussel, the Université de Namur and Tilburg University. Over the years, it has been joined by 20 academic centers from the EU, the US and worldwide. The platform, via its conferences, gathers a variety of stakeholders to exchange ideas and discuss the legal, regulatory, academic and technological developments in privacy and data protection. This year’s edition is hosting more than 60 panels and workshops, with 100+ speakers, as well as side events such as open debates, performances and artistic events.

 

 

Oh, they had a few hints about the progress of Safe Harbor and the impact of the new General Data Protection Regulation. But it was pretty much the same old trope about how the invalidation of the EU Commission’s Safe Harbor decision by the Court of Justice of the European Union (sometimes referred to below as the “ECJ”) has “raised the stakes for the transatlantic data debate”. And a need for “adequacy” in data protection, with a review of additional judicial decisions past and expected on both sides of the Atlantic, and how policymakers have “engaged in extensive negotiations to find solutions that address privacy and civil liberties, national security and law enforcement, and industrial and economic concerns”.

 

 

But a most important note: “any agreement must be able to withstand court scrutiny.”  Anticipating Schrems II, are we?

 

 

With one delicious last line by Nemitz: “we will have something to report to the EU Parliament by Monday night”. My sources tell me there is a full court press in motion by both sides as this post goes out, following an 18-hour session yesterday.

 

 

And the clock, she ticks

 

U.S. and EU Commission officials have been stating … over and over … that they are doing everything possible to reach agreement over transatlantic data-sharing before a critical deadline at the end of this week. Well, by next Tuesday, 2 February, when the Article 29 Working Party (composed of all the EU member states’ data protection authorities, plus others) meets.  Hence Paul Nemitz’s quote above about “something to report by Monday night”.

 

Because the Working Party (contradiction in terms?) has warned it would “take all necessary and appropriate actions, which may include coordinated enforcement actions” if the deadline was not met.  A photo of its chairman:

 

[Photo: Darth Vader]

 

And U.S. and European Commission officials have kicked it up a notch or two. They held a marathon 15-hour session on 15 January at the U.S. Embassy in Paris, in an attempt to stay away from Brussels’ prying eyes.  The U.S. team arrived in Brussels this past Monday and plans to be here “through the weekend if necessary” to cut a deal.

 

 

Note to the uninformed: Under the Safe Harbor agreement, personal and private information on European citizens was allowed to leave the Continent and be stored in the U.S. – provided the U.S. respected people’s privacy. The revelations of the NSA’s blanket surveillance of the internet shattered that trust, and so the agreement was scrapped. That’s a big problem for Silicon Valley. The only stakeholder that matters, it seems.

 

 

The most curious comment so far this week comes from Deputy General Counsel of the U.S. Department of Commerce, Justin Antonipillai, who was quoted in Politico as saying:

 

“We’ve presented a very strong proposal and foundation to help the European Commission react to the findings that have been made. But time is not on our side. We are committed to do what we can within limits.”

 

That a senior official would be quibbling over 48 hours for talks that were started two years ago and have been going on intensively for three months is certainly a sign that things are not going well.

 

 

A rundown of the “Big fight”

 

I have followed this entire process for 2+ years attending numerous EU Parliament and EU Council hearings, plus my fabled visit to Luxembourg last March for the Court of Justice Schrems hearing. And I have had the benefit of some insider sources, too.

 

The negotiations have been a remarkable battle between an economically dominant U.S. and a privacy-respecting Europe. The EU’s digital economy representative to the U.S., Andrea Glorioso, has pointed out in numerous interviews that the European Commission had developed 13 recommendations for changing the Safe Harbor agreement more than two years ago, after the extent of US government spying – which included grabbing and storing internet data from such services as Facebook, Google and Twitter – was revealed:

 

 

“Following Snowden’s revelations and the impact they had on the European public, rather than suspending the arrangement, we said Safe Harbor has to be improved, strengthened. We have been in discussion since October 2013 on those recommendations.”

 

 

Except … not one of those recommendations was implemented by the US before the European Court of Justice struck down the agreement.

 

Oops.

 

Since October, occasional leaks over the negotiations have repeatedly pointed to intransigence on the part of the U.S. intelligence services as the main stumbling block. I will address that below.

 

Back in October, EU Justice Commissioner Vera Jourová said that the EC’s position was that blanket surveillance of Europeans by the NSA should be subject to judicial review. The intelligence agencies pushed back heavily on that, prompting Dutch justice minister Ard van der Steur to say in December that he didn’t think an agreement was going to be possible before the end of January.

 

And all of this week that topic was repeatedly referenced by Antonipillai and Glorioso. Antonipillai in The Register earlier this week:

 

 

“The ECJ judgement required the Commission to look at the framework within the context of US law and with a commitment to work together with how intelligence agencies operate. What was not in the decisions – and this is important – there were no findings about US national security law and no findings about how US law enforcement works. The negotiating teams had spent a lot of time ensuring that citizens have many means to pursue legal remedies”.

 

 

He has also noted that they had to be careful that companies were “not subject to all 28 DPAs,” referring to those damned independent data protection agencies of the European Union.

 

 

Oh, and the dear and delightful Max Schrems has been buzzing all around Brussels this week, speaking with anybody who shoves a microphone in his face or wants a selfie with him (I declined). He is also front and center tomorrow on numerous panels focused on Safe Harbor as well as the Microsoft/Ireland case which I believe will have a much greater impact on the “real world” of data protection.

 

 

The PR game

 

All sides are using lobbyists and lawyers to the fullest extent possible, paying for media placement of puff pieces to promote their side, or bombasts to demean the other.

 

One of the most amusing: an opinion on the European Court of Justice ruling by Geoffrey Robertson QC, a rather well-known human rights barrister. He has published an “independent opinion” commissioned by Facebook, which has been affected by the ECJ ruling. The social network has been lobbying heavily against the decision, alongside other big technology groups.

 

According to Mr Robertson, the U.S. is “better at protecting its citizens from government snooping than Europe”.  He said that “invalidating the safe harbour agreement was based on trusting news reports of revelations by Edward Snowden rather than a thorough investigation of US law”. And he added that “the U.S. had become more ‘privacy friendly’ than Europe”.

 

After I cleaned up the coffee I had spilled over myself, and got back into my chair, I reflected.

 

No doubt there are many examples of data protection violations in Europe, but this does not define the EU charter’s level of protection.  I agree with Max Schrems on this one:  “To blame the EU for stuff that member states are doing is like blaming the U.S. federal government for stuff Texas is doing, when it has no jurisdiction over it.”

 

In general, privacy advocates have argued that the EU has more stringent rules on data protection. U.S. officials have challenged this view, saying that protections across the Atlantic are of a similar standard but merely enforced in a different way. They have repeatedly argued in private that European intelligence agencies employ practices that are often more egregious than those of their American counterparts, and that EU citizens enjoy few safeguards against such methods.

 

 

I am going to ignore (for the moment) the ECJ pretending that Europe has this “oh so brilliant” set of safeguards.  And I will give points to the U.S. — many European governments have adopted more invasive practices because of increased concern about terrorism in Europe. For example, the UK is pursuing the Investigatory Powers Bill, while in France emergency powers permit broad data collection.

 

But this quest for essential equivalence is nonsense. Because European intelligence agencies ain’t on the table.  The ECJ ruled that Safe Harbor was no longer valid because the European Commission could not guarantee the privacy of EU citizens, based on U.S. violations and actions … not anything done in the EU.

 

The real stumbling blocks

 

One of the issues often discussed during these negotiations has been that U.S. data laws present a thicket of hoops that foreign law enforcement agencies have to jump through in order to obtain the contents of customer data – such as emails – held in the U.S. The Electronic Communications Privacy Act requires foreign law enforcement agencies to present a request for the data to the Justice Department. The department reviews that order, and it is then forwarded to a U.S. attorney, who goes to a judge, obtains a warrant, and presents it to the company in question. The requested data is then routed back to the foreign government via the Justice Department, a process that can take on the order of 10 months.

 

That issue probably has more merit in the Microsoft/Ireland case.  But it is used in the Safe Harbor arguments with regard to legislation that would grant U.S. privacy rights to Europeans … now delayed in the Senate, which may complicate the Safe Harbor 2 negotiations.  Called the “Judicial Redress Act”, it would allow citizens of European allied countries to sue over data privacy in the United States.  It was due to be finally voted on today (Thursday, the 28th) but it has been withheld from a scheduled vote.  The recurring “let’s-make-it-as-difficult-as-we-can-for-Obama-on-everything” Republican poison is said to be to blame.

 

And, say insiders, the national security exemptions have proved a major sticking point. One source told me “we have argued that the national security exception foreseen by safe harbor is to be used only to an extent that is strictly necessary or proportionate. But that goes against the NSA and its proclivity for data slurping. The hands of the U.S. team may be tied”.

 

Plus those “13 recommendations for changing the Safe Harbor agreement” submitted by the Commission: only two have been implemented.

 

Yet both Julie Brill and Paul Nemitz expressed “confidence” so we shall eagerly await Monday night’s announcement.  And Deep Throat promises me advance notice.

 

 

Privacy – a personal note

 

Privacy, as we understand it, is only about 150 years old. Humans do have an instinctual desire for privacy. However, for 3,000 years, cultures have nearly always prioritized convenience and wealth over privacy. And I think cutting edge health technology will force people to choose between an early, costly death and a world without any semblance of privacy. Given historical trends, the most likely outcome is that we will forgo privacy and return to our traditional, transparent existence.

 

Most humans living throughout history had little concept of privacy in their tiny communities. Sex, breastfeeding, and bathing were shamelessly performed in front of friends and family. The lesson from 3,000 years of history is that privacy has almost always been a back-burner priority. Humans have invariably chosen money, prestige or convenience when those have conflicted with a desire for solitude. Privacy has existed for only a sliver of human existence.

 

The first 150 years of American life saw an explosion of information technology, from the postcard to the telephone. As each new communication method gave a chance to peek at the private lives of strangers and neighbors, Americans often (reluctantly) chose whichever technology was either cheaper or more convenient. Privacy was a secondary concern.

 

In fact, it was the rampant, unauthorized use of the likeness of President Grover Cleveland’s wife, Frances, in product advertisements that eventually led to the nation’s first privacy law. In 1903, the New York legislature made it an offense to use someone’s likeness for commercial purposes without consent, punishable by a fine of up to $1,000.

 

And indeed, for most of the 19th century, privacy was practically upheld as a way of maintaining a man’s ownership over his wife’s public and private life – including physical abuse:

 

“We will not inflict upon society the greater evil of raising the curtain upon domestic privacy, to punish the lesser evil of trifling violence”

 

– from the 1868 decision State v. Rhodes, where the court decided that the social cost of invading domestic privacy was greater than that of wife beating

 

Lovely.

 

As history shows, the need for privacy evolved over time, for multiple reasons. In fact the “right to privacy” is said to have been officially acknowledged as a political right by virtue of the now famous Harvard Law Review article of December 1890 by Samuel Warren and future U.S. Supreme Court Justice Louis Brandeis, which pleaded for a “right to privacy” and said:

 

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

 

Interestingly enough, the right to privacy was justified on the very grounds for which it is now so popular: technology’s encroachment on personal information.

 

 

But that right of privacy has existed for only a sliver of human existence.  And while many continue to argue the need for privacy, technology has reversed the ability to maintain it.

 

One of the most telling points in the discussions over Safe Harbor 2 has been the difficulty that regulators, legislators and others have in keeping abreast of developments in information and communication technologies … and their lack of understanding of how these complex systems work … so that their supervision, oversight and sanctions with regard to privacy can be based on an adequate understanding of practices and trends.

 

It’s a problem.  In today’s hyper-connected, always-on society, the whole notion of isolation is quickly becoming obsolete.  That organizations can tell whether or not someone is pregnant based on their buying habits is well-covered territory. In the U.S., the idea that organizations are always watching and learning from their customers is now just a part of life.

 

And that big monster called Big Data makes it possible for analytics run against aggregated customer data to be reverse engineered to reveal personally identifiable information.  The process of sifting through and analyzing structured and unstructured data to gain new insights about customers is hardly new. Big Data represents the evolution of the technology that allows you to perform these tasks in real time across massively distributed platforms and retain the data far longer.
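To make that concrete, here is a minimal, entirely invented sketch of the kind of “linkage attack” privacy researchers describe: a de-identified analytics extract is joined to a public dataset on shared quasi-identifiers (postal code, birth year, gender), and names reattach themselves to supposedly anonymous behavior. Every name, value and column below is hypothetical.

```python
# A minimal sketch of a "linkage attack". All data is invented for illustration.
import pandas as pd

# Hypothetical "anonymized" purchase analytics: names stripped, but
# quasi-identifiers kept for segmentation.
analytics = pd.DataFrame({
    "zip":        ["02139", "02139", "10001"],
    "birth_year": [1984, 1990, 1975],
    "gender":     ["F", "M", "F"],
    "purchases":  ["prenatal vitamins", "razor blades", "multivitamins"],
})

# Hypothetical public dataset (think voter roll) sharing the same quasi-identifiers.
public_record = pd.DataFrame({
    "name":       ["Alice Example", "Bob Example"],
    "zip":        ["02139", "02139"],
    "birth_year": [1984, 1990],
    "gender":     ["F", "M"],
})

# A simple join re-attaches names to "anonymous" behavior.
reidentified = analytics.merge(public_record, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "purchases"]])
```

The toy rows are not the point; the point is that the more data is aggregated and retained, the more quasi-identifiers accumulate, and the easier that join becomes.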

 

And the longer the data is retained, the greater the risk of private or personally identifiable information – names, social security numbers, addresses, driver’s license numbers – being leaked or stolen.

 

But go to any conference these days that discusses data privacy (even this one in Brussels) and what you have is lawyers and policy wonks discussing the political, legal, sociological, and psychological issues of privacy. There is never a technical/scientific speaker to discuss how technology has simply eroded our privacy … some of which we have surrendered willingly.

 

In Europe the privacy and data protection commissioners and mandarins are only slowly realizing that their ambitious plans need to be reworked because of the technology.

 

One EU exception: ENISA, which is the European Network and Information Security Agency. I was able to get on their conference invite list and I have attended several of their events. Last year one event covered “the reality” of what can and cannot be done with respect to privacy given privacy eroding technologies.

 

Case in point: the “right to be forgotten”.  It just cannot be done.  In one of the most brilliant, most compelling presentations (on Google, the internet, and how personal information ends up in the analytical whirlpool of big data) it was shown how, almost inevitably, all data becomes orphaned from any permission framework that the discloser granted for its original use. Machine learning systems, commercial and otherwise, end up deriving properties and models from the data until the replication, duplication and derivation of that data can never hope to be controlled or “called back” by the originator. You cannot “cancel” data.  The complexity of networks does not permit anything to be “forgotten”.
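For readers who prefer to see it rather than take it on faith, here is a minimal sketch of why deletion is not forgetting. The data and the model are invented for illustration; the simplest possible least-squares fit stands in for any downstream analytics or machine-learning system. Once a record has been used to fit the model, deleting that record from the source dataset does nothing to the model, or to any of its copies and derivatives.

```python
# A minimal sketch of why deleting a record does not "un-train" a model.
# Data, features and the model are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 hypothetical user records, 3 features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=100)

# Fit a least-squares model on the full dataset (a stand-in for any
# downstream analytics or machine-learning system).
w_deployed, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Right to be forgotten": record 0 is deleted from the source data ...
X_after, y_after = X[1:], y[1:]

# ... but the deployed model is untouched; its parameters still encode
# record 0, as do every cached score, derived model and replicated copy.
print("deployed model after source deletion:", w_deployed)

# Only a full retrain, on every copy, everywhere, would remove the record's
# influence -- which is precisely what cannot be guaranteed in practice.
w_retrained, *_ = np.linalg.lstsq(X_after, y_after, rcond=None)
print("hypothetical retrain from scratch:   ", w_retrained)
```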

 

Alas, food for another post.  Enjoy LegalTech.

 

 

 

 
