Friday, July 4, 2014

Fait divers - Discussions enrich knowledge, most of the time :-)

Earlier today I was in a "tweet" discussion with @BrusselsGeek. I present it at the bottom of this post, so you can analyse the back-and-forth yourself.

Discussions are always enriching as they force you to think about your message (and how it came across) and the other's message (and how you interpreted it). On a topic like data protection, my conclusion is that twitter is perhaps not the best forum. A table in the sun with something fresh is to be preferred in more than one way. :-)

Distrust in the private sector

The starting point was the statement that (relating to the right to be forgotten) private profit-making companies shouldn't be in charge of decisions. An implicit reference to the debate over whether it should be up to Google to determine if and when to actually remove a search result after being requested to do so by a data subject (via the established procedure or otherwise).

The regulation actually empowers all controllers

My knee-jerk reaction to such statements is always: private companies take similar decisions on data processing all the time, e.g.
  • do we use the email addresses in our database for direct marketing even if we do not have the consent (opt-in) of the data subject?
  • do we transfer or even sell this personal data to third parties even if the reference to such possibility is tucked away deeply in our general terms and conditions?
  • do we (re-)use this information (e.g. a bank's transactional data or friends-only posts on Facebook) for a purpose that is not really that close to the original one?
  • do we enrich the data on our customers with data they have published online, by hoovering it up and matching it to our database?
  • etc.
Yes, indeed, if you are a controller of personal data, you can and have to make those decisions yourself. The EU data protection legislation has "only" set the (minimum) bar for controllers to be allowed to process personal data. The legislation does not distinguish between private and public bodies: either can be a controller, if and when it "alone or jointly with others determines the purposes and means of the processing of personal data".

The "guidance" controllers get is embedded in the EU Personal Data Protection Directive. And yes, that text leaves a lot of room for interpretation, especially, but not only, with regard to the bases for legitimate processing (see art. 7 of the Directive for the exact wording), with concepts like:
  • consent (the Art. 29 WP text hereon is not binding)
  • a duty under the law (which - in the interpretation of the authorities - cannot be a law of another country, and especially not a law of a non-EEA country, and which is all too often badly drafted)
  • processing where the balance between the controller's interests and the data subject's interests is not broken (the "balance test")
And no, there is no hand holding or oversight by the data protection authorities. At the very least, they do not have the resources for that. 

The control on controllers is ad hoc. The data subject that suspects something or has discovered misbehaviour can exercise its rights, including the right of access, the right to rectification and the right to block. The data subject can also ask for support from the data protection authorities or the courts. There is little to no (pro-)active control by data protection authorities.

The system fails (?)

In reality, this does not always result in the behaviour we want from controllers. From time to time a case surfaces above the waterline that covers day-to-day practice.

Facebook, as well as Google, has become a scapegoat for such cases:
  • they change their privacy statement and especially the purposes for which the personal data can be used
  • they use personal data in the background for all kinds of "creepy" stuff: knowing what people type and don't post, seeing how people react when the posts they actually see are skewed, etc.
But Facebook is far from alone. My impression is that hardly a single controller is compliant. Some really and honestly try, but being fully compliant on all counts of the data protection legislation is near impossible in an age where (personal) data is one of the key resources of the economy, omnipresent (the so-called datafication) and under the de facto control of anybody with access to it (and a smartphone).

One high-profile specific "fail": Google reacting to the right to block

Now, the European Court of Justice in its recent decision in the González case applied the rules and held that Google had no grounds to reject Mr. González's request to block. This came to be known as a de facto right to be forgotten, a concept that emerged from doctrine but is also enshrined in the new draft EU Data Protection Regulation.

So Google set up procedures to comply with this interpretation of the law, organising itself to reasonably follow the "guidance" the ECJ gave. One can imagine that is not easy, as the ECJ did not aim to give full guidance. It only ruled on that particular case and gave some comments on the side ("obiter dicta"), which are non-binding and not complete.

So comments roll in.
  • "Google acts as if it is God." 
  • "Google shouldn't be deciding on this."
  • "Google is doing a bad job."
But is it really?
  • Google had, or should have had, an internal procedure to respond to data subjects exercising their rights under the PDP legislation. All they did after the ECJ ruling was adapt it to the new situation, make it more uniform so that requests can be processed more easily and faster (like any operations manager would do), and put a spotlight on it, whereas before it was buried deep in the Terms of Service / Privacy Statement.
  • Requests of data subjects exercising their rights under the PDP law have to be assessed on a case-by-case basis. But obviously you want to bring some consistency into the system, so you look at new types of request in more depth, "rule" on them with your privacy A-team, and then instruct your B-team to act the same way, unless there is a relevant distinction to be made. Basically, this is how the stare decisis system of precedent in the US and UK judicial systems works.
  • The decisions by the A-team are guided by the law, and for some requests the law gives latitude to the decision maker. The main example is where the basis for legitimate processing is the balance test of art. 7(f) of the European PDP Directive.

The next step : Outcry for Oversight

The consequence of the negative comments is an outcry for oversight: Google, or search engines in general, should be supervised in their assessment of situations where the legislator has given latitude to the controllers. Why did the legislator do that? Very likely for multiple reasons: they wanted a catch-all, they didn't know themselves, they wanted to be future-proof / technology-neutral, they accepted some suggestions by lobbyists, etc. That is politics.

So should the legislator turn that around and install oversight that goes beyond the current controls (see above)? That is a politically valid request.

My question then is, is that better and can you validly and reasonably organise that?

First, oversight by whom?

(a) By the government? Do we really trust the government? Remember that privacy more or less started off as a "shield" against people we per se have to trust (like doctors, lawyers, ...) and against the government (e.g. the 4th Amendment in the US Bill of Rights). Now, with multinational companies that are bigger than x% of the countries in the world (in terms of market cap or turnover versus GDP), they seem to be the leviathan to be feared and fended off. So has government become the lesser of two evils? That argument sets off an alarm in my head, recalling abuses of data in government environments:
  • do we want a system like the one that was set up for the NSA: an independent FISA court? Hmm, that failed. See Snowden.
  • do we want a system like the one that was set up for SWIFT after the debacle in 2006? Hmm, that failed. See the follow-up report on that.
  • do we want a system like the one that was set up for the DPOs in the EU institutions: the EDPS? Hmm, that is not in the scope of their mandate. And it is de facto really close to the current controls (see above) on other controllers (see e.g. the contribution of Renaudière in the CPDP panel - start somewhere around 33' if you want to focus).
  • do we want a system where the Data Protection Authorities do the oversight? Hmm, then you at least have the general problem of who oversees the overseers. Moreover, from experience in the financial sector, authorities do not want to carry that first-line burden. It makes them "liable" (even if the law exonerates them). Just look at the fate of quite a number of supervisory structures in countries hit by the economic and financial crisis: "They did not see it coming."
I personally am not convinced. You?

(b) By another non-governmental body? Does that give us more comfort? Oversight is human, just as the actual assessment is. It is subject to perspectives, biases, prejudices, etc. Yes, you could install privacy advocates, but won't they skew towards blocking? Or you could install free speech advocates, but won't they skew towards not blocking? We can put both in place to balance each other, but does that not bring us back to the starting point, where it is Google's responsibility to try to strike that balance?

Let me add some personal experience here, and I think any compliance officer in a financial institution will in whole or in part relate: after setting up policies with departments in a company, after advising in specific cases, after controlling the implementation of the policies, ... for years, you become biased yourself. The thing is that you have to know it, be aware of it. So in cases where I knew the stakes were high, I consulted third parties, gave them my arguments and explicitly asked them to chop them down, forcefully. I call that the House approach, after Dr. Greg House, yes, from the TV series. You may think you have the answer and be 99% sure that you do, but you want to do the right thing, so you have it challenged by the best people you can find.

I have no inside information on how Google does it, but with the eyes of quite a lot of groups with different angles on them, I can imagine that they are very aware of their responsibility and do apply the House approach or something similar. In any case, I am prepared to give them the benefit of the doubt. And yes, they will make mistakes. If the number of files runs into the tens of thousands, statistically, they are bound to. But if you tackle that with any type of oversight, the law of large numbers gives me a hunch that that oversight will fail at times as well.

So my question remains, as I do not have the answer: is there a solid way of oversight that is better than the current system, where:
  • the controller is responsible "in the front line"
  • the data subject has options to challenge further, before DPAs and courts
I am open to suggestions.

The last step : ... but only for search engines, they are special

Search engines are special in the sense that they provide the specific added value of retrieving things on the internet, which is basically just a giant haystack. So, the argument goes, they should have that oversight over them.
  • I refer to my general argument that oversight going beyond what is installed now is, in my opinion, not really value-adding.
  • But I dare to add the question: are they really that special? What about social platforms, where things get picked up and spread by virtual word of mouth? What about banks and payment institutions that have information on your transactions that is at times more "valuable" than the metadata of phone calls? What about cloud providers that, at least in theory, have access to so much data that big data solutions can spin quite a lot of derivatives out of it? Where does it stop?
And all of the above (and more) was, tweet-wise, condensed into the reply:
A bit short sighted. All "controllers" make such decisions in assessing a request by data subjects, setting up data processes

So I end where I began: this is an interesting discussion that should have taken place on a terrace in the sun with a fresh drink. Perhaps an idea to engage in... after the World Cup, of course. ;-)


THE END 
... for now

  1. That's precisely what I meant- the ruling clarifies that a search engine is a controller. It doesn't change my general view
  2. But then you challenge the foundation of the DP legislation, namely that complying corporations can decide to process data
  3. 2/2 that they shouldn't be unilaterally removing links to (perfectly accurate) information Need external oversight at least
  4. Which bit of DP legislation? The current draft Reg, or the old Dir that the ruling was based on?
  5. all controllers make similar assessments all the time, you can't have oversight on all those decisions
  6. Either way,"processing" certainly should be more narrowly defined.
  7. that would narrow the scope of the legislation which would lower the protection offered by law
  8. There you're only looking at *personal* data willingly given by the data subject. Totally not the case with RTBF requests.
  9. info gathered without consent and elsewhere is in as well, art 7f and 11 PDP Directive (basis for ECJ)
  10. Then why on earth did you post a link to an irrelevant blog? None of this makes any difference to the underlying problem.
  11. You want oversight, I say then you should install that for all controller as all decide on "fluffy" grounds (ao7f)
  12. You're forgetting the issue of scope. Search engines aren't like other controllers.
  13. That's my point, they are not. The ECJ does not say that. "RTFB" was exercised against the newspaper also and denied.
