Sunday, 23 October 2011

Privacy icons and privacy nudges – how do we leave the world of the ubergeek?


One of the most thought-provoking presentations on privacy I’ve seen in many weeks was delivered by Patrick Gage Kelly last Tuesday. He was speaking at a privacy workshop organised by the GSM Association in Central London, and is involved in a considerable body of academic research under way at Carnegie Mellon University, which has established the CyLab Usable Privacy and Security Laboratory. This lab brings together researchers working on a diverse set of projects related to understanding and improving the usability of privacy and security software and systems. The researchers employ a combination of three high-level strategies to make secure systems more usable: building systems that "just work" without involving humans in security-critical functions; making secure systems intuitive and easy to use; and teaching humans how to perform security-critical tasks.

Let’s get this straight. Patrick is not an ubergeek. He is determined, however, to ensure that privacy does not become an issue controlled only by ubergeeks, as it’s clear that when they are in charge, the rest of us can have little idea of what’s going on, and can’t make proper choices about how we would like our personal information to be used. And, for the most part, we can’t make these proper choices because those designing privacy systems make the choice mechanisms fiendishly difficult to operate.

To give us an example, Patrick tore into the privacy dashboard that has been built into the new online behavioural advertising initiative, started in the USA and currently being rolled out in Europe. I blogged about this on 4 September. Patrick made the point that unless users actually understood the choices presented to them, and actually knew where to look on the screens to find the right drop-down menus and click the right bits to register their objections, then the opt-out mechanism offered somewhat limited privacy protection. Perhaps this is why the current opt-out rate is low. When I say low, the figure of 0.0002% (based on ads shown to users) was mentioned by the person who runs Evidon, the solution provider behind the Advertising Option Icon initiative. It was a great pity that the Evidon representative was not able to refute the quite troubling points that Patrick raised: he had left the building just before Patrick rose to speak. But he did know what Patrick was going to say – evidently, he had heard Patrick speak before.

Given that some users are now being served some 1,100 ads per week by Google as they surf the internet, an opt-out rate as low as 0.0002% is mighty impressive. Those promoting the scheme see the Advertising Option Icon initiative, with its preference-management pages that let users alter what the ad provider thinks of them, as the ultimate cookie. Is this the data protection equivalent of the mythical ring in Tolkien’s famous saga? Have we found the one cookie to rule them all?
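
For those who like to see the arithmetic, here is a quick back-of-the-envelope sketch in Python. It simply combines the two figures quoted above (0.0002% of ads shown resulting in an opt-out, and roughly 1,100 ads served per week); the model itself is my own rough illustration, not Evidon’s methodology.

```python
# Rough illustration only: what does a 0.0002% opt-out rate per ad shown
# mean for a user who sees about 1,100 behaviourally targeted ads a week?
# Both figures are as quoted at the workshop; the calculation is mine.

OPT_OUT_RATE = 0.0002 / 100   # 0.0002% expressed as a fraction of ads shown
ADS_PER_WEEK = 1_100

ads_per_year = ADS_PER_WEEK * 52
expected_opt_outs_per_year = ads_per_year * OPT_OUT_RATE
years_per_opt_out = 1 / expected_opt_outs_per_year

print(f"Ads served per user per year: {ads_per_year:,}")
print(f"Expected opt-outs per user per year: {expected_opt_outs_per_year:.3f}")
print(f"Roughly one opt-out per user every {years_per_opt_out:.1f} years")
```

On those assumptions, a typical user would register an opt-out roughly once every nine years of browsing – which rather supports Patrick’s point about how few people ever find the controls.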

Patrick was not so sure. As far as he was concerned, users wanted protections that didn't break things. Too often, one set of configurations simply messes up existing services. How often, for example, do we need to reconfigure the “pop-up blocker” on our laptops so that a favoured website can work as originally designed? Apparently, users have found that the:
• privacy tools they are presented with are usually hard to understand and configure;
• privacy terminology is confusing, as people simply aren’t familiar with these concepts; and
• privacy tools provide little or no feedback, which leads many to think they have configured the tools to block trackers when, in fact, they haven’t.

Patrick’s main point was that nudges are really hard to incorporate in the privacy sphere, as their purpose is usually to steer users, through soft paternalism and well-known psychological biases, in a direction that is considered to be beneficial. But what is beneficial in the privacy sphere? And how should this be expressed to the user?

It can’t be the case that it is always better to safeguard our privacy. If that were the case, Facebook would close down tomorrow. The whole point of the exercise is that Facebook is an outstanding example of self-promotion. People love it because, in this new world, we are all celebrities (to varying degrees). But a well designed Facebook privacy nudge might work if, as well as being given the standard options of sharing with their friends, or with friends of friends, users were shown the total number of friends and friends of friends – so the user could appreciate just how many people would be able to see the post. Will Facebook take up this idea? Well, they just might. They haven’t said no, yet.
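
To make the idea concrete, here is a minimal sketch of how such a nudge might calculate the audience figure before a post is shared. The friend graph and the numbers below are entirely made up for illustration; nothing here reflects how Facebook actually stores or exposes its social graph.

```python
# Hypothetical sketch of the audience-size nudge described above.
# The friend graph is invented purely for illustration.

def audience_size(friends, friends_of_friends_map, setting):
    """Return how many people could see a post under a given sharing setting."""
    if setting == "friends":
        return len(friends)
    if setting == "friends of friends":
        # The user's friends plus everyone their friends are connected to.
        audience = set(friends)
        for friend in friends:
            audience |= friends_of_friends_map.get(friend, set())
        return len(audience)
    raise ValueError(f"unknown setting: {setting}")

# Made-up example: 200 friends, each with 150 further connections.
my_friends = {f"friend_{i}" for i in range(200)}
their_friends = {f: {f"contact_{f}_{j}" for j in range(150)} for f in my_friends}

for setting in ("friends", "friends of friends"):
    n = audience_size(my_friends, their_friends, setting)
    print(f"Sharing with '{setting}' means roughly {n:,} people could see this post.")
```

The nudge itself is then just a matter of surfacing that number next to the sharing control – “about 30,200 people could see this” is a rather more sobering prompt than a drop-down labelled “friends of friends”.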

And, whether the protesters like it or not, targeted advertising at least serves to offer material to a device user that may be more, rather than less, relevant to their recent interests (as expressed through their browsing behaviour).

So where should we go from here?

We certainly shouldn’t give up – but we should redouble our efforts to dumb down privacy notices. Context matters, not long legalistic documents that simply protect a data controller. Controllers should try to make their privacy labelling clearer – and should take great care not to use colours and symbols that carry good or bad connotations. That approach is simply likely to scare people, when one choice can actually be just as valid as another, so long as the user appreciates the consequences of their choice.
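
By way of illustration only, here is one hypothetical way a choice screen could present each option alongside its consequence in plain, neutral language, without colour-coding either option as “good” or “bad”. The wording and options are my own example, not any scheme’s actual text.

```python
# Illustrative sketch: present two equally valid choices neutrally,
# stating the consequence of each, rather than steering the user
# with red/green colouring or warning symbols.

CHOICES = {
    "personalised ads": "You will see ads chosen to match your recent browsing.",
    "generic ads": "You will see the same number of ads, chosen without "
                   "using your browsing history.",
}

def present_choice(choices):
    print("How would you like ads to be selected? Either option is valid.")
    for option, consequence in choices.items():
        print(f"  [{option}] {consequence}")

present_choice(CHOICES)
```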

And we shouldn’t give up on the Advertising Option Icon concept – but we really ought to make privacy choices easier for the likes of Homer Simpson, rather than Albert Einstein, to understand and use.


Source:
http://cups.cs.cmu.edu/#news
