Online data collection: 6 musings from a privacy advocate
01/05/2014
Q&A with Aurélie Pols, Pan-European Privacy & Digital Analytics Specialist; Co-Founder & Chief Visionary Officer, Mind Your Privacy.
We caught up with Aurélie, a user privacy advocate, soon after she delivered her keynote on privacy at the recent eMetrics conference in San Francisco. Aurélie helped pioneer digital analytics (and data collection) in Europe, so it's fascinating to hear her perspective on Internet privacy. Below is the complete interview:
Commanders Act: Looking at your profile, much of your career was spent in the web analytics field. What drove you to become a user privacy advocate?
Aurélie: Rob Lowe and Sun Microsystems, the latter now part of Oracle. One of my favorite TV shows of all time is The West Wing, written by Aaron Sorkin and starring Martin Sheen as the President of the United States of America. In one of the early episodes, Rob Lowe's character talks about privacy.
Sun Microsystems gave a presentation at eMetrics back in 2006-2007 about some very cool analytics they were doing for their HR department: tracking who was communicating with whom and what they were talking about. From an analytics perspective, it was fabulous. From a privacy perspective, it scared me, so I turned to my U.S. colleagues and asked if they were worried. They told me privacy was dead. Since then, we've come a long way!
Commanders Act: It's been reported that businesses are reluctant to adopt more stringent user data collection policies because, let's face it, it's big business. In fact, in the first half of 2013, revenue from online advertising in the US alone totaled approximately $20bn*. Is there a happy medium in the type and amount of personal information (e.g. gender, age, location) advertisers collect, and in where and how it's used? If so, what should it be?
Aurélie: Yes, it's big business, and digital advertising is growing faster than any other channel. The reason it's so well liked is that it's seen as our life vest for assuring growth. But does it really assure growth? Advertising doesn't spur growth; sales and specialization do, if you follow Ricardo's theory of comparative advantage.
While advertising is necessary to get your product known, if your product is bad, it doesn't matter how much advertising you do. You can throw a lot of dollars or euros at the wall, hoping something will stick. That, however, doesn't make for long-lasting relationships, and I think long-lasting relationships are what most companies are after: lifetime value (LTV) calculations, not pure conversion rates. We collect data to improve our advertising spend, yes (the infamous 50%), but mainly our products or service offering, to align supply and demand, in my humble opinion.
Another funny thing about what you mention is that advertisers will always tell you they don't collect personal information.
The right balance is found when companies define what they want to accomplish with the data they collect. That's what has been lacking in big data warehousing and CRM-type projects for years: you just collected everything and then tried to make sense of it afterwards. It was understandable 20 years ago, when the technology wasn't agile enough and you couldn't afford to miss a piece of data you might need later as your thinking and analysis evolved. That is no longer the case, and this same line of thinking (collect it all and see what happens) is one of the reasons we see big data projects fail at rates of over 50% today. Define why you're collecting the data, start with the right business questions, and then collect what you need. If you miss anything, you can still revisit it later. That's what learning is all about.
Such thinking also aligns with the initial fundamental privacy framework: purpose and choice. While some have shouted from the rooftops that big data is killing the privacy framework, I totally disagree: it's pushing us to ask the right questions, and that's not easy.
Last but not least, choice is increasingly being jeopardized if we aren’t careful. While I love Google’s algorithms that suggest answers [during the search process], the lack of transparency also tends to box me in, leaving me with less choice or guiding my choices. In a democratic society, we should be careful about that.
Commanders Act: Can you provide a bit more insight on PII (personally identifiable information) and provide examples of how it varies by country?
Aurélie: Thanks for asking! PII is a very American concept in my eyes, and we've done some studies at Mind Your Privacy to understand it better. PII varies by state. The variables common to all states are: full name, home and email addresses, personal identification numbers (such as social security number, passport number, and driver's license number), IP address, vehicle registration plate number, and telephone number. Then you've got outliers per state: passwords for Georgia, Maine and Nebraska; date of birth for North Dakota; biometric and genetic data for Iowa, Nebraska, North Carolina and Wisconsin; and so on.
On the one hand, the very notion of PII is shifting sand, in my humble opinion, as it will evolve with jurisprudence, and at a faster pace, in line with technological evolution. Secondly, and most importantly, defining an individual today by a single variable is a fallacy. It's by combining multiple variables that you often identify an individual. As business analysts, we've been using gender, date of birth and zip code for decades to de-duplicate customers!
Europe doesn't really work with PII or personal information in that way. If you look closely at the legislation, the variables are never really clearly defined. PII is not seen as an on/off switch, where if it's this then it's that, and otherwise it's nothing. Life is never black and white but shades of grey. We tend to work on two axes: the riskier the data you're collecting and processing, the more privacy safeguards and information security measures you need to surround it with. We talk of low-level risks, which are typically impressions and clicks; medium-level risks, when you're starting to profile (think OBA, or online behavioral advertising); high-level risks for sensitive data such as financial or health data; and then extremely high risks, which is profiling on sensitive data. Once you start thinking like that, your information security measures, and your privacy safeguards as well, can focus on where they're necessary.
That's how we help our clients: we define the data flows (what, where, why), look at the lifecycles (for how long, and data retention periods per country), and advise on security management processes as well as privacy.
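To make that tiered way of thinking concrete, here is a minimal sketch in Python. The class, field names and classification rules are hypothetical illustrations of the four risk levels Aurélie describes above, not Mind Your Privacy's actual methodology.

```python
# Illustrative sketch only: a toy classifier that maps a described data flow
# to the four risk tiers from the interview. All names and rules are
# hypothetical assumptions for illustration.
from dataclasses import dataclass

RISK_TIERS = {
    "low": "impressions and clicks",
    "medium": "profiling, e.g. online behavioral advertising (OBA)",
    "high": "sensitive data such as financial or health records",
    "extreme": "profiling performed on sensitive data",
}

@dataclass
class DataFlow:
    name: str            # what is collected
    purpose: str         # why it is collected
    sensitive: bool      # financial, health, or similarly sensitive data
    profiling: bool      # used to build individual profiles
    retention_days: int  # how long it is kept (lifecycle / retention period)

def risk_tier(flow: DataFlow) -> str:
    """Assign one of the four tiers described in the interview."""
    if flow.sensitive and flow.profiling:
        return "extreme"
    if flow.sensitive:
        return "high"
    if flow.profiling:
        return "medium"
    return "low"

# Example: an ad-click log versus a health-based advertising profile.
clicks = DataFlow("ad clicks", "campaign reporting", False, False, 90)
health_profile = DataFlow("health-interest profile", "ad targeting", True, True, 365)
for flow in (clicks, health_profile):
    tier = risk_tier(flow)
    print(f"{flow.name} -> {tier} ({RISK_TIERS[tier]})")
```

The point of such a mapping is simply that safeguards and security measures can then be scaled to the tier, rather than applied uniformly to everything collected.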
Commanders Act: The rise of social media has let users share more intimate information. Do you think social platforms should be allowed to use this data for commercial purposes? What controls would you impose?
Aurélie: First of all, it's important to realize that not all social media tools are the same. My Twitter account is a lot more public than my Facebook account, so I have a responsibility and a choice regarding the type of information I share through these channels. In that sense, I see the privacy debate as a triangle: government, citizens/consumers and businesses. Citizens and/or consumers also have their role to play, and education is of the essence here. I chose my son's school partly because of its use of technology, mainly iPads, but also because it runs classes and debates about privacy.
Governments are there to define legislation that makes sense and to educate. Businesses also have a responsibility. The first is to act according to what they say; more specifically, they are the guardians of my data under their stated privacy policies. Infringements of these should be punished, in my humble opinion. Having said that, with the FTC not having a clear mandate on security, this is a rather complicated thing in the U.S., but that's another story.
I, as a consumer, also have a choice to use the tools or not. I think it's clear to everyone today that if the product is free, you're the product, or your data is. That's fine as long as there's a choice. I asked Facebook last year in Brussels if they could imagine having a paid service where, for a small monthly fee, they would not sell or use my data. They didn't seem to understand the question, but that's fine; give them time.
The problem I often find with free tools is the lack of choice. It’s free so your basic choice is to use it or not, it’s black and white. It would be great if we could add shades of grey to this.
Let's take WhatsApp, which has been under investigation by the Dutch and the Canadians for over a year. I reluctantly used the service due to social pressure, but once Facebook acquired them, I deleted it from my phone. Their Privacy Policy states that they can access my contact list, and I'm not ok with that. But I have no way of telling them… that's a shame: they should monitor how many people deleted their accounts for that reason.
Commanders Act: You mentioned at the eMetrics Summit that Spain is responsible for 80% of the data protection fines in the EU so far this year. Why is this?
Aurélie: Yes, the infographic where we analyzed the annual reports of the European Data Protection Agencies can be found here.
The number of fines in Spain is the highest in Europe, together with the UK. Fines average around €50,000 and can go up to €600,000, which has happened once. That was for the Dutch TV producer Endemol and the first Big Brother reality TV show in Spain: they left psychological profiles and other information about aspiring candidates in the bin. They were fined €600,000. A silly human error in security measures, but oh so common! A recent Symantec study pointed to this too: 80% of data breaches are due to human error, and with data breach legislation being rolled out like hot buns out of an oven, it's time to seriously tackle those issues.
The second reason is that the Spanish DPA (data protection agency) has the obligation, by law, to check each complaint. This is not the case in the UK….
All in all, if you ask me, the strictest legislation related to privacy or data protection is in Spain, as security measures are embedded into the 1995 law [European Data Protection Directive]. Lawyers there are used to conducting Privacy Impact Assessments (PIAs), so it's a good country to outsource your privacy support to and to appoint a Data Protection Officer (DPO) in, as a DPO will be required once the Personal Data Protection Regulation the EU is working on passes. That regulation will probably take effect by 2016 and will apply to any company addressing EU citizens, regardless of whether it has a legal entity on European soil.
Commanders Act: There's been lots of talk of using mobile tracking technologies (iBeacons, QR codes, etc.) to monitor consumers' activities in stores and combining this information with their online profiles (and vice versa). Does this raise red flags for you?
Aurélie: Hey, we also fool around with Raspberry Pis and sensors, so yes and no. To be honest, I'm mostly worried about the security aspect of the increased data collection and hoarding we're currently facing.
One of my previous companies was acquired by DigitasLBi, now part of Publicis Groupe. One of the reasons we sold to them was the great team at New York-based IconNicholson, who had developed something called Social Retailing almost five years ago. It's a concept I still adore and stand behind, as it's based on choice and consent: if I, as a consumer, can choose to opt into a specific program, I'm all for it.
However, if, like in Minority Report, I'm "attacked" by ads coming at me when I enter a store just because they pick up my MAC address and automatically, without my consent, link it to a profile built by some data broker, I'm not ok with that. And to be fair, I imagine a lot of people would consider this creepy.
A balance needs to be found between the risk of being fined, or falling under a class action suit, and my customers' perception of how creepy my technology is. Privacy, and more specifically data protection, is about exactly that: hedging against the risk of being fined by being as compliant as possible, but also making sure you're not so proactive with technology that you risk being perceived as creepy. The first is short-term damage and can be paid off easily. The second is long-term damage to your brand, which is a lot more difficult to quantify.
*Econsultancy, 19 March, 2014