A shorter version of this post ran as an op-ed in the San Jose Mercury News on October 6, 2015.
The ubiquitous blue “Like” or “Share” buttons that you see all over the Internet are hiding an ugly secret. Starting this month, Facebook will use them to track your visit to every Web page that displays the buttons—even if you don’t click on anything. Facebook will use the data it collects to build a detailed dossier of your browsing habits, meticulously logging every site you visit, so it can finally learn those last few details about your life that it doesn’t already know. And there’s nothing you can do about it, short of staying totally logged out of the social media site or tracking down and installing a special browser extension to protect against this kind of sneaky behavior.
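The mechanics are simple: when a page embeds one of these buttons, your browser fetches the widget from the social network’s servers, automatically attaching your cookies for that network plus a Referer header naming the page you’re on. No click is needed. As a purely illustrative sketch—not Facebook’s actual code, and with made-up header and cookie names—here is what a widget’s server can record from that one automatic request:

```python
# Hypothetical sketch of what a social widget's server can log when a
# browser merely *loads* an embedded button. The browser attaches the
# widget site's cookies (identifying the logged-in user) and a Referer
# header (identifying the page being visited). Names are illustrative.

def log_widget_request(headers, log):
    """Extract (user, visited page) from a widget request's headers."""
    cookies = dict(
        pair.strip().split("=", 1)
        for pair in headers.get("Cookie", "").split(";")
        if "=" in pair
    )
    user_id = cookies.get("session_id")   # identifies the logged-in user
    visited = headers.get("Referer")      # the page embedding the button
    if user_id and visited:
        log.append((user_id, visited))    # one more entry in the dossier
    return log

# A browser visiting a news site while logged in to the social network:
browsing_log = []
log_widget_request(
    {"Cookie": "session_id=alice123",
     "Referer": "https://example-news.com/story"},
    browsing_log,
)
```

The point of the sketch is that the visit is recorded as a side effect of rendering the page; the user takes no action at all.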
And who wants to bother? Yeah, it’s creepy, but maybe you don’t care enough about a faceless corporation’s data mining to go out of your way to protect your privacy, and anyway you don’t have anything to hide. Facebook counts on that shrug of your shoulders; indeed its business model depends on our collective confusion and apathy when it comes to privacy. And that’s wrong, as a matter of business ethics and arguably in a legal sense as well.
Facebook’s response to criticism of the new massive increase in tracking has been to claim that it’s not a problem because the company allows users to opt out of the program. But that excuse—and others like it across the industry—is disingenuous and fundamentally unfair in two important ways. First, when users opt out, Facebook doesn’t actually stop tracking their browsing habits. It merely stops showing the user so-called “interest-based” ads. In other words, Facebook doesn’t allow us to opt out of being tracked all over the Internet; it merely allows us to hide that fact from ourselves.
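To make the distinction concrete, here is a deliberately simplified model—my own illustration, not anything from Facebook’s systems—of an opt-out that toggles only what the user *sees*, not what the company *collects*:

```python
# Hypothetical model of the opt-out described above: opting out changes
# whether interest-based ads are shown, not whether browsing data is
# collected. All names are illustrative.

class TrackerSketch:
    def __init__(self):
        self.opted_out = False
        self.dossier = []  # (user, url) pairs

    def record_visit(self, user, url):
        # Collection happens regardless of the user's ad preference.
        self.dossier.append((user, url))

    def choose_ad(self):
        if self.opted_out:
            return "generic ad"  # the only visible difference
        return f"ad targeted from {len(self.dossier)} logged visits"
```

In this model, flipping `opted_out` never touches `record_visit`—which is exactly the complaint: the switch controls the symptom, not the surveillance.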
Second and more importantly, the new tracking violates consumers’ expectations. The Federal Trade Commission’s longstanding Fair Information Practice Principles begin with the concepts of notice and choice. Companies are expected to make consumers aware of information collection and give consumers control over how their information is used. When we click a “Like” button, we expect Facebook to take note. But when we visit a website and don’t click the button, we’re given no indication whatsoever that Facebook is still keeping track of that visit, much less given the ability to control what Facebook does with that information.
Of course, Facebook is hardly the only offender. Google and its manufacturing partners have been shipping millions of low-cost notebook computers, known as Chromebooks, to schools around the country for use by students in the classroom and at home. The devices are wonderful—powerful, secure, and easy to use. And they come with “completely free” Google Apps for Education services including classroom tools, email, document collaboration, and calendaring, among others.
Google’s Chromebooks as used in schools also come with “Chrome Sync” enabled by default, a feature that sends the student users’ entire browsing trail to Google, linking the data collected to the students’ accounts, which often include their names and dates of birth. Google notes that the tracking behavior can be turned off by the student or even at a district level. But as shipped, students’ Chromebooks are configured to send every student’s entire browsing history back to Google, in near real time. That’s true despite Google’s signature on the “Student Privacy Pledge,” which includes a commitment to “not collect … student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
EFF and other digital privacy groups have been actively engaged with the technology sector in an attempt to convince companies to place meaningful limits on various forms of consumer tracking. Earlier this year, EFF, along with eight other privacy organizations, left a multi-stakeholder process intended to develop a privacy-friendly set of best practices for companies using facial recognition, led by the National Telecommunications and Information Administration. We insisted that companies must give regular people the choice of whether to participate in a face recognition database, or, in other words, operate their facial recognition systems on an opt-in basis. Our demand isn’t crazy; it is already the law in Europe. But when the companies made it clear that in this country they were only willing to provide an opt-out for people who proactively put themselves on a do-not-track list, we walked out. There was no point to our continued participation in a process dominated by companies who insist on maintaining a privacy model that depends on consumers not knowing their rights, or even the fact that they’re being tracked.
It’s incredibly difficult for even the most concerned consumers to figure out who’s collecting data about them, much less exercise any control over what companies do with that data. It took us at EFF some serious research—and an hour-long conference call with Google engineers, lawyers, and PR reps—to figure out how Google treats student browsing data. Because the companies make it so difficult for privacy-conscious consumers to figure out when, where, and how they’re being tracked, users are left with only one real choice: apathy, which companies then use as an excuse to further escalate and obscure their tracking behavior.
There is no excuse for making it so difficult to get the answers to questions as simple as “are you tracking our students?” Don’t even try asking the companies what they do with the behavioral data they’re gathering about us: other than using it for behavioral advertising, they won’t say. And for those of us who have opted out of behavioral advertising, the companies have given no justification for continuing to collect our data. We have no way of knowing what they’re using our data for, and that’s a problem.
Companies across the tech industry claim that they honor our privacy and endeavor to treat users with respect. And I have no doubt that the vast majority of engineers, designers, and policy makers working in Silicon Valley want to do the right thing. My message to the companies, then, is this: if a new feature, system, or app will impact users’ privacy, just ask the users for their permission first. Providing an opt-out after the fact demonstrates a total lack of commitment to users and is fundamentally unfair.
If a business model wouldn’t work if users had to opt in, it deserves to fail.