Fifty-two percent of Americans believe that US government officials should not be able to monitor “emails and other online activities” to prevent future terrorist activities, according to a Washington Post poll conducted on June 10 in response to Snowden’s leaks. As is often the case with large-scale surveys like this, however, I would argue that the Post asked the wrong question.
Do you remember when you were first invited to try GMail? GMail was a revolutionary service for all kinds of reasons, but one of the most important was that it marked the first large-scale deployment of advertising personalization. Many online privacy advocates steadfastly refused to sign up because somewhere at Google, a computer was reading your email and figuring out which ads to show you. If you booked a hotel room in another city, for example, Google might show you an ad for a rental car company. Some of the hullabaloo about Google reading all of your email even made it to the mainstream media and, I’m sure, frightened off some percentage of possible users.

The thing is, in some ways this was old news. Someone at your internet service provider (AOL, Comcast, your university, etc.) already had the technical capability to read all of your email. People were simply less aware of it because that capability wasn’t being exercised as visibly (or perhaps at all). But the cases where it was exercised would startle someone far more than an algorithm inferring that you like ice cream and not cake. Ask yourself which you prefer: a technician reading your email, or an algorithm?
As Google, Facebook, Skype, Apple, and Verizon customers (read: nearly every American), we’ve already largely decided that the convenience of these free or cheap services is worth giving up our algorithmic privacy. I would wager that we would also give it up for other kinds of convenience, like not worrying about dirty bombs in our cities, weaponized smallpox, or other large-scale threats to personal safety, perhaps including threats to financial security like the Enron debacle or the banking crisis of 2008.
Personal privacy is a different story. As far as I know, Google has never published a description or set of guidelines for when a technician or agent can personally log in to your email account and read it. I feel certain that if Google didn’t have adequate social and technical safeguards in place, we would have heard of at least one case of a Google employee snooping or abusing that power. It seems unlikely that government agencies’ safeguards are as strong. Did we ever figure out exactly why and how an FBI agent was reading David Petraeus’s GMail? That agent outed him as having an affair that, as far as I can tell, had no significant national security or criminal implications. Cases like this strongly suggest that at least the FBI’s safeguards for people’s personal privacy are insufficient.
A couple of weeks ago, Lawrence Lessig appeared on Bill Moyers’s program and alluded to some ideas about technical safeguards for privacy. While I don’t know exactly what he had in mind, his ideas sounded similar to some I’ve had in the past. The essential idea is that there are classes of crime that most people are willing to give up their privacy to prevent, and a serious public debate should take place about exactly which crimes those are. The NSA could then be authorized to develop software that monitors all kinds of communications channels and algorithmically assigns each person a probability of committing one of those crimes, along with the time frame in which they might do so. When that probability crosses a certain threshold (say, 85%), a secret warrant could be issued to invade the personal privacy of that individual, gather more intelligence, and determine the best course of action. Such software is entirely plausible to build, and it could be independently audited by security firms to ensure that it is deployed only against the sanctioned crimes.
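To make the mechanism concrete, here is a minimal sketch of the threshold-and-warrant logic described above. Everything in it is a hypothetical illustration: the feature names, the weights, and the toy `risk_score` function are stand-ins for whatever trained classifier such a system would actually use; nothing here describes any real agency software.

```python
# Hypothetical sketch: flag a person for a warrant only when an
# algorithmic probability crosses a publicly agreed threshold.
# All names and weights here are illustrative assumptions.

THRESHOLD = 0.85  # the "85%" probability threshold from the text

def risk_score(features):
    """Stand-in for a trained classifier's probability estimate.
    Here: a toy weighted sum clamped into [0, 1]."""
    weights = {"signal_a": 0.6, "signal_b": 0.4}
    score = sum(weights.get(k, 0.0) * v for k, v in features.items())
    return max(0.0, min(1.0, score))

def flag_for_warrant(person_features):
    """True only when the probability crosses the threshold --
    no human sees the underlying data before that point."""
    return risk_score(person_features) >= THRESHOLD

# A low score stays below the threshold, so no human review occurs.
print(flag_for_warrant({"signal_a": 0.2, "signal_b": 0.1}))  # False
print(flag_for_warrant({"signal_a": 1.0, "signal_b": 1.0}))  # True
```

The point of the sketch is the separation of duties: the only output a human ever acts on is the boolean crossing of a threshold that was set through public debate, which is also what an independent auditor would verify.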
I believe this system would resolve most of the problems people have with the current one. It would prevent invasions of personal privacy (which I define as the invasion of your privacy by another person) unless and until there is a high probability that you are about to commit a horrendous crime. The system would be even less intrusive than the Google and Facebook ads we already see, because the data wouldn’t even be used for advertising. By requiring a certain algorithmically determined probability of criminality before allowing any invasion of personal privacy, it would prevent abuses of power by individuals, as may have happened in the Petraeus incident. It is certainly no worse than whatever is already in place, which will assuredly remain in place whether we’re aware of it or not. As a society, we would be empowered to determine publicly and transparently which crimes are worth this kind of privacy invasion. The rate of false accusations would be no higher than it is now, because ordinary warranted invasions of personal privacy would still be allowed prior to arrest or accusation. False negatives might even be reduced, because analysts would have fewer cases to examine, cutting down on fatigue and human error. The software’s accuracy would improve continually as more criminals are caught or crimes are prevented (or succeed), but it could probably be made reasonably accurate right now, trained on the massive trove of data the government already has.
Finally, there may even be some awesome side effects. If we have an accurate estimate of who is likely to commit a crime, and the number of high-likelihood people is relatively low (I’m sure it would be), we could perhaps regain some of the conveniences we used to enjoy, like shorter lines at airport security bolstered by improved random checks.
Although this proposal still involves a privacy invasion, I would argue that we’re unlikely to see a return to an era in which most people enjoy strong algorithmic privacy online. Beyond that, I’d argue that most people don’t want to live in that world, as evidenced by the fact that most people use Facebook, Twitter, and GMail. This raises another question, about alternative funding models for sites like those that might not require invasions of algorithmic privacy, which I’ll address in a future post.
Please share your thoughts.
Addendum: This post is very US-centric, but obviously this should be a global conversation. The EFF just published a great discussion on this topic.