Fifty years ago, if you worked for law enforcement, you started with a crime.
First, you had a suspect; then you collected data
You used information from that crime to build a list of possible suspects, and then you looked into their lives. Maybe you did this with publicly accessible information, or by digging through their trash. And maybe you got permission from a judge to wiretap their phone communications.
The point is, you did the easy stuff first. Nobody wanted to wade through bags of garbage just in case there was something incriminating hiding between the rancid salmon and last week’s stale teabags. And wiretaps were hard, requiring warrants and cooperation from the phone company.
Today, you have data, and find suspects
Today, the order’s reversed. The easy thing to do is suck up all the data. The harder work is making a case from what’s found. As citizens, we lead our lives in public, leaving a breadcrumb trail of our movements, purchases, interactions, and political thoughts.
The field of Predictive Policing (which is full of software startups and controversies) is based on the notion of using data to guess where crimes are most likely to happen. It’s a concept that was well explored in Minority Report and other films.
It has serious problems. Two of them are:
- Its algorithms shit where they eat. Software that predicts where crime will occur can easily become a self-fulfilling prophecy: when more police patrol a hotspot, they make more arrests, and those arrests mark the neighborhood as an even hotter spot, drawing still more patrols. Meanwhile the stigma lowers property values, shrinking the tax revenue a healthy community needs to thrive.
- It suffers from reporting bias. If you leave a mattress out on the street in the rich Oakland Heights area, everyone calls the authorities. If you leave one out in the poor parts of Oakland, someone takes a nap. So the source data is questionable in the first place.
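The feedback loop is easy to demonstrate. Here’s a toy simulation (not any vendor’s actual algorithm) of two neighborhoods with identical underlying crime rates, where patrols are allocated in proportion to past recorded arrests. One extra arrest on the books at the start is enough to make the gap widen year after year:

```python
import random

random.seed(1)

TRUE_RATE = 0.1                 # identical underlying crime rate in both areas
arrests = {"A": 2, "B": 1}      # A starts with one extra *recorded* arrest
PATROL_HOURS = 100

for year in range(10):
    total = sum(arrests.values())
    for hood in arrests:
        # Allocate patrols in proportion to past arrests: the "prediction".
        hours = round(PATROL_HOURS * arrests[hood] / total)
        # More eyes on the street means more observed crime, same true rate.
        arrests[hood] += sum(random.random() < TRUE_RATE for _ in range(hours))

print(arrests)  # the initial gap widens even though the true rates are equal
```

Nothing about neighborhood A is actually more criminal; the system just looks harder where it looked before, and finds what extra looking always finds.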
Collecting data is easy
When Yahoo let the government search for specific strings within emails, they were supporting the collect-and-find-suspects premise. Given a flood of data coming into a system, it’s easy to look for a particular piece of text, then form a list of possible suspects. And at first blush, it might seem like a decent precaution when faced with the asymmetric threat of terrorism, particularly if you’re privileged and think you have nothing to worry about.
But content is not always intent. The possibility of misinterpretation is huge.
- Searches aren’t crimes. When breast cancer patients searched for medical information, their searches were treated as adult content, because the word “breast” was being broadly interpreted as smut. Even the decisions about what to block are weirdly subjective.
- All talk, no action. Those who discuss terrorism are probably more open about doing so than the actual bad guys, who (presumably) take steps to hide their actions.
- Political winds change quickly. A decade after the Russians were America’s allies in World War II, US citizens were being blackballed from jobs and society for consorting with people from the Soviet Union.
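The “content is not intent” problem falls out of how crude string matching is. This sketch uses an invented selector list and invented messages, but the mechanics are the point: every one of these innocent messages gets flagged.

```python
# Naive keyword flagging: the selectors and messages below are made up
# for illustration, not drawn from any real surveillance program.
SELECTORS = {"breast", "attack", "bomb"}

def flagged(message: str) -> bool:
    """Flag a message if it contains any selector word."""
    words = set(message.lower().replace(",", " ").split())
    return not SELECTORS.isdisjoint(words)

messages = [
    "support group for breast cancer survivors meets tuesday",
    "the keynote was a bomb, total disaster",
    "planning our attack on the chess club's weak opening",
]

for m in messages:
    print(flagged(m), m)   # all three flag, and all three are innocent
```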
Collect-and-find-suspects is the upending of “innocent until proven guilty,” a fundamental tenet of our society. But if you think the current state of affairs is bad, you should consider what tomorrow will be like.
A computer is better than you at recognizing things
Computers are already really, really good at image recognition. 2012 was a milestone in the field: that’s when deep neural networks blew past earlier approaches on the ImageNet benchmark, and within a few years they were rivaling humans at recognizing objects. And the thing about computers is they can work tirelessly, in parallel, and they get better the more they work.
Google’s new Photos app is creepy-good at finding data in your photos. I synchronized pictures with it recently, and it extracted every face from my daughter’s class photo (though it didn’t say who the faces were). But it doesn’t identify those faces, because it’s not allowed to.
Google knows perfectly well that’s a picture of Barack Obama. But it’s chosen not to look him up, because the photo of him is in my private photo collection. This is a policy decision, not a capability decision. Lawmakers and law enforcement likely have entirely different policies.
When computers watch movies for us
Kyle McDonald took NeuralTalk, a neural network that performs image recognition, strapped it to a backpack computer, and had it interpret, in real time, everything its camera saw as he walked around Amsterdam.
The video’s a bit funny—the AI is obsessed with bicycles for some reason—but it’s also great foreshadowing. McDonald can return home, type in a search string, and instantly find that moment in his journey around the city. It might look rudimentary today, but this stuff is getting better, faster than you think.
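The searchable-life pipeline McDonald’s rig hints at is simple in outline: caption every frame, store the captions with timestamps, then search text instead of pixels. Here’s a minimal sketch, with hard-coded captions standing in for what a model like NeuralTalk would emit:

```python
# Caption-then-search sketch. The Frame captions below are invented
# stand-ins for model output, not real NeuralTalk results.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float        # seconds into the walk
    caption: str            # what the model thinks it sees

footage = [
    Frame(12.0, "a man riding a bicycle down a city street"),
    Frame(47.5, "a boat on a canal next to tall buildings"),
    Frame(93.2, "a group of people sitting at an outdoor cafe"),
]

def search(frames, query):
    """Return timestamps whose caption mentions the query string."""
    q = query.lower()
    return [f.timestamp for f in frames if q in f.caption.lower()]

print(search(footage, "bicycle"))   # -> [12.0]
```

Once the captions exist, finding a moment in hours of video is an ordinary text query, which is exactly why the friction disappears.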
Now consider the number of video sources in the world. Imagery from every traffic camera, bank machine, airport drop-off zone, taxi dashcam, restaurant security camera, and more. There are millions of CCTV cameras in the UK alone.
Consider, too, the past. Because video footage is a time machine. A video search isn’t just about what’s being recorded today, but what’s been recorded and stored in the past. Every frame on Facebook, YouTube, and elsewhere. Every digital recording system.
Today, scouring through video footage is the equivalent of dumpster-diving fifty years ago. It’s smelly, and tiresome, and nobody really wants to do it. There’s a coefficient of friction that makes it hard to do, so it’s not worth doing.
We’re about to clear another milestone—parsing video content better than humans. And once we can do it in parallel, we can search years of footage instantaneously.
Tomorrow, you’ll have history, and find data targets
What happens when we ask a computer to look through all available footage and identify scenes of possible crimes being committed? What tough-on-crime candidate wouldn’t get behind such a scheme? And will the apparent commission of a crime be considered sufficient justification for a background search, or further investigation?
When parsing and searching video gets easy, it’ll change how we approach homeland security, background checks, employment interviews, and even documentation of our daily lives. This technology’s already making great strides in policing, with far-reaching consequences. If it’s available to the general public, it might lend us an air of civility: knowing the great Other is watching, perhaps we’ll be more careful.
At the same time, a life constantly watched is one in which we’ve lost our right to be forgotten, and where, in the absence of a recording, we’re presumed guilty. Tomorrow’s panopticon casts a wide gaze, seeing through our own cameras, annotated effortlessly by the code with which we gird ourselves in search of safety.