It’s January 9, 2020. A typically cold day in southeast Michigan. There’s frost on the ground; a hard chill in the air.
In a police detention center, Robert Williams sits across from a detective, wondering what he’s done wrong.
A Black father in his early forties, Williams hasn’t broken the law. Yet, just hours ago, officers arrested him in front of his two young daughters.
As Williams sits in shock, a picture is slid across the table. A security camera photo of a different Black man stealing watches from a store.
“That’s not me,” Williams says. When the detectives don’t respond, he holds the picture up next to his head, tries a joke. “I hope you don’t think all Black people look alike.”
Then come the words that chill the Detroit native to his core.
“The computer,” the officers tell him, “says it’s you.”
At this moment the proud father’s life takes a sudden left turn. Away from normality, and into a dystopia. One where untested, Orwellian tech has the power to tear you away from your family without any oversight.
Welcome to the chilling rise of facial recognition technology.
At the dawn of the computing era, sci-fi writers imagined dazzling futures where robots would act as our butlers, while humans focused on things like creativity and strategy.
Instead, the opposite sort of happened. By the 1990s, computers were winning handily at human games like chess, while seemingly simple tasks left metaphorical steam pouring out their ears.
Among those not-so-simple tasks was recognizing human faces.
This wasn’t for lack of trying. As far back as the 1960s, Mormon bishop and computing pioneer Woody Bledsoe had been working on programs that could recognize human faces.
But it wouldn’t be until the second decade of the 21st century that the revolution came.
The advent of deep neural networks around 2010 turned facial recognition from a technology that only worked when matching clear headshots to one that could be used out in the real world.
By 2011, the US was using it to identify Osama Bin Laden’s body. By 2014, Facebook had rolled it out across their platform.
Leap forward to today, and you probably use facial recognition tech (or FRT) every day: to unlock your iPhone, or when Google automatically sorts your photos.
Yet this harmless stuff is only the tip of FRT’s not-so-harmless iceberg.
To get a picture of how worrying FRT’s rise is, we need to check in with America’s police departments.
For well over a decade, many US forces have been using facial recognition to match suspects’ faces against databases.
This has led to breakthroughs like locating suspects in cold cases, or quickly identifying people accused of sexual assault.
On the other hand, it’s also led to some truly dystopian moments.
In the case of Robert Williams, the police had a photo of their suspect, one a human observer could tell wasn’t the 42-year-old father.
But once the AI had identified him, the police put aside their better judgement. As an overview of FRT published in Nature explains, studies:
“show that humans tend to overestimate the technology’s credibility, even when they see the computer’s false match next to the real face.”
It was this overconfidence that led to Williams spending 30 hours in custody for a crime committed by a completely different Black man.
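It helps to understand what “the computer says it’s you” actually means. A face matcher never declares certainty; it converts each face into a vector of numbers and reports whichever enrolled face scores the highest similarity above some threshold. The sketch below is purely illustrative – the names, three-number “embeddings”, and threshold are all made up, and real systems use deep-network embeddings with hundreds of dimensions – but it shows how a confident-looking “match” can be produced for a face that belongs to nobody in the database:

```python
# Toy sketch of how a face matcher "identifies" someone (hypothetical, simplified).
# Real systems compare deep-network embeddings; here each "face" is a tiny
# hand-made vector, just to show that a "match" is only a similarity score
# crossing a threshold -- never a certainty.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.9):
    """Return (name, score) of the closest gallery face, or (None, score) if below threshold."""
    name, score = max(((n, cosine_similarity(probe, v)) for n, v in gallery.items()),
                      key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

gallery = {
    "alice": (0.9, 0.1, 0.3),
    "bob":   (0.4, 0.8, 0.2),
}
# A probe face that belongs to NEITHER person can still clear the threshold:
probe = (0.8, 0.2, 0.35)
print(best_match(probe, gallery))  # matches "alice" with a score around 0.99
```

Raise the threshold high enough and the same probe returns no match at all – which is exactly why treating a single score as proof of identity, as the detectives did with Williams, is so dangerous.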
And, yes, that focus on Williams’s race is important, because dark skin is a problem for FRT.
Landmark audits – MIT’s 2018 Gender Shades study and a 2019 US government test by NIST – found the tech can be up to 100 times more likely to misidentify Black or Asian American people than white people.
Partially, this may be due to how FRT is programmed.
Facial recognition AI has to be trained on massive datasets.
In the 1990s, this involved researchers asking individuals to pose for photos, but today it involves “scraping” – or, to put it bluntly, stealing – millions of images from websites like Flickr.
Given America’s Black population is only around a fifth the size of its white one, it stands to reason AI should be better at identifying white faces. It simply has more images to practice on.
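That imbalance effect is easy to demonstrate with a toy simulation. The sketch below is entirely synthetic – random vectors stand in for face embeddings, and the groups, counts, and noise levels are invented for illustration – but it shows the basic mechanism: enroll one group with eight gallery photos per person and another with just one, and the under-represented group gets misidentified more often:

```python
# Toy, fully synthetic illustration of dataset imbalance (not real FRT):
# each identity is a random 8-dimensional vector; "photos" are noisy copies.
import random

random.seed(0)

def make_identity():
    return [random.gauss(0, 1) for _ in range(8)]

def noisy_photo(face, noise=0.9):
    return [x + random.gauss(0, noise) for x in face]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def identify(probe, gallery):
    """Nearest-neighbour match: return the name attached to the closest gallery photo."""
    return min(gallery, key=lambda entry: sq_dist(probe, entry[1]))[0]

# 20 identities per group. Group A is enrolled with 8 photos each;
# group B (the under-represented group) with only 1.
people = {f"{group}{i}": make_identity() for group in "AB" for i in range(20)}
gallery = [(name, noisy_photo(face))
           for name, face in people.items()
           for _ in range(8 if name.startswith("A") else 1)]

def error_rate(group, trials=25):
    probes = [(name, noisy_photo(face))
              for name, face in people.items() if name.startswith(group)
              for _ in range(trials)]
    wrong = sum(identify(photo, gallery) != name for name, photo in probes)
    return wrong / len(probes)

print(f"group A error rate: {error_rate('A'):.0%}")
print(f"group B error rate: {error_rate('B'):.0%}")
```

The matcher here isn’t “racist” in any deliberate sense; it simply performs worse on whoever it has seen less of – which is exactly the pattern the real-world audits found.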
What’s less reasonable, though, is the way police forces continue to use FRT despite these known racial flaws. In 2018 alone, the NYPD deployed it in around 8,000 cases.
Yet the scariest part of FRT isn’t the way it reinforces existing biases, or how it can lead to innocent men like Robert Williams being arrested.
No, it’s the technology’s potential to end human privacy.
Mass surveillance FRT is when AI scans vast crowds of people in public places, and identifies individuals within them.
By the end of 2019, 64 countries were using this tech, ranging from wannabe dystopias like China and Russia, to Western democracies like the USA and Britain.
The stated rationale for this monitoring is to reduce crime. To make our streets safer.
But the evidence that mass FRT surveillance stops the bad guys is minimal, at best.
Meanwhile, the potential for it to infringe our rights is very real and absolutely terrifying.
In the US, the most Orwellian development of FRT is probably Clearview AI.
Founded by Australian entrepreneur Hoan Ton-That in 2017, Clearview flew under the radar until a New York Times exposé in January 2020.
By then, Ton-That’s facial recognition tech had been sold to more than 2,200 law enforcement agencies and private companies.
And that’s worrying, because Clearview AI is creepy as hell.
In its years of operation, Clearview has scraped over 3 billion photos from the internet to create one of the world’s biggest datasets.
This was done without permission, and possibly illegally. But it doesn’t matter.
If your face has ever appeared on Facebook, Twitter, YouTube, or even Venmo, it’s likely now in Clearview’s clutches.
And that means anyone using their tech just has to take a picture of your face to learn everything about you.
To see how freaky this is, let’s imagine you decide to exercise your right to peaceful protest.
Before, you could only be identified in a crowd if you were already in a law enforcement database, or if someone who knows you happened to see the footage.
With Clearview’s FRT, though, any video taken of a crowd can be scanned, giving the government access to your identity, your social media accounts, your address, your employment details and more.
It’s the sort of thing that could easily create a chilling effect around protest, especially among communities that distrust the government.
Yet the issues go beyond just what law enforcement might do.
Clearview AI has code that could one day allow it to be used with augmented-reality glasses, meaning you’d just have to glance at someone to know everything about them.
Imagine: a stalker able to uncover a woman’s home address with a single look. Strangers passing in the street, instantly knowing your name and place of work.
Intimate or embarrassing photos you didn’t even know had been taken, suddenly always there, all the time, for everyone who ever meets you to immediately see.
At the time this video was made, Clearview AI was being sued in multiple states to stop precisely this kind of wholesale destruction of privacy.
But the issues it raises are bigger than one company. Once the ability to do something like this exists, you can be sure that – unless it’s outlawed – someone will do it.
And to see what unrestrained FRT is like, we only have to look to China.
The Middle Kingdom has more surveillance cameras than anywhere else on Earth – an estimated 630 million installed in public places, a growing share of them equipped for facial recognition.
While many are used for benign purposes, like paying for goods, others are put to creepier use.
In several provinces, such as Zhejiang, FRT cameras conduct live searches for people committing minor offenses like jaywalking.
Anyone caught then has their personal details immediately displayed on electronic billboards as a form of public shaming.
The system isn’t perfect. Recently, a billboard in Ningbo accidentally shamed businesswoman Dong Mingzhu for jaywalking after FRT read her face from an advert on the side of a bus.
But, in many ways, it doesn’t have to be perfect. Or even good.
If citizens fear public shaming, they’ll start to police themselves. Over time, doing even minor things the Communist Party deems wrong will become unthinkable.
For those who don’t police themselves… well, Chinese FRT is capable of some scary things.
In Xinjiang, home to the persecuted Uighur minority, facial recognition has already crossed the line from “potentially dystopian” to “outright Orwellian.”
The streets are lined with cameras, any one of which is capable of pulling up not only your address and ID number as you pass, but your employment, education, family, where you’ve recently been, and who you’ve met.
Patents filed by tech giants like Huawei show the AI has even been trained to racially profile Uighurs, and restrict them from certain areas.
The effect is a high-tech panopticon, one where a minority can be traced at all times, even as other races move around relatively unimpeded.
It’s a chilling glimpse of what FRT could one day become in other parts of the world.
Worryingly, that future may be closer than you think.
As you watch this, more and more countries are leaping on the facial recognition bandwagon.
London is still hoping to begin live mass surveillance soon, despite legal challenges. Serbia has adopted the technology to punish those not wearing Covid masks. And Russia wants to bring it into schools, under the darkly ironic name “Orwell”.
Yet while things look bleak, there is still hope.
In the USA, the state of Illinois enacted a law back in 2008 – the Biometric Information Privacy Act – that bans companies from harvesting people’s biometric data without their consent.
Thanks to this, FRT companies can be sued for breach of privacy – something that’s currently happening with Clearview AI.
Over in Europe, meanwhile, upcoming legislation may completely outlaw the public use of FRT under data protection laws.
The battle for your privacy, then, is not yet lost. But make no mistake.
Unless lawmakers do something, this technology will soon be everywhere. A million cameras, forever monitoring your movements, allowing companies or law enforcement to track you without ever lifting a finger.
Whether we can trust them with the keys to our privacy is yet to be seen.