Let’s say a random stranger approaches you on the street, snaps a quick photo of you in a public place (which is perfectly legal), uploads the photo to an app, and soon finds your social media profiles. And your Venmo account. And your full name. And your address.
That’s a privacy disaster any way you slice it—but it’s also at the heart of an app called Clearview AI, which The New York Times recently called “The Secretive Company That Might End Privacy as We Know It.”
The app is dangerous not only because stalkers could instantly identify people through it, hound them on social media, or even show up at their homes, but also because hundreds of law enforcement agencies, along with the FBI, are already using this facial recognition technology despite the pushback the tech has seen in legislative spaces.
In San Francisco, for instance, it isn’t even legal for law enforcement to use facial recognition. What’s more, some private security companies also have access to Clearview AI, which sets a dangerous precedent.
Clearview AI features a database of over three billion images, which were scraped from websites like Facebook, Twitter, and even Venmo. Other databases pale in comparison, according to marketing materials the company provided to law enforcement agencies. The FBI has a database of 411 million photos, while more local authorities, like the Los Angeles Police Department, only have access to about eight million images.
Sure, Clearview AI isn’t readily available to the public, and when you visit the company’s website, there isn’t really much information on the app at all. You have to request access to learn more, let alone use the service. However, both the Times and investors in Clearview AI think that the app will be available for anyone to use in the future.
That’s frightening, and it has led digital rights advocacy groups like Fight for the Future, a nonprofit based in Worcester, Massachusetts, and the Washington, D.C.-based Demand Progress to call on legislators to take action on facial recognition tech.
When even companies like Google, which has taken plenty of flak for accepting government contracts to work on artificial intelligence, won’t build such an app, you know it’s going to cause a stir. Back in 2011, former Google Chairman Eric Schmidt said a tool like Clearview AI’s app was one of the few pieces of technology the company wouldn’t develop because it could be used “in a very bad way.”
Facebook, for its part, developed something pretty similar to what Clearview AI offers, but at least had the foresight not to publicly release it. That application, developed between 2015 and 2016, allowed employees to identify colleagues and friends who had enabled facial recognition by pointing their phone cameras at their faces. Since then, the app has been discontinued.
Meanwhile, Clearview AI is nowhere near finished. Hidden in the app’s code, which The New York Times examined, is functionality that could pair the app with augmented reality glasses, meaning that in the future, it’s possible we could identify every person we see in real time.
Perhaps the silver lining is that we found out about Clearview AI at all. Its public discovery—and accompanying criticism—have led to well-known organizations coming out as staunchly opposed to this kind of tech.
Fight for the Future tweeted that “an outright ban” on these AI tools is the only way to fix this privacy issue—not quirky jewelry or sunglasses that can help to protect your identity by confusing surveillance systems.
“We can’t fix this with gimmicky jewelry or sunglasses we’re supposed to wear when we leave our homes,” the group wrote. “We can’t fix it with industry-friendly regulations. We need to meet surveillance capitalism head on. We need an outright ban on AI-powered surveillance.” https://t.co/lQjYPHHs9U

A second tweet read: “We’ve been tracking facial recognition for some time and thought we’d seen it all. But this story shows our worst fears have become real. It’s time for Congress to act.” https://t.co/XdUO13diBJ
These fears and disavowals of facial recognition tech come just months after two senators introduced a bipartisan bill to limit how the FBI and the U.S. Immigration and Customs Enforcement agency could use it.
“Facial recognition technology can be a powerful tool for law enforcement officials,” Sen. Mike Lee, a Republican from Utah, said in a statement at the time. “But its very power also makes it ripe for abuse.”