We’ve all heard the stories: Authorities have installed cameras operating facial-recognition technology everywhere — on street corners, in shopping malls, even in office buildings. The software behind it is biased, generating high rates of false-positive matches, particularly for minorities. And law enforcement is using those matches, even false ones, to prosecute and imprison innocent people.
Does this sound like a dystopian nightmare, perhaps something that would be imposed on the Uyghurs in China? In fact, this is the image that the American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), and a host of other alarmists are attempting to conjure in the minds of the media, elected officials, and the American public. According to them, rapacious technology companies and prying authorities are installing facial-recognition software without rules or guidelines, rapidly creating an Orwellian surveillance state. Only complete bans, it is said, will keep us free.
While that dire narrative makes for good press and does wonders to generate fear and opposition — exactly what these groups want — it is completely misleading.
The reality is that the use of facial-recognition technology, at least in democratic, rule-of-law nations such as those in Europe and the United States, will be far more utopian than dystopian, making our lives easier and safer. To understand why, it’s worth considering some plausible scenarios and asking how existing law might easily be modernized to prevent abuse, particularly surveillance of innocent Americans.
Imagine that you have to catch a flight for a friend’s “destination wedding.” You’re in a hurry, and when you get to the airport you realize you left your wallet at home, so you don’t have your driver’s license. Not to worry, because both your airline and the Transportation Security Administration use a facial-recognition system to which you opted in, so you can check your bag and get through security using only your face as ID. To prevent any harms in this scenario, federal law should prohibit airlines from sharing your image, and TSA’s use of your image should be cabined off from the rest of the federal government.
How about some other, darker scenarios, as when a child is abducted by a stranger? Imagine the child’s family quickly sharing an electronic photo with police so they can load the image into a networked facial-recognition system that allows any camera in the nation to report a positive match if it identifies the child’s face. When the abductor drives through a tollbooth that is equipped with a facial-recognition system, the camera makes a positive match of the child’s face and automatically notifies the police. With a description of the car now in hand, the police stop it 30 miles down the road, arresting the abductor and safely returning the child to her parents. To prevent abuses in this scenario, the operator of the system should not be allowed to store any captured image for longer than an agreed-upon period — perhaps seven days — or share images or matches with any other party except law enforcement under tightly prescribed circumstances.
Another scenario: A man sets off a pipe bomb at a local running race. Mounted cameras capture images of the people at the scene and feed them into the region’s facial-recognition system. Using an appropriately high confidence threshold, the police are able to match the locally captured images against a national FBI database of more than 630 million individuals, including images from driver’s licenses. They quickly find two potential matches, get search warrants, and go to the suspects’ homes. They realize that the first person could not have planted the bomb, but at the second suspect’s home they find bomb-making material, and a social-media search turns up hateful diatribes. An hour later, another pole-mounted camera in a nearby suburb, also connected to the region’s facial-recognition system, alerts police to the suspect’s location, and they make an arrest. Here again, federal law should make it clear that no private or government entity can use databases maintained by the FBI or other government agencies to obtain matches unless law enforcement has probable cause of a crime.
In other words, even though federal, state, and local governments have connected tens of millions of cameras around the nation to facial-recognition systems to keep us safe, none of those cameras should be allowed to record and store images for more than a set number of days. Moreover, the facial-recognition systems should not be permitted to record matches unless someone is wanted for a crime or is a known victim of a crime, such as in cases of child abduction or sex trafficking.
All these positive scenarios are possible, given the already advanced state of facial-recognition technology, which is getting even better every year. And in some cases, the necessary legal restrictions are already in place. Unfortunately, alarmists such as the ACLU and EFF are winning the public debate, which is why states and localities from San Francisco to Massachusetts have pursued all-out bans on facial-recognition technology, with more bans being proposed throughout the country.
We shouldn’t let alarmists limit the opportunities that facial-recognition technology holds. With the right commercial and governmental rules restricting its use, Americans can be safer and enjoy more convenience with little or no reduction of our precious civil liberties. It is time for Congress to pass legislation that puts the necessary civil-liberties protections in place while also enabling, and actively spurring, the wide-scale deployment of facial-recognition technology to make our lives safer and more convenient.