A San Francisco ban on the use of facial recognition by law enforcement highlights growing public concern about a technology that is seeing stunning growth across an array of applications while provoking worries over privacy.
Eight of the nine members of San Francisco’s Board of Supervisors endorsed the legislation Tuesday; a second, procedural vote is scheduled for next week.
Facial recognition technology scans a person’s face to create a mathematical template, which is then compared against a database. It can be used to unlock a smartphone or vehicle, pay in retail stores, verify identities at bank machines or develop customized fashion or beauty recommendations.
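The matching step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual system: it assumes each face has already been converted into a numeric template (real systems derive these from neural networks with hundreds of dimensions) and treats a cosine similarity above an arbitrary threshold as a "match."

```python
import math

def cosine(a, b):
    # cosine similarity between two face templates (lists of numbers)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_face(probe, database, threshold=0.6):
    # return indices of database templates similar enough to the
    # probe template to count as a candidate match
    return [i for i, t in enumerate(database) if cosine(probe, t) >= threshold]

# toy 2-dimensional templates; the threshold is illustrative only
db = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(match_face([1.0, 0.1], db))  # → [0, 2]
```

The threshold is where the error rates that concern critics come in: set it too low and innocent people are flagged as matches; set it too high and the system misses real ones.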
But its use in law enforcement has created the greatest outcry among rights activists because of the potential for errors and mismatches, and because the technology relies on vast databases that may have little or no oversight.
Adding to those concerns is China’s deployment of a vast surveillance system that can track criminals as well as dissidents and, according to some reports, has been used to monitor the movements of the Uighur Muslim minority.
“Most of today’s facial recognition applications are not ready for mass rollout, esp. where fundamental rights are at risk,” said a tweet from Tiffany Li, a researcher at the Yale Law School’s Information Society Project.
“The tech is not accurate, fair, or safe enough yet. And our laws and policies don’t know how to safeguard us from the many abuses that can occur.”
While facial recognition may help catch criminals faster than traditional methods, civil liberties activists say the technology is not perfect, with some studies finding high error rates for minorities.
“As things stand, it is inappropriate for government agencies to use face recognition technology because the dangers substantially outweigh the benefits,” said Evan Selinger, a professor at the Rochester Institute of Technology and fellow at the Future of Privacy Forum.