Facebook's decision to stop using facial-recognition technology on its core social media platform is fueling renewed conversation about what role the U.S. government should take in regulating the technology's use.
While many applauded Facebook's acknowledgement of public concerns regarding the technology, they noted that its use on the Facebook platform was already limited. Meta Platforms Inc., Facebook's new parent company following a recent reorganization, also cited a lack of clear regulatory guidance on how facial-recognition technology should be used as a factor in the decision. Facebook is under intense scrutiny from lawmakers and regulators in the U.S. and Europe following recent leaks of internal documents on its business practices and research.
In a Nov. 2 blog post, Meta Platforms Vice President of Artificial Intelligence Jerome Pesenti said the platform change means that in the coming weeks, Facebook's systems will stop making recommendations about tagging users in photos and videos shared there. The executive suggested that Meta may resume working with the technology once there is greater consensus about how to best use it.
"We still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation," Pesenti wrote. "We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts."
Facebook's facial-recognition technology "mostly helped people tag their friends in photos," wrote Daniel Castro, vice president of the Information Technology and Innovation Foundation in Washington, D.C., in an email to S&P Global Market Intelligence. Castro said that the "disproportionate outcry" against the technology has been driven by misleading rhetoric.
Patrick Hall, a professor of data ethics at The George Washington University and principal scientist at Washington, D.C.-based bnh.ai, a law firm specializing in AI and data analytics, said the decision by Facebook was a net positive, if perhaps performative given Facebook's regulatory pressures.
"The only real surprise was that Facebook decided to do something to diminish consumer harm," Hall said in an interview. Other big tech companies, including Amazon.com Inc. and Microsoft Corp., announced limitations on the use of their facial-recognition technologies last year following public concerns about how the biometric technology could be used by law enforcement.
Hall said Meta could do more good by making greater efforts to clamp down on viral misinformation on its platforms.
In a prepared statement emailed to Market Intelligence, Maureen Mahoney, senior policy analyst at nonprofit consumer organization Consumer Reports, called on Congress to enact comprehensive federal privacy legislation and adequately fund the U.S. Federal Trade Commission to issue regulations on online privacy.
The FTC in January issued guidance on best practices for the use of AI, including facial-recognition technology. In May, app developer Everalbum Inc. settled a complaint with the commission that alleged it deceived consumers on its Ever app about Everalbum's use of facial-recognition technology and retention of users' photos and videos, including from deactivated accounts.
The FTC did not respond to a request for comment about Meta's decision by the time of publication.
A group of Democrats in the U.S. House and Senate last year introduced a bill that would ban the use of biometric surveillance by federal agencies unless explicitly authorized by an act of Congress. While certain states such as Virginia and California have enacted their own laws governing the use of facial recognition, Congress has yet to reach consensus on a federal law. On Nov. 3, House Republicans released a draft privacy bill titled the "Control Our Data Act," though facial recognition is not explicitly mentioned in the legislation.
In Europe, the General Data Protection Regulation prohibits the processing of biometric data to identify individuals, with exceptions for specific use cases, such as those of substantial public interest.