In a letter to Congress this week, IBM's CEO announced that the company will sunset its general-purpose facial recognition and analysis software products. On IBM's heels, Amazon announced a one-year moratorium on police use of Rekognition, and Microsoft said it will not sell facial recognition technology to US police departments until a national law is in place. These values-based decisions come amid grave concerns about how facial recognition is being used. In fact, many US municipalities, from San Francisco, California, to Brookline, Massachusetts, have already banned law enforcement use of facial recognition.

Five Questions To Consider In The Facial Recognition Debate

  • Will regulators manage to define laws for such a complicated technology? These moves by global tech giants will force much-needed dialogue on the ethics of facial recognition (and, by extension, other AI-based technology) in the hands of domestic law enforcement agencies. The privacy and civil rights dangers are clear and present, and the pressure is on for lawmakers to properly weigh the impact and lawful use of this important, yet potentially dangerous, technology. But legislation moves slowly, and enforcement even more slowly. Technologists will need to support lawmakers and regulators here.
  • Should we ban government use of this technology altogether? Facial recognition, in all its flavors (face detection, body recognition, emotion recognition, and biometric verification), has many governmental applications beyond law enforcement: boarding airplanes will be faster, keeping schools safe may be easier, and obtaining government services may be more private. So abandoning the technology altogether, or writing rules only for law enforcement agencies, is not the answer. IBM has been championing less discriminatory systems with initiatives like the "Diversity in Faces" data set and Watson OpenScale, one of the first enterprise-grade solutions for bias detection. Other technology companies, such as Microsoft and Google, have launched similar initiatives and published principles for the responsible development and use of AI.
  • What about enterprise use of facial recognition tech? Firms that want to use facial recognition for security, marketing, or customer service must develop a data and AI ethics program to ensure they aren't using the technology illegally or unethically. That includes disciplined testing of AI for accuracy and bias (a minimal testing sketch follows this list), reducing bias in facial recognition models, creating an ethics oversight committee to vet uses of the tech, and deploying it with privacy-by-design principles. This isn't a one-and-done action, either: it must be an ongoing process, aligned with the firm's diversity and inclusion programs, to mitigate discriminatory or unfair practices.
  • Are individuals going to disrupt facial recognition tech? It's already happening, in what's been called the "anti-surveillance economy": cosmetics that thwart facial recognition software, shirts that confuse Facebook's auto-tagging, and prosthetics that distort faces to fool security cameras. Escaping ubiquitous surveillance isn't easy, but individuals who don't trust governments and enterprises aren't totally powerless, either.
  • How will this affect the availability of facial recognition tools in the market? For now, not much is changing if you're not a government entity; it's business as usual for most vendors, whether software firms or analytics firms. But as you make procurement decisions, make sure the vendor you choose is actively engaged in regulatory efforts and standards groups. Within the enterprise, focus on use cases where vision-based AI can create efficiencies and drive revenue opportunities. A vibrant ecosystem of large and small tech vendors offers specialized ways to apply computer vision across the enterprise beyond facial recognition.
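
To make the "disciplined testing" point above concrete, here is a minimal sketch, in Python, of the kind of per-group error-rate audit an ethics program might run against a face verification system before deployment. Everything in it is assumed for illustration: the demographic groups, similarity scores, and decision threshold are hypothetical, and in practice the scores would come from your vendor's verification API evaluated on a labeled, demographically annotated test set.

```python
# Minimal sketch of a per-group error-rate audit for face verification.
# All data below is synthetic and purely illustrative.

from collections import defaultdict

# Each record: (demographic_group, similarity_score, is_same_person).
# In a real audit, scores come from the vendor's verification API run
# over labeled genuine (same person) and impostor (different person) pairs.
results = [
    ("group_a", 0.91, True), ("group_a", 0.34, False),
    ("group_a", 0.88, True), ("group_a", 0.62, False),
    ("group_b", 0.79, True), ("group_b", 0.71, False),
    ("group_b", 0.83, True), ("group_b", 0.58, False),
]

THRESHOLD = 0.70  # hypothetical vendor-recommended decision threshold

def error_rates(records, threshold):
    """Compute per-group false match rate (FMR) and false non-match rate (FNMR)."""
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for group, score, same in records:
        c = counts[group]
        if same:
            c["gen"] += 1
            if score < threshold:
                c["fnm"] += 1  # genuine pair wrongly rejected
        else:
            c["imp"] += 1
            if score >= threshold:
                c["fm"] += 1   # impostor pair wrongly accepted
    return {
        g: {
            "FMR": c["fm"] / c["imp"] if c["imp"] else None,
            "FNMR": c["fnm"] / c["gen"] if c["gen"] else None,
        }
        for g, c in counts.items()
    }

for group, rates in error_rates(results, THRESHOLD).items():
    print(group, rates)
```

A material gap in false match or false non-match rates between groups is exactly the kind of disparity an ethics oversight committee should see before the system ships, and grounds to retune thresholds, retrain the model, or reconsider the vendor.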