How much prying AI surveillance is too much?
BOSTON — When a CIA-backed venture capital fund took an interest in Rana el Kaliouby’s face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.
“We’re not interested in applications where you’re spying on people,” said el Kaliouby, CEO and co-founder of the Boston startup Affectiva. The company has trained its artificial intelligence systems to recognize if individuals are happy or sad, tired or angry, using a photographic repository of more than 6 million faces.
Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI “eyes” find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.
El Kaliouby said it’s not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech to root out dissent. But the small firm, which spun off from an MIT research lab, has set limits on what it will do.
The company has shunned “any security, airport, even lie detection stuff,” el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know if people respond to a product with joy or disgust.
Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities are growing more eager to use them.
In the immediate aftermath of Thursday’s deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to facial recognition to identify the uncooperative suspect when fingerprint analysis ran into delays. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver’s license.
In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other U.S. airports have already been using such scans for some departing international flights.
Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination.
Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defense contract after employees protested the military application of the company’s AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.
Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasizing uses that are “socially beneficial” and that avoid “unfair bias.”
Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt sales of Rekognition, a powerful face-recognition tool it offers to police departments and other government agencies.
Saying no to some work, of course, usually means someone else will do it. The drone-footage project involving Google, dubbed Project Maven, aimed to speed the job of looking for “patterns of life, things that are suspicious, indications of potential attacks,” said Robert Work, a former top Pentagon official who launched the project in 2017.
While it hurts to lose Google because the company is “very, very good at it,” Work said, other companies will continue those efforts.