Jamaica Gleaner

How much all-seeing AI surveillance is too much?


WHEN A CIA-backed venture capital fund took an interest in Rana el Kaliouby’s face-scanning technology for detecting emotions, the computer scientist and her colleagues did some soul-searching — and then turned down the money.

“We’re not interested in applications where you’re spying on people,” said el Kaliouby, the CEO and co-founder of the Boston start-up Affectiva.

The company has trained its artificial intelligence systems to recognise whether individuals are happy or sad, tired or angry, using a photographic repository of more than six million faces.

Recent advances in AI-powered computer vision have accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. But as these prying AI ‘eyes’ find new applications in store checkout lines, police body cameras and war zones, the tech companies developing them are struggling to balance business opportunities with difficult moral decisions that could turn off customers or their own workers.

El Kaliouby said it’s not hard to imagine using real-time face recognition to pick up on dishonesty — or, in the hands of an authoritarian regime, to monitor reaction to political speech in order to root out dissent. But the small firm, which spun off from an MIT research lab, has set limits on what it will do.

The company has shunned “any security, airport, even lie-detection stuff,” el Kaliouby said. Instead, Affectiva has partnered with automakers trying to help tired-looking drivers stay awake, and with consumer brands that want to know if people respond to a product with joy or disgust.

Such queasiness reflects new qualms about the capabilities and possible abuses of all-seeing, always-watching AI camera systems — even as authorities are growing more eager to use them.

In the immediate aftermath of the deadly shooting at a newspaper in Annapolis, Maryland, police said they turned to face recognition to identify the uncooperative suspect when fingerprint analysis ran into delays. They did so by tapping a state database that includes mug shots of past arrestees and, more controversially, everyone who registered for a Maryland driver’s licence.

A growing necessity

In June, Orlando International Airport announced plans to require face-identification scans of passengers on all arriving and departing international flights by the end of this year. Several other US airports have already been using such scans for some, but not all, departing international flights.

Chinese firms and municipalities are already using intelligent cameras to shame jaywalkers in real time and to surveil ethnic minorities, subjecting some to detention and political indoctrination. Closer to home, the overhead cameras and sensors in Amazon’s new cashierless store in Seattle aim to make shoplifting obsolete by tracking every item shoppers pick up and put back down.

Concerns over the technology can shake even the largest tech firms. Google, for instance, recently said it will exit a defence contract after employees protested the military application of the company’s AI technology. The work involved computer analysis of drone video footage from Iraq and other conflict zones.

Similar concerns about government contracts have stirred up internal discord at Amazon and Microsoft. Google has since published AI guidelines emphasising uses that are “socially beneficial” and that avoid “unfair bias”.

Amazon, however, has so far deflected growing pressure from employees and privacy advocates to halt Rekognition, a powerful face-recognition tool it sells to police departments and other government agencies.

Commercial and government interest in computer vision has exploded since breakthroughs earlier in this decade using a brain-like ‘neural network’ to recognise objects in images. Training computers to identify cats in YouTube videos was an early challenge in 2012. Now, Google has a smartphone app that can tell you which breed.

A major research meeting — the annual Conference on Computer Vision and Pattern Recognition, held in Salt Lake City in June — has transformed from a sleepy academic gathering of “nerdy people” to a gold-rush business expo attracting big companies and government agencies, said Michael Brown, a computer scientist at Toronto’s York University and a conference organiser.

Brown said researchers have been offered high-paying jobs on the spot. But few of the thousands of technical papers submitted to the meeting address broader public concerns about privacy, bias or other ethical dilemmas. “We’re probably not having as much discussion as we should,” he said.

Start-ups are forging their own paths. Brian Brackeen, the CEO of Miami-based facial-recognition software company Kairos, has set a blanket policy against selling the technology to law enforcement or for government surveillance, arguing in a recent essay that it “opens the door for gross misconduct by the morally corrupt”.

Boston-based start-up Neurala, by contrast, is building software for Motorola that will help police-worn body cameras find a person in a crowd based on what they’re wearing and what they look like. CEO Max Versace said that “AI is a mirror of the society,” so the company only chooses principled partners.

“We are not part of that totalitari­an, Orwellian scheme,” he said.

AP: In this April 23, 2018 photo, Ashley McManus, global marketing director of the Boston-based artificial intelligence firm Affectiva, demonstrates facial-recognition technology that is geared to help detect driver distraction at their offices in Boston.
