New Straits Times

ACKNOWLEDGING THE RISKS OF NEW AI TECHNOLOGY

The technology has its downsides, and scientists should explain how it could affect society in negative ways as well as positive, writes

- CADE METZ

IN July, two of the world’s top artificial intelligence labs unveiled a system that could read lips. Designed by researchers from Google Brain and DeepMind — the two big-name labs owned by Google’s parent company, Alphabet — the automated setup could at times outperform professional lip readers. When reading lips in videos gathered by the researchers, it identified the wrong word about 40 per cent of the time, while the professionals missed about 86 per cent.

In a paper that explained the technology, the researchers described it as a way of helping people with speech impairments. In theory, they said, it could allow people to communicate just by moving their lips.

But the researchers did not discuss the other possibility: better surveillance.

A lip-reading system is what policymakers call a “dual-use technology”, and it reflects many new technologies emerging from top AI labs. Systems that automatically generate video could improve moviemaking — or feed the creation of fake news. A self-flying drone could capture video at a football game — or kill on the battlefield.

Now, a group of 46 academics and other researchers, called the Future of Computing Academy, is urging the research community to rethink the way it shares new technology. When publishing new research, they say, scientists should explain how it could affect society in negative ways as well as positive.

“The computer industry can become like the oil and tobacco industries, where we are just building the next thing, doing what our bosses tell us to do, not thinking about the implications,” said Brent Hecht, a Northwestern University professor who leads the group. “Or we can be the generation that starts to think more broadly.”

When publishing new work, researchers rarely discuss the negative effects. This is partly because they want to put their work in a positive light — and partly because they are more concerned with building the technology than with using it.

As many of the leading AI researchers move into corporate labs like Google Brain and DeepMind, lured by large salaries and stock options, they must also obey the demands of their employers. Public companies, particularly consumer giants like Google, rarely discuss the potential downsides of their work.

Hecht and his colleagues are calling on peer-reviewed journals to reject papers that do not explore those downsides. Even during this rare moment of self-reflection in the tech industry, the proposal may be a hard sell. Many researchers, worried that reviewers will reject papers because of the downsides, balk at the idea.

Still, a growing number of researchers are trying to reveal the potential dangers of AI. In February, a group of prominent researchers and policymakers from the United States and Britain published a paper dedicated to the malicious uses of AI. Others are building technologies as a way of showing how AI can go wrong.

And, with more dangerous technologies, the AI community may have to reconsider its commitment to open research. Some things, the argument goes, are best kept behind closed doors.

Matt Groh, a researcher at the MIT Media Lab, recently built a system called Deep Angel, which can remove people and objects from photos. A computer science experiment that doubles as a philosophical question, it is meant to spark conversation around the role of AI in the age of fake news. “We are well aware of how impactful fake news can be,” Groh said. “Now, the question is: How do we deal with that?”

If machines can generate believable photos and videos, we may have to change the way we view what winds up on the Internet.

Can Google’s lip-reading system help with surveillance? Maybe not today. While “training” their system, the researchers used videos that captured faces head-on and close up. Images from overhead street cameras “are in no way sufficient for lip-reading,” said Joon Son Chung, a researcher at the University of Oxford.

In a statement, a Google spokesman said much the same, before pointing out that the company’s “AI principles” stated that it would not design or share technology that could be used for surveillance “violating internationally accepted norms”.

But cameras are getting better and smaller and cheaper, and researchers are constantly refining the AI techniques that drive these lip-reading systems. Google’s paper is just another in a long line of recent advances. Chinese researchers just unveiled a project that aims to use similar techniques to read lips “in the wild”, accommodating varying lighting conditions and image quality.

Stavros Petridis, a research fellow at Imperial College London, acknowledged that this kind of technology could eventually be used for surveillance, even with smartphone cameras. “It is inevitable,” he said. “Today, no matter what you build, there are good applications and bad applications.”

NYT PIC: A group of 46 academics and researchers, called the Future of Computing Academy, is urging the research community to rethink the way it shares new technology.
