The Borneo Post

Human-centred artificial intelligence is next

- By Elizabeth Dwoskin

PALO ALTO, California: A Stanford University scientist coined the term artificial intelligence (AI). Others at the university created some of the most significant applications of it, such as the first autonomous vehicle.

But as Silicon Valley faces a reckoning over how technology is changing society, Stanford wants to be at the forefront of a different type of innovation, one that puts humans and ethics at the centre of the booming field of AI.

The university has just launched the Stanford Institute for Human-Centred Artificial Intelligence (HAI), a sprawling think tank that aims to become an interdisciplinary hub for policymakers, researchers and students who will go on to build the technologies of the future. They hope they can inculcate in that next generation a more worldly and humane set of values than those that have characterised it so far - and guide politicians to make more sophisticated decisions about the challenging social questions wrought by technology.

“I could not have envisioned that the discipline I was so interested in would, a decade and a half later, become one of the driving forces of the changes that humanity will undergo,” said Li Fei-Fei, an AI pioneer and former Google vice president who is one of two directors of the new Stanford institute. “That realisation became a tremendous sense of responsibility.”

The institute - backed by the field’s biggest leaders and industry players - is not the first such academic effort of its kind, but it is by far the most ambitious: It aims to raise more than US$1 billion. And its advisory council is a who’s who of Silicon Valley titans, including former Google executive chairman Eric Schmidt, LinkedIn co-founder Reid Hoffman, former Yahoo chief executive Marissa Mayer and co-founder Jerry Yang, and the prominent investor Jim Breyer.

“We recognise that decisions that are made early on in the development of a technology have huge ramifications,” said John Etchemendy, a philosopher and former Stanford provost, the second director of the AI institute. “We need to be thoughtful about what those might be, and to do that we can’t rely simply on technologists.”

The idea for the institute began with a conversation in 2016 between Li and Etchemendy that took place in Li’s driveway about a five-minute drive from campus.

Etchemendy had recently purchased the house next door. But the casual neighbourly chat quickly morphed into a weightier dialogue about the future of society and what had gone wrong in the exploding field of AI. Billions of dollars were being invested in start-ups dedicated to commercialising what had previously been niche academic technologies. Companies like Facebook, Apple and Google were hiring the world’s top artificial intelligence researchers - along with many of their recently minted graduates - to work in new divisions dedicated to robotics, self-driving cars and voice recognition for home devices.

“The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”

Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognising the faces of people of colour. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.

And there are political ramifications: Recommendation software designed to target ads to interested consumers was abused by bad actors, including Russian operatives, to amplify disinformation and false narratives in public debate.

“The question comes down to whether this revolution of AI - and of today’s machine learning techniques - will contribute to the progression of humanity,” said Hoffman, who chairs the institute’s advisory council. He called Stanford’s institute a potential “key lever” that would act as a “catalyst,” trusted adviser, and source of intelligence for industry, the government and the public.

Said James Manyika, an advisory council member and director of the McKinsey Global Institute: “The goal is to have resources that will enable Stanford to be competitive. If you gave researchers at Stanford access to compute, that will slow down the brain drain quite a bit toward these corporate labs.” — Washington Post.

John Etchemendy and Li Fei-Fei, co-directors at the Stanford Institute for Human-Centred Artificial Intelligence. Li is an AI pioneer and former Google vice president. — Photo by Peter DaSilva for The Washington Post
