Human-centred artificial intelligence is next
PALO ALTO, California: A Stanford University scientist coined the term artificial intelligence (AI). Others at the university created some of the most significant applications of it, such as the first autonomous vehicle.
But as Silicon Valley faces a reckoning over how technology is changing society, Stanford wants to be at the forefront of a different type of innovation, one that puts humans and ethics at the centre of the booming field of AI.
The university has just launched the Stanford Institute for Human-Centred Artificial Intelligence (HAI), a sprawling think tank that aims to become an interdisciplinary hub for policymakers, researchers and students who will go on to build the technologies of the future. Its founders hope to inculcate in that next generation a more worldly and humane set of values than those that have characterised the field so far - and to guide politicians toward more sophisticated decisions about the challenging social questions wrought by technology.
“I could not have envisioned that the discipline I was so interested in would, a decade and a half later, become one of the driving forces of the changes that humanity will undergo,” said Li Fei-Fei, an AI pioneer and former Google vice president who is one of two directors of the new Stanford institute. “That realisation became a tremendous sense of responsibility.”
The institute - backed by the field’s biggest leaders and industry players - is not the first such academic effort of its kind, but it is by far the most ambitious: It aims to raise more than US$1 billion. And its advisory council is a who’s who of Silicon Valley titans, including former Google executive chairman Eric Schmidt, LinkedIn co-founder Reid Hoffman, former Yahoo chief executive Marissa Mayer and co-founder Jerry Yang, and the prominent investor Jim Breyer.
“We recognise that decisions that are made early on in the development of a technology have huge ramifications,” said John Etchemendy, a philosopher and former Stanford provost, the second director of the AI institute. “We need to be thoughtful about what those might be, and to do that we can’t rely simply on technologists.”
The idea for the institute began with a conversation in 2016 between Li and Etchemendy that took place in Li’s driveway, about a five-minute drive from campus.
Etchemendy had recently purchased the house next door. But the casual neighbourly chat quickly morphed into a weightier dialogue about the future of society and what had gone wrong in the exploding field of AI. Billions of dollars were being invested in start-ups dedicated to commercialising what had previously been niche academic technologies. Companies like Facebook, Apple and Google were hiring the world’s top artificial intelligence researchers - along with many of their recently minted graduates - to work in new divisions dedicated to robotics, self-driving cars and voice recognition for home devices.
“The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”
Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognising the faces of people of colour. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.
And there are political ramifications: Recommendation software designed to target ads to interested consumers was abused by bad actors, including Russian operatives, to amplify disinformation and false narratives in public debate.
“The question comes down to whether this revolution of AI - and of today’s machine learning techniques - will contribute to the progression of humanity,” said Hoffman, who chairs the institute’s advisory council. He called Stanford’s institute a potential “key lever” that would act as a “catalyst,” trusted adviser, and source of intelligence for industry, the government and the public.
Said James Manyika, an advisory council member and director of the McKinsey Global Institute: “The goal is to have resources that will enable Stanford to be competitive. If you gave researchers at Stanford access to compute, that will slow down the brain drain quite a bit toward these corporate labs.” — Washington Post.