Weighing AI’s promise, pitfalls
Experts discuss artificial intelligence technology at Stanford’s new center
Artificial intelligence will unleash changes humanity is not prepared for as the technology advances at an unprecedented pace, leading experts said at the official opening Monday of Stanford University’s new AI center.
At a daylong symposium accompanying the center’s launch, speakers ranging from Microsoft co-founder and philanthropist Bill Gates to former Google Cloud AI chief Fei-Fei Li, along with a host of other leaders in the field, laid out the promise of AI to transform life for the better or — if appropriate measures are not taken — for the worse.
The Stanford Institute for Human-Centered Artificial Intelligence, a cross-disciplinary research and teaching facility dedicated to the use of AI for global good, needs to educate government along with students, Gates said during his keynote speech.
“These AI technologies are completely done by universities and private companies, with the private companies being somewhat ahead,” Gates told the audience. “Hopefully things like your institute will bring in legislators and executive branch people, maybe even a few judges, to get up to speed on these things, because the pace and the global nature of it and the fact that it’s really outside of government hands does make it particularly challenging.”
Gates said AI can speed up scientific progress.
“It's a chance — whether it's governance, education, health — to accelerate the advances in all the sciences,” Gates said.
Artificial intelligence is, essentially, algorithm-based software that can “see,” “hear” and “think” in ways that often mimic human processes but faster and, theoretically, more accurately. However, rapid advances in AI have sparked growing concern about the ethics of allowing algorithms to make decisions, the possibility that the technology will replace more jobs than it creates, and the potentially harmful results algorithms can produce when their input includes human bias.
“This is a unique time in history — we are part of the first generation to see this technology migrate from the lab to the real world at such a scale and speed,” institute co-director Li told the audience. But, she said, “Intelligent machines have the potential to do harm.” Possible pitfalls include job displacement, “algorithmic bias” that results from data infected by human prejudices, and threats to privacy and security.
“This is a technology with the potential to change history for all of us. The question is, ‘Can we have the good without the bad?’” she said.
That question remains to be answered, said Susan Athey, a professor of the economics of technology at the university’s business school. “If we knew all the answers, we wouldn’t need to found the institute,” Athey said in an interview. “We’re trying to grapple with big questions that no discipline has a monopoly over. What we want to do is make sure we get the greatest minds studying these questions.”
Those minds will come from Stanford schools and departments including computer science, medicine, law, economics, political science, biology, sociology and humanities. The interdisciplinary structure of the institute will allow researchers, students and instructors to explore the effects of AI on human life and the environment, symposium speakers said. “AI should be inspired by human intelligence, but its development should be guided by its impact,” said university president Marc Tessier-Lavigne.
Because the facility is located at a university, students and faculty can create collaborations that “allow people to learn about AI while actually improving the social good,” Athey said.
Areas ripe for AI-boosted development include medicine, climate science, emergency response, governance and education, speakers said. The technology promises to augment human intelligence, helping doctors diagnose illness or helping teachers educate children.
Still, AI in many ways falls far short of human intelligence, experts said. While the technology has applications across many fields, any given system is, so far, useful only for a narrow task.
“It does only one thing,” said Jeff Dean, Google's head of AI. “How do we actually train systems that can do thousands of things, tens of thousands of things? How do we actually build much more general systems?”
Another leader in America's AI field, MIT professor Erik Brynjolfsson, highlighted the potential prosperity the technology might deliver — if humans can keep up with the pace of change it creates.
“The first-order effect is tremendous growth in the economic pie, better health, ability to solve so many of our societal problems. If we handle this right, the next 10 years, the next 20 years, should be, could be, the best couple of decades that humanity has ever seen,” Brynjolfsson said.
Because there's no economic law that says everyone must benefit, “We need to be proactive about thinking about how we make this shared prosperity,” Brynjolfsson said. “The challenge isn't so much massive job loss, it's more a matter of poor-quality jobs and uneven distribution.”
Currently, companies are focusing on using AI to perform certain tasks, and work based on such tasks is disappearing, Brynjolfsson said.
“The problem is that human skills, human institutions, business processes change much more slowly than technology does,” he said. “We’re not keeping up. That’s why this human-centered AI initiative is so important. How can we adapt our economics, our laws, our societies? Otherwise, we’re going to be facing more of the unintended consequences.”