The Mercury News

Weighing AI’s promise, pitfalls

Experts discuss artificial intelligence technology at Stanford’s new center

By Ethan Baron, ebaron@bayareanewsgroup.com

Artificial intelligence will unleash changes humanity is not prepared for as the technology advances at an unprecedented pace, leading experts said at the official opening Monday of Stanford University’s new AI center.

At a daylong symposium accompanying the center’s launch, speakers from Microsoft co-founder and philanthropist Bill Gates to former Google AI chief Fei-Fei Li and a host of other leaders in the field laid out the promise of AI to transform life for the better or — if appropriate measures are not taken — for the worse.

The Stanford Institute for Human-Centered Artificial Intelligence, a cross-disciplinary research and teaching facility dedicated to the use of AI for global good, needs to educate government along with students, Gates said during his keynote speech.

“These AI technologies are completely done by universities and private companies, with the private companies being somewhat ahead,” Gates told the audience. “Hopefully things like your institute will bring in legislators and executive branch people, maybe even a few judges, to get up to speed on these things because the pace and the global nature of it and the fact that it's really outside of government hands does make it particularly challenging.”

Gates said AI can speed up scientific progress.

“It's a chance — whether it's governance, education, health — to accelerate the advances in all the sciences,” Gates said.

Artificial intelligence is, essentially, algorithm-based software that can “see,” “hear” and “think” in ways that often mimic human processes but faster and, theoretically, more accurately. However, rapid advances in AI have sparked growing concern about the ethics of allowing algorithms to make decisions, the possibility that the technology will replace more jobs than it creates, and the potentially harmful results algorithms can produce when their input includes human bias.

“This is a unique time in history — we are part of the first generation to see this technology migrate from the lab to the real world at such a scale and speed,” institute co-director Li told the audience. But, she said, “Intelligent machines have the potential to do harm.” Possible pitfalls include job displacement, “algorithmic bias” that results from data infected by human prejudices, and threats to privacy and security.

“This is a technology with the potential to change history for all of us. The question is, ‘Can we have the good without the bad?’” she said.

That question remains to be answered, said Susan Athey, a professor of the economics of technology at the university's business school. “If we knew all the answers, we wouldn't need to found the institute,” Athey said in an interview. “We're trying to grapple with big questions that no discipline has a monopoly over. What we want to do is make sure we get the greatest minds studying these questions.”

Those minds will come from Stanford schools and departments including computer science, medicine, law, economics, political science, biology, sociology and humanities. The interdisciplinary structure of the institute will allow researchers, students and instructors to explore the effects of AI on human life and the environment, symposium speakers said. “AI should be inspired by human intelligence, but its development should be guided by its impact,” said university president Marc Tessier-Lavigne.

Because the facility is located at a university, students and faculty can create collaborations that “allow people to learn about AI while actually improving the social good,” Athey said.

Areas ripe for AI-boosted development include medicine, climate science, emergency response, governance and education, speakers said. The technology promises to augment human intelligence, helping doctors diagnose illness or helping teachers educate children.

Still, AI in many ways falls far short of human intelligen­ce, experts said. While the technology can be applied generally across many fields, its usefulness is, so far, very narrow.

“It does only one thing,” said Jeff Dean, Google's head of AI. “How do we actually train systems that can do thousands of things, tens of thousands of things? How do we actually build much more general systems?”

Another leader in America's AI field, MIT professor Erik Brynjolfsson, highlighted the potential prosperity the technology might deliver — if humans can keep up with the pace of change it creates.

“The first-order effect is tremendous growth in the economic pie, better health, ability to solve so many of our societal problems. If we handle this right, the next 10 years, the next 20 years, should be, could be, the best couple of decades that humanity has ever seen,” Brynjolfsson said.

Because there's no economic law that says everyone must benefit, “We need to be proactive about thinking about how we make this shared prosperity,” Brynjolfsson said. “The challenge isn't so much massive job loss, it's more a matter of poor-quality jobs and uneven distribution.”

Currently, companies are focusing on using AI to perform certain tasks, and work based on such tasks is disappearing, Brynjolfsson said.

“The problem is that human skills, human institutions, business processes change much more slowly than technology does,” he said. “We're not keeping up. That's why this human-centered AI initiative is so important. How can we adapt our economics, our laws, our societies? Otherwise, we're going to be facing more of the unintended consequences.”
