Bangkok Post

AI will ‘mostly’ make us better off by 2030, but fears remain

By Edward C. Baig, USA Today (TNS)

The year is 2030, and artificial intelligence has changed practically everything. Is it a change for the better, or has AI threatened what it means to be human, to be productive and to exercise free will?

You’ve heard the dire predictions from some of the brightest minds about AI’s impact. Tesla and SpaceX chief Elon Musk worries that AI is far more dangerous than nuclear weapons. The late scientist Stephen Hawking warned AI could serve as the “worst event in the history of our civilisation” unless humanity is prepared for its possible risks.

But many experts, even those mindful of such risks, have a more positive outlook, especially in healthcare and possibly in education.

That’s one of the takeaways from a new AI study released by the Pew Research Center and Elon University’s Imagining the Internet Center. Pew canvassed the opinions of 979 experts over the summer, a group that included prominent technologists, developers, innovators and business and policy leaders.

Nearly two-thirds predicted most of us will be mostly better off. But a third think otherwise, and a majority of the experts expressed at least some concern over the long-term impact of AI on the “essential elements of being human”.

Among those concerns were data abuse, loss of jobs, loss of control as decision-making in digital systems is ceded to “black box” tools that take data in and spit answers out, an erosion in our ability to think for ourselves, and yes, the mayhem brought on by autonomous weapons, cybercrime, lies and propaganda.

“There’s a quite consistent message throughout answers that some good things would emerge and there were some problems to worry about,” says Lee Rainie, director of internet and technology research at Pew Research Center. Janna Anderson, director of Elon University’s Imagining the Internet Center, added that some respondents thought we’d be OK through 2030, “but I’m not sure after that”.

Andrew McLaughlin at Yale, who had been deputy chief technology officer in former US president Barack Obama’s administration and a global public policy lead at Google, said that “my sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognisable. AI will drive a vast range of efficiency optimisations but also enable hidden discrimination and arbitrary penalisation of individuals in areas like insurance, job seeking and performance assessment”.

Technology blogger Wendy Grossman writes: “I believe human-machine AI collaboration will be successful in many areas, but that we will be seeing, like we are now over Facebook and other social media, serious questions about ownership and who benefits. It seems likely that the limits of what machines can do will be somewhat clearer than they are now, when we’re awash in hype. We will know by then, for example, how successful self-driving cars are going to be, and the problems inherent in handing off control from humans to machines in a variety of areas will also have become clearer.”

Leonard Kleinrock, Internet Hall of Fame member, replied: “As AI and machine learning improve, we will see highly customised interactions between humans and their healthcare needs. This mass customisation will enable each human to have her medical history, DNA profile, drug allergies, genetic make-up, etc, always available to any caregiver/medical professional.”

Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, says: “By 2030, it is likely that AIs will have achieved a type of sentience, even if it is not human-like. They will also be able to exercise varying degrees of control over most human communications, financial transactions, transportation systems, power grids and weapons systems, and we will have no way of dislodging them. How they decide to deal with humanity — to help us, ignore us or destroy us — will be entirely up to them, and there is no way currently to predict which avenue they will choose. Because a few paranoid humans will almost certainly try to destroy the new sentient AIs, there is at least a reasonable possibility that they will swat us like the flies we are — the possibility that Stephen Hawking, Elon Musk and others have warned about.”

A social scientist who remained anonymous says: “My chief fear is face-recognition used for social control. Even Microsoft has begged for government regulation! Surveillance of all kinds is the future for AI. It is not benign if not controlled.”

Yet another anonymous respondent offered a different concern: “Knowing humanity, I assume particularly wealthy white males will be better off, while the rest of humanity will suffer from it.”

Ben Shneiderman, founder of the Human Computer Interaction Center at the University of Maryland, offers a very bullish take: “Automation is largely a positive force, which increases productivity, lowers costs and raises living standards. Automation expands the demand for services, thereby raising employment, which is what has happened at Amazon and FedEx. My position is contrary to those who believe that robots and artificial intelligence will lead to widespread unemployment.”

And Wendy Hall, a professor of computer science at the University of Southampton and executive director of the Web Science Institute, says: “It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity. We may not have all the answers by 2030, but we need to be on the right track by then.”

Photo caption: Intel held an Artificial Intelligence Day in the Indian city of Bangalore last year.
