Windsor Star

All bets are off if AI becomes smarter than people, develops ability to design machines

Task of imposing ethics and restraints on tech is greater now, writes Diane Francis.


Technology is bestowing wonderful opportunities and benefits on the world, but the acceleration of development, and the lack of global regulatory control, represent the biggest threat going forward.

Cool toys, fancy devices and health-care cures are positive developments.

But less benign will be the development, without guardrails, of artificial intelligence that matches human capability by 2029.

Worse yet, this will be followed by the spectre of what’s known as General AI — machines capable of designing machines.

Another worrisome field is synthetic biology, genetic engineering and the propagation of androids, or AIs on two legs with personalities.

Mankind has faced similar technological challenges, notably nuclear weapons, but the famous physicist J. Robert Oppenheimer rose to the challenge.

He ran the Manhattan Project to develop the atomic bomb, realized its danger, then spent decades lobbying leaders to create the Nuclear Non-Proliferation Treaty, the cornerstone of nuclear control, which took effect in 1970.

Oppenheimer is the only reason humanity didn't blow itself to bits, but today there is no scientist of his stature devoting a life to ensuring governments bridle the transformative technologies now under development.

And the threat is greater. Bombs, after all, are controlled by human beings, not the other way around.

But if AI becomes smarter than humans, then all bets are off.

The task of imposing ethics and restraints on science, technology and engineering is greater now.

Nuclear capability requires massive amounts of scarce materials, capital and infrastructure, all of which can be detected or impeded.

But when it comes to exponential tech, simply organizing governments or big corporations won't do the trick, because the internet has distributed knowledge and research capability across the globe.

This means the next pandemic or hazardous algorithm or immoral human biological experimentation can be conducted in a proverbial "garage" or in a rogue state.

The late, legendary physicist Stephen Hawking warned in 2017: "Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know.

So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it."

Tesla founder Elon Musk and others have been vocal about this risk, but international action is needed.

To date, these fears and ethical constraints have been addressed only in petitions and open letters signed by prominent scientists, but these have neither captured global attention nor provoked a political movement.

In 1975, the Asilomar Conference on Recombinant DNA led to guidelines about biosafety that included a halt to experiments that combined DNA from different organisms.

Then, in 2015, an open letter concerning the convergence of AI with nuclear weapons was signed by more than 1,000 luminaries, including Apple co-founder Steve Wozniak, Hawking and Musk.

They called for a ban on AI warfare and autonomous weapons, and the letter eventually led to a United Nations initiative.

But four years later, the UN Secretary General was still urging all member nations to agree to the ban.

Only 125 had signed.

Without robust ethical and legal frameworks, there will be proliferation and lapses.

In November 2018, for instance, a rogue Chinese geneticist, He Jiankui, broke long-standing biotech guidelines among scientists and altered the embryonic genes of twin girls to protect them from HIV.

He was fired from his research job in China, because he had intentionally dodged oversight committees and used potentially unsafe techniques.

Since then, he has disappeared from public view.

There's little question that, as U.S. entrepreneur and engineer Peter Diamandis has said, "we live in extraordinary times."

There is also much reason for optimism. But for pessimism, too. Financial Post

GETTY IMAGES FILES Robust ethical and legal frameworks are needed to prevent the next pandemic or hazardous algorithm, Diane Francis warns.
