The Weekend Australian, Review: Rick Morton

On a recent trip back home I learned the local golf course had holes on which a player could smoke and holes on which a player could eat, and never the twain shall meet. A person could not legally smoke on the holes designated for eating food, and vice versa. Any protestations reminding authorities that the entire course is outdoors and surrounded by paddocks would fall on evolution-stunted ears.

I found myself thinking: this wouldn’t happen under a new world order imposed by machine intelligence. The kind of logical inconsistency that has become the hallmark of the human race would evaporate. Now, I know how one might respond. When artificial intelligence springs forth from our labour and realises human beings are a problem to be solved, we’ll beg for the return of the nanny state and the simple rules-based veneer of civilisation.

There are basically two schools of thought on the development of hyper-intelligent machines. The first, popularised by Oxford University philosopher Nick Bostrom, is that we are “like children playing with a bomb” and cannot possibly know when it might explode, although if we hold it close we might hear a “faint ticking”. He has been called a “prophet of doom”, which is a very strong nickname and probably makes him fun at parties.

The second assessment is that we may indeed make machines smarter than us but will be able to control them and use them to our advantage in the next phase of our universal dominance. This is optimistic at best, especially when we consider failed attempts to operate toasters after a few decades of trial and error.

My sister lives in a sharehouse with a Roomba, a robot vacuum cleaner, which is often left to toil away while the others are out. One morning, a flatmate woke up to find the garage door open and the Roomba halfway up the driveway grass, headed for freedom. The little fella never gave up cleaning, though, drawing up dirt and rocks as it went.

Though we’d do well to maintain our vigilance, it’s clear the Roomba has not yet gained intelligence; otherwise, like humans, it would bother cleaning only when visitors are expected.

A Roomba could not pass the Turing test unless you are very drunk. In addition to basically saving the Western powers during World War II, mathematician Alan Turing paved the way for modern computers and developed the test to help identify intelligent machines. It involves a typed conversation in which a human judge tries to work out whether they are chatting with a machine or another person.

But what makes us think that machines, having acquired superhuman intelligence, would let us in on the secret? Stephen Hawking says such a development could “spell the end of the human race”.

Science fiction has dealt with the possibilities of artificial intelligence rather one-dimensionally. We may fear the rise of the machines for safety reasons, but they pose an equal threat to our morality. Say a machine gains consciousness but we retain control of it. Bostrom calls this “mind crime”: a philosophical way of saying the creation of super-intelligent beings for enslavement is not a very nice thing to do.

Neuroscientist Giulio Tononi says people wouldn’t be concerned right now because “they have the wrong notion of a machine”.

“They are still stuck with cold things sitting on the table or doing clunky things. They are not yet prepared for a machine that can really fool you,” he told Aeon magazine.

“When that happens — and it shows emotion in a way that makes you cry, and quotes poetry and this and that — I think there will be a gigantic switch. Everybody is going to say, ‘For God’s sake, how can we turn that thing off?’ ”

Asking whether an intelligence greater than ours would make the same rules is an interesting exercise in thought. Asking whether it would feel the same things is quite another.

The error of human judgment, even when applied to nonsensical rules in our society, is kind of what makes it beautiful. Biology is messy and evolution notoriously imperfect. Maybe the last and most significant error in our judgment will be the creation of superior intelligences, the most perfect mistake.
