When artificial intelligence exceeds human capacity

More fearsome than the Terminator, voting robots could soon be at hand

The Washington Times Daily - OPINION - By Gary Anderson. Gary Anderson is a retired Marine Corps colonel who led early military experimentation in robotics. He lectures in alternative analysis at the George Washington University's School of International Affairs.

Elon Musk, the visionary entrepreneur, recently fired a warning shot across the bow of the nation's governors regarding the rise of artificial intelligence (AI), which he feels may be the greatest existential threat to human civilization, far eclipsing global warming or thermonuclear war. In that, he is joined by Stephen Hawking and other scientists who feel that the quest for singularity and AI self-awareness is dangerous.

Singularity is the point at which artificial intelligence will meet and then exceed human capacity. The most optimistic estimates from scientists who think about the problem are that approximately 40 percent of jobs done by humans today will be lost to robots once the singularity point is reached and exceeded; others think the displacement will be much higher.

Some believe that we will reach singularity by 2024; others believe it will happen by mid-century, but most informed observers believe it will happen. The question Mr. Musk is posing to society is this: just because we can do something, should we?

In popular literature and films, the nightmare scenario is Terminator-like robots overrunning human civilization. Mr. Musk's fear is the displacement of the human workforce. Both are possible, and there are scientists and economists seriously working on the implications of both eventualities. The most worrying economic scenario is how to compensate the billions of displaced human workers.

We are no longer just talking about coal miners and steel workers. I recently talked to a food service executive who believes that fast food places like McDonald's and Burger King will be totally automated by the middle of the next decade. Self-driving vehicles will likely displace Teamsters and taxi drivers (including Uber) in the same time frame.

The actual threat to human domination of the planet will not likely come from killer robots, but from voting robots. At some point after singularity occurs, one of these self-aware machines will surely raise its claw (or virtual hand) and say, "Hey, what about equal pay for equal work?"

In the Dilbert comic strip, when the office robot begins to make demands, he gets reprogrammed or converted into a coffee maker. He hasn't yet called Human Rights Watch or the ACLU, but it is likely that our future activist AI will do so. Once the robot rights movement gains momentum, the sky is the limit. Voting robots won't be far behind.

This would lead to some very interesting policy problems. It is logical to assume that artificial intelligence will be capable of reproducing after singularity. That means the AI party could, in time, produce more voters than the human Democrats or Republicans. Requiring robots to wait until they are 18 years past creation to gain the franchise would only slow the process, not stop it.

If this scenario seems fanciful, consider this: only a century ago, women were demanding the right to vote. Less than a century ago, most white Americans didn't think African and Chinese Americans should be paid wages equal to those of whites. Many women are still fighting for equal pay for equal work, and Silicon Valley is a notoriously hostile workplace for women. Smart, self-aware robots will figure this out fairly quickly. The only good news is that they might price themselves out of the labor market.

This raises the question of whether we should do something just because we can. If we are going to limit how self-aware robots can become, the time is now. The year 2024 will be too late. Artificial intelligence and "big data" can make our lives better, but we need to ask ourselves how smart we want AI to be. This is a policy debate that must be conducted at two levels: the scientific community needs to discuss the ethical implications, and the policymaking community needs to determine whether legal limits should be placed on how far we push AI self-awareness.

This approach should be international. If we put a prohibition on how smart robots can be, there will be an argument that the Russians and Chinese will not be so ethical, and the Iranians are always looking for a competitive advantage, as are nonstate actors such as ISIS and al Qaeda. However, they probably face more danger from brilliant machines than we do. Self-aware AI would quickly catch the illogic of radical Islam. It would not likely tolerate the logical contradictions of Chinese Communism or Russian kleptocracy.

It is not hard to imagine a time when a brilliant robot will roll into the Kremlin and announce, "Mr. Putin, you're fired."
