ROBOTIC REBELLION

IS ARTIFICIAL INTELLIGENCE A DOUBLE-EDGED SWORD?

American Survival Guide | By Michael D’Angona

Technology has come a long way in a very short time. Voice-activated navigation, home lighting, door locks and cameras controlled by computers, and motor vehicles driving themselves down busy streets: just a decade ago, this would have seemed impossible. Now, it’s a reality.

But if these amazing changes in technology happened in such a short period of time, what are we, as humans on this planet, going to encounter in the very near future?

The answer is both miraculous and extremely scary. Advances in the computer and robotic sciences have, indeed, put us in a position to obtain information and enjoy physical conveniences around the clock. Technology has leapt so far that some of our basic needs are no longer consciously controlled by us but are handled via artificial intelligence ... and that’s how our ultimate fantasy world could turn into our apocalyptic nightmare.

Artificial intelligence, or “AI,” as it’s commonly known, continues to advance at a dizzying pace. Computers that can seemingly think on their own (or, more frightening, think for us) might be a near-future, worldwide problem. At present, “it” could be just waiting in the shadows, ready to strike.

But is this a genuine concern, or is it just a conspiracy theory more fitting for a science fiction film?

IS THE SEED ALREADY PLANTED?

One popular idea is that the beginnings of an AI takeover are already underway in today’s developed societies, although most people don’t even realize it. That is because convenience and the overwhelming desire for new and improved technological advancements trump a person’s logical thinking about the long-term repercussions of using this new tech. Simply put: If it makes a person’s life easier, it is welcomed into their life, nearly always without question.

This, in itself, is quite dangerous. Blindly incorporating cell phones, automated systems throughout the household and other computer-driven systems designed to make life easier guarantees that the tidal wave of increasingly capable advances won’t slow anytime soon.

This isn’t to say that your automated garage door opener will trap you within your home or that the GPS system in your car will drive you off a cliff. No; it’s not that dramatic, nor is it intended as such. But with the general public’s allowance of, or, more specifically, its encouragement of, more-advanced AI, computerized systems will become more capable of basic “thinking,” and that capability will increase exponentially. That’s when the first indications of machines “thinking” for themselves could occur ... and, after that, the first physical conflicts with humans.

SOCIETY CAN’T HAVE IT BOTH WAYS

A measure that can be taken to limit an AI is to keep it contained within certain parameters. These parameters would allow it to progress only to a certain point and no further. Nevertheless, this idea contradicts the entire reason AI was pursued in the first place: to aid in the performance of a human’s everyday tasks and to evolve proportionately.

Limiting a supercomputer’s abilities might prevent a future takeover, but it would also halt the progress of AI technology. In reality, keeping it contained won’t work anyway, because of man’s constant “I can do better” or “we can go further” attitude. This is what drives the human race to explore, improve technology and strive to do things never before achieved. Arrogance or confidence? A little of both, but only because we are intent on the human race’s continued progress. Simply stated, computers allow humans to do more than we can do on our own.

Another reason it could be very difficult to contain an AI system is that the AI itself might “find” a way around the constraints we try to impose on it. It could figure out how to manipulate the system or its human users, or even discover an electronic path out of its “containment field.” Humanity might not be aware of just how far the supercomputer has progressed, and we might essentially lower our collective guard ... and that could prove disastrous.

PRECAUTIONS TO PROTECT

The very idea that computer technology could evolve and ultimately control humans (similar to what has been depicted in science fiction books and movies) is refuted by some scientists within the tech field.

Their argument is that safeguards would be put into place within the programming to avoid such scenarios. On the surface, this might seem to be a simple and effective preventative measure, but “wild cards” need to be taken into consideration.

Terrorist intervention is one. If a hostile organization either hacks the programmed preventative measures or creates its own AI without such restraints (perhaps unwittingly, even to the terrorists), a takeover could occur. Another possibility is that an accident, either man-made or deep within the programming, could trigger a snowball effect; the outcome could be a computerized, automated system with self-preservation as its main objective.

Surely, the sharp minds of the most intelligent people on Earth could find a solution? One problem is that the computer’s “brain” outclasses human thought in speed on an inconceivable scale. For example, human axons carry signals in the brain at about 120 meters per second, while a computer moves information through its system at nearly the speed of light (approximately 300 million meters per second). That’s quite a difference! The AI would be millions of times ahead of a human’s ability to process data.
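
As a quick back-of-the-envelope check of that claim, here is a minimal Python sketch using the article’s approximate figures (they are round numbers, not measured values):

```python
# Rough speed comparison using the article's approximate figures.
AXON_SPEED_M_S = 120.0        # human axon signal speed: ~120 meters per second
ELECTRONIC_SPEED_M_S = 300e6  # electronic signaling: ~300 million meters per second

ratio = ELECTRONIC_SPEED_M_S / AXON_SPEED_M_S
print(f"Electronic signals are roughly {ratio:,.0f} times faster.")  # ~2,500,000
```

The ratio works out to about 2.5 million, which is where the “millions of times” comparison comes from.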

TIME FRAME

Exactly when artificial intelligence could, or would, take over the world is another subject under much debate. The general consensus is that it won’t happen overnight, next year or within the next 10 years, but it could happen within several decades. The late physicist Stephen Hawking said he believed it could happen within 100 years. That’s a very large ballpark figure. But, just as technology has progressed within the past 100 years, the idea of robots continually upgrading their own hardware and software is not so far-fetched.

If you had lived about 100 years ago and told someone that future versions of those first rudimentary automobiles would be able to drive themselves, your sanity would have been called into question. Meanwhile, this technology is an increasingly common occurrence on today’s streets, and competition among manufacturers to create a variety of viable autonomous vehicles is fueling their progress and adoption. It’s the steady process of AI evolution that could be man’s undoing, not just one big, sudden event that breaks the normal cycle of continual progress.

FOLLOWING LOGIC, NOT SUPREMACY

Contrary to what science fiction movies and books have shown during the past century, the reason AI would take over isn’t that it wants to dominate humans or reign supreme over the Earth; rather, it would happen because the AI has a goal that needs to be fulfilled. That goal could be a very simple one: something as insignificant as the need to collect a certain item or to complete a pre-programmed task.

However, because the computer can’t identify its task as inconsequential (as a human could), it would resort to any means within its ability to accomplish its goal, including eliminating anything, or anyone, that got in its way. This negates the common misconception that an AI can be friendly or evil. It makes no such distinctions; it just does what is needed to reach its objective, and in doing so, it lands in a category, when viewed by a human, of either “good” or “bad.”
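
To make that logic concrete, here is a toy Python sketch. It is purely illustrative: the actions and scores are invented for this example and don’t represent any real AI system. The naive program ranks actions solely by progress toward its goal, with no concept of “good” or “bad”; the kind of safeguard described earlier has to be bolted on as an explicit, separate constraint:

```python
# Toy illustration only: the actions and scores below are invented for this
# example. Each action has a "progress" score toward the machine's goal and a
# "harm" score that the naive chooser never looks at.
ACTIONS = {
    "wait for the path to clear": {"progress": 0.2, "harm": 0.0},
    "route around the human":     {"progress": 0.6, "harm": 0.0},
    "remove the obstruction":     {"progress": 1.0, "harm": 0.9},  # a person in the way
}

def naive_choice() -> str:
    # No notion of good or evil: only progress toward the objective counts.
    return max(ACTIONS, key=lambda a: ACTIONS[a]["progress"])

def safeguarded_choice(harm_limit: float = 0.1) -> str:
    # The safeguard is a separate, explicit constraint imposed by the designers.
    allowed = {a: s for a, s in ACTIONS.items() if s["harm"] <= harm_limit}
    return max(allowed, key=lambda a: allowed[a]["progress"])

print(naive_choice())        # "remove the obstruction": the goal, regardless of harm
print(safeguarded_choice())  # "route around the human": the goal, within limits
```

The machine in the sketch isn’t malicious; the harmful action is simply the highest-scoring path to its goal unless a limit is programmed in, which is exactly the distinction the article draws.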

WHAT CAN BE DONE?

Is there really anything a single individual can do to prevent the worst-case AI scenario from occurring? Unfortunately, the answer is no. Only through joint discussions among the world’s top computer designers and artificial intelligence pioneers can universal constraints and precautions be put into place.

However, acceptance of, and compliance with, these limitations is a very tall order. As corporations and countries compete against one another to advance computer technology, who would abandon the opportunity to create AI applications that generate profits or political advantages?

With so many loose ends across the globe, the likelihood that something will slip through international agreements and blossom into a life-threatening problem for humans is very real.

Not unlike the development and spread of nuclear weapons, technological progress can’t be stopped. When that progress threatens the safety and, in the extreme, the very existence of the human race, the only option could be to fight back against the robotic uprising!

[Photo captions; all photos © Getty Images]

Robots similar to those that are common on today’s factory floors might become our adversaries in the future.

The “brain” of modern-day computers operates at near light-speed, significantly faster than a human’s ability to think.

Small agricultural robots could be the precursors of more-advanced roaming robots. Some say the “seeds” are already planted today for a robotic rebellion in the future.

Our devices can already be linked and synched to each other and the Cloud. Will there be a time when our AI-enabled conveniences direct us through our lives instead of being assistants?

If the pursuit of cures for human diseases were taken over by AI-enabled entities, would the cures be found sooner ... or never?

If robots have access to parts that can be used to build other robots, an army of mechanical soldiers is not beyond the realm of possibility.

For now, at least, humans are still required to perform maintenance on robots. It is a safeguard we might lose control over.

Robots have been working in U.S. auto factories since the 1970s and in other types of manufacturing for even longer.

Terrorist hackers might “open the box” and undo the restrictive parameters that keep supercomputers in check.

Is there a better way to indoctrinate humans into accepting AI devices than to incorporate them into the child-rearing process? Every day, millions of children are occupied by electronic devices, in many cases to give their parents extra free time.

Originally conceived as workers that would handle all our mundane and dangerous chores, intelligent machines could become the greatest threat to our existence.

Humans have an inherent inability to remain peaceful for long periods of time. Would there be an advantage in having an AI overseer so that human conflict would not be allowed?

With combat and other types of drones already heavily deployed around the world, is the fear of uncontrollable robotic armies truly that unrealistic?

In a world dominated by AI, would humans ever have privacy and anonymity, or would we have to adapt to a life throughout which we are always being monitored?
