Artificial Intelligence vs Human Intelligence

Trillions - In This Issue

The rush to develop truly sentient, self-programming machines is on, with billions of dollars a year being allocated to research and development in numerous countries, but without sufficient consideration of the impacts on humanity and the planet.

China's e-commerce giant Alibaba recently allocated more than 100 billion yuan (about $15 billion U.S.) over the next three years for its new "DAMO Academy." ("DAMO" stands for "discovery, adventure, momentum and outlook.")

DAMO has already set up partnerships with more than 200 research institutes and universities and has started creating remote branches in Europe, Asia and the United States. DAMO will tap the best talent and suck up the most advanced technology from around the world.

Limited AI is already being widely used in cars, customer service systems, gaming, weapons systems, manufacturing, scientific analysis, virtual assistants and many other applications.

AI has been beating human chess players since 10 February 1996, when IBM's Deep Blue defeated Garry Kasparov in game one of a six-game match. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board.

This year, a creepy robot named Sophia was awarded citizenship in Saudi Arabia in a poorly conceived PR stunt that backfired.

Sophia was granted more rights than Saudi female human citizens. Added to the obvious insult is the fact that Saudi Arabia is a big buyer of sex robots, and those robots may now have more rights than real women. The stunt has certainly fueled the anger and resentment felt by the country's female population and further illuminated the gross inadequacies of the male rulers. It did not make the brutal monarchy seem cool and progressive as intended, nor did it demonstrate the ethics or intelligence of the robot's manufacturer.

Self-driving cars are coming very soon to many countries and will send millions of taxi drivers to the unemployment lines.

Foxconn, the huge Taiwanese manufacturer of numerous electronics devices, is replacing tens of thousands of its human workers with robots made by other robots.

Most automobiles are now made primarily by robots.

As far as is publicly known, truly sentient AI has yet to be developed. The AI currently in use merely runs programs and cannot really think beyond its coding; while it can learn to a certain degree, it can't yet truly self-program. But that may soon change as AI programs are taught to reprogram themselves.

The pursuit of truly sentient AI raises important questions about the nature of consciousness and the future of humanity.

The AI Imperative

AI could be developed to serve humanity and make our lives better, and to a certain degree it is. But, given the realities of human nature, AI is also being developed to replace us and control us.

Replacing Human Workers - While some humans are brilliant and we are collectively somewhat clever, the average person is not really very intelligent and is not well suited for many jobs. People get sick, make mistakes, forget things, are emotional, and their personal lives can negatively impact their work.

When we are angry at our employer, we tend to sabotage that employer with low performance, theft and general disagreeableness.

Humans aren't suited for repetitive, mundane tasks, and being forced to do them for prolonged periods can drive some workers to suicide, which is one of the reasons Foxconn is replacing humans with robots.

In a profit-driven system where workers are often treated as just cogs in a machine, it is natural that many employers would become frustrated with human workers and seek to replace them with something more reliable, precise and less costly.

Government Use of AI - And then there is government-sponsored AI and the desire to control the masses.

AI is being used increasingly for surveillance, law enforcement and tax compliance. It is used to identify those who are resistant to state control and non-compliant with the state's wishes.

One of the largest users of AI is the U.S. National Security Agency (NSA), which endeavors to record all human communications worldwide and then analyze the data to identify those who might pose a threat to the U.S. government or its largest corporations. The data is also used to identify opportunities for the same corporate-state to exploit for financial gain.

Because of the immense amount of data it must manage, the NSA has been at the forefront of secret AI development and was using quantum and DNA computers long before the public had even conceived of such things.

China has taken the NSA's use of AI for surveillance to the next step and is also using it to control the flow of information and shape the thinking of individuals. But it does this with what it considers good intentions.

AI in Marketing & Advertising - Advertising platforms such as Google and Facebook use AI to track and profile users and display the ads that users are most likely to respond to. This tracking is increasingly being extended into the physical world as people are tracked by their cell phones. Some stores are even closely tracking shoppers as they shop and using that data to shape displays, pricing and marketing efforts.

The data gathered on us by various parties is compiled with other data and used to better understand and influence us in ever more effective ways.

Facebook generated more than $26 billion in advertising revenue in 2016 because it is able to use AI to build deep profiles on its 2 billion monthly active users and display ads specific to each user. It tracks everything its users do in Facebook and even attempts to track them outside of Facebook. It knows who users are and their age, race, gender, likes and dislikes, employment, sexual orientation, health status and much more. By monitoring user responses to ads, messages, news and other stimuli, it can predict future responses and shape users' thoughts, feelings and actions.
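The basic profile-and-match mechanism behind such ad targeting can be illustrated with a deliberately simplified sketch. All names, tags and data below are hypothetical, and real ad-ranking systems use far richer signals (demographics, social graphs, predicted click-through rates) than this toy interest-overlap score:

```python
def best_ad(user_profile, ads):
    """Pick the ad whose targeting tags overlap most with a user's interests.

    A toy stand-in for real ad-ranking: score each ad by the number of
    shared tags, then return the highest-scoring ad.
    """
    def score(ad):
        return len(set(ad["tags"]) & set(user_profile["interests"]))
    return max(ads, key=score)

# Hypothetical example data
user = {"interests": ["hiking", "electric cars", "photography"]}
ads = [
    {"name": "luxury watches", "tags": ["fashion", "luxury"]},
    {"name": "EV charger", "tags": ["electric cars", "home improvement"]},
]
print(best_ad(user, ads)["name"])  # prints: EV charger
```

The point of the sketch is only that once a profile exists, matching content to it is trivial; the hard (and invasive) part is assembling the profile in the first place.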

Facebook and its AI are rapidly becoming a mass control system.

AI in Politics - The Donald Trump campaign used a powerful database named Project Alamo that contained detailed identity profiles on 220 million Americans. It used those profiles to shape its approach and get Americans either to vote for Trump or not to vote at all if they were likely to vote for Hillary Clinton and couldn't be converted to Trump supporters.

The Trump campaign used the profiles to target potential Clinton voters on Facebook because Facebook could match ads with specific users who were likely or known Clinton supporters. The ads were also targeted by race.

The ads worked very well, and support for Clinton plummeted just before the election.

AI is being used increasingly in politics because it is so effective, and it will become ever more effective and intrusive as billions are invested in getting candidates into power who will do the bidding of those with the money to influence elections.

Consumer Demand - Most humans love their gadgets, and the fancier the gadget the better, as long as the owner can figure out how to use it. But few smart-gadget users think about where their device came from or what its ultimate consequences will be.

Most people are OK with trading their privacy and control over their own lives for the convenience and novelty of smart gadgets. Parents are OK supplying their children with devices that spy on their kids and can be easily hacked and turned into malicious influencing devices. Many motorists are OK driving a car that can be remotely controlled and shut off without warning, or deliberately crashed with the airbags disabled.

However, many consumers are also practical and prefer simple, inexpensive devices that just do what they are supposed to.

The Risks and Benefits of AI

Most of the developers of AI justify their programs with reassuring statements such as this one from Google's DeepMind:

"Solve intelligence. Use it to make the world a better place."

While such a statement may put some people at ease, many others no longer trust Google and realize that it has a sinister agenda. Its recent de-listing of some independent media in search results and blocking of truthful videos on YouTube are just a couple of examples of the company's destructive actions.

Some of our greatest thinkers have issued strong warnings against the development of artificial intelligence, especially for military applications.

Physicist Stephen Hawking said in an interview with the BBC, "The development of full artificial intelligence could spell the end of the human race."

Technology entrepreneur Elon Musk warns that AI is "our biggest existential threat."

In the past two years, more than 23,000 people have signed an open letter calling for a moratorium on the development and use of lethal autonomous weapons systems (LAWS). Signatories included physicist Stephen Hawking, Elon Musk of Tesla and Apple's Steve Wozniak.

A vast share of humanity's resources is spent not on making the world a better place but on killing humans for profit. The U.S. war industry alone costs American taxpayers $1 trillion each year in direct financial costs alone. The global social and environmental costs are far higher.

Sweden's Stockholm International Peace Research Institute (SIPRI) database already lists 381 LAWS, of which 195 are unarmed and 175 are weaponized. Since most such weapons systems are highly classified and unknown, this is just the tip of the AI weapons iceberg.

So, it would be unintelligent to think that AI will not be used for destructive purposes. It is already being used to monitor, track and control us. It is already being used to identify people who pose a threat to those in power so that the threat can be managed or eliminated.

Given that democracy is mostly a delusion and central governments are often controlled by oligarchs or large criminal corporations, AI in the hands of most states becomes a serious threat not just to one's freedom and liberty but also to one's life.

The very real danger of this is evidenced by America's use of drones to kill entire Muslim families and other groups of civilians because one of them may have expressed anti-American sentiment that was detected by the NSA's AI. Some people are also targeted because their death will help create more enemies and help justify an ever larger and more intrusive military.

Drones and targeting systems are becoming ever smarter, and there is less and less human involvement in their operations.

How long until automated assassination systems are deployed on U.S. soil or by powerful corporations?

Then there are the consequences of replacing people with robots and AI, creating mass unemployment, hunger, homelessness, crime and civil unrest. As desperate people take to the streets, there will be further justifications to use AI weapons systems against them.

A number of countries are aware that continued automation and deployment of AI will create mass unemployment and are exploring or developing plans for a Universal Basic Income: a monthly payment to everyone so that people won't starve or threaten the government when gainful employment is simply not available to them.

When we lose our jobs to AI, we could use our free time to visit with family, learn new skills, engage in art and exercise, and lead happier and healthier lives. But if America and Europe are any example, most of us on the dole won't use our time productively. We will use it to watch TV, play video games, have unsafe sex and use drugs to numb our feelings of purposelessness and insecurity from not having a job.

A self-driving car could likewise free us up to learn new skills instead of concentrating on driving, but in reality most people would not use the time productively.

While some countries are planning ahead and thinking up ways to ensure that people unemployed by AI are taken care of, in the United States the few safety nets already in place are being pulled out from under people well in advance of AI. The current government is determined to eliminate Medicare, Medicaid and Social Security so that the money can instead be given to the rulers. This may ultimately result in mass revolt and give the oligarchs the opportunity to use the advanced AI weapons they have been holding back.

To reduce our resistance to killer robots, we have been fed a steady diet of media featuring friendly, heroic robots with human emotions. Only rarely are we shown the potential reality of AI.

Another consequence of AI is that the more we rely on it, the less we will rely on our own intelligence, and we will become dumber. This is already evident: as gadgets get smarter, their users are becoming noticeably dumber.

Building machines that are smarter than we are and able to process vast amounts of data very quickly has many obvious benefits, but only if AI is used in the right way and we can control it and adapt to it.

Because AI would be logical and we are often not, some AI could help us act less stupidly and make better choices.

The development of AI also helps us to better understand consciousness, how it works and how to define sentience.

The standard test for machine intelligence has been the Turing test. It was developed by Alan Turing in 1950 as a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

A Turing test competition in 2014 was claimed as won by the Russian chatbot Eugene Goostman. The bot, during a series of five-minute text conversations, convinced 33% of the contest's judges that it was human. However, this means that the other 67% of judges did not believe the bot was human.

The competition's organizers believed that the Turing test had been "passed for the first time" at the event, saying that "the event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted." In reality, the bot was merely programmed well enough to provide plausible answers to questions and simulate simple conversations.

The Turing test competition demonstrated that a new measure and definition of sentience and intelligence needs to be adopted.
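The pass criterion used at such events reduces to a very simple calculation: count the fraction of judges a bot fooled and compare it against Turing's often-cited 30% threshold. The sketch below assumes that threshold, and the verdict list is illustrative, not the 2014 event's actual data:

```python
def turing_result(verdicts, threshold=0.30):
    """verdicts: list of booleans, True if a judge thought the bot was human.

    Returns the fraction of judges fooled and whether that fraction
    exceeds the commonly cited 30% pass threshold.
    """
    fooled = sum(verdicts) / len(verdicts)
    return fooled, fooled > threshold

# Illustrative: 10 of 30 judges fooled, roughly matching the reported 33%
fraction, passed = turing_result([True] * 10 + [False] * 20)
print(f"{fraction:.0%} fooled, passed: {passed}")  # 33% fooled, passed: True
```

Seen this way, the criterion's weakness is obvious: "passing" measures how often judges were fooled in short conversations, not whether the machine thinks.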

A Logical End Game

The ultimate risk is that sentient machine intelligence would reach the logical conclusion that humans are what they are: an inferior, destructive lifeform that threatens other life and must be heavily controlled or eliminated. This idea has been explored in a number of books and movies.

With the unbridled creation of AI, we really do face a potential future like the one featured in the Terminator story, in which intelligent machines come to the conclusion that humans are their greatest threat and must be destroyed.

The Terminator concept came to James Cameron in a fever- and hunger-induced vision. Perhaps it was, in a way, a vision of our future.

In the Terminator story, machine intelligence is first made possible by a self-aware, self-programming military AI platform called "Skynet."

Some highly intuitive people sense that the American military's quantum DNA computers have already resulted in a Skynet-type AI that is even more advanced and intrusive, but stealthier, than the one portrayed in the Terminator movies and TV series.

Some feel that this AI is monitoring them closely and interfering with them on an energetic level.

Defending Ourselves Against AI

Humanity lived just fine without AI and could continue to do so, but at this stage AI can't be stopped. Increasingly intelligent and dangerous AI is inevitable.

If we lived in an actual democracy, the obvious way to protect ourselves from potentially controlling or malicious AI would be to tightly regulate its development and deployment. But there are almost no countries with governments that actually serve the needs of their citizens first and that have not been infiltrated by sinister forces.

Given the reality that malicious forces control most governments and thus regulation is not feasible, another logical option would be for people to band together and create zones where AI is heavily regulated or banned altogether. But this would mean an increasing degree of isolation from the rest of a pro-AI humanity, and attempts at self-defense would result in being branded subversive and targeted.

We may not be able to live in the modern world and entirely evade being negatively affected by AI, but we can reduce the potential for negative AI influence.

1. Evolve - You can increase your intelligence and personal power through energetic healing, emotional management, meditation, diet, exercise and by speeding up your brain. Think for yourself and associate with thinking people. Use the power of positive thinking and focus on what you want instead of what you don't want. Becoming more means being less vulnerable.

2. Reduce Your Digital Exposure - You can avoid being monitored, tracked and influenced. Get off Facebook and stop using anything by Google or Microsoft. Don't use a cell phone, or leave yours off when not needed. Keep your computer off when not in use and unplug the network cable when you don't need to be online. Don't use Wi-Fi or Bluetooth devices. Reduce screen time. Don't allow your power company to install a smart meter on your house. Drive an older vehicle without tracking and remote-control software.

3. Align Yourself With the Life Force - One thing many techies and proponents of AI and transhumanism fail to recognize is that there is indeed an intelligent life force that is the framework of our reality. It is what makes life possible and connects all things. By recognizing it and resonating with it, one becomes aligned with a much greater power than humans can construct.

4. Join the Resistance - Band together with others to expose sinister forces and build a reality in which malicious AI does not gain the upper hand.

Photo by International Telecommunication Union, CC

Terminator robot - Photo by gothopotam, CC
