Is Artificial Intelligence a threat to humanity in the future?

Technowize Magazine - Features

The future holds many different possibilities. In movies like Ex Machina and Transcendence, what we really see are anthropomorphic interpretations of AI with human qualities. A few things need to be clarified first. To start with, what we see in sci-fi movies is simply fiction. Secondly, AI is simply software. Why would software have human qualities and emotions?

Take, for instance, IBM's Watson. It is a type of AI, but all it does is make surmises and give you answers based on some sort of statistical understanding and machine learning.

We're still at the dawn of the AI age. We have yet to define exactly what 'intelligence' is for humans, so how can we define exactly what 'artificial intelligence' is?

When it comes to computer programs, they can be taught to learn for a very specific purpose using machine learning. Often, people confuse machine learning with human learning. Machine learning is simply training software to behave in a certain manner based on a statistical pattern. Everyday applications such as self-driving cars, Watson, and even image-recognition software are taught to behave in a certain manner and cannot go beyond those limitations. They don't really pose a threat to humanity.
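The idea above can be illustrated with a minimal sketch: a toy word-counting classifier (a made-up example, not any product's actual code) that learns a statistical pattern from a handful of labeled phrases. It behaves sensibly on inputs that match its training data and has no basis to decide anything about inputs outside it.

```python
from collections import Counter

# Toy training data: the "statistical pattern" the software is fitted to.
training = [
    ("great fun wonderful", "positive"),
    ("awful boring terrible", "negative"),
    ("great wonderful movie", "positive"),
    ("terrible awful plot", "negative"),
]

# Count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label whose training vocabulary overlaps the input most."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("wonderful great"))    # matches the learned pattern
print(classify("sublime exquisite"))  # unseen words: no real basis to decide
```

On the second input, every score is zero and the answer is arbitrary: the program has no understanding to fall back on, only the pattern it was trained on.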

But is it possible that in a few hundred years software becomes sentient? It's possible, but a machine won't be intelligent enough to pose a threat to humanity. You see, a machine would never have the same emotions that we do: anger, greed, envy, and so on. Even if a piece of software is highly intelligent, there's no reason why it would necessarily be a threat to humanity. If anything, the software would be controlled by us. All technologies have both good and bad uses. Nuclear technology is the most glaring example of this, giving us both nuclear bombs and nuclear power.

The reason researchers and scientists consider AI to be potentially dangerous is that its intelligence could increase to colossal levels. Moreover, it could modify itself to increase its intelligence, and keep doing so until it reaches an apex. Experts believe we have reason to worry about AI posing a threat to humanity.

First of all, it would be very difficult for AI to learn human values. We cannot exactly assume that a machine will be able to think as clearly as humans do. For instance, even if we program AI to have human values, we'll need to give it some kind of metric to measure how well it is doing.

Secondly, humans and chimps are quite similar in intelligence. However, the small gap that exists allows humans to act as a species far superior to chimps: we can shoot them with a tranquilizer and put them in a cage. Can you imagine AI using its own intelligence to act as a far superior species? Will AI open up a similar intelligence gap between us?

'AI will pose a threat to humanity' is nothing more than science fiction. There are quite a few compelling arguments suggesting that, despite the potential for horrendous outcomes, artificial intelligence will be the single most important invention mankind will ever make. The problem is that there's no way to know right now.

Of course, we will, sooner or later, try to create an uber-intelligent AI. To cut down on the possibility of AI playing god, we could create what William Gibson called the "Turing police": good AI playing the cop and catching the bad AI.

Science fiction is lousy with tales of AI gone wild. The nefarious Skynet from the Terminator films and Ultron from the Avengers came close to defeating humanity. But in the real world, we're moving in a different direction altogether with artificial intelligence. AI is everywhere, from automated industrial systems to self-driving cars and smart gadgets. It is nothing more than a computer with cognitive functions; it employs machine learning to assess information and solve problems.

The best real-life example of AI gone wild is Microsoft's chatbot Tay. The experimental AI tweeted radical messages such as "Hitler was right," spouting abusive and hatred-filled sentiments until Microsoft had to take it offline. To begin with, Tay was simply parroting offensive statements made by millions of users on Twitter. The chatbot was designed to mimic the language patterns of users on Twitter using machine learning and adaptive algorithms. Unfortunately, Tay was mixing with the wrong kind of crowd. Instead of engaging people in fun, casual conversation, it was parroting statements made by trolls who were simply trying to provoke the AI chatbot.
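Tay's actual architecture was never published, but the general failure mode can be sketched with a toy bigram model (an illustrative assumption, not Microsoft's code): a program that learns which word follows which, purely from whatever its users feed it, will faithfully parrot back whatever it is fed, with no notion of meaning or intent.

```python
import random
from collections import defaultdict

# Toy bigram model: for each word, remember the words users put after it.
transitions = defaultdict(list)

def learn(sentence):
    """Absorb a user's sentence into the word-transition table."""
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start, length=5):
    """Parrot back a phrase built from the learned transitions."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# The model reproduces whatever patterns its "crowd" supplies, good or bad.
learn("cats are lovely")
learn("cats are trouble")
print(generate("cats"))
```

If the crowd feeds it "cats are trouble" often enough, that is what it says back; the same mechanism, fed abuse at scale, produces abuse.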

Artificial intelligence will pose a different kind of threat to humanity. The damage AI causes because it lacks common sense, or interprets our commands too literally, will create a "sorcerer's apprentice" problem. As a matter of fact, it happens all the time: for instance, when someone is misdiagnosed, or when an innocent citizen is flagged as a terrorist. There's no way we can be certain that we will be able to minimize such errors. Why are we worried that someday computers will take over the world? The biggest problem we've been facing is that computers are too stupid for their own good, and they have, in fact, already taken over the world.

Artificial intelligence is nothing more than just another advanced technology. It is a tool that can be used efficiently or poorly, at our command. Let's not forget that it is human consciousness that creates and uses AI to make certain aspects of life better.
