Should you be worried about the rise of AI?

Kuwait Times - TECHNOLOGY

Tech titans Mark Zuckerberg and Elon Musk recently slugged it out online over the possible threat artificial intelligence might one day pose to the human race, although you could be forgiven for not seeing why this is a pressing question.

Thanks to AI, computers are learning to do a variety of tasks that have long eluded them - everything from driving cars to detecting cancerous skin lesions to writing news stories. But Musk, the founder of Tesla Motors and SpaceX, worries that AI systems could soon surpass humans, potentially leading to our deliberate (or inadvertent) extinction.

Two weeks ago, Musk warned US governors to get educated and start considering ways to regulate AI in order to ward off the threat. "Once there is awareness, people will be extremely afraid," he said at the time.

Zuckerberg, the founder and CEO of Facebook, took exception. In a Facebook Live feed recorded Saturday in front of his barbecue smoker, Zuckerberg hit back at Musk, saying people who "drum up these doomsday scenarios" are "pretty irresponsible." On Tuesday, Musk slammed back on Twitter, writing that "I've talked to Mark about this. His understanding of the subject is limited."

Here's a look at what's behind this high-tech flare-up and what you should and shouldn't be worried about.

What is AI, anyway?

Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That's still a workable definition of artificial intelligence. An initial burst of enthusiasm at the time, however, devolved into an "AI winter" lasting many decades as early efforts largely failed to create machines that could think and learn - or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could "learn" to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos - without ever being taught what a cat was. Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world's top player of the ancient board game Go.

Here comes the Terminator's Skynet, maybe

For a computer to become a "general purpose" AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today. But they might not be hobbled for too long.

According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is "pretty likely to happen within the next decade." While that on its own won't produce a robot overlord, it does mean that AI systems could read "everything the human race has ever written in every language," Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believe that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a "singularity," a term connoting change far beyond the ability of humans to grasp.

Near-term concerns

No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with. AI-driven automation is leading to a resurgence of US manufacturing - but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the US.

Human biases can also creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called "inappropriate" comments. Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound. "AI is being created by a very elite few, and they have a particular way of thinking that's not necessarily reflective of society as a whole," says Mariya Yao, chief technology officer of AI consultancy TopBots.

Mitigating harm from AI

In his speech, Musk urged the governors to be proactive, rather than reactive, in regulating AI, although he didn't offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated, saying he was mostly asking for government to gain more "insight" into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers "would be a tragic waste of time," said Andrew Moore, dean of the computer science school at Carnegie Mellon University. That's because machines aren't now capable of thinking outside the box in ways they weren't programmed for, he said. "That is something which no one in the field of AI has got any idea about." —AP

SAN FRANCISCO: This combo of file images shows Facebook CEO Mark Zuckerberg, left, and Tesla and SpaceX CEO Elon Musk. — AP
