What is AI, anyway?


Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That’s still a workable definition of artificial intelligence.

An initial burst of enthusiasm at the time, however, devolved into an “AI winter” lasting many decades as early efforts largely failed to create machines that could think and learn — or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could “learn” to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos — without ever being taught what a cat was.

Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world’s top player of the ancient board game Go.

Here comes Terminator’s Skynet, maybe

For a computer to become a “general purpose” AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today.

But they might not be hobbled for too long. According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is “pretty likely to happen within the next decade.” While that on its own won’t produce a robot overlord, it does mean that AI systems could read “everything the human race has ever written in every language,” Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believes that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a “singularity,” a term connoting change far beyond the ability of humans to grasp.

Near-term concerns

No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with.

AI-driven automation is leading to a resurgence of U.S. manufacturing — but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the U.S.

Human biases also can creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called “inappropriate” comments.

Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound.

Mitigating harm from AI

In his speech to the governors, Elon Musk urged them to be proactive, rather than reactive, in regulating AI, although he didn’t offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated and said he was mostly asking for government to gain more “insight” into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers “would be a tragic waste of time,” said Andrew Moore, dean of the computer science school at Carnegie Mellon University. That’s because machines today aren’t capable of thinking outside the box in ways they weren’t programmed for, he said. “That is something which no one in the field of AI has got any idea about.”

— The Associated Press
