An AI Dictionary for Leaders

Rotman Management Magazine - From the Editor

Autonomous

Put simply, autonomy means that an AI construct doesn't need help from people. Driverless cars illustrate the term in varying degrees. Level four autonomy represents a vehicle that doesn't need a human inside it to operate at full capacity. If we ever have a vehicle that can operate without a driver, and also doesn't need to connect to any grid, server, GPS or other external source in order to function, it will have reached level five autonomy. Anything beyond that would be called 'sentient', and despite the leaps that have been made in the field of AI, the singularity (an event representing an AI that becomes self-aware) is purely theoretical at this point.

Algorithm

The most important part of AI is the algorithm. These are mathematical formulas and/or programming commands that tell a regular, non-intelligent computer how to solve problems with artificial intelligence. In other words, algorithms are the rules that teach computers how to figure things out on their own.
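To make the term concrete, here is a minimal sketch in Python of an algorithm written as explicit rules a computer can follow; the spam-filter scenario, keyword list and threshold are hypothetical, chosen only for illustration.

```python
# A minimal sketch of an 'algorithm' as a set of explicit rules.
# The keyword list and threshold are hypothetical, chosen only to
# illustrate the idea of instructions a computer can follow.

SPAM_KEYWORDS = {"free", "winner", "prize", "urgent"}

def is_probably_spam(message: str, threshold: int = 2) -> bool:
    """Count how many spam keywords appear, then apply a simple rule."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    hits = sum(1 for word in words if word in SPAM_KEYWORDS)
    return hits >= threshold

print(is_probably_spam("You are a winner! Claim your free prize now"))  # True
print(is_probably_spam("Meeting moved to 3pm tomorrow"))                # False
```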

Machine Learning

Machine learning is the process by which an AI uses algorithms to perform artificial intelligence functions. It's the result of applying rules to create outcomes through an AI.
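As a rough illustration, the following Python sketch 'learns' a pricing rule from a handful of examples rather than having the rule written by hand; the house sizes and prices are made-up numbers.

```python
# A minimal sketch of machine learning: instead of hand-writing the rule,
# the computer estimates it from example data. The sizes (square metres)
# and prices (thousands) below are hypothetical.

import numpy as np

sizes = np.array([50, 70, 90, 120, 150], dtype=float)      # inputs
prices = np.array([150, 205, 270, 355, 445], dtype=float)  # observed outcomes

# 'Learn' a simple rule of the form price = a * size + b from the examples.
a, b = np.polyfit(sizes, prices, deg=1)

def predict_price(size: float) -> float:
    """Apply the learned rule to a new, unseen input."""
    return a * size + b

print(round(predict_price(100), 1))  # estimate for a 100 square-metre home
```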

Black Box

When the rules are applied, an AI does a lot of complex math. Often, this math can't even be understood by humans, yet the system outputs useful information. When this happens, it's called 'black box learning'. We don't really care how the computer arrived at the decisions it's made, because we know what rules it used to get there.

Neural Network

When we want an AI to get better at something, we create a neural network that is designed to be very similar to the human nervous system and brain. It uses stages of learning to give AI the ability to solve complex problems by breaking them down into levels of data. The first level of the network may only worry about a few pixels in an image file and check for similarities in other files; once the initial stage is done, the neural network will pass its findings on to the next level, which will try to understand a few more pixels, and perhaps some metadata. This process continues at every level of a neural network.
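The sketch below shows that layered structure in Python with NumPy: each level transforms its input and hands the result to the next. The weights are random rather than trained, so it illustrates the architecture only, not a working model.

```python
# A minimal sketch of a layered network. Each layer applies a weighted
# transformation and a non-linearity, then passes its findings to the
# next level. Weights are random stand-ins for what training would learn.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs: np.ndarray, n_outputs: int) -> np.ndarray:
    """One level of the network: weighted sum followed by a non-linearity."""
    weights = rng.normal(size=(inputs.shape[-1], n_outputs))
    return np.maximum(0.0, inputs @ weights)  # ReLU activation

pixels = rng.random(16)        # stand-in for a few pixels from an image file
level_1 = layer(pixels, 8)     # first level looks at the raw pixel values
level_2 = layer(level_1, 4)    # next level works on the first level's findings
level_3 = layer(level_2, 2)    # and so on, level by level

print(level_3)
```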

Deep Learning

Deep learning is what happens when a neural network gets to work. As the layers process data, the AI gains a basic understanding. You might be teaching your AI to understand cats, but once it learns what paws are, it can apply that knowledge to a different task. Deep learning means that instead of just recognizing what something is, the AI begins to learn 'why'.
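To show a network actually 'getting to work', here is a small Python/NumPy sketch that trains a two-layer network on the classic XOR problem, which a single layer cannot solve; the layer sizes and number of training steps are arbitrary choices for illustration.

```python
# A minimal sketch of training a small multi-layer network with gradient
# descent. The first layer extracts intermediate features; the second
# combines them into an answer. Settings are illustrative, not tuned.

import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4))   # first layer's weights
W2 = rng.normal(size=(4, 1))   # second layer's weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)        # lower level extracts intermediate features
    output = sigmoid(hidden @ W2)   # upper level combines them into an answer

    # Backpropagate the error and nudge both layers' weights.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_output
    W1 -= X.T @ d_hidden

print(np.round(output, 2))  # should drift toward [[0], [1], [1], [0]] as training succeeds
```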

Natural Language Processing

It takes an advanced neural network to parse human language. When an AI is trained to interpret human communication, it's called natural language processing. This is useful for chatbots and translation services, but it's also represented at the cutting edge by AI assistants like Alexa and Siri.
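Real natural language processing relies on far more sophisticated models, but the toy Python sketch below shows the most basic step: turning free text into tokens and scoring it against word lists. The word lists and example sentences are invented.

```python
# A toy sketch of one small piece of natural language processing:
# tokenize raw text, then score it against simple word lists.
# The lists are hypothetical and far cruder than what chatbots,
# translators or assistants actually use.

POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "angry"}

def tokenize(text: str) -> list[str]:
    """Split raw text into lowercase word tokens."""
    return [w.strip(".,!?") for w in text.lower().split()]

def sentiment(text: str) -> str:
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, the service was excellent!"))  # positive
print(sentiment("Awful experience, I hate waiting."))                # negative
```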

Reinforcement Learning

In many respects, AI and humans learn in similar ways. One method of teaching a machine, just like a person, is to use reinforcement learning. This involves giving the AI a goal that isn't defined by a specific metric, such as telling it to 'improve efficiency' or 'find solutions'. Instead of finding one specific answer, the AI will run scenarios and report results, which are then evaluated and judged by humans. The AI takes the feedback and adjusts the next scenario to achieve better results.
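The feedback loop can be sketched in a few lines of Python: an agent tries actions, receives a numeric reward standing in for the judgment described above, and gradually shifts toward whatever scored better. The strategies and payoff numbers are hypothetical.

```python
# A minimal sketch of the reinforcement-learning loop: try an action,
# receive feedback, adjust, repeat. The 'reward' function is a made-up
# stand-in for the evaluation of each scenario's results.

import random

actions = ["strategy_a", "strategy_b", "strategy_c"]
value_estimates = {a: 0.0 for a in actions}   # what the agent believes so far
true_payoffs = {"strategy_a": 0.2, "strategy_b": 0.8, "strategy_c": 0.5}

def reward(action: str) -> float:
    """Noisy feedback for one attempt (hypothetical numbers)."""
    return true_payoffs[action] + random.gauss(0, 0.1)

for trial in range(500):
    # Mostly repeat the best-known action, occasionally explore another.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value_estimates, key=value_estimates.get)
    feedback = reward(action)
    # Nudge the estimate for that action toward the feedback just received.
    value_estimates[action] += 0.1 * (feedback - value_estimates[action])

print(max(value_estimates, key=value_estimates.get))  # most likely 'strategy_b'
```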

Supervised Learning

This is the very serious business of proving things. When you train an AI model using a supervised learning method, you provide the machine with the correct answer ahead of time. Basically, the AI knows both the question and the answer. This is the most common method of training because it yields the most data and defines patterns between question and answer. If you want to know why or how something happens, an AI can look at the data and determine connections using the supervised learning method.
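A minimal sketch, assuming a made-up customer data set: every training example below comes with its correct answer attached, and the model answers new questions by comparing them with those labelled examples (a simple nearest-neighbour approach).

```python
# A minimal sketch of supervised learning: the training data carries the
# correct answers (labels), and new cases are answered by finding the
# most similar labelled example. All figures are hypothetical.

import math

# (hours of support calls, orders per month) -> did the customer churn?
training_data = [
    ((8.0, 1.0), "churned"),
    ((6.5, 2.0), "churned"),
    ((1.0, 9.0), "stayed"),
    ((0.5, 7.5), "stayed"),
    ((2.0, 6.0), "stayed"),
]

def predict(features: tuple[float, float]) -> str:
    """Answer a new question using the closest labelled example."""
    def distance(example):
        (x, y), _label = example
        return math.dist(features, (x, y))
    _closest, label = min(training_data, key=distance)
    return label

print(predict((7.0, 1.5)))  # likely 'churned'
print(predict((1.5, 8.0)))  # likely 'stayed'
```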

Unsupervised Learning

With unsupervised learning, we don't give the AI an answer. Rather than finding patterns that are predefined, like 'why people choose one brand over another', we simply feed a machine a bunch of data so that it can find whatever patterns it is able to.
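Here is a minimal sketch, assuming a made-up table of customer figures: no labels are supplied, and a basic k-means clustering loop simply groups similar rows together.

```python
# A minimal sketch of unsupervised learning: no answers are given, the
# algorithm just groups similar data points (basic k-means clustering).
# The customer figures below are hypothetical.

import numpy as np

rng = np.random.default_rng(42)

# Unlabelled data: (monthly spend, visits per month) for ten customers.
data = np.array([
    [20, 2], [25, 3], [22, 2], [30, 4],     # looks like one group
    [200, 18], [210, 20], [190, 17],        # looks like another
    [95, 9], [105, 11], [100, 10],          # and perhaps a third
], dtype=float)

k = 3
centroids = data[rng.choice(len(data), size=k, replace=False)]

for _ in range(10):
    # Assign each point to its nearest centroid, then recentre the centroids.
    distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    centroids = np.array([
        data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

print(labels)     # which group each customer fell into
print(centroids)  # the centre of each discovered group
```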

Transfer Learning

Once an AI has successfully learned something, like how to determine whether an image is a cat or not, it can continue to build on that knowledge even if you aren't asking it to learn anything about cats. Hypothetically, you could take an AI that can determine whether an image is a cat with 90 per cent accuracy and, after it spent a week training on identifying shoes, it could then return to its work on cats with a noticeable improvement in accuracy.
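The pattern can be sketched as follows: the early layers learned on one task (the 'feature extractor') are kept frozen and reused, and only a small new final step is fit for the next task. The weights and images below are random stand-ins for what real training would produce.

```python
# A minimal sketch of transfer learning: reuse the layers learned on one
# task and fit only a small new 'head' for the next task. Weights and
# data are random placeholders; the point is the reuse pattern.

import numpy as np

rng = np.random.default_rng(7)

# Pretend these weights were learned while training a cat detector.
pretrained_weights = rng.normal(size=(64, 16))

def extract_features(image_pixels: np.ndarray) -> np.ndarray:
    """Frozen, reused layers: the same transformation serves both tasks."""
    return np.maximum(0.0, image_pixels @ pretrained_weights)

# New task (shoes): fit only a small new head on top of the old features.
shoe_images = rng.random((20, 64))                       # stand-in training images
shoe_labels = rng.integers(0, 2, size=20).astype(float)  # stand-in labels
features = extract_features(shoe_images)

# Fit the new head with ordinary least squares on the frozen features.
head, *_ = np.linalg.lstsq(features, shoe_labels, rcond=None)

new_image = rng.random(64)
score = extract_features(new_image) @ head
print("shoe" if score > 0.5 else "not a shoe")
```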

-Courtesy of The Next Web (TNW), www.thenextweb.com