The smart new app that serves as eyes for the blind

Weekend Argus (Saturday Edition) - MEDIA & MARKETING - DOMINIC BASULTO

NEW YORK: As computers get better at navigating the world around them, they are also helping humans better navigate that world.

Thanks to advances in artificial intelligence and robotics, scientists from IBM Research and Carnegie Mellon University are working on new types of real-world accessibility solutions for the blind.

The goal is as audacious as it is inspiring: coming up with a technological platform that can help the visually impaired cope as well as everyone else.

The first pilot in the programme is a smartphone app for Apple and Android devices called NavCog, which helps blind people navigate their surroundings by whispering into their ears through earbuds or by creating subtle vibrations on their smartphones.

The app operates similarly to the turn-by-turn directions offered by car GPS systems. It analyses signals from Bluetooth beacons located along walkways and from smartphone sensors to enable users to move without human assistance, whether inside buildings or outdoors.

The magic happens when algorithms help the blind identify, in near real time, where they are, which direction they are facing and what is in the surrounding environment.
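
The article does not spell out how NavCog combines those signals, so the Python sketch below shows one common beacon-localisation technique, a weighted centroid over distances inferred from Bluetooth signal strength. Everything in it, from the beacon coordinates to the path-loss constants, is an illustrative assumption rather than NavCog's actual algorithm.

```python
# Hypothetical beacon map: beacon ID -> (x, y) position in metres.
# In a real deployment these coordinates would come from surveying
# the walkways where the beacons are installed.
BEACONS = {
    "beacon-1": (0.0, 0.0),
    "beacon-2": (10.0, 0.0),
    "beacon-3": (5.0, 8.0),
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Convert received signal strength (dBm) to an approximate distance
    in metres via the log-distance path-loss model. tx_power_dbm (the
    RSSI at 1 m) and the exponent are assumed calibration constants
    that vary per beacon and venue."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def estimate_position(rssi_readings):
    """Weighted-centroid estimate of the user's (x, y) position:
    beacons that appear closer (stronger signal) pull harder."""
    wx = wy = total = 0.0
    for beacon_id, rssi in rssi_readings.items():
        if beacon_id not in BEACONS:
            continue  # ignore beacons with no surveyed position
        weight = 1.0 / max(rssi_to_distance(rssi), 0.1)
        x, y = BEACONS[beacon_id]
        wx += weight * x
        wy += weight * y
        total += weight
    if total == 0.0:
        raise ValueError("no known beacons in range")
    return wx / total, wy / total

# Three simultaneous readings place the user nearest beacon-1.
print(estimate_position({"beacon-1": -62, "beacon-2": -70, "beacon-3": -75}))
```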

The computer-vision navigation tool turns smartphone images of the surrounding environment into a 3-D model of the space that can be used to issue turn-by-turn navigation guidance.
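
As a companion to the localisation sketch above, here is a minimal illustration of the guidance step, assuming a position and compass heading are already available; the waypoint, angle thresholds and phrasing are invented for this example and are not taken from NavCog.

```python
import math

def turn_instruction(pos, heading_deg, waypoint):
    """Turn a position, a compass heading and the next waypoint into a
    spoken instruction, in the spirit of car-GPS prompts. pos and
    waypoint are (x, y) in metres; heading_deg is degrees clockwise
    from north (the +y axis). The thresholds are arbitrary choices."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # clockwise from +y
    # Signed difference, in (-180, 180], between the target bearing
    # and the direction the user is currently facing.
    delta = (bearing - heading_deg + 180) % 360 - 180
    if abs(delta) < 20:
        direction = "continue straight"
    elif delta > 0:
        direction = "turn right"
    else:
        direction = "turn left"
    return f"{direction}, then walk {distance:.0f} metres"

# User at the origin facing north; the waypoint lies to the east.
print(turn_instruction((0.0, 0.0), 0.0, (5.0, 0.2)))  # -> turn right ...
```

The resulting string could then be spoken through earbuds or mapped to vibration patterns, the two feedback channels the article describes.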

The NavCog project has particular meaning for one of its lead researchers, IBM fellow and visiting Carnegie Mellon faculty member Chieko Asakawa, who is visually impaired herself. It will soon be possible for her to walk across the Carnegie Mellon campus with the help of the NavCog app – and look just like any other person crossing the campus.

That’s just the beginning, says Kris Kitani of Carnegie Mellon’s Robotics Institute. A major goal is to extend coverage beyond the Carnegie Mellon campus, which has been retrofitted with beacons. To encourage this, the scientists working on the project have made the entire NavCog platform open source, available to developers via the IBM Bluemix cloud. That makes it possible for other developers to build enhancements to the system and speed its rollout to other physical destinations.

The other primary goal, Kitani said, is to make the system workable even in environments without Bluetooth beacons. To make that possible, the university hopes to build on advances in computer vision, as well as new work in cognitive assistance, a research field dedicated to helping the blind regain information by augmenting missing or weakened abilities.

By using cameras for computer-aided vision, for example, it might be possible to develop a more accurate system that doesn’t require the presence of Bluetooth beacons. And this computer-aided vision, combined with other localisation technologies, could potentially make it possible to recognise everyday landmarks, such as stairs or a barrier on the road, that might not be picked up by today’s sensors.

There are plans to add other extras to the system that go beyond mere navigation. For example, a facial-recognition component would tell you in real time if you are passing someone you know.

Moreover, sensors capable of recognising emotions on these faces – work that’s part of other Carnegie Mellon research into autism – could make it possible to recognise whether the people passing you are smiling or frowning. Researchers are also exploring the use of computer vision to characterise the activities of people in the vicinity, and ultrasonic technology to help identify locations more accurately.

If all goes according to plan, it’s possible to envision a virtuous feedback loop between machine intelligence and human intelligence. – Washington Post
