The Asian Age

New device can transcribe words in your head

— PTI

Boston, April 8: MIT scientists, led by an Indian-origin student, have developed a computer system that can transcribe words that users say in their heads.

The system consists of a wearable device and an associated computing system.

Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalisations, or saying words ‘in your head’, but are undetectable to the human eye.

The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
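The report does not describe the model itself. Purely as an illustration, the Python sketch below shows how windows of electrode signals might be reduced to simple features and mapped to a small vocabulary with an off-the-shelf classifier; the electrode count, window length, vocabulary and synthetic data are assumptions, not the MIT team's actual pipeline.

```python
# Illustrative sketch of the signal-to-word mapping described above.
# The electrode layout, windowing and classifier choice are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

N_ELECTRODES = 7          # reliable facial electrode sites (per the article)
WINDOW_SAMPLES = 250      # hypothetical samples per subvocalised word
VOCAB = ["one", "two", "plus", "times", "equals"]  # example limited vocabulary

def featurise(window: np.ndarray) -> np.ndarray:
    """Collapse a (N_ELECTRODES, WINDOW_SAMPLES) signal window
    into simple per-channel statistics."""
    return np.concatenate([window.mean(axis=1),
                           window.std(axis=1),
                           np.abs(window).max(axis=1)])

# Synthetic data standing in for recorded neuromuscular signals.
rng = np.random.default_rng(0)
X = np.stack([featurise(rng.normal(size=(N_ELECTRODES, WINDOW_SAMPLES)))
              for _ in range(500)])
y = rng.choice(len(VOCAB), size=500)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# At run time, a new window of electrode readings is mapped to a word.
new_window = rng.normal(size=(N_ELECTRODES, WINDOW_SAMPLES))
predicted = VOCAB[model.predict([featurise(new_window)])[0]]
print("Predicted word:", predicted)
```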

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear.

Since they do not obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems.

In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” said Arnav Kapur, a graduate student at MIT, who led the development of the new system.

This would allow one to interact with computer devices without having to physically type into them, researchers said.

The idea that internal verbalisations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s.

However, subvocalisation as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals.

They conducted experiments in which people were asked to subvocalise a series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyse the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalised words.
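The article does not spell out that analysis. One plausible way to rank candidate locations, sketched below in Python, is to score how well each channel alone separates the subvocalised words using cross-validated classification accuracy; the data shapes, feature counts and scoring method are assumptions for illustration only.

```python
# Illustrative sketch: rank candidate electrode locations by how well
# each channel alone distinguishes subvocalised words.
# Data shapes and the scoring method are assumptions for demonstration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

N_CANDIDATES = 16         # electrode sites tested (per the article)
N_TRIALS = 200            # hypothetical number of recorded word repetitions
N_FEATURES = 10           # hypothetical features extracted per channel

rng = np.random.default_rng(1)
# signals[i, c] holds the feature vector for trial i at candidate channel c.
signals = rng.normal(size=(N_TRIALS, N_CANDIDATES, N_FEATURES))
labels = np.repeat(np.arange(20), N_TRIALS // 20)  # ~20-word vocabulary, as reported

def channel_score(channel: int) -> float:
    """Cross-validated accuracy when classifying words from one channel only."""
    X = signals[:, channel, :]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()

scores = [channel_score(c) for c in range(N_CANDIDATES)]
best_seven = np.argsort(scores)[::-1][:7]
print("Most informative electrode locations:", sorted(best_seven.tolist()))
```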

Researchers developed a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

They collected data on a few computational tasks with limited vocabularies — about 20 words each.

One was arithmetic, in which the user would subvocalise large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.
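As a rough illustration of what such a 20-word arithmetic vocabulary could support, the Python sketch below turns a recognised sequence of subvocalised tokens into a computed sum or product; the token set and parsing rules are assumptions, not the study's actual design.

```python
# Illustrative sketch: evaluate an arithmetic problem recognised as a
# sequence of subvocalised tokens from a small vocabulary.
# The vocabulary and parsing rules are assumptions for demonstration.
DIGITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
          "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
OPERATORS = {"plus": lambda a, b: a + b, "times": lambda a, b: a * b}

def evaluate(tokens: list[str]) -> int:
    """Turn e.g. ['three', 'seven', 'plus', 'one', 'two'] into 37 + 12."""
    op_index = next(i for i, t in enumerate(tokens) if t in OPERATORS)
    left = int("".join(str(DIGITS[t]) for t in tokens[:op_index]))
    right = int("".join(str(DIGITS[t]) for t in tokens[op_index + 1:]))
    return OPERATORS[tokens[op_index]](left, right)

# The answer would then be read back over the bone-conduction headphones.
print(evaluate(["three", "seven", "plus", "one", "two"]))   # 49
print(evaluate(["eight", "four", "times", "two"]))          # 168
```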

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customising the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations.

In that study, the system had an average transcription accuracy of about 92 per cent.

However, the system’s performance should improve with more training data, which could be collected during its ordinary use.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies.

