Business Spotlight

Artificial Intelligence

Scientists are working on the development of an artificial intelligence that can convert brain activity into text. This could enable communication for people who are unable to speak or type. NICOLA DAVIS reports.


From brain waves to text

Reading someone’s mind has just come a step closer to reality. Scientists have developed artificial intelligence that can turn brain activity into text. The system currently works on neural patterns detected while someone is speaking aloud, but experts say the system could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome. “We are not there yet, but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco.

Writing in the journal Nature Neuroscience, Makin and his colleagues reveal how they developed their system. The researchers recruited four participants who had electrode arrays implanted in their brains to monitor epileptic seizures. These participants were asked to read aloud a set of 50 sentences multiple times, including “Tina Turner is a pop singer” and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking.

An imperfect system

This data was then fed into a machine-learning algorithm, a type of artificial intelligence system. The algorithm converted the brain activity data for each spoken sentence into an abstract string of numbers. To make sure the string of numbers related only to aspects of speech, the system compared sounds predicted from small chunks of the brain activity data with the actual recorded audio.

The string of numbers was then fed into a second part of the system that converted it into a sequence of words. At first, the system spat out nonsensical sentences. But as the system compared each sequence of words with the sentences that were actually read aloud, it improved. It learned how the string of numbers related to words, and which words tend to follow each other.
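The two-stage pipeline described above, from neural recording to an abstract code and from that code to words, can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not the researchers’ model: their system used trained neural networks, whereas here a sentence’s “brain activity” is faked as a short list of numbers, and the second stage is a nearest-neighbour lookup over the sentences seen in training. The names (`encode`, `train`, `decode`) and all data are invented for illustration. The sketch also shows why such a decoder struggles outside its small trained repertoire: any input, however unfamiliar, is mapped to one of the learned sentences.

```python
import math

# Toy stand-in for the two-stage decoder described in the article.
# Stage 1 in the real system maps neural recordings to an abstract
# vector of numbers; here we fake that with two summary statistics.
# Stage 2 maps vectors to word sequences; here it is a simple
# nearest-neighbour lookup over the sentences seen in training.

def encode(brain_activity):
    """Stage 1: collapse a recording (list of floats) to a fixed code."""
    n = max(len(brain_activity), 1)
    mean = sum(brain_activity) / n
    energy = math.sqrt(sum(x * x for x in brain_activity) / n)
    return (mean, energy)

def train(recordings, sentences):
    """Associate each training recording's code with its sentence."""
    return [(encode(r), s) for r, s in zip(recordings, sentences)]

def decode(model, brain_activity):
    """Stage 2: return the sentence whose stored code is nearest."""
    code = encode(brain_activity)
    return min(model, key=lambda entry: math.dist(entry[0], code))[1]

# Two fake "recordings" standing in for neural data during speech.
rec_a = [0.1, 0.9, 0.4, 0.4]
rec_b = [2.0, 1.5, 1.8, 1.7]
model = train([rec_a, rec_b],
              ["Tina Turner is a pop singer",
               "Those thieves stole 30 jewels"])

# A noisy repeat of rec_a decodes to the nearest trained sentence.
print(decode(model, [0.12, 0.88, 0.41, 0.39]))
```

Feeding the decoder something far from anything it was trained on still returns one of the learned sentences, which mirrors Makin’s remark that decoding “gets much worse” outside the 50 trained sentences.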

The team then tested the system, generating written text from brain activity alone during speech. The system was not perfect. Among its mistakes, “Those musicians harmonize marvellously” was decoded as “The spinach was a famous singer”, and “A roll of wire lay near the wall” became “Will robin wear a yellow lily”. However, the team found the accuracy of the new system was far higher than in previous approaches. While the accuracy varied from person to person, for one participant just three per cent of each sentence on average needed correcting, better than the average word error rate of five per cent for professional human transcribers.
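The word error rate quoted above has a standard definition: the number of word-level edits (substitutions, insertions, deletions) needed to turn the decoded sentence into the true one, divided by the length of the true sentence. This illustrative snippet, not taken from the study, computes it for one of the mis-decodings reported above.

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein edit distance divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# Every reference word is wrong and the decoded sentence is two
# words longer: 6 edits against a 4-word reference.
wer = word_error_rate("Those musicians harmonize marvellously",
                      "The spinach was a famous singer")
print(f"{wer:.0%}")
```

A rate above 100 per cent is possible because insertions count as errors too; a three per cent rate, by contrast, means the decoded sentences were almost verbatim.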

But the team stress that, unlike human transcribers, the algorithm handles only a small number of sentences. “If you try to go outside the [50 sentences used], the decoding gets much worse,” said Makin, adding that the system likely relies on a combination of learning particular sentences, identifying words from brain activity and recognizing general patterns in English.

Not translation of thought

The team also found that training the algorithm on one participant’s data meant less training data was needed from the final user — something that could make training less difficult for patients. Dr Christian Herff, an expert in the field from Maastricht University who was not involved in the study, said the research is exciting because the system used less than 40 minutes of training data for each participant, and a limited collection of sentences, rather than the millions of hours typically needed. “By doing so, they achieve levels of accuracy that haven’t been achieved so far,” he said.

He noted, however, that the system is not yet usable in severely disabled patients, as it relies on the brain activity recorded from people speaking a sentence out loud. “Of course, this is fantastic research … but this is not translation of thought [but of brain activity involved in speech].”

Herff added that people should not worry about others reading their thoughts just yet — the brain electrodes must be implanted, and imagined speech is very different from inner voice. But Dr Mahnaz Arvaneh, an expert in brain–machine interfaces at Sheffield University, UK, said it was important to consider ethical issues now. “We [are still] very, very far away from the point that machines can read our minds,” she said. “But it doesn’t mean that we should not think about it and we should not plan about it.”


[Photo caption:] Brain to text: a complicated route
