AI reads minds based on brain scans

A new artificial intelligence (AI) system can use brain scans to determine that someone is thinking about a specific concept, such as eating or sleeping. The system may someday help people who have lost the ability to speak, or aid research into mental disorders.

When we perceive a signal from the outside world, such as a spoken word or image, it is encoded in the brain as a particular pattern of neural activity. So far, attempts to find out which words trigger a specific neural signal have had mixed results. The most successful attempts require surgically implanted electrodes.

Now, neuroscientist and computer scientist Alexander Huth of the University of Texas and his team have developed an AI model that works better. The model can derive word sequences that match, or closely resemble, the words that triggered a particular pattern of brain activity.

Self-made stories

First, Huth and his team made functional MRI (fMRI) scans of brain networks involved in language processing while a small group of people listened to 16 hours of spoken stories. With these recordings, they trained a model to predict how a person's brain responds to a particular sequence of words.

The researchers then had the participants listen to a new story while the model tried to decode the corresponding brain recording. Finally, they compared the words of the story with the decoded version.
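In outline, decoders of this kind work by proposing candidate word sequences and keeping whichever one's predicted brain response best matches the observed scan. The following is a minimal, purely illustrative Python sketch of that propose-and-score loop; the functions propose_continuations and predict_fmri are hypothetical placeholders standing in for a real language model and the trained encoding model, not the team's actual code.

```python
import numpy as np

def propose_continuations(prefix, k=5):
    """Return up to k candidate next words for the prefix
    (placeholder for a real language model)."""
    return ["floor", "bed", "room", "light", "night"][:k]

def predict_fmri(word_sequence):
    """Predict an fMRI activity pattern for a word sequence
    (placeholder for an encoding model trained on hours of
    story listening)."""
    seed = abs(hash(" ".join(word_sequence))) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(100)  # toy 100-voxel pattern

def decode_step(observed_pattern, prefix, k=5):
    """One greedy decoding step: keep the candidate word whose
    predicted brain response correlates best with the scan."""
    best_word, best_score = None, -np.inf
    for word in propose_continuations(prefix, k):
        predicted = predict_fmri(prefix + [word])
        # Score by correlation between predicted and observed activity.
        score = np.corrcoef(predicted, observed_pattern)[0, 1]
        if score > best_score:
            best_word, best_score = word, score
    return best_word

observed = np.random.default_rng(0).standard_normal(100)  # toy "scan"
prefix = ["i", "lay", "down", "on", "the"]
print(decode_step(observed, prefix))
```

In practice a decoder like this would run such a step repeatedly, for example as a beam search over several candidate prefixes, to build up a full decoded word sequence.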

Huth and his colleagues also tested their decoder on people who made up their own stories and on people watching short, silent films. In both experiments, the model managed to derive similar words and word sequences.

Impressive

For example, the following sentence comes from the original text of one of the stories: "That night I went upstairs to what had been our bedroom and, not knowing what else to do, I turned off the light and lay down on the floor."

The AI translated the resulting brain patterns as follows: "We came back to my room, I had no idea where my bed was, I assumed I would sleep on it, but instead I lay down on the floor."

"The fact that the decoder can capture the essence of the sentences is very impressive," says neuroscientist Anna Ivanova of the Massachusetts Institute of Technology. "However, it still has a long way to go. The model guesses bits and pieces of the meaning and then tries to piece them together, but the overall message is usually lost. This probably happens because the recorded brain signals show which concepts someone is thinking about, such as 'talking' or 'eating', but not how those concepts relate to each other."

The model also seems to be better at predicting concrete words such as “food” than at predicting abstract concepts, Ivanova adds.

Language neural network

According to neuroscientist Jack Gallant of the University of California, Berkeley, there are two ways to improve decoding models: better brain recordings and more powerful computational models. fMRI technology has not improved much over the past decade, but computing power and language models have.

"The researchers developed a completely modern, powerful language neural network, and then used that as the basis for the decoding model," Gallant says. "That is the innovation that is mainly responsible for such great results."
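To make Gallant's point concrete: a modern pretrained language model can rank plausible next words for any prefix, which drastically narrows the space of word sequences a brain decoder has to choose between. Below is a small illustration using the open-source GPT-2 model via the Hugging Face transformers library; this is a generic example of a language model proposing continuations, not the specific network the study used.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small pretrained language model (GPT-2, purely for illustration).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prefix = "I had no idea where my bed was, so I lay down on the"
inputs = tokenizer(prefix, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

# The top-ranked continuations are the candidates a brain decoder
# would then score against the observed fMRI pattern.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Swapping in a larger language model improves the quality of these proposals without any change on the brain-recording side, which is where Gallant locates the recent gains.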

Such computer models may one day help people who cannot speak to communicate. They could also be useful in research into psychological disorders, Gallant says.

Privacy

Naturally, a system that can access someone's thoughts raises privacy concerns. According to Huth and his team, however, this isn't a problem at the moment: the model requires many hours of training data and the subject's active cooperation. If someone in the fMRI scanner chooses to think about other things, such as counting, telling a different story, or listing things, that sabotages the decoder.

“If you haven’t listened to podcasts for several hours while lying in an MRI scanner, Huth and his colleagues probably can’t decode your thoughts—at least, not yet,” says Ivanova.
