AI uses a brain scan to visualize what someone is looking at

An artificial intelligence can generate images of the pictures people are looking at, based only on their brain scans. Such systems already existed, but they are extremely complex and energy-intensive. The new approach is much simpler.

The system is built on a popular artificial intelligence (AI) that converts text into images. After adaptation, this AI can generate images from brain signals instead of text. The system still requires extensive training with large and expensive imaging equipment, however, so it is not yet practical.

Recently, several research groups have succeeded in generating images based on brain signals, but they used energy-intensive AIs that require fine-tuning of millions to billions of parameters.

Neuroscientists Shinji Nishimoto and Yu Takagi from Osaka University in Japan have now developed a much simpler approach. For this they used Stable Diffusion, a text-to-image generator released by the Stability AI company in August 2022. Their method, published on the preprint server bioRxiv, involves only thousands of parameters instead of millions.

fMRI data

Normally, Stable Diffusion turns a piece of text into an image by starting from random visual noise, which the program gradually refines into an image resembling pictures that carried similar text captions in its training data.
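
To make that ordinary text-to-image path concrete, here is a minimal sketch using the open-source diffusers library. This is illustrative, not the researchers' code; the model checkpoint and settings are assumptions.

```python
# Minimal sketch of the usual text-to-image path with the open-source
# `diffusers` library. Checkpoint name and settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # the August 2022 release
    torch_dtype=torch.float16,
).to("cuda")

# Stable Diffusion starts from random noise and denoises it step by step
# toward an image whose content matches the text prompt.
image = pipe("a mountain landscape with a lake").images[0]
image.save("landscape.png")
```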

Nishimoto and Takagi built two new software models that they connected to Stable Diffusion so that it could handle brain signals instead of text. They used data from four people who had taken part in a study in which fMRI scans of their brains were made as they viewed 10,000 different images of landscapes, objects and people.

Using about 90 percent of the scan data, they trained a model to link fMRI data from a brain region that processes visual signals, the early visual cortex, to the images people viewed.
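
A model with only thousands of parameters, as the paper reports, is consistent with a simple linear mapping. Here is a hedged sketch of such a mapping; the voxel counts, array sizes and random placeholder data are all made up for illustration.

```python
# Hedged sketch: map early-visual-cortex fMRI voxels to Stable Diffusion's
# latent image representation with plain ridge regression. All sizes and
# data below are placeholders, not the study's real values.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_images, n_voxels = 10_000, 5_000
latent_dim = 4 * 64 * 64                      # flattened 4x64x64 SD v1 latent

X = np.random.randn(n_images, n_voxels)       # stand-in for real fMRI responses
Z = np.random.randn(n_images, latent_dim)     # stand-in for true image latents

# About 90 percent of the scans go into training, as in the article.
X_train, X_test, Z_train, Z_test = train_test_split(X, Z, test_size=0.1)

visual_model = Ridge(alpha=1.0).fit(X_train, Z_train)
Z_pred = visual_model.predict(X_test)         # latents for held-out brain scans
```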

80 percent accuracy

Using the same dataset, Nishimoto and Takagi trained a second model to make connections between text descriptions of the images and fMRI data from a brain region that processes image meaning, the ventral visual cortex.
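
The second mapping can be sketched the same way, continuing the code above. The targets would now be the caption embeddings that Stable Diffusion's own CLIP text encoder produces; the captions and the ventral-cortex data below are placeholders.

```python
# Hedged sketch of the second mapping: ventral-cortex fMRI -> the caption
# embeddings Stable Diffusion normally gets from its CLIP text encoder.
# `pipe`, `np`, `Ridge` and `n_images` come from the sketches above.
captions = ["a photo of a dog on a beach"] * n_images  # stand-in captions

with torch.no_grad():
    tokens = pipe.tokenizer(
        captions, padding="max_length",
        max_length=pipe.tokenizer.model_max_length,     # 77 for SD v1
        truncation=True, return_tensors="pt",
    )
    # In practice this would be batched; one pass shown for brevity.
    C = pipe.text_encoder(tokens.input_ids.to("cuda"))[0]  # (n, 77, 768)

Y = C.flatten(1).float().cpu().numpy()        # one flat embedding per image
X_ventral = np.random.randn(n_images, 4_000)  # stand-in ventral-cortex voxels
semantic_model = Ridge(alpha=1.0).fit(X_ventral, Y)
```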

After training, these two models, which had to be adapted to each individual, could translate brain-scan data into a form that could be fed directly into Stable Diffusion. The system then reconstructed about a thousand images that people had viewed with roughly 80 percent accuracy, without having been trained on those specific images. This level of accuracy is comparable to that achieved in previous research that analyzed the same data with a more cumbersome approach.
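
Putting the pieces together, here is a conceptual, heavily hedged sketch of how the two predicted representations could be handed to Stable Diffusion via an image-to-image pass. The conditioning mechanics shown are an assumption for illustration, not the authors' published pipeline.

```python
# Conceptual sketch only: drive Stable Diffusion with the two per-person
# models instead of a text prompt. `visual_model` and `semantic_model` are
# the illustrative regressors from above; shapes assume SD v1 at 512x512.
from diffusers import StableDiffusionImg2ImgPipeline

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Placeholder scans for a single viewed image.
fmri_early = np.random.randn(1, 5_000)
fmri_ventral = np.random.randn(1, 4_000)

# 1. Early-visual-cortex activity -> rough image layout.
z = torch.tensor(visual_model.predict(fmri_early), dtype=torch.float16)
z = z.reshape(1, 4, 64, 64).to("cuda")
rough = img2img.vae.decode(z / 0.18215).sample  # SD v1 latent scaling factor

# 2. Ventral-cortex activity -> caption-like semantic embedding.
c = torch.tensor(semantic_model.predict(fmri_ventral), dtype=torch.float16)
c = c.reshape(1, 77, 768).to("cuda")

# 3. Refine the rough image under the predicted semantic conditioning.
image = img2img(image=rough, prompt_embeds=c, strength=0.8).images[0]
```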

‘I could not believe my eyes. I strode to the bathroom, took a quick look in the mirror, then returned to my desk to check the results again,’ says Takagi.

AI converts brain scans into images
The images in the bottom row were recreated by the AI based on the brain scans of someone looking at the images in the top row. Image: Yu Takagi and Shinji Nishimoto/Osaka University in Japan

Totally impractical

One caveat is that the study was conducted on only four people, and mind-reading AIs work better on some people than on others, Nishimoto says.

In addition, because the models must be adapted to each individual’s brain, this approach requires lengthy brain-scanning sessions and huge fMRI machines, says computer scientist Sikun Lin of the University of California, Santa Barbara. “It’s totally impractical for day-to-day use,” she says.

In the future, more practical versions of this approach could allow people to create art, alter images or add new elements to games using only their imagination, Lin says.
