A group of researchers from Singapore and Hong Kong used generative AI to create “high quality” videos by reading the brain activity of study subjects: an AI that reads minds. From fMRI readings, the researchers reconstructed videos that were similar (but not identical) to the ones the subjects were watching, and they confirmed the brain areas involved in the imagining process.
From brain activity to video, AI reads minds
Jiaxin Qing, Zijiao Chen and Juan Helen Zhou from the National University of Singapore and the Chinese University of Hong Kong used fMRI (functional magnetic resonance imaging), pairing it with an artificial intelligence model based on Stable Diffusion. The model is called MinD-Video, and the researchers explain how it works in a dedicated research paper.
They have also published a website that compares two sets of videos. The first are the ones the researchers showed to the test subjects; the second are the ones reconstructed from the subjects’ brain activity. The two sets are strikingly similar: same scenes, same colors.
Video, brain, video
The researchers describe MinD-Video as a “two-module pipeline” that bridges brain decoding from still images to video. Researchers in Osaka had already managed to reconstruct images from functional magnetic resonance readings; MinD-Video’s reconstructions, however, move.
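As a rough illustration of what a two-module pipeline of this kind could look like, here is a minimal PyTorch sketch: a first module that encodes a flattened fMRI scan into an embedding, and a second stand-in module that turns that embedding into video frames. The names, layer sizes, and the toy generator are assumptions for illustration only, not the published MinD-Video implementation.

```python
# Hypothetical two-module sketch: fMRI encoder -> conditioned video generator.
# All dimensions and modules are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Module 1: compress a flattened fMRI scan into a conditioning embedding."""
    def __init__(self, n_voxels=4096, embed_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 1024),
            nn.GELU(),
            nn.Linear(1024, embed_dim),
        )

    def forward(self, voxels):
        return self.net(voxels)

class ToyVideoGenerator(nn.Module):
    """Module 2: stand-in for the generative model; maps an embedding to
    a short clip shaped (batch, frames, channels, height, width)."""
    def __init__(self, embed_dim=512, frames=8, size=32):
        super().__init__()
        self.frames, self.size = frames, size
        self.net = nn.Linear(embed_dim, frames * 3 * size * size)

    def forward(self, embedding):
        out = self.net(embedding)
        return out.view(-1, self.frames, 3, self.size, self.size)

encoder, generator = FMRIEncoder(), ToyVideoGenerator()
scan = torch.randn(1, 4096)       # one simulated fMRI sample
video = generator(encoder(scan))  # -> torch.Size([1, 8, 3, 32, 32])
print(video.shape)
```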
Explanatory image from the research
The researchers shared videos comparing the original footage of horses in a meadow with reconstructed images of more colorful horses. In another video, a car drives through a wooded area, and the reconstruction gives the impression of sitting in the driver’s seat on a winding road. The researchers judged the reconstructed images to be “high quality” based on the motion and dynamics of the scenes, and they reported an accuracy of 85%, higher than that of other methods.
The authors explained that this field has promising applications as larger models develop, from neuroscience to brain-computer interfaces. And even though, like any scientific study, the work needs peer review by other scholars around the world, the possibilities seem enormous.
Promising results
In particular, they indicated that the results highlight three interesting findings. The first is the dominant role of the visual cortex, which confirms that this part of the brain is an essential component of visual perception. The second is that the fMRI decoder works hierarchically, using different layers to process images: it starts with structural information and moves on to more abstract visual features in deeper layers. Finally, the authors found that the decoder improves at every stage of training, showing its ability to capture more nuanced information as learning continues.
The researchers say they are excited about the enhanced AI model used in this new research, which allows for more precise visualization. “One of the key qualities of our Stable Diffusion model compared to other generative models, such as GANs, is its ability to produce higher-quality video. It takes advantage of the representations learned by the fMRI encoder and uses its unique diffusion process to generate videos that are not only higher quality but also more faithful to the original neural activities,” the researchers wrote.
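To make the quoted idea more concrete: in a diffusion model, generation starts from pure noise and a denoising network is applied repeatedly, and “taking advantage of the fMRI encoder’s representations” means feeding the brain-derived embedding into that network at every step. Below is a minimal, hypothetical DDPM-style sketch of such conditional sampling, continuing the toy PyTorch setup above; the network, noise schedule, and shapes are illustrative assumptions, not the paper’s actual Stable Diffusion pipeline.

```python
# Hypothetical conditional diffusion sampling: the denoiser sees the noisy
# frame, the timestep, and the fMRI-derived embedding at every step.
# A simplified DDPM-style sketch, not the paper's Stable Diffusion setup.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, frame_dim=3 * 32 * 32, embed_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + embed_dim + 1, 1024),
            nn.GELU(),
            nn.Linear(1024, frame_dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.float().view(-1, 1) / 1000.0  # crude timestep encoding
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

@torch.no_grad()
def sample(denoiser, cond, steps=50, frame_dim=3 * 32 * 32):
    betas = torch.linspace(1e-4, 0.02, steps)       # toy noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond.shape[0], frame_dim)       # start from pure noise
    for t in reversed(range(steps)):
        # Predict the noise, conditioned on the fMRI embedding at every step.
        eps = denoiser(x, torch.full((cond.shape[0],), t), cond)
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # sampling noise
    return x

cond = torch.randn(1, 512)              # fMRI embedding from module 1
frame = sample(ConditionalDenoiser(), cond)
print(frame.shape)                      # torch.Size([1, 3072])
```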
AI can’t read our minds yet, but it seems ever closer to doing so, with enormous future implications: from communicating with people with disabilities to seeing our dreams the next morning. And much, much more.