In a separate experiment, the researchers showed participants short Pixar videos containing no dialogue and recorded their brain responses, to test whether the decoder could retrieve the general content of what each participant was watching. It could.
Romain Brette, a theoretical neuroscientist at the Vision Institute in Paris who was not involved in the experiment, is not entirely convinced of the technology’s effectiveness at this stage. “The way the algorithm works is basically that an AI model creates sentences based on vague information about the semantic field of the sentences inferred from the brain scan,” he says. “There might be some interesting use cases, like inferring what you’ve dreamed about, at a general level. But I’m a little skeptical that we’re really getting close to the mind-reading level.”
The decoder may not work perfectly yet, but the experiment raises ethical questions about the possible future use of brain decoders for surveillance and interrogation. With that in mind, the team set out to test whether a decoder could be trained and run without the cooperation of the person whose brain activity it targets. They did this by trying to decode each participant’s perceived speech using decoder models trained on other participants’ data. These cross-subject decoders performed “barely above chance.”
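To make the cross-subject test concrete, here is a minimal, purely illustrative sketch in Python. It is not the authors’ actual pipeline (which used fMRI recordings and a language model); the synthetic data, the ridge-regression decoder, and the per-subject transforms are all assumptions chosen to show why a decoder fitted to one brain transfers poorly to another.

```python
# Illustrative sketch only -- not the study's actual method. It mimics the
# cross-subject test in spirit: fit a simple decoder on one simulated
# subject's "scans," then check how well it transfers to another subject.
# Each subject maps the same stimulus semantics to voxel space through a
# different private transform, a stand-in for idiosyncratic brain anatomy.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 200, 500, 20

# Shared "semantic" features of the stimuli every subject perceived.
semantics = rng.normal(size=(n_trials, n_features))

def simulate_subject(seed):
    # Subject-specific linear map from semantics to voxel activity, plus noise.
    w = np.random.default_rng(seed).normal(size=(n_features, n_voxels))
    return semantics @ w + rng.normal(scale=2.0, size=(n_trials, n_voxels))

scans_a = simulate_subject(1)
scans_b = simulate_subject(2)

# Train a decoder (brain activity -> semantic features) on subject A only.
decoder_a = Ridge(alpha=10.0).fit(scans_a[:150], semantics[:150])

def mean_corr(pred, true):
    # Average per-feature correlation between decoded and true semantics.
    return np.mean([np.corrcoef(pred[:, i], true[:, i])[0, 1]
                    for i in range(true.shape[1])])

within = mean_corr(decoder_a.predict(scans_a[150:]), semantics[150:])
across = mean_corr(decoder_a.predict(scans_b[150:]), semantics[150:])
print(f"within-subject r ~ {within:.2f}, cross-subject r ~ {across:.2f}")
# Expected pattern: strong within-subject decoding, near-zero cross-subject
# decoding, echoing the "barely above chance" result reported in the study.
```

Under these assumptions, the decoder’s weights are tuned to one subject’s private mapping, so applying them to a different subject yields essentially uncorrelated output, which is the intuition behind the team’s finding.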
This, they say, suggests that a decoder could not be applied to someone’s brain activity unless that person was willing and had helped train the decoder in the first place.
“We think that mental privacy is really important and that no one’s brain should be decoded without their cooperation,” says Jerry Tang, a doctoral student at the university who worked on the project. “We believe it is important to continue researching the privacy implications of brain decoding and to enact policies that protect each person’s mental privacy.”