
Is AI able to read human minds? Though it’s unlikely, that doesn’t mean we shouldn’t be concerned

Earlier this year, Neuralink placed a chip in the brain of Noland Arbaugh, a 29-year-old American who is paralyzed from the shoulders down. Thanks to the chip, Arbaugh can now move a mouse cursor on a screen using only his thoughts.

Separately, in May 2023, US researchers unveiled a non-invasive method that combines generative AI with brain scans to “decode” the words a person is thinking. A related project prompted headlines about a “mind-reading AI hat.”

So can generative AI and neural implants really “read minds”? Will the day come when computers can accurately transcribe our thoughts in real time, for anyone to read?

Such technology might have its uses, especially for marketers hunting for new sources of consumer targeting data, but it would destroy the last refuge of privacy: the privacy of our own thoughts. Before we get too worked up, though, let’s take a moment to consider whether brain implants and generative AI really are capable of “reading minds.”

As far as we now understand, brain activity is the source of conscious experience. This implies that every conscious mental state should have a certain pattern of nerve cells (neurons) firing in the brain, or what philosophers and cognitive scientists refer to as a “neural correlate.”

Thus, there is a matching pattern of brain activity for every conscious mental state you might experience, such as thinking about the Roman Empire or seeing a cursor move.

So it stands to reason that a device capable of monitoring our brain activity should also be able to read our thoughts. Right?

For real-time, AI-powered mind-reading to work, we would need to identify precise, one-to-one correspondences between particular conscious mental states and particular brain states. And it’s quite possible that this can’t be done.

To read a mind from brain activity, one needs to know exactly which brain states correspond to which mental states. That means distinguishing, for example, the brain states that go with seeing a red rose from those that go with smelling one, touching one, imagining one, or thinking that your mother loves red roses.

Additionally, one must differentiate all those brain states from those corresponding to seeing, smelling, feeling, visualizing, or thinking about anything else, such as a juicy lemon. And so on, for everything else that comes to mind or that you are able to see or conceive.

Saying that this is challenging would be a huge understatement.

Consider the perception of faces. Different types of brain activity are involved in the conscious perception of a face.

However, a lot of this activity seems to be connected to processes such as working memory, selective attention, self-monitoring, task planning, and reporting that occur either before or after the conscious experience of the face.

Current neuroscience is nowhere near solving the enormous problem of isolating the brain processes that are uniquely and exclusively responsible for the conscious perception of a face.

And even if neuroscientists did pull this off, they would only have found the neural correlates of a general kind of conscious experience, namely seeing a face. They would not have found the neural correlates of the experiences of particular faces.

So even with incredible advances in neuroscience, a would-be mind-reader still might not be able to tell from a brain scan whether you are looking at your mother, Barack Obama, or a stranger.

As far as mind-reading goes, that isn’t much to get excited about.

But what about AI?

Don’t recent news reports about brain implants and artificial intelligence suggest that some mental states, such as seeing a moving cursor or talking silently to yourself, can already be read?

Not exactly. Take the neural implants first.

A neural implant is usually designed to help a patient perform a particular task, such as moving a cursor on a screen. To do that, it doesn’t need to pinpoint the exact brain processes tied to the intention to move the cursor. It just needs a rough fix on the brain activity that tends to accompany that intention, some of which may also underlie other, closely related mental functions such as planning tasks, remembering, and so on.
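To make that concrete, here is a deliberately simplified sketch in Python. The electrode groups, firing rates and direction “signatures” below are all invented for illustration, and this is in no way Neuralink’s actual decoder; it only shows that a rough average pattern per intention, learned from example trials, can be enough to steer a cursor, without any one-to-one map from brain states to thoughts.

```python
# Toy sketch: decoding an intended cursor direction from coarse neural features.
# All data here is simulated; real implants use far richer signals and models.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each trial gives average firing rates from 4 electrode groups,
# recorded while the user intends one of four cursor directions.
directions = ["up", "down", "left", "right"]
n_trials = 50

# Each direction gets a rough (not unique) neural "signature", plus noise,
# mimicking the fact that the same circuits also support related functions.
signatures = rng.normal(size=(4, 4))
X, y = [], []
for i, d in enumerate(directions):
    trials = signatures[i] + rng.normal(scale=0.8, size=(n_trials, 4))
    X.append(trials)
    y += [i] * n_trials
X = np.vstack(X)
y = np.array(y)

# Nearest-centroid decoder: only needs a *rough* average pattern per direction.
centroids = np.array([X[y == i].mean(axis=0) for i in range(4)])

def decode(features):
    dists = np.linalg.norm(centroids - features, axis=1)
    return directions[int(np.argmin(dists))]

new_trial = signatures[2] + rng.normal(scale=0.8, size=4)
print(decode(new_trial))  # likely "left", but occasional errors are expected
```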

So while neural implants have been remarkably successful, and future implants will probably collect ever more precise data about brain activity, that does not show that exact one-to-one mappings between particular mental states and particular brain states have been found. It therefore does not bring genuine mind-reading any closer.

Now consider the “decoding” of inner speech by the system from the May 2023 research mentioned above, which combines generative AI with non-invasive brain scans. The method was designed to “decode” continuous narratives from scans taken while participants watched films, listened to podcasts, or silently rehearsed stories. The algorithm isn’t especially accurate, but the fact that it could predict these mental contents better than chance at all is striking.
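To give a rough flavor of how such a system works, here is another deliberately simplified Python sketch. The candidate phrases and feature vectors are invented; the real study learned its mapping from many hours of brain scans and used a large language model to propose candidate wordings. The point is only that “decoding” here means scoring candidate phrases against scan-derived features and keeping the best match, which is very different from transcribing thought directly.

```python
# Caricature of semantic decoding: score candidate phrases against a
# brain-scan-derived feature vector and keep the best-matching one.
import numpy as np

rng = np.random.default_rng(1)

# Invented phrase embeddings; a real system would learn these from data.
candidates = {
    "she opened the door": rng.normal(size=16),
    "he walked down the road": rng.normal(size=16),
    "the dog barked loudly": rng.normal(size=16),
}

# Pretend the scanner gives a noisy version of the features for the phrase
# the participant actually heard.
true_phrase = "he walked down the road"
scan_features = candidates[true_phrase] + rng.normal(scale=0.5, size=16)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {phrase: cosine(vec, scan_features) for phrase, vec in candidates.items()}
print(max(scores, key=scores.get))  # usually the true phrase: better than chance
```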

Now suppose the algorithm could predict continuous narratives from brain scans with great accuracy. Like the neural implant, it would only succeed at that one task; it wouldn’t be able to track any other mental activity.

How much mental activity could such a technology monitor? That depends: how much of our mental lives consist of producing, interpreting, or contemplating coherent narratives that can be put into plain words?

Our mental lives are fleeting, fast-moving, and run on several streams at once. They mix real-time perceptions with memories, expectations, and imaginings, often all at the same time. It’s hard to see how a transcript produced by even the best brain scanner paired with the most sophisticated AI could capture all of that faithfully.

In recent years, AI research has shown a knack for cracking problems that once seemed intractable, so it would be unwise to rule out the possibility of AI-powered mind-reading altogether.

But given how young neuroscience still is, how limited our understanding of the brain remains, and how complex our mental lives are, confident predictions about AI-powered mind-reading should be taken with a grain of salt.
