It is not surprising that anesthetists, as a group, stand out among scientists and philosophers for having contributed much pioneering research on consciousness. Ether’s ability to induce loss of consciousness was first demonstrated on a tumor patient at the Massachusetts General Hospital in Boston in 1846, in a surgical theater that later became known as “the Ether Dome.” So consequential was the procedure that it was captured in a famous painting, “First Operation Under Ether,” by Robert C. Hinckley. The Ether Dome is still in daily use. During my post-doc studies at the Massachusetts General, I was asked to present a paper in this famous hall. To say that I was anxious is the understatement of the year.

When people are administered an anesthetic, they seem to lose consciousness, or at least they stop reacting to their environment. Evidently, anesthetic agents do not suppress brain function globally but exert dose-dependent effects on specific brain systems that sustain internal consciousness and perception of the environment. Each agent has its own mechanism of action and induces distinct altered consciousness states.

Researchers at the University of Turku, Finland, found that the anesthetized brain can still process sounds and words, even though subjects do not recall them afterward. Contrary to common belief, anesthesia does not induce full loss of consciousness; it is sufficient merely to disconnect the patient from the environment. The findings indicate that the state induced by anesthetics is similar to natural sleep. When we sleep, we are not fully conscious, but neither are we fully unconscious, as lucid dreams, for example, demonstrate. After we wake up, we may remember some of our dreams. If a person is under general anesthesia and one of the doctors says something alarming like, “Oh, I think we just ruptured her stomach,” the patient will react with all the physical signs of a panic attack. The brain may be underperforming, but the mind is fully operational.

Unfortunately, there is a small number of brain-injured patients who appear to be permanently asleep and are unable to communicate with their caregivers through speech or movement. But not all of them are unconscious.

This problem is known in academic circles as “covert consciousness.” Its closest familiar relative is locked-in syndrome, a rare neurological disorder in which all voluntary muscles are completely paralyzed except for those that control the movements of the eyes. Individuals with locked-in syndrome are conscious and awake but have no ability to produce movements or to speak. Cognitive function is usually unaffected. Communication is possible in some cases through eye movements or blinking.

Communicating with patients who suffer from locked-in syndrome

Locked-in syndrome is caused by damage to the pons, a part of the brainstem whose nerve fibers relay information to other areas of the brain. Knowing which of these patients are conscious is incredibly important to preserve their sanity. But how? How can we ever know what, if anything, these patients are thinking or feeling?

Patients in this vegetative state are the focus of the research of Martin Monti at UCLA. For years, Monti and Adrian Owen had been studying vegetative patients. Eventually, they devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts. They would pose a question and ask the patient to signal “yes” by imagining playing tennis, or “no” by imagining walking around his house. Their fMRI machine displayed a cross-section of the patient’s brain, and Monti knew where to look to spot the yes and the no signals.

He would ask, “Is your father’s name Alexander?”

The man’s premotor cortex lit up. He was thinking about tennis—yes.

“Is your father’s name Thomas?”

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

The answers were correct. What a relief it must have been for the locked-in person to be liberated, to finally relay their thoughts and feelings to the outside world and regain, at least partially, their humanity.
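The yes/no readout amounts to a simple decision rule: whichever imagery network responds more strongly above its resting baseline carries the answer. A minimal sketch of that rule, with invented activation values and a hypothetical baseline (the real analysis involves statistical tests on whole activation maps, not two numbers):

```python
# Toy version of the yes/no readout: compare how strongly each imagery
# task's signature region responds relative to a resting baseline.
# All values and the baseline are invented for illustration.

def decode_answer(motor_imagery_activation, spatial_imagery_activation, baseline=1.0):
    """Return 'yes' if the motor-imagery (tennis) signal dominates,
    'no' if the spatial-imagery (house) signal dominates, or None
    if neither region responded above baseline."""
    tennis_signal = motor_imagery_activation - baseline
    house_signal = spatial_imagery_activation - baseline
    if max(tennis_signal, house_signal) <= 0:
        return None  # no clear imagery detected
    return "yes" if tennis_signal > house_signal else "no"

print(decode_answer(2.3, 1.1))  # strong motor-imagery response: 'yes'
print(decode_answer(1.0, 1.9))  # strong spatial-imagery response: 'no'
```

The None case matters clinically: an absent response is ambiguous, since it may mean “no answer” or simply a failure to detect the imagery.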

Using the same fMRI scanners, Ken Norman, chair of the psychology department at Princeton University, and his group, drawing on insights from machine learning, conceive of thoughts as collections of points in a dense “meaning space.” By identifying how these points are interrelated and encoded, they have started to produce an inventory of the mind.

The next logical step for this research, Norman has said, is a general-purpose thought decoder. Such a device could read a person’s thoughts. In 2018, Matthew Botvinick, a colleague of Norman’s, along with other scientists at MIT, succeeded in building a program that could decode sentences that subjects read silently to themselves. The system learned which brain patterns were evoked by certain words and used that knowledge to guess which words were behind new patterns it encountered.
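This is not the Botvinick system itself, but the core idea of a meaning-space decoder can be sketched in a few lines: learn a mapping from brain patterns to points in a word-embedding space, then decode a new pattern by finding its nearest word. Everything below is invented for illustration: the “voxel” patterns are simulated, the dimensions are arbitrary, and the decoder is plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "meaning space": one 4-d coordinate per word in a tiny vocabulary.
vocab = ["tennis", "house", "father"]
embeddings = rng.normal(size=(3, 4))

# Simulate fMRI patterns as an unknown linear mixing of the embedding plus noise.
mixing = rng.normal(size=(4, 10))  # 10 simulated voxels

def simulate_pattern(word):
    e = embeddings[vocab.index(word)]
    return e @ mixing + 0.05 * rng.normal(size=10)

# "Training": learn a least-squares map from voxel patterns back to meaning space.
X = np.stack([simulate_pattern(w) for w in vocab for _ in range(20)])
Y = np.stack([embeddings[vocab.index(w)] for w in vocab for _ in range(20)])
decoder, *_ = np.linalg.lstsq(X, Y, rcond=None)

def decode(pattern):
    """Project a voxel pattern into meaning space; return the nearest word."""
    point = pattern @ decoder
    sims = embeddings @ point / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(point)
    )
    return vocab[int(np.argmax(sims))]

for word in vocab:
    print(word, "->", decode(simulate_pattern(word)))
```

The same two-step structure, a learned map into a shared semantic space followed by nearest-neighbor lookup, is what lets such decoders generalize to words they were never explicitly trained on.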

Of course, like so much in science, new discoveries can be used for good or evil.

The work at Princeton was funded by IARPA, the Intelligence Advanced Research Projects Activity, in the USA. Its mission: to invest in high-risk, high-payoff research programs that tackle some of the most difficult challenges facing the agencies and disciplines of the intelligence community. Does that tell you something?

Adrian Owen, Professor of Cognitive Neuroscience and Imaging at the University of Western Ontario, is best known for showing that functional neuroimaging can reveal conscious awareness in some patients who appear to be entirely vegetative. Owen has been quoted as saying, “I have no doubt that at some point down the line, we will be able to read minds.”

This is a terrifying prospect, considering that in the wrong hands, thought reading can easily become thought stealing.