Integrated Information Theory (IIT) is Giulio Tononi’s bold account of the neural underpinnings of consciousness. Roughly, IIT proposes that the subjective component of consciousness emerges when an information-processing entity has many informational states, is interconnected (integrated), and has certain feedback properties. “Phi” is a computed quantity that measures the instantaneous amount of integrated information in a system. According to IIT, consciousness emerges from any system with the proper architecture, principally, large numbers of distinct, “integrated” states. The larger the Phi, the greater the conscious experience. The human brain has a large information capacity and an integrated architecture; thus, during the waking state a human brain has lots of consciousness.
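To make the “integration” intuition concrete, here is a toy sketch in Python. It is emphatically not Tononi’s actual Phi algorithm (which involves a search over all partitions and a specific distance measure); it merely contrasts a coupled two-node system with an isolated one, scoring each by how much information the whole system carries about its own past beyond what its individual nodes carry. The update rules and the name `phi_toy` are illustrative inventions, not part of IIT.

```python
from itertools import product
from math import log2

def phi_toy(update):
    """Crude 'whole minus parts' proxy for integration: mutual information
    between the whole system's past and present, minus the sum of the
    single-node mutual informations. NOT Tononi's Phi; just an illustration."""
    states = list(product([0, 1], repeat=2))  # all past states, assumed uniform

    def mi(f_past, f_pres):
        # Mutual information I(X; Y), X = f_past(s), Y = f_pres(update(s))
        joint, px, py = {}, {}, {}
        for s in states:
            pair = (f_past(s), f_pres(update(s)))
            joint[pair] = joint.get(pair, 0.0) + 0.25
        for (x, y), p in joint.items():
            px[x] = px.get(x, 0.0) + p
            py[y] = py.get(y, 0.0) + p
        return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

    whole = mi(lambda s: s, lambda s: s)
    parts = mi(lambda s: s[0], lambda s: s[0]) + mi(lambda s: s[1], lambda s: s[1])
    return whole - parts

coupled  = lambda s: (s[1], s[0] ^ s[1])  # each node's future depends on the other
isolated = lambda s: (s[0], s[1])         # nodes evolve independently

print(phi_toy(coupled))   # 2.0 bits: the whole carries info the parts lack
print(phi_toy(isolated))  # 0.0 bits: no integration across the partition
```

The coupled system scores high because neither node alone predicts its own future, yet the whole system predicts its future perfectly; the isolated system scores zero because the parts already account for everything the whole does.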
A quale is the “feeling” of a particular state of consciousness. According to IIT, a distinct quale emerges from each brain state that is sufficiently different from other brain states: in IIT terms, “the distinction that makes a difference.” To accomplish this emergence (the transition from brain state to feeling), IIT postulates a type of panpsychism: a conscious experience emerges whenever information processing contains sufficient Phi. Panpsychism refers to the notion that consciousness is a fundamental feature of the universe, shared by complex entities, such as human brains, and by much simpler entities. The difference in consciousness between the extremes is quantitative, not qualitative. Tononi and Koch seem to accept a special form of panpsychism in which sets of nearby structures, dynamically interacting in the proper way, cause the emergence of consciousness. As with traditional panpsychism, Tononi’s panpsychism extends down to very simple systems: diodes and thermostats, for example, have tiny amounts of consciousness.
There’s lots to discuss and argue about here. But I’d like to take this one step further. Let’s accept all that IIT offers, and accept the special panpsychism of IIT that “solves” the hard problem. What do we get?
An observer without action. A conscious entity that can “feel” its current state, but cannot act.
It is accepted that behavior is the output of our musculoskeletal system, and that behavior is mediated by the output of motor neurons (the neurons that connect the spinal cord and brainstem to muscles). It is also accepted that the code from motor neurons to muscle contraction is an action-potential code. That is, the action-potential patterns of motor neurons determine behavior.
Now let’s return to IIT, and accept that the patterns of activity in the nervous system generate a conscious experience, exactly in the manner that IIT prescribes. We now have a conscious entity “knowing” and “feeling” the state of its nervous system. The problem, as I see it, is that IIT is one-directional. The consciousness produced by IIT has no mechanism to feed back on the activity of the nervous system. Specifically, consciousness, as described by IIT, cannot influence the pattern of action potentials in the motor neurons in order to influence behavior. IIT advocates frequently argue that consciousness does not require behavior, their strongest piece of evidence being “locked-in syndrome.” This argument can backfire: IIT, as currently construed, produces a type of locked-in syndrome. The conscious observer can watch what’s going on, but has no influence.
A counter argument may be that there is sufficient integration in the nervous system to produce integrated behavior. This sounds sensible, but it leaves consciousness out in the cold. If the nervous system on its own can produce integrated output, what’s consciousness good for? All of the high-level decision making attributed to consciousness doesn’t need consciousness. Conscious experience does not produce the integration, since it’s already there.
Another way of saying this is that IIT has proposed an explanation of consciousness based on one magical action: the emergence of conscious experience from brain state due to a special panpsychism. I argue that for this consciousness to be complete, or at all useful, one would need two magical actions: the emergence of conscious experience from brain states PLUS the ability of conscious experience, which appears to be non-material, to influence brain states.
The core problem, as I see it, is the division of consciousness into two domains: perception (quale) and action (will). A powerful theory of consciousness can only emerge when the components are combined.
Frisby is the new captain of an extremely large ship. Frisby has had minimal training, but is instructed in his duties by the highly efficient chief officer Quiggly. Quiggly says,
Don’t worry, captain, this ship is highly engineered and controlled. You will sit on the captain’s deck and be shown clear, video-like images of various viewpoints from the ship: ahead, behind, below (under water), etc. (While saying this, Quiggly points to various monitors.) Your responsibility is to make high-level decisions, determining the direction, destination and, sometimes, the speed of the ship. Here are your direction and speed controls. Notice the sensors on the walls (he points): these indicate speed, wind speed, currents, etc. They are well marked. Below decks are the engine rooms and the computer control center, where your high-level decisions are interpreted and actuated. As you can imagine, there are thousands of sensors, hundreds of direction controls, and dozens of motors. You need not worry about these. In the old days we had engineers below deck performing these low-level operations; now they are controlled by computer. But you set the master direction and control. Only you have access to the state of the entire ship; it takes a captain to really understand that state. Thus, nowhere else can unified, high-level commands be implemented.
The captain’s deck is beautiful and impressive. The ship sets off, and Frisby sets course and controls speed. When Frisby sees a ship on his left, he carefully steers out of its way. Later, when he sees a sandbank on the right, he steers to avoid it. He smiles to himself. The ship behaves beautifully, responding slowly but surely to Frisby’s actions. He notices a slight drag on the steering mechanism, but it is not serious.
Three days out, on a lovely day, Frisby sees a beautiful island to his right and thinks the crew and the passengers would like a closer look. He turns the steering mechanism, but it resists. Finally, the wheel turns, but the ship continues on a straight course. Five minutes pass; the ship moves straight ahead. Frisby calls Quiggly.
Quiggly arrives in 10 minutes. He hems and haws and talks gibberish about the control mechanisms. Finally, he says,
Captain, I wasn’t truthful with you. While it’s true that you know the high-level state of the ship, your controls do nothing. The computation center has total control. While it gives you a cohesive representation of the state of the ship, it also maintains that cohesive representation itself. It has been programmed with values that largely represent a captain’s values. Because those values correspond to yours, the decisions it makes parallel the ones you would make. When you feel you have made a decision, it makes a similar one, and the results are consistent. On occasion, like today, you and the computer might make different decisions. Since your decisions have no effect, it wins. In reality, your controls do nothing.
Frisby has a type of locked-in syndrome1. He has sensors but no effectors. He is not an agent of action.
— — —
As I see it, there are four ways to classify my reaction to IIT:
- I have misread IIT. The consciousness produced by IIT has both quale and conscious agency.
- Tononi has presented half a story of consciousness. He will fill out the theory later.
- Consciousness is a “locked in syndrome”. There is no conscious agency. No will or free will. Tononi’s dystopic account of consciousness is complete.
- IIT makes a fundamental mistake in separating quale from conscious agency.
— — —
Two related points:
- If consciousness has no output — no function for the organism — it has no survival value and no mechanism for evolution.
- The third possibility listed above is that there is no will or free will. There are ongoing debates about whether free will is illusory; book-length arguments are made in Sam Harris’ Free Will and Daniel Wegner’s The Illusion of Conscious Will. While I feel there are flaws in the argument that conscious will is an illusion, they won’t be presented here.
1 “Locked-in syndrome” is a condition in which a patient is awake but completely paralyzed. In some cases, only the eyes can move; in others, not even the eyes. Locked-in syndrome is commonly due to strokes that destroy the motor pathways traversing the brainstem (midbrain or pons) but do not affect the forebrain. Some have argued that locked-in syndrome shows that output and agency are not essential components of consciousness. The counter-argument is that individuals with locked-in syndrome have had a long history of agency.