Is IIT Consciousness a One-Way Street?

Integrated Information Theory (IIT) is Giulio Tononi’s bold account of the neural underpinnings of consciousness. Roughly, IIT proposes that the subjective component of consciousness emerges when an information-processing entity has many informational states, is interconnected (integrated), and has certain feedback properties. “Phi” is a computed quantity that measures the instantaneous amount of integrated information in a system. According to IIT, consciousness emerges from any system with the proper architecture, principally one having large numbers of distinct, “integrated” states. Thus, the larger the Phi, the greater the conscious experience. The human brain has large information capacity and an integrated architecture; thus, during the waking state a human brain has lots of consciousness.
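To give a feel for what “integrated information” means as a computation, here is a toy sketch. This is not IIT’s actual Phi (which searches over all partitions for the minimum-information cut and uses a more elaborate measure); it is only an illustrative proxy: how well does the whole system’s past predict its present, compared with the parts taken separately? The two-node “copy” system and all names here are invented for illustration.

```python
from collections import Counter
from math import log2

def mi(pairs):
    """Mutual information (bits) between x and y, given equiprobable (x, y) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Toy system: two binary nodes that copy each other on every time step.
def step(state):
    a, b = state
    return (b, a)  # A's next value is B's past value, and vice versa

past_states = [(a, b) for a in (0, 1) for b in (0, 1)]  # uniform prior over pasts

# Whole-system integration proxy: how much the system's past tells us about its present.
whole = mi([(s, step(s)) for s in past_states])

# Partitioned version: predict each node from its own past alone.
parts = (mi([(s[0], step(s)[0]) for s in past_states]) +
         mi([(s[1], step(s)[1]) for s in past_states]))

phi_proxy = whole - parts
print(whole, parts, phi_proxy)  # 2.0 0.0 2.0
```

Each node alone predicts nothing about its own future (A’s next state depends only on B), yet the whole system is perfectly predictive; the 2-bit gap is the integration the parts cannot account for. Real Phi calculations follow this spirit but are vastly more involved, which is why computing Phi for a brain is intractable.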

A quale is the “feeling” of a particular state of consciousness. According to IIT, a distinct quale emerges from each brain state that is sufficiently different from other brain states; in IIT terms, “the distinction that makes a difference.” To accomplish this emergence (the transition from brain state to feeling), IIT postulates a type of panpsychism: conscious experience emerges when information processing contains sufficient Phi. Panpsychism refers to the notion that consciousness is a fundamental feature of the universe, a feature shared by complex entities, such as human brains, and by much simpler entities. The difference in consciousness between the extremes is quantitative, not qualitative. Tononi and Koch seem to accept a special form of panpsychism in which sets of nearby structures, dynamically interacting in the proper way, cause the emergence of consciousness. As with traditional panpsychism, Tononi’s panpsychism scales all the way down: diodes and thermostats, for example, have tiny amounts of consciousness.

There’s lots to discuss and argue about here. But I’d like to take this one step further. Let’s accept all that IIT offers, and accept the special panpsychism of IIT that “solves” the hard problem. What do we get?

An observer without action. A conscious entity that can “feel” its current state, but cannot act.

It’s accepted that behavior is the output of our musculoskeletal system and that behavior is mediated by the output of motor neurons (the neurons that connect the spinal cord and brainstem to muscles). It is also accepted that the code from motor neurons to muscle contraction is an action-potential code. That is, the action potential patterns of motor neurons determine behavior.

Consciousness is a Roach Motel; Information can check in; but it can't check out.


Now let’s return to IIT, and accept that the patterns of activity in the nervous system generate a conscious experience, exactly in the manner that IIT prescribes. We now have a conscious entity “knowing” and “feeling” the state of its nervous system. The problem, as I see it, is that IIT is one-directional. The consciousness produced by IIT has no mechanism to feed back on the activity of the nervous system. Specifically, consciousness, as described by IIT, cannot influence the pattern of action potentials in the motor neurons in order to influence behavior. IIT advocates frequently argue that consciousness does not require behavior, with their strongest piece of evidence being “locked-in syndrome”. This argument can backfire. IIT, as currently construed, produces a type of locked-in syndrome. The conscious observer can watch what’s going on, but has no influence.

A counter argument may be that there is sufficient integration in the nervous system to produce integrated behavior. This sounds sensible, but it leaves consciousness out in the cold. If the nervous system on its own can produce integrated output, what’s consciousness good for? All of the high-level decision making attributed to consciousness doesn’t need consciousness. Conscious experience does not produce the integration, since it’s already there.

Another way of saying this is that IIT has proposed an explanation of consciousness based on one magical action: the emergence of conscious experience from brain state due to a special panpsychism. I argue that for this consciousness to be complete, or at all useful, one would need two magical actions: the emergence of conscious experience from brain states PLUS the ability of conscious experience, which appears to be non-material, to influence brain states.

The core problem, as I see it, is the division of consciousness into two domains: perception (quale) and action (will). A powerful theory of consciousness can only emerge when the components are combined.

A story:
Frisby is the new captain of an extremely large ship. Frisby has had minimal training, but is instructed about what to do by the highly efficient chief officer Quiggly. Quiggly says,

Don’t worry, captain, this ship is highly engineered and controlled. You will sit on the captain’s deck and be shown clear video-like images of various viewpoints from the ship: ahead, behind, below (under water), etc. (While saying this, Quiggly points to various monitors.) Your responsibility is to make high-level decisions, determining the direction, destination and, sometimes, the speed of the ship. Here are your direction and speed controls. Notice the sensors on the walls (he points): these indicate speed, wind speed, currents, etc. They are well marked. Below decks are the engine rooms and computer control center. These are where your high-level decisions are interpreted and actuated. As you can imagine, there are thousands of sensors, hundreds of direction controls, and dozens of motors. You need not worry about these. In the old days we had engineers below deck performing these low-level operations. Now they are controlled by computer. But you set the master direction and control. Only you have access to the state of the entire ship; it takes a captain to really understand that state. Thus, nowhere else can unified, high-level commands be implemented.

The captain’s deck is beautiful and impressive. The ship sets off, and Frisby sets course and controls speed. When Frisby sees a ship on his left, he carefully steers out of its way. Later, when he sees a sand bank on the right, he steers to avoid it. He smiles to himself. The ship behaves beautifully, responding slowly but surely to Frisby’s actions. He notices a slight drag on the steering mechanism, but it is not serious.

Three days out, on a lovely day, Frisby sees a beautiful island to his right, and thinks the crew and the passengers would like a closer look. He turns the steering mechanism, but it resists. Finally, the wheel turns, but the ship continues on a straight course. Five minutes pass, and the ship moves straight ahead. Frisby calls Quiggly.

Quiggly arrives in 10 minutes. He hems and haws and talks gibberish about the control mechanisms. Finally, he says,

Captain, I wasn’t truthful with you. While it’s true that you know the high-level state of the ship, your controls do nothing. The computation center has total control. While it gives you a cohesive representation of the state of the ship, it also has that cohesive representation itself. It has been programmed with values that largely represent a captain’s values. Because those values correspond to yours, the decisions it makes parallel the ones you would make. When you feel you have made a decision, it makes a similar one, and the results are consistent. On occasion, like today, you and the computer might make different decisions. Since your decisions have no effect, it wins. In reality, your controls do nothing.

Frisby has a type of locked-in syndrome1. He has sensors but no effectors. He is not an agent of action.

— — —


As I see it, there are four ways to classify my reaction to IIT:

  1. I have misread IIT. The consciousness produced by IIT has both quale and conscious agency.
  2. Tononi has presented half a story of consciousness. He will fill out the theory later.
  3. Consciousness is a “locked-in syndrome”. There is no conscious agency, no will or free will. Tononi’s dystopic account of consciousness is complete.
  4. IIT makes a fundamental mistake in separating quale from conscious agency.

— — —

Two related points:

  1. If consciousness has no output — no function for the organism — it has no survival value and no mechanism for evolution.
  2. Point #3 in the list above is that there is no will or free will. There are current debates over whether free will is illusory. Book-length arguments are made in Sam Harris’ Free Will and Daniel Wegner’s The Illusion of Conscious Will. While I feel there are flaws in the argument that conscious will is an illusion, they won’t be presented here.

1“Locked-in syndrome” is a condition in which a patient is awake but completely paralyzed. In some cases, only the eyes can move; in others, not even the eyes. Locked-in syndrome is commonly due to strokes that destroy motor pathways traversing the brainstem (midbrain or pons) but do not affect the forebrain. Some have argued that locked-in syndrome shows that output and agency are not essential components of consciousness. The counter-argument is that individuals with locked-in syndrome have had a long history of agency.


8 thoughts on “Is IIT Consciousness a One-Way Street?”

    • Bill: yes, I think “epiphenomenalism” nails it. I hadn’t known the term.

      There is an interesting and complex discussion of IIT in a blog post by Scott Aaronson (mostly math).

      Although the discussion is mostly math, I added a comment on the one-way nature of IIT and evolution:

      John kubie Says:
      Comment #58 June 1st, 2014 at 3:52 pm
      Concerning the evolution of consciousness. As I understand IIT, it’s a one-way street: complexity in the world produces consciousness, but consciousness has no access to affecting the natural elements that create complexity (or anything else). If true, then consciousness cannot be under selective pressure and won’t evolve. This makes the approach much less interesting.

      Christof Koch appeared to reply:

      Christof Koch Says:
      Comment #63 June 1st, 2014 at 5:46 pm
      Regarding the link between evolution and consciousness. IIT takes no position on the function of experience as such – similar to physics not having anything to say about the function of mass or charge. However, by identifying consciousness with integrated information, IIT can account for why it evolved. In general, a brain having a high capacity for information integration will better match an environment with a complex causal structure varying across multiple time scales, than a network made of many modules that are informationally encapsulated. Indeed, artificial life simulations (“animats”) of simple Braitenberg-like vehicles that have to traverse mazes and whose brains evolve, over 60,000 generations, by natural selection, show a monotonic relationship between the (simulated) minimum of integrated information and adaptation (Edlund et al 2011; Joshi et al. 2013).

      That is, the more adapted individual animats are to their environment, the higher the integrated information of the main complex in their brain. Furthermore, ongoing work (Albantakis et al. 2014) demonstrates that over the course of the animats’ adaptation, both the number of concepts and integrated conceptual information increases. Thus, evolution by natural selection gives rise to organisms with high PHI because they are more adept at exploiting regularities in the environment than their less integrated competitors.

      I further commented:

      John kubie Says:
      Comment #67 June 1st, 2014 at 7:29 pm
      Re: Christof Koch #63.
      I have little doubt that brains with lots of complexity and phi have selective advantage. But, as I see it, that would be the case if there was no consciousness; that is, if the high phi brain were in a robot or zombie. The consciousness that IIT is supposed to produce doesn’t do anything. No apparent value. In my mind, consciousness must DO something, must provide some value (even if what it does is determinist) to even be interesting.
      Begins to make me think Dennett is right; everything will be in the known material domain, and consciousness will disappear when we understand it.

      That was it. My reading of Koch’s reply is an acknowledgement that IIT is “epiphenomenalist” (or whatever the right word is).

  1. I didn’t fully understand what Koch wrote, but I don’t see how he could be saying that. He says that IIT explains why consciousness evolved, but that wouldn’t make any sense at all unless consciousness has adaptive value.

  2. Hi John, very interesting post.
    I roughly support IIT thought. I understand you stand on a viewpoint of doubt about IIT. My reaction is 1 & 2. Though Tononi did not explain it clearly, I think qualia and consciousness are actually very close and come up simultaneously.

    Though he identified consciousness with integrated information mathematically, I think qualia and consciousness are actually very close and come up simultaneously. (The detailed standpoint may differ due to background understanding.)

    Regarding behavior, Tononi (and Koch) may feel it is not necessary to discuss output behavior. IIT may show that behavior comes up with consciousness indirectly.

    My understanding is that they are inseparable and simultaneous, and free will is also inseparable. That also shows that “Frisby and Quiggly are the same person”. However, that is not about IIT; that is another discussion.

    Note: my understanding is probably different from Tononi’s and Koch’s. So more discussion would be necessary independently.

    • Mambo,
      I’m not certain about whether Koch and Tononi think consciousness requires an output. IIT doesn’t have it. In Koch’s comments he seems to dance around the issue (see my “reply” to Bill Skaggs, above).

      • I think “consciousness requires an output” is a somewhat difficult concept. To avoid a misunderstanding, could you explain it again?

        I think IIT doesn’t show the activity of a nerve itself. IIT shows integrated information mathematically; however, (I think) it shows it only from the input side.
        Actual activity seems to be that “after information inputs, something happens.” And that is simultaneous.

        *Discussion candidates:
        Is the input-side discussion not enough as an introduction?
        Is output indispensable for consciousness?
        Qualia and consciousness
        Free will

    • Not clear what’s confusing about the output of consciousness.
      Example of conscious output: consciously think “I am going to raise my right arm”, and raise your right arm. If you believe that the conscious process of thinking “I am going to raise …” was in any causal way connected to raising your arm, then consciousness has an output. It’s not a mere passive onlooker to the events in your life. Generally speaking, there are two consciousness mysteries: subjective experience and (free) will. Subjective experience is what IIT tries to explain. It’s the input. From what I can gather, IIT makes no attempt to explain how the content of consciousness, separate from the already-integrated processing in the nervous system, can influence behavior. From what I understand, IIT consciousness is not synonymous with integrated information; it is produced by integrated information.

      • I did not recognize that this comment was for mine. Here is a response.
        Free will is a little more complicated than simple consciousness. If possible, we should discuss the simple one first. However, I can discuss it now.
        I understand consciousness has an output (as I wrote, that is simultaneous).

        Indeed, IIT does not explain the output behavior. My understanding is as follows.
        “I am going to raise my right arm” is not about simple consciousness, but (also) about free will. In decision making from some input and previous memory information, the final decision may be “raising my right arm”.

        My understanding is:
        1. Input and output are simultaneous (again).
        2. When the decision is “raising my right arm”, that is equal to the actual behavior of “raising my right arm”.
        3. Free will (decision making) is included in consciousness in a broad sense, and its mechanism is based on that of consciousness.

        The following is also my understanding (my hypothesis).
        When starting a discussion of consciousness, it is easier to discuss simple consciousness than to discuss all of consciousness in a broad sense. For example, imagining BLUE after looking at the sky. Of course we can continue to discuss “raising my right arm”. But it is easier to understand by comparison with “looking at the sky”.
