Consciousness Wars: Tononi-Koch versus Searle

Giulio Tononi has proposed a theory of consciousness he calls “Integrated Information Theory” (IIT)*. Very roughly, the theory is based on Shannon’s concept of information, but extends it by adding a property he refers to as “integrated information”. Consciousness will exist in an entity when it both has information and is connected. This property is called “Phi” (rhymes with “by”, written φ) and can be computed. The higher the Phi, the more conscious the entity.

Although theories of consciousness are not new, this one is special: it has high-profile converts, perhaps the most notable being Christof Koch. Koch, a Caltech professor and chief scientific officer of the Allen Institute for Brain Science, is best known for his book The Quest for Consciousness: A Neurobiological Approach. A new Koch book, Consciousness: Confessions of a Romantic Reductionist, is largely a description of and paean to IIT. It’s fair to view Tononi and Koch as collaborators.

John Searle is an eminent philosopher who thinks about the brain and is taken seriously by neuroscientists. Until recently he and Koch were on the same page; for example, Searle has endorsed Koch’s program of studying the neural correlates of consciousness (NCC). Searle frequently writes for the New York Review of Books and has on occasion generated debate. Notable was Searle’s 1995 critical review of Daniel Dennett’s “Consciousness Explained”, which generated a prolonged exchange.

In the January 10, 2013 issue of the New York Review of Books, Searle reviews “Confessions” and solidifies his disputative reputation**. The review is devastatingly critical. The essence of Searle’s criticism is that IIT employs a mindful observer to explain mind: there is a little man in the middle of the theory, because information isn’t information until it is “read” by an entity with a mind. There may be a message in the information carrier, but it only becomes information when read.***

The story doesn’t end there. The March 7 issue of the New York Review of Books contains an exchange of letters between Koch-Tononi and Searle (not behind a paywall).

My read: I thought the original Searle article was clear and powerful. I’ve read both Tononi and Koch and never quite gotten IIT. I found the Tononi/Koch letter a muddle, and Searle’s reply clear. Since I don’t really get IIT, I don’t want to take sides. Opinions are welcome in the comments section. It will be interesting to see how this plays out.

Update March 18. Panpsychism is a battleground in the Koch/Tononi letter and Searle’s response. According to Wikipedia, which seems an adequate source here,

In philosophy, panpsychism is the view that all matter has a mental aspect, or, alternatively, all objects have a unified center of experience or point of view. Baruch Spinoza, Gottfried Leibniz, Gustav Theodor Fechner, Friedrich Paulsen, Ernst Haeckel, Charles Strong, and partially William James are considered panpsychists.

My take: both get hits. Searle doesn’t acknowledge the “local” panpsychism of IIT. IIT has a spatially restricted, spatially centered panpsychism, according to Tononi and Koch in their response letter; that’s why my consciousness doesn’t mix with yours. But if a theory of consciousness uses panpsychism, especially a special form, isn’t it assuming the very hard part, asking for special help from novel laws of physics?

Update 2 March 19. A few hours ago I re-read chapter 8 of Koch’s “Confessions”, which contains the entirety of Koch’s description of IIT. I also reread Searle’s review of “Confessions” and the NYRB letter exchange. In chapter 8 I searched for a clear description of “connectedness” but couldn’t find it; I don’t know whether connectedness is statistical or involves causality. I also looked for an indication that IIT’s panpsychism is localized — that it is centered around a local maximum — but couldn’t find it. My conclusion is that the Koch book is, at best, a remarkably incomplete description of IIT (and the Koch book is what Searle reviewed). IIT depends heavily on connectedness; to evaluate IIT we must know what connectedness means and how a system could detect its own localized connectedness without an external observer. Perhaps readers could direct us to answers.

Update 3 Jan 1, 2014. Christof Koch describes panpsychism in a Scientific American piece, “Is Consciousness Universal?”. He refers, nicely, to Searle.

Update 4 March 27, 2014 (much later). I remain skeptical. I’d like to raise a new topic that hasn’t been discussed here: what makes a connection? Clearly, IIT requires connections. What are they? If Neuron A has an axon that synapses on Neuron B, why are they connected in the IIT world? How does Neuron B know that Neuron A contributed to its activation? Is it simply correlated firing? Is it correlated firing within a certain distance? Is the axon, perhaps microtubules within the axons, part of the connection system? If we had a physical hypothesis of what a connection is, there would be a mechanism to test, rather than a quantitative description of the “size” of consciousness.

* Tononi has a book, “Phi: A Voyage from the Brain to the Soul”. It’s not a traditional scientific explanation of Phi, but it is an important resource; it seems intended to deliver the feeling of Phi rather than its bedrock substance.

** “Can Information Theory Explain Consciousness?” Most of the review is behind a paywall. Contact me (jkubie@mac.com); Zenio permits me to send out the text of the paper, one email at a time. March 2014: or download here.

*** Colin McGinn makes this argument eloquently in a more recent NYRB article, “Homunculism”, a review of Kurzweil’s current book. This one is not behind a paywall.


42 thoughts on “Consciousness Wars: Tononi-Koch versus Searle”

  1. I didn’t read the Tononi & Koch reply, but the beautiful thing about IIT is that actually no external observer is needed to read out the information, i.e. it is not a theory with a “man in the middle”. Of course, information in the traditional sense (a book, a CD, and so on) is meaningless until read and interpreted by human beings, and Searle’s confusion likely arises from here. But Tononi’s theory and his Phi do not measure the amount of information that is readable by an external observer, but the amount of information which is “read out” by other parts of the system.

    This information has an “intrinsic” sense: you put a whole bunch of interacting units together (they could be neurons but also anything else). Then you divide the system into two parts which share information (you can measure this with Shannon’s tools). The amount of information they share is observer-independent: one part of the system generates information which is read out by the other part and vice versa. Then you take all possible partitions of the system into two sets (this can be a huge number!), and Phi is defined by the partition across which the least information is shared: the system’s weakest link.

    The claim that large values of Phi correspond to “more consciousness” can, of course, be contested. This is probably the weakest and most controversial point, since a “causal leap of faith” is needed here: believing that a large Phi is a correlate of the conscious state is not enough; you need to believe that a causal relationship exists between the two. This is another discussion, but Searle’s critique of a man in the middle can be dismissed by mathematical arguments.
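The bipartition search described in this comment can be sketched in a few lines. This is only a toy illustration under stated assumptions: it uses plain Shannon mutual information over sampled states (not Tononi's cause-effect repertoires), and all names and example systems are invented for the sketch.

```python
# Toy sketch of the bipartition search: for a small binary system, estimate
# the mutual information shared across every bipartition and take the
# minimum -- the "weakest link" that Phi-like measures care about.
from collections import Counter
from itertools import combinations, product
import math

def mutual_info(samples, part_a, part_b):
    """Shannon mutual information (bits) between two groups of unit indices,
    estimated from a list of observed system states."""
    n = len(samples)
    pa = Counter(tuple(s[i] for i in part_a) for s in samples)
    pb = Counter(tuple(s[i] for i in part_b) for s in samples)
    pab = Counter((tuple(s[i] for i in part_a),
                   tuple(s[i] for i in part_b)) for s in samples)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

def min_partition_info(samples):
    """Smallest mutual information over all bipartitions of the units."""
    units = range(len(samples[0]))
    return min(
        mutual_info(samples, part_a,
                    tuple(i for i in units if i not in part_a))
        for k in range(1, len(samples[0]) // 2 + 1)
        for part_a in combinations(units, k))

# A parity unit (unit 2 = unit 0 XOR unit 1) couples every bipartition...
parity = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]
# ...while fully independent units can be cut apart without losing anything.
independent = list(product([0, 1], repeat=3))
print(min_partition_info(parity))       # 1.0
print(min_partition_info(independent))  # 0.0
```

The exhaustive loop over bipartitions also shows why the commenter's "huge number" warning matters: the search is exponential in the number of units.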

    • Good explanation. But it’s not clear where the unity of consciousness would come from. It still seems to need an entity to get outside and read the entire structure. Where in the system is Phi? Phi is the entire system. The Tononi-Koch discussion of information in a photocell points in that direction. I agree that if we posit something outside of the known objects of physics, photocells can have consciousness. But that’s cheating, a bit. I’m hesitant to get involved in the argument, but I’m still leaning towards Searle’s view.

      • No – no entity is needed to understand the value of PHI from the outside, nor is the system “read” in any way. In a similar manner, an object innately has mass, but this is an intrinsic property (observer-independent). Same for PHI.

        Searle knows a little bit about Shannon information, and the moment he heard PHI involved Shannon information, he couldn’t get over it. In fact, the current formulation of PHI does NOT involve Shannon information at ALL. It originally used a modified form (the reduction of uncertainty) but it now uses causal work (the Earth Mover’s Distance).

        The premise of PHI is to ask: given a system in a state, what can it possibly know about itself, given its state? This is a mathematical property of the system, not of any observer. And the reason why PHI explains the unity of consciousness is that this intrinsic knowledge is irreducible (for systems like the cortex) while it is reducible for systems like the cerebellum.
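For readers wondering what the Earth Mover's Distance mentioned in this thread actually computes: for one-dimensional distributions over a common ordered support it reduces to the summed absolute difference of the cumulative distributions. A minimal sketch under that assumption (this is just the underlying metric, not the full IIT measure, which applies it to cause-effect repertoires):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two probability lists over the same
    ordered support with unit spacing: the total 'work' to reshape p into q,
    i.e. the running |CDF(p) - CDF(q)| summed across bins."""
    total = cdf_gap = 0.0
    for pi, qi in zip(p, q):
        cdf_gap += pi - qi      # how much mass is still 'owed' downstream
        total += abs(cdf_gap)   # carrying that mass one bin costs its weight
    return total

# Moving all the mass two bins to the right costs 2 units of work.
print(emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 2.0
```

Unlike a pure reduction-of-uncertainty measure, this distance cares about *where* probability mass sits, which is the "specification" point quoted from Tononi later in the thread.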

      • Bob,
        Still not clear. Tononi and Koch say that they accept a special version of panpsychism to get PHI to work. This is not a trivial assumption. If I’m understanding correctly, the ‘reader’ is a local panpsychic: it knows this about itself because it has panpsychic properties. That holds for both the Shannon and non-Shannon formulations, so saying that ‘current’ PHI doesn’t require Shannon information doesn’t solve any major problem.

    • Ya, Searle’s critique fails on mathematical grounds. However I wonder: it seems likely my retina has more ‘phi’ (can integrate more information) than a C. elegans; does it make sense that a retina is more conscious than a living nematode?

      (Or possibly a drosophila; I can’t figure out the number of neurons in a retina. But I think this is a broader problem with the idea.)

      • I don’t understand the statement, “Searle’s critique fails on mathematical grounds.” I don’t see math on either side of this debate. About your retina: good and interesting question. It would seem we should have many consciousnesses. For example, the neurons of your gut, the enteric nervous system, may be conscious and totally separate from the consciousness you know about. Perhaps the same with your retinas, since the connectivity to the rest of the brain is one-way (the retina is part of the brain). The notion that there are a bunch of consciousnesses that you own, but only one is talking to you, is a bit freaky.

      • @neuroecology: it’s hard to know whether your retina has a larger Phi than a C. elegans without actually computing it. Why can you say that your retina integrates more information? I am really not an expert on human vision (far from it) but I’d think that peripheral structures such as the retina consist mostly of parallel pathways which relay information to the thalamus and then to the cortex (please correct me if I am missing something!). A system which consists of replicas of units which process information separately will not have a large Phi; on the contrary. The same happens for the cerebellum, which is the canonical example Tononi always gives in his writings.

        @jkubie: Phi is a property of the entire system, yes. It makes no sense to speak of where in the brain Phi is. It follows that it doesn’t make sense to discuss where in the brain the consciousness is. This concept agrees with the view of consciousness as a process instead of a property or a state. We do not speak of individual neurons or brain structures, but of a dynamical process going on, and this process has certain characteristics. According to Tononi, it integrates/segregates information in an optimal fashion, i.e. it has a high Phi. At different times different neurons can be involved into this process, engaging and disengaging themselves from it. But the way the process occurs is here postulated to be the fundamental thing. I think it is really clever, but a pure formal take on this will lead to panpsychism easily. A lot more of neurobiology (and less information theory) will be needed here to establish whether Phi actually means something or not. This can be done but is complicated, one would need to actually change Phi with some clever manipulation and probe conscious awareness at the same time. Since it is difficult or impossible to actually measure the Phi of real systems now, I’d say we are far from this.

        So my general view is that information integration is an important feature of consciousness but cannot be the whole story. Actually I’d bet that there are already man-made systems with Phi values close to those of the brain (the Internet, perhaps!).

        Also, when I say Searle’s viewpoint can be dismissed by mathematical arguments, I just mean that Searle probably got carried away with the usual interpretation of the term “information” and is not understanding that information here is just the outcome of some mathematical formula. One can measure information independently of any observer and this is exactly what Shannon managed to do: a formal theory to quantify information in a channel. This is what made his theory so controversial at first: the separation of information from meaning.

        Summing up: Tononi just says that some systems are good at integrating info and some others aren’t. For example, a system of independent, non-interacting units is bad. Also a system in which every unit does the same is bad. In between you have complexity, when cool and interesting things happen, and consciousness has to be here. I wouldn’t say this is wrong, and certainly you do not need an external human observer to decide if the system is complex and good at integrating information. My critique would be that this might be rather trivial. More experiments are needed to decide this.

        Enzo (previously as latexdf)

  2. Searle has been claiming for decades that computation and information require a conscious interpreter, and so cannot provide an objective grounding for consciousness. His arguments for these claims have always been a bit weak, however. He began by following Putnam in erroneously thinking that anything (e.g., a rock) could instantiate any computation, and he’s never quite appreciated how limiting a dynamical (e.g., functional) characterization can be.

    (I studied with Searle as an undergraduate; he probably deserves a good deal of the blame for my becoming a philosopher.)

    • Correct me if I’m wrong. If you have a pure Shannon-information view of neural processing, there needs to be an interpreter, no? But the information in, say, the trajectory of a thrown rock seems more analog, and not Shannon-like.

      • Hi again,

        no, the term “information” has different meanings here, and there is room for misunderstanding (which I think is what happened to Searle). An example will help to clarify this.

        Imagine you are recording, for example, from a neuron in the hippocampus of a rat in a labyrinth. As the experimenter you want to know what this neuron does, i.e. what “information” is contained in the series of spikes you record. For you, this information could be the position of the rat in the labyrinth, whether the rat has been there before, and so on. You base this on observations of the rat, how it behaves and how the neuron is spiking. You obviously need an interpreter (the experimenter) to understand and make sense of all this information.

        Then there are properties of the series of spikes which are quite objective and observer-independent: for example, the firing rate, the inter-spike intervals, the voltages, and so on. Shannon’s information is one of these. Some sequences of spikes will contain more information than others, in the same way that some will have different firing rates. For example, if you take a file containing the sequence of spikes and then compress it with your computer, the final size of the .zip (relative to the original size of the file) will approximate the (so-called algorithmic, which relates to the Shannon view) information contained in the spikes. A very regular series of spikes (say “1 0 1 0 1 0 1…”) will contain less information (i.e. compress to a smaller size) than one without any evident rule. It does not matter what the spikes are coding for; one can still assign a number to them, which is also called information.
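The compression test described above is easy to try. Here is a minimal sketch using zlib as the compressor; the spike trains are invented for the example, and compressed size is only a crude upper bound on algorithmic information:

```python
import random
import zlib

def compressed_size(spikes):
    """Bytes needed to store a 0/1 spike train after DEFLATE compression,
    a crude stand-in for its algorithmic information content."""
    return len(zlib.compress("".join(map(str, spikes)).encode()))

regular = [1, 0] * 5000                # the "1 0 1 0 ..." pattern above
random.seed(0)                         # reproducible 'irregular' train
irregular = [random.randint(0, 1) for _ in range(10000)]

# The regular train compresses to a tiny fraction of the irregular one.
print(compressed_size(regular) < compressed_size(irregular))  # True
```

No observer has to know what either train "codes for": the size difference is a property of the sequences themselves, which is exactly the observer-independence being claimed.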

      • reply to Enzo (above). I think Searle, and McGinn (and perhaps I) would say that you’re confusing message with information. The firing of the cell and the zipped file contain message. They only become information when observed by a mind. IIT would say the observer is the entire system, but I think they accept that the message must be received and decoded.

      • It’s common to think that Shannon information requires an interpreter, and one can make this a requirement by definition.

        However, one doesn’t actually need a conscious observer; all that we really need is the effective sensitivity of some system to some differences in another system. We need effective dynamical states, but we don’t need interpretation. (Depending on how one defines things one might say this is an analogue to Shannon information, but the relevant point is that nothing in information theory hinges on conscious interpretation.)

        You’re right to say that Searle will only consider this real information if and when it is used by some conscious agent, but my point is that there is a well-defined objective notion of physical information that one can use to explain the (weak) emergence of intentionality. This physical notion of information has many of the features that Searle believes one cannot have without consciousness (it refers to a particular object, it has a “point of view”, it is about some aspects of the intentional object and not others, etc.)

  3. Intended as a reply to Enzo and a general comment. The window we have on consciousness is our subjective experience. Does IIT fit with subjective experience? I’d say not too well. If our subjective consciousness is due to the connectedness of our CNS, why is it so limited? Koch, in his previous work, Quest for Consciousness, suggests that about 75% of neocortex (and related thalamus) is part of the NCC (neural correlates of consciousness); he specifically rejects, for example, primary sensory cortical areas as part of the NCC. This seems a solid argument. Would this be a naive prediction of IIT? I’d say no. IIT would predict conscious awareness of primary sensory cortical areas, probably of cerebellar processing, probably of certain processing in the brainstem and spinal cord, probably of processing in the retina, and perhaps of the neural net in the gut. So IIT has to be bent to accommodate these ‘facts’, by citing things like gamma synchrony. What about the cerebellum? Lots of neurons, synchrony and rhythmicity there. Does it contribute to Phi? Does it have its own Phi? Is it sufficiently connected to neocortex? Or, perhaps, is it independent solely because its distance is too great?

    • Yes, that is a good point in favor of Koch’s arguments, but it is certainly not against IIT. This is because IIT actually provides the only (as far as I know) theoretically grounded, algorithmic way to decide (for example) whether the primary sensory cortices or the cerebellum are or are not involved in the consciousness-generating systems of the brain. IIT provides an algorithmic way to identify this system, which is usually termed by Tononi (and also Edelman) the “dynamical core”, i.e. the subset of the brain which achieves maximal information integration. If you add another part of the brain to this core, even a single neuron (or remove one part from it), the information integration will decrease. The dynamical core is well-defined mathematically and it exists in some way. So IIT can, in principle, predict not only whether the sensory cortices are part of the NCC but also what exactly the NCC are. Of course, to do this you would need to record activity from all brain cells simultaneously and then run a combinatorial algorithm which will take literally forever to converge. But still, IIT offers a way to identify the NCC. Unfortunately, since nobody can compute this right now, it is impossible to decide whether the result is correct or not (a point against Tononi).

      I like to compare IIT to string theory in physics. It is elegant and has a minimum set of ad-hoc experimental assumptions. Both have the potential to make testable predictions but the calculations are so hard that we have nothing yet (a lot of physicist attack string theory because of this). I just think it is unfair to compare IIT with other (less theoretical, more fact-based, experimental) theories of consciousness. The objectives are higher and of course a lot harder to achieve…

      • Haven’t read it carefully, but it sounds post hoc. If you simply knew the connectivity, wouldn’t primary sensory areas be part of the conscious system? As for the cerebellum, shouldn’t it at least have its own consciousness? The parallel fiber-Purkinje connections are densely linked.

      • Just curious: does Koch say that the experience is defined only by “the subset of the brain which achieves maximal information integration”? It seems a bit arbitrary: why should experience be defined by this subset X and not by another one (X’) which integrates 1 bit less information? Let’s say the cortex produces 10^15 bits of integrated information, and the cerebellum only 10^12 bits. If we disable the cortex (with an anesthetic, e.g.), why doesn’t consciousness suddenly move to the cerebellum (in other words, shouldn’t our experience change to that generated by the cerebellum)?

  4. As with all laws of nature currently produced by science, IIT is merely a description that may or may not correlate with consciousness, and will therefore be devoid of all explanation. An analogy: consider Newton’s 2nd law, F = MA, the force correlate of acceleration. It presupposes a conscious scientific observer and says nothing about _why_ F = MA. It only says that, to a presupposed observer/conscious scientist, the universe will appear consistent with F = MA. There is no explanation of inertia here. There is merely description and prediction.

    IIT fails to deal with explanation for the exact same reason: It only describes. It fails in the exact way all science fails to explain everything. All _what_, no _why_.

    Overall the problem is with science itself. We simply don’t produce descriptions _prior_ to the existence of an observer. We always presuppose the (scientific) observer.

    To explain consciousness is to explain a scientific observer. Science practised at the moment surgically excises the observer from the beginning…. and then gets all teary-eyed and befuddled about why all the results fail to explain an ability to scientifically observe?

    I say again: To explain consciousness is to explain a _scientific_ observer. Are you a scientist working on consciousness? Have you ever stopped to ponder the fact that explaining consciousness and explaining how you do your empirical observations are part of the same explanation?

    Until science is allowed to construct descriptions of the universe _prior_ to how it is observed/appears from within, but consistent with how an observer that is predicted by it will perceive that universe (a universe that reveals the familiar ‘laws of nature’), there will be no explanation.

    Not because explanation is impossible, but _because we haven’t ever actually explained anything!_ Ever.

    http://theconversation.edu.au/learning-experience-lets-take-consciousness-in-from-the-cold-6739

    • If it’s only correlation, you are correct. I discuss this in another blog post: “Fire and Consciousness: A Metaphor”. But IIT does propose a mechanism via local panpsychism; that is, by some magical method, if a localized region of space (mostly, cerebral cortex) both contains sufficient information and the information is ‘connected’ across regions (synchronized?), then consciousness will emerge. This goes way beyond known laws of physics, but it is a mechanism, not a correlation. At least that’s what I think they are saying.

      • Yes, IIT has a proposed correlate/mechanism (what), but no ‘why’.

        Nice post. Fire analogies are very useful.
        I have played with fire analogies myself, and their role in explanation:

        http://theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881

        Indeed I am learning about consciousness by building it without having to understand it! It’s the chip I am prototyping at the moment. I will be able to make consciousness claims because it is ‘artificial consciousness’ the same way ‘artificial fire’ is fire. All the necessary physics is there, just without the biological background overheads. There is no computing at all. No models, no simulation. Just the same physics itself, as it is in the brain.

        In my world the way an explanation of consciousness would look is: “The appearance of brain electromagnetism is how consciousness presents, to itself, the natural world in the act of delivering consciousness.” This too is mechanism devoid of explanation, like IIT is devoid of explanation. It also includes IIT insofar as the EM field contains the integrated information itself.

        Like I said, explaining it means changing science, not ‘discovering anything’ (except insofar as scientists must ‘discover themselves’!). Meanwhile, we all have to speak in ‘correlates’ tongues or we fail to get published. All very tedious. :-)

        cheers

      • Bionic Brain, remarkable similarities between your post on fire and mine. For a while I thought we even used the same fire image (a little different), but the core idea is very similar. “Great minds …”

        Here’s another thought. Engineers (and BAM neuroscientists) think that if you can capture the action potential activity of the brain and model it, you’ve got it all. But action potentials are not THE language of the brain; they are at best a partial story. In the entire brain of C. elegans there are no action potentials. In the human retina, no action potentials. In each, synapses, but no action potentials. What? The take-home story (for me) is that action potentials are used for long-distance communication (C. elegans is very small). If you only examine action potentials, you’ll have lots of plane crashes (read Bionic Brain’s post for the reference).

  5. Just when did IIT become a non-Shannon-based model? The Koch-Tononi letter is the first time this claim appears, no? All earlier papers heavily point to using Shannon’s work as a basis. Now they are pointing to Bateson’s definition of information in this letter, where there have only been vague allusions in earlier papers.

  6. The cells in Searle’s body want to survive. They’ve come to depend most on Teamwork with other bodies of cells for survival. Working in a team with other bodies of cells has become so Fruitful, that the system which facilitates participation as a Team Member, has grown exponentially. Some cells claim its grown far too dominant. But, no other system in the Brain is as capable of securing things… for the Cells who want to Survive. And so the cells allow it… for now.
    Ego ‘event participates’ with the Team, securing food, water, shelter, survival, for the Cells who want to Survive.. The other systems are still present, and useful, eyes/visions, ears/hearing, etc, but aren’t as capable as Ego has become.. it can secure Everything. And so much of it.
    We can even get by, as a team member, losing any number of “sense” systems… as long as we still have our trusty Personality/Ego. The Union of Ego’s demand it. Without it we’ll be deemed a savage, and executed. Not good… for the Cells who want to Survive.
    The Ego can secure All things for the Cells who wants to Survive.
    A philosophy which decentralizes the Ego… threatens it. Threatens its role in the Group as the King who secures everything… for the Cells who want to Survive. And its learned how to retaliate.

  7. Great analysis here. Some in-depth thought on the nuances of information theory and the IIT. Always worth reading. To comment on the “rejection” of Shannon information, it was in Tononi’s updated 2012 account of the IIT that he makes explicit that he is no longer using Shannon information or the “Kullback-Leibler divergence” to measure the difference between two probability distributions. He writes: “one would want a measure of the difference made by a mechanism (e.g. before and after a partition) not just as a reduction of uncertainty (information as communication capacity in the Shannon sense), but as specification – what it takes to transform something (here a distribution) and make it into something else (information as giving a particular form, in the classic sense of the word.).” He then goes on to articulate the concept of “information distance.” See note 7 for details.

    Also, Searle didn’t get the theory AT ALL! He was stuck on the idea that information requires a conscious receiver to make sense, but the IIT is the exploration of the notion that a system, if “wired” in a particular way can be its own information channel, sending and receiving causal information about itself to itself. In this kind of system information doesn’t become divorced from meaning as Shannon information did; rather, when the system is integrated, information BECOMES meaning.

    • phiguy110, I appreciate that either Searle didn’t get it, or he destroys IIT. No middle ground. I’m a little stale on the material, but here goes. Two deep problems with IIT: first, panpsychism. While a nice idea, physical support is lacking. If we accept panpsychism, it’s probably possible to make other “consciousness” explanations. Although Koch and Tononi appear to be late in admitting to panpsychism, they (or at least Koch) are fully committed. See Koch’s recent Scientific American blog post (“Is Consciousness Universal?”). Second is locality. How are pieces of information connected? Axons are not an adequate answer. We need a physical process that can differentiate local and non-local informative structures.

  8. Just to respond to your two points.

    First, panpsychism you say is lacking “physical support.” I don’t know what sort of “physical support” you would be looking for. Panpsychism, if true, emerges from a proper conceptual analysis of how consciousness is generated. If consciousness is integrated information, and if integrated information systems are causally autonomous, as the theory predicts, then panpsychism would have to hold, as all real, irreducible causation would have to be “conscious” on some level. (Though certainly the consciousness of a system with only two states, like Tononi’s photodiode example, would be so minimally conscious as to make deep sleep look existentially rich by comparison.) Still, some consciousness, however minuscule, is not no consciousness, and the IIT predicts a conscious state that is truly, irreducibly minimal, containing 1 bit of information. As for other “consciousness explanations,” I suppose we’d have to evaluate what those were and whether they were as conceptually and empirically rich as the IIT. We are not accepting the IIT because we accept panpsychism, we accept panpsychism because of the IIT, and that makes a world of difference. Also, the idea that Tononi was late in accepting panpsychism is a canard; his book with Edelman years ago was titled “A Universe of Consciousness” and the IIT has always been explicit about panpsychism, as the photodiode thought experiment shows. Perhaps it’s just that, being a rigorous scientist, Tononi was reluctant to go full-on Deepak Chopra in his articulation of the theory and chose not to stress the panpsychism, as people have confused notions of what panpsychism really implies.

    Second, pieces of information are connected through mechanisms which have the power to cause a difference that makes a difference to other mechanisms. Mechanisms can exist on any spatiotemporal scale. Re-wire the Internet to integrate information like a brain and the mechanisms can span the whole world; the important thing is that they work as one and the behavior of the system cannot be reduced to any partition of its parts. In fact, discover the irreducible systems of integrated information in the world and you’ve truly carved nature at its joints. In a brain, Tononi speculates it’s a central corticothalamic network made of cortical columns that act as the elements in the brain’s “dynamic core” (a term he used more with Edelman), that is to say, the system of columns in your brain which is wired to behave as irreducibly and maximally one from a causal point of view. It is this system that generates human consciousness, with other parts of the brain feeding information into the system but not contributing to its irreducibly intrinsic phi value. Also, information can only be counted once, so a mechanism can only contribute to one irreducible structure. He calls this the “exclusion principle.” Because of exclusion, in the IIT, consciousness has real spatiotemporal borders.

    • Sorry for the late reply. By “physical support” I would be looking for something from physics (subatomic physics) that could be an implementation of panpsychism. What I see is a wish or an assertion. It’s not there in standard physical models as I understand them (which is at a pretty low level). It’s not clear to me how “proper conceptual analysis” bridges this problem. Either integration is part of nature or it isn’t. Using ‘consciousness’ as evidence of integration is circular. I’d be looking for evidence of integration in nature. Perhaps “entanglement” is a start (I really don’t understand entanglement).

      • How about the brain? That’s in nature. Why privilege the subatomic? It’s not “more real” than biology. Also, Tononi has a note in one of his papers suggesting a way to see quantum systems as, perhaps, fundamentally irreducible systems, though generating little (though not nothing) in the way of consciousness (compared to a brain).

        Also, this is just my opinion, but it’s consciousness science that will ultimately be most fundamental (especially a theory like the IIT, with its basic conceptual picture of order, entropy, mechanism, and causal autonomy), with physics something that consciousness DOES.

        Finally, there are nascent attempts to incorporate the IIT into subatomic physics by Max Tegmark, but I don’t think he really gets the concepts right and I find the math impenetrable (though that’s just my cognitive weakness perhaps) so I can’t really evaluate the validity of his ideas. Here’s the paper: http://arxiv.org/abs/1401.1219

      • Phiguy100, there are two responses to “what about the brain?” that I can think of.

        First, “connection” in the brain does not, in and of itself, make an entity. Imagine an assembly of neurons that fire together whenever an apple is viewed or thought of. They are the brain’s representation of apple. Moreover, there are synapses that directly, or by means of a few intermediaries, interconnect all members. All we have is a set of neurons that fire when the concept of apple arises. I can imagine a zombie with an artificial brain composed of neurons that fire together when an apple is present. And this zombie would behave as a human would toward apples. Would the zombie have a conscious subjective experience of “apple”? I don’t see why or how. The individual neurons in my brain or the zombie’s would fire as part of the cell assembly, but they would not know they were part of an apple representation. Importantly, they would have no way of knowing who they are connected with or why they fire. They just fire. The connections of the assembly are only visible to an outside (conscious?) observer. If, on the other hand, there were a physical principle, such as entanglement, there would be a possibility that a neuron would know the state of other neurons in its assembly, and be part of an assembly concept, as in a hologram.

        The second reason I go back to fundamental particles is that I find it hard to imagine that such an important thing as consciousness arose in such a small segment of the evolution of the universe. Was there a pre-conscious universe? When did consciousness first arise in the universe? How did it arise?

        I’m a neuroscientist. I record single neurons in behaving rats and make models of neural networks. I feel I understand networks of neurons much better than physics. But I don’t see an easy way that networks of neurons could create conscious experience.

        (I’ve read about Tegmark, and I’ll look at the paper, but I doubt I’ll understand the math.)

  9. I think all your concerns are exactly the kind of objections that the IIT attempts to solve. So, just a couple points:

    1. “’Connection’ in the brain does not, in and of itself, make an entity.” Agreed. In fact the IIT is a hypothesis about exactly what kind of connections, or wiring scheme, is required to create an entity, which, in the IIT, is the same thing as a causally irreducible conscious state. A system has to be wired in such a way that the amount of information generated by the system of connections as a whole cannot be reduced to the behavior of its parts. Comprehending this with a system as complicated as the brain is almost impossible, but the point can be clearly demonstrated mathematically with simple systems consisting of only a few elements. It’s the principle that is important.

    2. As for your zombie, well, I would say that if it were really wired in such a way that the organization of the brain replicated the causal activity of a human exactly, then yes, it would have conscious experiences, and therefore not be a “zombie.” (Whether that kind of artificial brain is possible as an engineering feat is another question.) According to the IIT, even though aspects of conscious perception and cognitive processing are spatially localized in the brain, it’s not exactly right to say that “that’s the part of the brain where the apple representation is formed.” The system, the whole system, is, again, irreducible, and all perceptions, modes of thought, etc. are a product of the whole. So, while the red apple does ignite neurons that we can associate with “red,” why those neurons serve that function can only be understood by analyzing how they relate causally to the rest of the system to which they are irreducibly connected. Think about it like a conscious state itself: just as the apple occupies a PART of my visual scene, the whole scene cannot be reduced to just the apple. Localization is real, both in physical space and in consciousness, but ultimately only in relation to a totality. The straightforwardness of most of our concepts (like apple) masks the deep contextual integrations in our conceptual structure; for the concept “apple” to be acquired at all, you must already have a very complicated conceptual model of the world within which something like an “apple” makes sense and has meaning. This is why AI scientists are always perplexed that computers get “simple things” wrong all the time. Most “simple things” require a ridiculously large background of information to be understood. Fundamentally, probably some absolutely primordial sense of space and time (and therefore memory) is required as a platform from which complicated concepts like “apple” can “bloom,” conceptually. If there ever is AI, it’s gonna have to evolve; top-down strategies will never work for this reason.

    3. “The connections of the assembly are only visible to an outside (conscious?) observer.” This is the central point the IIT REJECTS. It’s really the crux of the theory. If wired in a certain way (a causally irreducible way), a system can generate information about itself as a single thing, making itself its own conscious observer by “measuring” its own state. It’s either right or wrong, but the whole theory turns on this point.

    4. There actually is something to the hologram idea, though showing how the IIT can be re-imagined as a holographic metaphor is not a flight for my wing. (But given the holographic principle in physics, it’s probably ultimately a valuable idea.)

    5. “The second reason I go back to fundamental particles is that I find it hard to imagine that such an important thing as consciousness arose in such a small segment of the evolution of the universe. Was there a pre-conscious universe? When did consciousness first arise in the universe? How did it arise?” On this the IIT is clear: consciousness is a fundamental ingredient of reality and occurs whenever and wherever causation occurs (the IIT is as much a theory of causation as of consciousness), though most of the universe would be, according to the theory, extremely simple from a conscious point of view. Almost like nothing, but not quite. Evolution is the process wherein reality better “wires” itself to increase its own causative powers, allowing it to succeed in its environment.
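The claim in point 1, that irreducibility “can be clearly demonstrated mathematically with simple systems consisting of only a few elements,” can be sketched numerically. This is not Tononi’s full phi calculus (the real measure searches over all partitions and cause-effect repertoires); it is a minimal effective-information comparison in the spirit of his early papers, using a two-node toy network invented here for illustration:

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) for a uniform distribution over the given (x, y) pairs."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy 2-node network: each node copies the OTHER node's previous state.
def step(a, b):
    return (b, a)

states = list(product([0, 1], repeat=2))

# The whole system: knowing the output pins down the input exactly -> 2 bits.
ei_whole = mutual_info([((a, b), step(a, b)) for a, b in states])

# Cut the system into {a} and {b}: each part's next state depends only on
# input arriving across the cut, which the cut replaces with noise, so
# each part on its own generates 0 bits about its own next state.
ei_part_a = mutual_info([(a, step(a, b)[0]) for a, b in states])
ei_part_b = mutual_info([(b, step(a, b)[1]) for a, b in states])

phi_toy = ei_whole - (ei_part_a + ei_part_b)
print(phi_toy)  # 2.0: information the whole generates that no part recovers
```

The surplus over the best partition, here 2 bits, is the quantity phi is meant to capture: what the system specifies about itself over and above its parts taken separately.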

    Hope that was clear, even if you disagree. Thanks for the exchange. I can’t recommend reading Tononi’s papers enough, especially the footnotes.

    • I get back to panpsychism. What I see is a description of how it would behave, not a mechanism. If it were a mechanism, it should be testable. IIT’s panpsychism relies on “information” being a real thing, not a description of a system. There must be something, like particles of information or information fields. While it is nice to have mathematical descriptions, math is a descriptive system, not an essence. As I understand it, some mathematicians and physicists have speculated about information particles. That might solve the problem. But there’s no basis for it that I know of, other than that it offers a cute solution.

  10. IIT is not Panpsychism in the traditional sense.

    Panpsychism claims that everything is imbued with awareness, ad hoc.
    IIT claims that awareness is a specific organization of matter: the brain has specialized mechanisms which share their information with each other, and the resulting ‘mixture’ of information from these mechanisms is ‘integrated’. At this point we are no longer describing a physical structure, but an Information Structure.

    IIT posits that when information integrates into an information structure, it will feel like something to be that structure.

    For components to be sharing/integrating their information, there must be a back-flow of information between all structures contributing to the integration. If these mechanisms are receiving information which alters the info they hold and share, then they become irreducible parts of the larger info-structure.

    IIT is similar to Panpsychism in that it is not matter/substrate dependent and therefore any matter could be conscious.
    But again – IIT claims it is not the MATTER, but the ORGANIZATION of that matter that makes the difference.

    It’s like magnetism: a certain organization of elements creates the field, and if two fields interact strongly, they become one field.

    • Ioz, thanks.

      As I see it, IIT requires a “special” panpsychism, not standard panpsychism. IIT panpsychism, for example, has distance constraints. Nonetheless, like standard panpsychism, and unlike magnetism, IIT panpsychism requires a fundamental addition to standard physics. This is how IIT addresses the “hard problem”. But IIT may succeed without addressing the hard problem. In a recent paper, Tononi sets out postulates, axioms, and a single identity.

      The identity:

      specific brain states create specific consciousness states (qualia).

      The task of IIT is, therefore, to define the “differences that make a difference”, the set of brain states. This is a big task, and an important one. But it is one that does not address the hard problem. IIT’s panpsychism proposes a structure for the hard problem, and one that would complement IIT.

  11. John, thank you so much for your post; I’ve really enjoyed reading it along with the comment threads. My major is analytical chemistry, but the need to comprehend subjectivity led me to neuroscience and eventually to IIT. I’ve been digging into the consciousness topic for a decade. Tononi’s IIT approach is so far the closest to my understanding of its phenomena. I cannot claim to comprehend it down to the tiny details, but I hope I’ve got the major points.

    To me, as a scientist, a hypothesis should offer a bit more than just peace of mind for its bearer. A viable hypothesis should be useful as a framework for understanding current observations and be able to predict new phenomena… From your understanding, which phenomena can IIT help with?

    • Tononi has been trying one application: using Phi to calculate levels of consciousness in people anesthetized for surgery and people who are in coma or vegetative states. While this may not appear to be a challenging or high-level goal, it is a test, and, furthermore, may be useful. The presently available tools for assessing level of consciousness in anesthesia and coma are imperfect.

      • John, thank you for the reply.
        The text below is just my thoughts and notes, but I hope it will be useful for readers.

        Just today I was digging deeper into Tononi’s article “IIT of Consciousness: An Updated Account” (2012)… oops: digging into PubMed for the link, I’ve just discovered a very recent article with a theory update: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4014402/

        But anyway, at the current stage I see that Tononi and Koch, saying consciousness = experience (or quale), do not distinguish between awareness and attention.

        E.g., I’m sitting in a cafe in the middle of Shanghai typing this message, aware of my environment and the feelings in my body but not actively attending to them. There is a couple in conversation beside me, so I’m registering that their dialogue is going on… but I do not attend to its content. But sometimes my attention is pulled to the conversation when I hear familiar words (they talk in Mandarin)… Do you see what I mean? To your understanding, to what extent are “subconscious” processes part of the experience (quale)?

        Once again, thanks for this post: food and environment for thought.

        Kind regards.
        ILYA

        P.S.
        Christof Koch and Giulio Tononi on Consciousness from FQXi Jan 5th 2014

  12. “Information will exist in an entity when it has information and is connected.”

    Did you mean consciousness will exist?

  13. I’m very late to this conversation, but intrigued. Are you aware of Walter Freeman’s take on consciousness? He uses dynamical systems theory and his own extensive research on neurodynamics, but also integrates pragmatist philosophy. I find his take on consciousness more compelling than IIT. I am fresh from reading Tononi and Koch’s 2008 “Neural Correlates,” and I also read some of “Confessions,” and find them muddled as well. But in fairness I’ll say that I don’t know much about IIT other than those works.

  14. Pingback: From Informatics to Consciousness - early extract from my new book (Mark Skilton)

  15. I know I’m late to this conversation but thought I’d chime in. I just read a bunch of papers on IIT by the main protagonists. I feel it does have something going for it. It is Searle’s objection I fail to understand. Indeed, I don’t abide by the idea that there could be information “out there” in “the wild” without some observer observing it. If information is a difference that makes a difference, then it already implies that it makes a difference for some experiencing observer, or that it could potentially mean something for an observer (entropy). Searle seems to think that there is such a thing as data “in the wild”, what is sometimes called “dedomena”. But when you think about it, it is difficult to see how such a thing could exist: it simply would not qualify as information if it did not make a difference for some observer.

    An example I like to give is what cosmologists call the “observable universe”. In our cosmological bubble, it seems there is a limit to the amount of data that can be registered. This limit has to do with the reciprocal constraints between the forces and constants of the universe from our point of view. The Planck length is the smallest size anything can have in the observable universe, and therefore limits the degrees of freedom of the universe, and thus the number of bits that can be registered within it. But the very fact that this limit is tied to the “observable universe” implies that it is not observer-independent. So, outside this inflationary bubble corresponding to our universe, within the “champagne glass” multiverse it potentially floats in, there may be more degrees of freedom allowing more data to be registered, but of course, that data would then only potentially “make a difference” for the observers in those universes.

    Essentially, Phi is an extrinsic measure of the information that is intrinsic to a system, according to the irreducibility of that information to the information generated by the parts of the system.
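The Planck-length point above corresponds to the standard holographic-bound estimate of the observable universe’s bit capacity. A rough sketch (the horizon radius is an assumed approximation, and published figures vary with the horizon chosen, but the order of magnitude is the usual one):

```python
from math import pi, log

# Rough holographic bound on the bits registrable in the observable universe.
l_planck = 1.616e-35   # Planck length, metres
r_horizon = 4.4e26     # ~comoving radius of the observable universe, metres

area = 4 * pi * r_horizon ** 2
# Bekenstein-Hawking entropy: S = A / (4 * l_p^2) in nats; divide by ln 2 for bits.
bits = area / (4 * l_planck ** 2 * log(2))
print(f"{bits:.1e}")   # on the order of 1e123 bits
```

The key feature for the commenter’s argument is that the bound scales with the horizon area, and the horizon is defined relative to an observer.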

    I am still unsure about what to make of their probabilistic account of the “cause-effect repertoires” the theory depends on. How do these probabilities translate into temporal experiences? If a system state “constrains” the past and future of the system as they claim it does, does this mean a system could potentially experience one single event of experience forever, as long as its state stays the same? Isn’t experience intrinsically temporal? What happens in between the states? How is a continuity of experience experienced for such a discrete Laplacian system?
