(The following is a short piece I wrote two years ago on a private blog. I’m making it public and reposting it because of its similarities to arguments in John Searle’s review of Christof Koch’s book in the NYRB. I found Searle’s review excellent; unfortunately, most of it is behind the NYRB paywall.)
David Chalmers proposes that consciousness is inherent in informational structures [1,2]. As a reductionist example, he suggests that a computer, which organizes large quantities of information, or a thermostat, which organizes much smaller quantities, has a measure of consciousness. Some physicists (Penrose, Wheeler) have proposed that when natural phenomena are better understood, ‘information’ (non-random organization) will be recognized as a principal feature.
Let’s think about the thermostat with respect to our ideas about consciousness. A classic mechanical thermostat works something like this. It has a coiled piece of metal at its center which expands with heat. A glass vial half-filled with mercury is attached to one end of the metal. The vial rotates with temperature changes that cause the metal to expand or contract. When the vial rotates to the left, the mercury slides to the left end of the vial, closing a circuit and turning on the heater (or air conditioner). The ‘set point’ is controlled with a lever that rotates the metal coil and glass vial, so a hotter (or colder) temperature is needed to bring the vial to the level position. The ‘computation’ is provided by the expansion of the metal that, at a certain point, causes the mercury to slide from right to left.
The thermostat has:
- Environmental sensors (thermometer – metal that changes size with changes in temperature)
- Environmental expectations (set point – temperature at which vial is level)
- Computation (expansion of metal until the level set point is reached)
- Output (switch is closed, turning on heater or air conditioner)
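The four-part breakdown above can be sketched as a few lines of code. This is only a toy model; the class name, the set point value, and the small hysteresis band are my own illustrative choices, with the hysteresis standing in for the mercury’s tendency to stay put until the tilt is decisive:

```python
class MercuryThermostat:
    """Toy model of the coil-and-vial thermostat: a sensor (the
    temperature reading), an environmental expectation (the set point),
    a 'computation' (comparison against the set point, with a small
    hysteresis band), and an output (the switch state)."""

    def __init__(self, set_point, hysteresis=0.5):
        self.set_point = set_point    # environmental expectation
        self.hysteresis = hysteresis  # band where the mercury stays put
        self.heater_on = False        # output: state of the switch

    def sense(self, temperature):
        # The coil's expansion reduces, in effect, to the gap between
        # the measured temperature and the set point.
        if temperature < self.set_point - self.hysteresis:
            self.heater_on = True    # vial tilts, mercury closes the circuit
        elif temperature > self.set_point + self.hysteresis:
            self.heater_on = False   # vial tilts back, circuit opens
        # Inside the band, the switch simply keeps its last state.
        return self.heater_on

t = MercuryThermostat(set_point=20.0)
print(t.sense(18.0))  # True  -- below the set point, heater switches on
print(t.sense(20.2))  # True  -- inside the band, switch keeps its state
print(t.sense(21.0))  # False -- above the set point, heater switches off
```

Note that nothing in this sketch is a ‘self’: it is a single chain of comparisons, and calling it one entity is our description, not the machine’s.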
Seems pretty good. The fundamental trouble, as I see it, is that the thermostat does not know that it is a thermostat. It is an entity only in our eyes. What makes it a singular thing, rather than a set of 4 or 20 or 10,000 things which, when we view them linked together, can be described as one thing? As I see it, it takes an intelligent agent to describe a thermostat as a thermostat, and there is no central entity within the thermostat capable of understanding itself. Another way of saying this is that the thermostat is a process of our brains, not of the brain of the thermostat; more importantly, it is not, in any obvious sense, a singular thing in the world. If you were to take it apart and put the pieces on a table, would it still be a thermostat? Would it still be an information processor? If the dissected thermostat is not a thermostat, what makes the assembled thermostat an entity, and what about putting it together makes it gain thermostat-hood and become an information-processing agent?
Another way of saying this is that the thermostat has no self. No inside and no outside. With no inside, it has no inner state. With no inner state it has no drives. It’s in no way ‘good’ for the thermostat if the outside world is cold or hot or exactly at set point. Its behavior is automatic, reflexive — a Rube Goldberg machine, without internal states or drives.
I’m on thin ice considering the thermostat as an information processor. But, in rough outline, if it turns out that information is fundamental to consciousness, then we need to consider a central organizer for the information and a bounded territory for the information processor.
Considering more complex systems, we must ask what makes an animal or a person bounded? What about a computer? What makes any of these a singular entity rather than a list of things? This may be the crux of the hard problem of consciousness.
[1] Chalmers, D.J. “The Puzzle of Conscious Experience.” Scientific American (2002).
[2] Chalmers, D.J. “Facing up to the Problem of Consciousness.” In Explaining Consciousness: The ‘Hard Problem’. Cambridge: MIT Press, 1997, pp. 9–30.