Does a thermostat have consciousness?


(The following is a short piece I wrote two years ago on a private blog. I’m making it public and reposting because of its similarities to arguments in John Searle’s review of Christof Koch’s book in the NYRB. I found Searle’s review excellent; unfortunately, most of it is behind the NYRB paywall.)

David Chalmers proposes that consciousness is inherent in informational structures [1,2]. As a reductionist example, he suggests that a computer, which organizes large quantities of information, or a thermostat, which organizes much smaller quantities, has a measure of consciousness. Some physicists (Penrose, Wheeler) have proposed that when natural phenomena are better understood, ‘information’ (non-random organization) will be recognized as a principal feature.

Let’s think about the thermostat with respect to our ideas about consciousness. A classic mercury thermostat works something like this. It has a coiled piece of metal which expands with heat. A glass vial half-filled with mercury is attached to one end of the metal. The vial rotates with temperature changes that cause the metal to expand or contract. When the vial rotates to the left, the mercury slides to the left end of the vial, closing a circuit and turning on the heater (or air conditioner). The ‘set point’ is controlled with a lever which rotates the metal and glass vial, so that it takes a hotter (or colder) temperature to bring the vial to the level position. The ‘computation’ is provided by the expansion of the metal which, at a certain point, causes the mercury to slide from right to left.

The thermostat has:

  1. Environmental sensors (thermometer – metal that changes size with changes in temperature)
  2. Environmental expectations (set point – temperature at which vial is level)
  3. Computation (expansion of metal until the level set point is reached)
  4. Output (switch is closed, turning on heater or air conditioner)

Seems pretty good. The fundamental trouble, as I see it, is that the thermostat does not know that it is a thermostat. It is an entity only in our eyes. What makes it a singular thing, rather than a set of 4 or 20 or 10,000 things which, when we view them linked together, can be described as one thing? As I see it, it takes an intelligent agent to describe a thermostat as a thermostat, and there is no central entity within the thermostat capable of understanding itself. Another way of saying this is that the thermostat is a process of our brains, not of any brain of the thermostat; more importantly, it is not, in any obvious sense, a singular thing in the world. If you were to take it apart and put the pieces on a table, would it still be a thermostat? Would it still be an information processor? If the dissected thermostat is not a thermostat, what makes the assembled thermostat an entity, and what about putting it together makes it gain thermostat-hood and become an information-processing agent?

Another way of saying this is that the thermostat has no self. No inside and no outside. With no inside, it has no inner state. With no inner state, it has no drives. It’s in no way ‘good’ for the thermostat if the outside world is cold or hot or exactly at set point. Its behavior is automatic, reflexive: a Rube Goldberg machine, without internal states or drives.

I’m on thin ice considering the thermostat as an information processor. But, in rough outline, if it turns out that information is fundamental to consciousness then we need to consider a central organizer for information and a bounded territory for the information processor.

Considering more complex systems, we must ask what makes an animal or a person bounded? What about a computer? What makes any of these a singular entity rather than a list of things? This may be the crux of the hard problem of consciousness.

[1] Chalmers, D.J. “The Puzzle of Conscious Experience.” Scientific American (2002).

[2] Chalmers, D.J. “Facing Up to the Problem of Consciousness.” In Explaining Consciousness: The ‘Hard Problem’. Cambridge, MA: MIT Press, 1997, pp. 9–30.


9 thoughts on “Does a thermostat have consciousness?”

  1. Unfortunately, you have no certainty when saying that a thermostat has no “self”. The same mistake is repeated over and over. There is no way to be sure whether anything has an inner self or consciousness without becoming the object of interest or being “entangled” with it. A grain of sand may be conscious on some level; you will never be sure, no matter how many logical chains are constructed around the question.

    • medoved, yes, you are right. I have no ‘certainty’. That’s because we’re talking about ‘consciousness’, a subject very difficult to discuss. Start from square 1. The only access we have to consciousness, the only information we have about consciousness, comes from humans who have consciousness. The approach that I (and Searle) have taken is to make guesses about consciousness from the human condition. Another approach is the one Chalmers and Koch have taken. It is, roughly: 1. understand that there is a thing called consciousness from the human condition; 2. guess about its function; and 3. guess about how that function could be implemented in the simplest manner. What Searle and I have done is look for weaknesses in the other argument. But proof? You won’t find it.
      Searle makes a very interesting argument about ‘self’ and the thermostat. He says that the thermostat has no ‘unity’ (self is a unity concept). He says that giving a collection of things unity takes a conscious mind; in a universe without consciousness there are only swirls of particles. Searle’s argument is that the Chalmers/Koch argument is circular, because it uses consciousness to define consciousness. I find this very strong, but the argument itself has one weakness: it implies that there are no emergent properties in a universe without consciousness, no clouds, rivers, planets, people. These are simply patterned collections that we see as unified. People would argue with that.
      You could start a third thread: that a grain of sand has consciousness. I could argue against it, but it would be an argument based on intuition, on personal experience of what a consciousness is and who could have it. I could not prove you wrong.

  3. You say that: “The thermostat does not know that it is a thermostat.”
    But similarly, this body-mind does not know that it is a body-mind. Only consciousness ever knows anything: everything that is known is known by consciousness. Consciousness is single, indivisible, non-local, and universal. Consciousness enjoys pretending to be self-contained body-minds, and when playing such roles consciousness identifies thoroughly with each separate body-mind. When this identification is relaxed, consciousness “remembers” reality – from the perspective of the body-mind this is experienced as “awakening”.

  7. Agree, but I’ll propose two questions related to minimal consciousness (here taken to mean sentience).

    1) What would it take to create a thermostat that could actually feel discomfort: to feel too hot or too cold (to suffer, to experience relief from suffering, or to feel pleasure at being at the ideal temperature to which it was set)? That is, what is the minimal circuitry it takes to make a device capable of experiencing reward (pleasure) and punishment (displeasure)?

    2) What advantage might there be in having a thermostat that could FEEL temperature and make adjustments accordingly? I can imagine that a thermostat that “had a stake in the outcome” might do a better job than an unfeeling one. For example, a difficult-to-regulate room, such as a large meeting room where the temperature changes with the body heat of the audience and with outside conditions (some changes predictable, others not), might be controlled by a thermostat having access to lots of inputs (meeting schedules and expected number of people in attendance, weather forecasts, etc.) and several means of changing the temperature (heaters, air conditioners, fans, etc.). Of course, the temperature might instead be controlled by a computer program written by engineers (hundreds of miles and several years distant) that takes numerous scenarios into account. But the distant engineers don’t suffer if they got something wrong or ignored a scenario. A feeling thermostat, on the other hand, would have an interest in doing all it could to keep the temperature stable. If there is evolutionary value in “caring about what happens” (note that business and management programs believe it is important for employees to care about their jobs), then we would expect selection for this feature.

    One could imagine stress/strain gauges in bridges and skyscrapers, crash detectors in cars, etc., that could actually suffer if a problem arose. What would it take to make a self-driving car able to be rewarded or punished, to CARE? Currently engineers are programming in scenarios and letting the cars learn as they gain experience, but the cars don’t care yet. Caring would require operant conditioning, which relies on rewards and punishments, on values.

    A note about programming the pleasure/displeasure settings of a feeling thermostat: one could program with rewarding settings, such that pleasure would be intense when the temperature was at the ideal setting, with no or only mild discomfort when the temperature deviated. A cruel programmer, on the other hand, could use only punishment, making it extremely uncomfortable for the thermostat if the temperature deviated even slightly, while giving only a lessening of discomfort when the temperature reached the ideal.

    I suspect we’ll find that it isn’t that difficult to create a true motivation center. We’ll recognize it once we can train using operant conditioning, which depends on rewards. Of course, then the ethical issues will get REALLY interesting.
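The two programming styles described in that comment, reward-centred versus punishment-only, can be made concrete with a hypothetical sketch in Python. Every name and numeric scale here is invented for illustration; positive values stand for pleasure, negative for displeasure:

```python
# Two hypothetical "affect" functions for a feeling thermostat,
# mapping temperature error (degrees from set point) to valence.
# All names and scales are invented for illustration.

def benign_affect(error, max_pleasure=1.0, penalty_rate=0.1):
    """Intense pleasure at the ideal setting, mild discomfort away from it."""
    if abs(error) < 0.5:
        return max_pleasure
    return -penalty_rate * abs(error)

def cruel_affect(error, penalty_rate=2.0):
    """Never pleasant: harsh punishment that merely lessens near the ideal."""
    return 0.0 - penalty_rate * abs(error)

print(benign_affect(0.0))  # at set point -> 1.0 (genuine reward)
print(cruel_affect(0.0))   # at set point -> 0.0 (the least bad it ever gets)
print(cruel_affect(3.0))   # 3 degrees off -> -6.0 (harsh punishment)
```

Both controllers would steer toward the set point, which is the comment’s point: the choice between reward and relief-from-punishment is invisible in the behavior and matters only if there is something it is like to be the thermostat.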
