Morning Rumination on Consciousness

While lying in bed this Sunday morning, a few thoughts on consciousness came to me. Morning insights can be useful or vapid; I'm not sure which these are. But they've stuck in my head, like a tune that keeps replaying. I'd like to share and discuss them. Three semi-awake assertions:

  1. A conscious agent must be able to make a statement of fact
  2. Consciousness is an act of communication
  3. The statement of fact cannot be the state itself; it must be a symbolic representation of state

Added about 10 minutes later:

  4. Meaning is the ability to translate the code into a representation of the state. That is, the message must be capable of being encoded and decoded; it must include a transform and a reverse transform. Such a decoder must be able to make internal representations of external state "feel" like the external state. Hence the necessity of qualia.
  5. To demonstrate that these internal representations are true representations of external states, there must be conditions under which they can become synchronized with, and predictive of, external states; voluntary action, for example.

Elaborations: 

Item 1, 'a conscious agent must be able to make a statement of fact'. Let's start with a photodiode and see why it isn't conscious. One might imagine that the diode could make one of two statements: "I am on" or "I am off". This is impossible for two reasons. First, for the diode there is no "I". "I" requires a separation of self from non-self, and there's no reason to suppose the diode knows any such boundary. Second, the diode has no memory. Even if we somehow grant that the diode can look within itself and see that it is "on", how would it know that the current state can change, and that "off" is a possibility? It needs awareness (representation or memory) of alternate states. An external observer might see that the diode's state would change if the electrical input were cut, but I can't see how the diode itself could encode alternate states. Carrying this example further, being able to make a statement of fact means that the conscious agent must be able to symbolically represent things (objects) and states of things.
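The contrast above can be made concrete with a toy sketch. This is purely illustrative (the class, names, and state encoding are all invented for the example, not a model of real hardware): a bare photodiode is a pure input-to-output mapping, while even a minimal "reporter" needs a record of alternate states before it can assert a fact about itself.

```python
# Toy contrast: a bare photodiode vs. a minimal "reporter" with memory.
# Hypothetical illustration of the post's argument, not real hardware.

def photodiode(light_in: bool) -> bool:
    """Pure input -> output mapping: no self, no memory, no alternatives."""
    return light_in

class Reporter:
    """Minimal agent that can make a statement of fact about itself."""
    POSSIBLE_STATES = ("on", "off")   # explicit memory of alternate states

    def __init__(self):
        self.state = "off"
        self.history = []             # record showing the state can change

    def sense(self, light_in: bool):
        self.history.append(self.state)
        self.state = "on" if light_in else "off"

    def statement_of_fact(self) -> str:
        # "I" presupposes a self/non-self boundary; here it is just the
        # object itself, plus a representation of the states it is not in.
        others = [s for s in self.POSSIBLE_STATES if s != self.state]
        return f"I am {self.state}, and I could have been {others[0]}."

r = Reporter()
r.sense(True)
print(r.statement_of_fact())  # "I am on, and I could have been off."
```

The point of the sketch is only that the reporter's statement draws on stored alternatives, which the bare mapping has no way to represent.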

Items 2 and 3. Conscious contents must be symbolically encoded and are thus acts of communication. A rock falling is not, in itself, a conscious act. A statement that a rock is falling may be part of consciousness. Saying that consciousness is an act of communication carries a lot of baggage: there must be a sender, a representation, and a receiver. Is this too heavy a burden? We now know that the CNS of advanced animals is more than a reflexive device. Specifically, it relies on internal representations. A reflexive device need not symbolically encode, while a representation must. We know, for example, that rat hippocampal neurons replay previous courses of action (paths taken) while the rat pauses or sleeps, and anticipate possible courses of action at choice points (paths, again). These patterns of activity fit nicely with symbolic representations, but are incompatible with a pure reflex system.

If a conscious representation is an act of communication, who or where is the reader? 

Item 4 asserts that, in addition to a symbolic encoder, there must be a decoder mechanism. Close your eyes and imagine your bedroom: you've performed an act of consciousness. The representation is in your brain. It is not your bedroom, but an activity pattern of neurons. The transform from bedroom to brain is encoding. What you "see" in your mind's eye is your bedroom, or something like it. This means that the activity pattern is decoded.
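The encode/decode claim can be sketched as a round trip: a forward transform from an external scene to an internal code, and a reverse transform back to something scene-like. Everything here (the scene, the coding scheme, the names) is invented for illustration; the point is only that the code is not the scene, yet the scene is recoverable from it.

```python
# Sketch of item 4: encode an external state into an internal
# representation, then decode it back. All details are illustrative.

bedroom = {"bed": (0, 0), "lamp": (1, 2), "window": (3, 1)}

def encode(scene: dict) -> list:
    """Forward transform: scene -> internal 'activity pattern' (a code)."""
    return [(name, x * 10 + y) for name, (x, y) in sorted(scene.items())]

def decode(pattern: list) -> dict:
    """Reverse transform: recover a scene-like structure from the code."""
    return {name: (code // 10, code % 10) for name, code in pattern}

pattern = encode(bedroom)      # not the bedroom; a symbolic stand-in for it
recalled = decode(pattern)     # what the "mind's eye" reconstructs
print(recalled == bedroom)     # True: the code preserves the scene
```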

Item 5 tries to deal with meaning. In Searle's famous Chinese room argument, a translator could encode and decode between Chinese and English without a sense of meaning, without knowing what the languages (codes) represent. If the internal state can not only respond to external states but also change and predict them, it can generate a model of external states. Generating such a model is how an internal representation can generate meaning. In brief, the internal representation must not only respond to external states; it must be a source of action upon them. My guess is that there must also be a priori stuff in the internal states (Kant): things like space and time, qualia and causality.

Postscript: this is clearly very primitive, and I’ve made no attempt to connect these thoughts to the ideas of others. (I am aware of lots of other stuff). The post is an invitation for discussion. 


4 thoughts on "Morning Rumination on Consciousness"

  1. May I ask, are you here discussing only ‘consciousness’ i.e. mental representations that are ‘with knowledge’? Would you accept that there is also something we might call ‘awareness’ which does not convey knowledge, is not a psychical representation, is not therefore storable and retrievable as some approximation in memory, and yet its existence is self-evident via an apparent lucidity? You may well not, but if you do, then the whole discussion changes does it not?

    All the best.

    Hariod Brawn.

  • I don't see a clear boundary between awareness and consciousness. I'm a physicalist. For me, "awareness" (cousin of attention) must be, in some fashion, reflected in activity patterns in the nervous system. As such, it is both a "physical representation" and capable of storage (memory) and, via storage, capable of being retrieved.
      “apparent lucidity” seems related to the “feeling” of consciousness, which remains a mystery (at least for me).

      • Only a small point, but I mentioned a ‘psychical representation’, not a ‘physical’ one such as you refer to; though as a physicalist there would be no such distinction in any case for you of course.

        I'm not suggesting that awareness (let's call it that for now) is anything other than some alternative mode of consciousness, nor that it obtains independently of the nervous system as some sort of uncoupled epiphenomenon.

        I'm saying that consciousness isn't necessarily 'with knowledge', or, as you put it, 'an act of communication'. If you were to experience a state of lucidity which was objectless, which was prior to any representation of the senses, including mentation and what you refer to as 'feeling', would your definitions alter? Would you then 'see a clear boundary between awareness and consciousness'?

        Thanks, your views are appreciated and respected.

  2. Very interesting. Since there are few chances to discuss hypotheses about the mechanism of consciousness, I would like to compare this with my own thinking.

    Regarding a photodiode, it unfortunately does not have an "I". Having an "I" requires metacognition, which is a kind of higher-level system.
    (My hypothesis: the system needs multiple memories of the form "I feel something," and also memories that "not-I (e.g. your friend) feels other things." By comparing these, the concept of "I" arises relatively within the system. This hypothesis is not general; it is probably easier to understand my basic consciousness hypothesis first.)

    A simple photodiode has no memories, either. If a system has memories, consciousness arises from associative memories, and metacognition arises as well. My hypothesis covers consciousness in the narrow sense, and consciousness from a macro perspective, which includes free will, metacognition, seamless access between memory and instinct, and creativity.

    The photodiode example is the most primitive one. (There seemed to be some further restriction conditions in IIT v3.0.)

    Regarding a rat, it can predict what comes next from its memories, but a rock and a photodiode may not be able to do the same. I recommend considering adding a memory and associative-memory system to the reflex action system. (This may be closer to the answer.)

    The Chinese room is a representative example of something that cannot be conscious. On the other hand, a conscious system can feel that the "blue" it sees is the blue it saw before, if it has a memory of a blue sky. It is better still if it has associative memories, e.g. of white clouds.

    If it has desire or instinct, it may have emotion. I doubt the a priori stuff.

    It may be difficult to express qualia. Do qualia need to be decoded? They do if we make sentences from the feeling using words; but is it the same in the brain? I think it is possible to recall a feeling if we have memories of the feelings themselves, even if they are not encoded or decoded. (Consider a blue sky, a small red light, and the electrical signal of a stimulus. The small red light seems usable without being encoded and decoded.) My standpoint is that qualia arise, but they do nothing. (This is not general, either.)
