List 1: “essential consciousness (level 1)”. I suggest these are the core features of consciousness, common to all conscious creatures on planet Earth.
List 2: “Level 2 consciousness”. Level 2 consciousness is a higher consciousness, present in most humans and likely some other mammals. Level 2 is qualitatively different from level 1, and some consider it to be true “consciousness”. I’ve taken the approach of distinguishing two levels. My dogs are conscious, but not level 2.
List 3: “Biological Features Connected to Consciousness”. A speculative list of behavioral and structural features whose evolution may be tied to consciousness.
(The following is a short piece I wrote two years ago in a private blog. I’m making it public and reposting because of similarities to arguments in John Searle’s review of Christof Koch’s book in the NYRB. I found Searle’s review excellent. Unfortunately, most of it is behind the NYRB paywall.)
David Chalmers proposes that consciousness is inherent in informational structures [1, 2]. As a reductionist example, he suggests that a computer, which organizes large quantities of information, or a thermostat, which organizes much smaller quantities, has a measure of consciousness. Some physicists (Penrose, Wheeler) have proposed that when natural phenomena are better understood, ‘information’ (non-random organization) will be recognized as a principal feature.
David Marr was a brilliant neuroscientist who died too young, in 1980, at the age of 35. Marr’s work was theoretical — he was at the leading edge of a computational wave.* Marr’s contributions spanned many areas of Neuroscience: cerebellum, hippocampus, and especially vision. Marr is also well known for proposing that brain/behavior function should be approached in three phases that are largely sequential:
- The computational level: What is the problem that confronts the animal?
- The algorithmic level: How is it logically solved? (including shortcuts)
- The implementation level: How does the brain do it?
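Marr’s three levels can be made concrete with a toy problem. This is my own illustration, not an example Marr used; the task and the code are stand-ins for the kind of separation he had in mind.

```python
# Computational level: WHAT is the problem? Say: "given an ordered set of
#   remembered landmarks, decide whether a particular landmark is among them."
# Algorithmic level: HOW is it logically solved? Binary search halves the
#   candidates at each step -- a shortcut over checking every item.
# Implementation level: HOW is it physically carried out? Here, ordinary
#   Python on a CPU; in the animal, neurons and synapses.

def binary_search(sorted_items, target):
    """Algorithmic-level solution to the computational-level problem."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(binary_search([1, 3, 7, 9, 12], 7))   # True
print(binary_search([1, 3, 7, 9, 12], 5))   # False
```

The point of the separation: the first two levels can be analyzed without knowing anything about the hardware, which is why Marr argued they should come first.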
A year ago Dayu Lin and co-authors published a landmark study in Nature on a hypothalamic nucleus which, when optically stimulated, produces undifferentiated rage. At the time, Ed Yong wrote a wonderful summary of the work.
The point: in the mouse there is a region in the hypothalamus which, when stimulated, produces undifferentiated rage. There is reason to believe there is an equivalent region in humans. While we don’t go around with optogenetic probes in our brains, the state of undifferentiated rage is not uncommon. Many of us have experienced times when rage is out of control — difficult to keep in check by reason or logic. This is why I don’t like having guns within easy reach.
A fanciful depiction of how a grid cell based navigation system could compute a direct path across unvisited regions of space.
A week ago Stensola et al. published evidence that Entorhinal Grid Cells are modular, and on the same day I wrote a glowing commentary, The Significance of the Modular Organization of Grid Cells. In addition to praising the paper, I tried to explain why evidence for modular organization was welcome news, in that it supported computational mechanisms that grid cells could perform. In the rest of this post I outline a model Andre Fenton and I have been working on that relies on discrete modules of grid cells. Our model extracts a function we call linear look-ahead and uses this function for efficient navigation. We feel it represents the beginning of a process of explaining high-order cognitive functions at the neuron network level.
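To give a flavor of why discrete modules matter for look-ahead, here is a minimal sketch of the general idea in one dimension. This is my own illustration of modular phase coding, not our actual model: the module spacings and the brute-force decoder are assumptions for the demo. Each module represents position as a phase modulo its spacing, so imagining a displacement is just a linear update of every phase — including across regions the animal has never visited.

```python
# Hypothetical grid-module spacings (arbitrary units); real entorhinal
# modules differ, and our model's details are not reproduced here.
SPACINGS = [3.0, 4.0, 5.0]

def encode(x):
    """Position -> tuple of phases, one per module."""
    return tuple(x % s for s in SPACINGS)

def look_ahead(phases, dx):
    """Linear look-ahead: advance every module's phase by dx.
    No movement is required -- the update is the same everywhere."""
    return tuple((p + dx) % s for p, s in zip(phases, SPACINGS))

def decode(phases, max_x=60):
    """Brute-force decoder: the position whose code best matches the phases
    (circular distance within each module)."""
    def err(x):
        return sum(min(abs((x % s) - p), s - abs((x % s) - p))
                   for p, s in zip(phases, SPACINGS))
    return min(range(max_x), key=err)

code = encode(7)                # the animal stands at x = 7
ahead = look_ahead(code, 15)    # imagine a straight 15-unit path
print(decode(ahead))            # 22 -- computed without visiting x = 22
```

Because the three spacings are coprime, the combined code is unambiguous over a range far larger than any single module’s spacing — one reason a modular organization is computationally attractive.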
Enthusiastic and excited response to the publication of “The entorhinal grid map is discretized” (Stensola et al., Nature 492:72–78, 2012)
The hippocampal formation is an amazing place, populated by strange characters called place cells, head-direction cells and grid cells. Hippocampal place cells exhibit “location-specific firing” (see figure). A single place cell will “fire” only when the rat crosses a restricted region of space. The figure below is an overhead-view map of the firing of a single place cell averaged over a 16-minute recording session (the animal was in a cylindrical enclosure; that’s why the map is round). John O’Keefe, followed by many others, has suggested that the collective firing of hippocampal place cells forms the rat’s “cognitive map”, and permits efficient navigation.
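The notion of “location-specific firing” can be sketched in a few lines. This is a toy model of my own, not a fit to real recordings: a common simplification treats a place field as a Gaussian bump of firing rate around the cell’s preferred location, and the peak rate and field width below are invented numbers.

```python
import math

def place_cell_rate(pos, field_center, peak_rate=15.0, width=0.1):
    """Toy firing rate (Hz) for a place cell with a Gaussian place field.
    pos and field_center are (x, y) in meters; width sets the field size.
    All parameter values are illustrative assumptions."""
    d2 = (pos[0] - field_center[0])**2 + (pos[1] - field_center[1])**2
    return peak_rate * math.exp(-d2 / (2 * width**2))

center = (0.5, 0.5)                                    # field center in a 1 m arena
print(place_cell_rate((0.5, 0.5), center))             # at the field center: peak rate
print(place_cell_rate((0.9, 0.9), center))             # far from the field: ~0
```

Averaging such rates over a recording session, position by position, is what produces the kind of round firing-rate map described above.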
I’d like to start by thanking Gary Marcus for starting a terrific series of Neuroscience discussions from his post as the New Yorker’s Neuroscience blogger. Each topic has been juicy, and each perspective fresh.
This is a continuation of the discussion from Gary’s first post, Moral Machines. Like Gary, I’m going to focus on the design strategy for programming moral behavior into a robot.
Gary’s post focuses on Isaac Asimov’s rules for programming moral behavior into robots. The most prominent commandment is the first:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Gary explores whether this is sufficient.
Over the past few days the question has been raised: “Does Neuroscience Need a Newton?” Gary Marcus says “yes.” Scicurious says “no.”
I think Neuroscience has already had one: Cajal. Read Gordon Shepherd’s 1991 book Foundations of the Neuron Doctrine. The 19th century scientists were lost, arguing over which of the structures seen under the microscope were real and which were artifacts. Cajal, in remote Spain, peered through his microscope and saw neurons.
Living on the current side of this revolution, it’s hard to imagine the 19th century image of neural function. Were juices flowing down tubes? Were strings being pulled?
Gary Marcus has a marvelous article in this week’s New Yorker that points out the overreach the media has imposed on Neuroscience. Neuroscience has progressed, but is still in its infancy. I have no disagreement with Marcus. This post is part of a conversation.
A major culprit has been functional imaging (fMRI) as presented in the media. Functional imaging has been a wonderful advance that permits a glimpse of the activity in regions of the normal human brain. A wonderful tool, but one with serious limitations: it doesn’t lead directly to an understanding of how the brain works. Let me explain.