Judicial Punishment in a Neuroscientific World
We’ve witnessed a steady stream of books and articles about the relationship between neuroscience and judicial philosophy. Although I am far from an expert, I’ll describe what I believe are the rationales for legal punishment. This will be followed by personal reflections on the legal system, neuroscience, and psychology1. Let’s start with a criminal act. An individual commits a crime, and may deserve punishment. Considerations for punishment include:
- Did the defendant understand alternative courses of action and choose the anti-social, criminal act?
- Did underlying biological factors limit the defendant’s choice?
- Is the defendant a threat to society, and should he or she be removed from society to eliminate that threat?
- Will punishment help teach the defendant that “crime does not pay”?
- Will punishment serve as a deterrent to others? (also, “crime does not pay”).
- Will (should) the punishment help placate the desire for revenge felt by victims and society?
- Will incarceration lead to rehabilitation, teaching better moral values and respect for social rules?
- Are there mitigating circumstances? For example, personal or family survival may have come into conflict with social rules.
- Does the punishment support and promote the societal values of fairness, love, freedom, and life?
Before discussing these individually, it’s important to point out that there are numerous rationales embedded in our judicial system. Eliminating or altering some does not invalidate the rest. Final judgements should be an integration of these factors.
Data from neuroscience and psychology influence several of these criteria.
- Free Choice and Free Will. Since the early studies of Libet, and more recent work that supports Libet’s results, there have been questions about whether conscious decisions cause, or even precede, volitional action. Separately, philosophical work based on a physically determined universe suggests that even if consciousness determined voluntary action, the content of consciousness and choice would be pre-determined; an idea going back at least two centuries to “Laplace’s demon”. Although these are interesting discussions, I don’t believe either has direct impact on judicial decisions about responsibility. My sense is that the current judicial concept of free will — did the defendant recognize alternative choices — is completely sufficient. Our personhood is, in part, defined by our behavior. The behavior an individual engages in at any moment in time depends on an internal evaluation of needs and values. Although conscious will may not be involved in every muscle twitch, it is certainly involved in selecting major goals and courses of action. As long as an individual is aware that there are alternative courses of action, if he or she chooses an illegal action, he or she is responsible.
- Biological Constraints and Limitations. What if a defendant has a special biological condition, such as a tumor of the amygdala causing a predisposition to violence? What if the tumor is removed, removing the predisposition? What if the individual has a very low IQ? These are difficult cases. My sense is that these should be considered mitigating factors, but do not preclude guilt unless the person does not understand alternative courses of action or the harm his action may incur. In situations where the putative causal factor is reversed (for example, surgery to remove a tumor), this should be considered in rehabilitation. There are also biological predispositions that are common to almost all humans. For example, there is a human predisposition for revenge. These should not mitigate responsibility or guilt.
- Conditioning and Learning. Three of the rationales above deal with conditioning (the fourth, fifth, and seventh). These are the domain of psychology. What works and what doesn’t are important empirical considerations.
- Revenge is accepted as an inborn, natural reaction in our species2. This is not a justification. On the contrary, I feel that revenge should play no role in a judicial system. The establishment of state-run judicial systems, and the (partial) elimination of revenge and vigilante justice is a great victory for civilization and peace. Victims cannot be on juries3.
- A society’s values are reflected in its laws and system of justice. In the words of John Dewey, “the best way to judge a culture is to see what kind of people are in the jails.”4
- Personal Identity5 can be described as an apparently continuous series of episodic memories that add up to an individual’s life history. Although we are gaining insights into the biological substrate of an individual’s episodic memories, these do not, yet, impact judicial decisions. An individual is responsible for his or her actions, past and present. Actions are the output of brain, body, and mind. They cannot be separated; all are responsible.
1 Mea Culpa. I am not an expert in our judicial system or in philosophy. As a neuroscientist and citizen I’ve read a lot on these topics over the years. I decided to leave out citations, thinking that: a. I don’t remember where I got some of these ideas; b. I don’t recall any specific book or article that covered all of these; c. any citation list would be unbalanced; and d. these are jumbled ruminations, collected over the years. I welcome — encourage — guidance and discussion.
2 Various works by Jonathan Haidt describe biological predispositions for “moral” action, including a predisposition for revenge. See, for example, “The Moral Emotions“.
3 I’m reminded of Michael Dukakis’ disastrous presidential debate gaffe. When asked what he would do if his wife were raped, Dukakis replied in a dispassionate tone that he would rely on the courts. Dukakis didn’t think through human reactions, and the strong desire for revenge. The answer should have been, “Of course I’d want to kill the rapist myself, with no trial, with my bare hands, and inflict pain. But we are fortunate to live in a society that has gotten beyond revenge, and we are far better for it. I hope I would not act on my desire for revenge. Correctly, I would be disqualified from serving as judge or juror at the trial. Although I assume I would be forever damaged, acting on instincts for revenge would be a return to barbarism, and would result in much more harm than good.”
4 As quoted by J. Stanley and V. Weaver in the NY Times, Is the United States a ‘Racial Democracy’?
5 This description of personal identity can be traced to John Locke. In addition, he held that if there is a memory break, as in amnesia, the person is not responsible for acts committed in the amnestic interval.
Lots of interesting issues raised here — too many to deal with at one time. In my view the central dilemma is that, whether through biology or culture, we are disposed to use a theory of justice that depends on notions such as responsibility, rationality, and freedom. These are interface-level constructs — that is, they work reasonably well at the whole-organism level, but they can’t be applied to the internal parts of the brain without becoming incoherent. When we try to account for brain damage or brain disease in these terms, it’s a no-win scenario. Whatever you try, you end up in a boggle. In the worst case, you end up with dangerous people roaming the streets because you can’t figure out a justification for restraining them.
The hard question is what to do about it. We basically have three choices: (1) Resolve that all people will be treated as rational entities who make decisions that they believe will maximize their satisfaction (in other words, pay no attention to what happens in the brain); (2) Create a new theory of justice that doesn’t use mentalistic notions but rather focuses on maximizing the overall health of society; (3) Muddle through as best we can. What we ought to do is number 2 — what I expect that we will actually do is number 3.
I’ll reply in more detail later. About people in the street: I think there are parallel reasons for restraint and restriction, some of them independent of whether the person is really responsible. Protection of society is a strong consideration. The central issues of responsibility, as I see it, are choice and understanding.
The first assumption is to judge people by behavior, not intent. What they do, not what they think. This can be mitigated if their thinking is compromised.
Yes – this is an interesting start to a discussion, but I hope not the end. I agree with Bill’s comments, and I would also vote for 2 but expect 3.
I don’t think the idea of responsibility can easily be made to fit with observations that increasingly suggest that a lot of behaviour, and the brain activity that underlies it, is unconscious, and that our intuition and explicit memories are a poor guide to our motivations and internal processes. However, thinking in terms of responsibility is probably a useful heuristic that guides individual decisions and societal principles of justice so that they are roughly aligned with the overall health of society in Bill’s point 2. For example, if a murderer is jailed, it is beneficial if they are exposed to an aversive treatment (jail) that reduces the likelihood that they’ll reoffend (if that’s true empirically) and physically removes them from society so as to limit the danger they pose. It’s interesting to think about cases where the degree of ‘responsibility’ (or intent) varies, and to consider whether, in cases of reduced responsibility, the balance of costs and benefits of different treatments should be different for maximum benefit.
Maybe a way to reconcile these ideas is to consider that concepts of free will, responsibility, guilt, etc. (even consciousness itself, and social impulses and drives such as revenge) have evolved, in part or in whole, in response to natural selection based on something correlated with the overall health of society. However, like other heuristics (implicated in cognitive biases), they may be ‘good enough’ for biological fitness without necessarily being optimal. We can probably work out a better way to societal health and well-being by focusing on evidence-based measures that directly maximise it, as Bill suggests.
Tom, yes, the beginning of discussion.
I don’t feel we yet have the information to say that high-level decisions or goals are unconscious. Moment-to-moment behavioral modifications may be, almost certainly are, unconscious, but I don’t think deliberations are unconscious. This is a fascinating area for discussion. Law appears to take this, in part, into consideration: was the action “premeditated”? A spontaneous, unpremeditated attack may be done without conscious supervision. This is, properly, a mitigating factor in the courts. The legal system seems to understand what we are discovering in this part of neuroscience. But I don’t believe that strategic action is, or can be, unconscious.
But I am not sure that the fact that one has the conscious intuition that a choice was available necessarily implies that you could have made it. I think i) the conscious deliberation may not be a critical part of the process that led to a decision, and ii) even if it is, the outcome of conscious deliberation may in any event be determined by unconscious/biological mechanisms or ‘chance’ (indeed I can’t see any non-paranormal alternative). I don’t see much room for free will, and I can’t see ‘who’ is responsible (i.e., it seems like it must be a homunculus who is doing the deliberating).
Nonetheless I think the illusion of a ‘me’ that is doing the thinking, and the idea that I could have made alternative choices, may be valuable, not least for understanding others’ behaviour and knowing how to respond to it in a socially useful way (more accurately, for generating adaptive responses to others’ behaviour). These could thus be “symptoms” that might be taken into account in implementing justice at the societal level; if someone has deliberated to the point of generating alternative strategies and still went on to murder, it may indicate a greater risk and/or reduced sensitivity to societal norms or punishments than someone who kills without reflection, impulsively or through recklessness. So even if everything is either deterministic or stochastic, there may still be a place for evaluating free will and deliberation, as long as we don’t regard these as truly causal. However, it could also be that the current weight we place on conscious intuitions about intentions and alternative courses of action, and on explicit memories of these, is greater than it should be for an optimal outcome.
(responding to both Tom and John) I’ll try to summarize my view on this, though I’m afraid it might be hard to understand without explaining the background. My view is that the mind should be treated as a virtual entity, and concepts such as belief, understanding, and responsibility as virtual properties. The important thing about virtual entities and properties is that they are not implemented directly — they are implemented by implementing the behaviors that are associated with them. In other words, if we look in the brain for a belief, we won’t find it in any explicit sense, instead we will find a set of mechanisms for implementing the behaviors that we associate with the word “belief”.
The upshot is that notions such as responsibility are coherent and useful at the whole-person level as long as certain conditions are met, such as rationality. But if we look for them inside the brain, they dissolve into lower-level things that don’t have the properties we naively expect. We could react to that by calling them illusions, but I think it is more useful to treat them as part of the human-to-human interface — a set of mechanisms we have developed for interacting with one another.
Thus a concept such as “pre-meditation” is useful at the whole-person level, but if we try to look inside the brain to decide whether an action was pre-meditated, we find that pre-meditation dissolves into a set of scattered and disunified mechanisms, and there is no principled way of coming up with an answer.
Very interesting; I’m sure it’s not going to be easy or short to give a full explanation. First, I’m trying to understand what a “virtual entity” is. A direct translation of “virtual” is “imagined”; this doesn’t get us very far, and is not, I think, what you are getting at. A second notion is that these ‘virtual entities’ are ‘emergent properties’, in that they cannot be understood from their component parts. My view is that there are two types of emergent properties. The first is something like the concept of ‘cloud’. This is a low-level type of emergent property and CAN, in principle, be understood by careful and complex construction from the component parts. In that sense, ‘cloud’ does not exist; it’s a short-hand generalization of a huge number of component parts. And, unless there is a ‘mind’ to conceptualize cloud, there is no cloud at all. The second type of emergent property is one that cannot be understood by simply adding up the properties of component parts. This is what “mind” seems to be (the “hard” problem of Chalmers). But here we’ve come full circle, by saying a “virtual” thing is something like “mind”, which we really don’t understand.
Getting back to Bill, perhaps this is where you are going. As I see it, science can be applied at the level of biology (neuroscience) and at the level of the mind (psychology), and we can see how the two correlate, but we don’t have true explanations for mind. A person can have brain damage, and the consequences of that damage on the functioning of the remaining neurons, etc., can be studied at the neuroscience level. The same person can exhibit distorted mental operations, which can be studied, categorized, etc. But this does not lead to a causal (biological) explanation of why his mental operations are distorted, or of who or what is responsible. The ‘who’ does not reduce to a ‘what’. Who is responsible is the person with the brain damage. What is responsible is the brain damage. But you can’t blame the who on the what. (Sounds like gibberish, I know, but it makes sense to me.)
Virtuality, as I use the word, is a concept that derives from computer science. A virtual entity is something that replicates the interface of another entity, but with a different internal mechanism. In other words, a virtual entity is a duplicate as seen from the outside but not as seen from the inside. That’s the basic concept, but it can be extended to “pure virtuality”, in which an entity is defined entirely by its interface, with no reference to the mechanism that implements it.
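To make the computer-science sense of virtuality concrete, here is a minimal Python sketch (the class names and the addition example are my own, purely for illustration): two objects present identical interfaces but differ entirely in internal mechanism, so seen from the outside they are duplicates.

```python
class NativeAdder:
    """Adds numbers using the interpreter's built-in addition."""
    def add(self, a, b):
        return a + b


class VirtualAdder:
    """Replicates NativeAdder's interface with a different internal
    mechanism: repeated incrementing (assumes b is non-negative)."""
    def add(self, a, b):
        result = a
        for _ in range(b):
            result += 1
        return result


# Seen from the outside (the interface), the two are indistinguishable:
for adder in (NativeAdder(), VirtualAdder()):
    assert adder.add(2, 3) == 5
```

The point of the sketch is only that nothing about the interface reveals which mechanism lies behind it; in that sense the second adder is a “virtual” version of the first.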
Virtuality is interesting in part because it provides a level of existence in between reality and fiction. A virtual entity is not real, but it behaves as though it were real. It is impossible for a fictional Sherlock Holmes to solve real crimes, but for a virtual Sherlock Holmes — that is, for a simulation of Sherlock Holmes that has an interface that allows it to interact with humans in the same way a “real” Sherlock Holmes would — it *would* be possible to solve real crimes.
So when I say that responsibility is a virtual property, I mean that it is defined by its external manifestations, not by the brain mechanisms that implement them.
I think your central premise is exactly right: recognizing counterfactual options when making a decision is what makes you responsible for your decision, since you’ve recognized your choice as “MY CHOICE” among the others available to you. To experience choice is just to select one option from a menu of real alternatives. A viable defense of “insanity” is that an insane defendant lacked the conceptual apparatus to make the crucial distinctions which ground our common-sense notions of responsibility, and without being able to entertain the concepts of “right” and “wrong” one cannot be held to standards of right and wrong. (It’s why a lion who eats a person is innocent.) This is also why “pre-planned” crimes strike us as most morally egregious; it implies the agent had time to consider and parse the decision, giving him time to entertain many counterfactuals and high-level moral concepts which must inform the final outcome, whatever it may be.
Does the existence of an optional course of behavior available to the consciousness of the agent imply that that alternate course was in fact metaphysically possible? Yes, on some level I think so, but it’s strange. Once an agent makes a choice, for good or evil, he “determines” what outcome was selected from the options available to him, and therefore eliminates any counterfactual possibility; that conscious agent chose THAT, and that is what that conscious agent DOES in that circumstance. But IF the agent had chosen something else, then THAT alternate choice would have been what that agent chose. The reason we can still play the hypothetical game even after the agent has chosen is that we can find no constraint outside of the agent himself as to WHY he made the selection he did. And so, he was free. The will determines itself. Anyway, the best thing I’ve ever read about free will is from Tononi; here’s the link: http://everything-list.105.n7.nabble.com/Tononi-s-Defense-of-Free-Will-td38492.html
Took a quick look at the Tononi link and realized it will take more than a quick look. I like Bob Doyle’s two stage model found on his “information philosopher” web site. He does not claim to be unique or completely original, but he gives good history and synthesis.