Rules and Values for Moral Robots

I’d like to start by thanking Gary Marcus for launching a terrific series of neuroscience discussions from his post as the New Yorker’s neuroscience blogger. Each topic has been juicy, and each perspective fresh.

This continues the discussion from Gary’s first post, Moral Machines. Like Gary, I’m going to focus on the design strategy for programming moral behavior into a robot.

Gary’s post focuses on Isaac Asimov’s rules for programming moral behavior into robots. The most prominent commandment is the first:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Gary explores whether this is sufficient.

I’m going to take a different starting point. This is largely borrowed (stolen?) from the ideas of Joshua Greene and is influenced by concepts in neuroeconomics. The first choice you, the programmer, have to make is whether to program rules or values.

Rule-based morality is, roughly: given situation X, always do response Y (or never do response Z).
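To make the contrast concrete, here is a minimal sketch of what a rule-based system could look like in code. The situation labels, the responses, and the rule_based_decision function are hypothetical placeholders for illustration, not any real robotics API:

```python
# A minimal sketch of rule-based morality: a fixed table mapping
# recognized situations to mandated (or forbidden) responses.
# All labels here are invented for illustration.

RULES = {
    "human_in_danger": "intervene",      # given situation X, always do Y
    "asked_to_injure_human": "refuse",   # never do Z
    "routine_task": "proceed",
}

def rule_based_decision(situation: str) -> str:
    """Look up the pre-programmed response for a recognized situation."""
    if situation in RULES:
        return RULES[situation]
    # The core weakness: a situation the programmer never anticipated
    # has no rule, and the robot has no guidance at all.
    return "no_rule_available"

print(rule_based_decision("human_in_danger"))  # -> intervene
print(rule_based_decision("novel_dilemma"))    # -> no_rule_available
```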

Value-based morality is, roughly: compute and perform the behavior that is most likely to produce maximal moral value. This implies that the robot can measure values, which might be something like human happiness or human survival. Importantly and significantly, in principle the programmer can insert any value system (lots of sci-fi adventures here).
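Here is the matching sketch of a value-based system, under the (large) assumption that the robot has an outcome model and a value function for scoring outcomes; predicted_outcome and moral_value below are invented for illustration:

```python
# A minimal sketch of value-based morality: predict the outcome of each
# candidate action, score it with a value function, and perform the
# highest-scoring action. The outcome model and the value function are
# hypothetical stand-ins, not a real system.

def predicted_outcome(action: str) -> dict:
    """Placeholder model of what each action is expected to cause."""
    return {
        "intervene": {"humans_helped": 1, "humans_harmed": 0},
        "stand_by":  {"humans_helped": 0, "humans_harmed": 1},
    }[action]

def moral_value(outcome: dict) -> float:
    """Score an outcome; here, simply helped minus harmed."""
    return outcome["humans_helped"] - outcome["humans_harmed"]

def value_based_decision(actions: list) -> str:
    """Pick the action whose predicted outcome has the highest moral value."""
    return max(actions, key=lambda a: moral_value(predicted_outcome(a)))

print(value_based_decision(["intervene", "stand_by"]))  # -> intervene
```

The point of the separation is that the programmer can swap in a different value system by changing moral_value alone, for better or for worse.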

Why are there two systems? Joshua Greene suggests that the rule-based system is easy and efficient, while a value-based system is slow but, ultimately, closer to moral standards. I first heard Greene describe this about a year ago in an internet audio broadcast, where he drew an analogy to a modern digital camera with automatic and manual modes. I loved the idea and the analogy and made a slide (with attribution). When I heard Greene speak a few months later, he had made an almost identical slide. The picture above is the one I made.

Let’s return to the task of programming our robot. Should we use rules or values?

Rule-based programming is a lot easier: simply specify all of the conditions and all of the responses. The problem, clearly, is predicting all possible situations. Contingencies cannot all be anticipated, and there will be mistakes, big time.

Value-based programming has its own problems. First and foremost, what are the values, and what weightings should you use to obtain maximal value? While easy cases are easy, this is the issue that has confronted moral philosophy for millennia. Second, the computation may be difficult, inaccurate, and slow, messing up both the responses and the response time.
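The first problem can be made concrete: the same outcome model, scored under two different (entirely hypothetical) weightings of the same values, recommends different actions, and nothing in the code tells you which weighting is morally correct:

```python
# A sketch of the weighting problem: two weightings of the same values
# recommend different actions. The scenario and numbers are invented.

OUTCOMES = {
    "divert":     {"lives_saved": 1, "promises_broken": 1},
    "do_nothing": {"lives_saved": 0, "promises_broken": 0},
}

def weighted_value(outcome: dict, w_lives: float, w_promises: float) -> float:
    """Weighted sum of values; the weights encode the moral theory."""
    return w_lives * outcome["lives_saved"] - w_promises * outcome["promises_broken"]

def best_action(w_lives: float, w_promises: float) -> str:
    """Return the action with the highest weighted value."""
    return max(OUTCOMES, key=lambda a: weighted_value(OUTCOMES[a], w_lives, w_promises))

print(best_action(w_lives=5.0, w_promises=1.0))  # -> divert
print(best_action(w_lives=0.5, w_promises=1.0))  # -> do_nothing
```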

While there are no easy answers, I think this is the programming framework robot programmers should employ. Perhaps they do.

Greene, I, and others feel that humans use both rule-based and value-based moral decision-making. Further, there is interplay between the two. One can deliberate over a moral quandary and create a moral rule, or one can practice moral behavior until it becomes automatic. Shifts can go the other way as well. One may hold a moral rule, such as “never lie”, that is counterproductive to moral values in certain situations. An individual who maintains the “never lie” rule may deliberate and tell a lie if doing so leads to a higher moral value.
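A toy sketch of that interplay, in the spirit of the camera analogy: consult the fast rule table first, deliberate over values only when no rule applies, and let the deliberated answer become a new rule. Every situation name, action, and score below is invented for illustration:

```python
# Dual-system sketch: automatic mode is a cheap rule lookup; manual mode is
# slow value-based deliberation whose answer is then cached as a new rule
# ("practice until it becomes automatic"). All names are hypothetical.

RULES = {"asked_to_injure_human": "refuse"}

def deliberate(situation: str) -> str:
    """Slow, value-based deliberation (stand-in for a real value model)."""
    scores = {"tell_truth": 1.0, "tell_white_lie": 2.0}  # the "never lie" quandary
    return max(scores, key=scores.get)

def dual_system_decision(situation: str) -> str:
    if situation in RULES:            # automatic mode: fast and efficient
        return RULES[situation]
    action = deliberate(situation)    # manual mode: slow but flexible
    RULES[situation] = action         # the deliberated answer becomes a rule
    return action

print(dual_system_decision("asked_to_injure_human"))  # -> refuse (by rule)
print(dual_system_decision("friend_asks_opinion"))    # -> tell_white_lie (deliberated)
print(RULES)                                          # the new rule is now cached
```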

What about the Asimov implementation of morality? I feel it’s poorly formed: it is not clearly a rule-based or a value-based system. It sounds like a rule (“may not injure”) built on values (“harm”). As such, there are lots of situations in which the robot is simply confused, as when an action would cause unequal harm to different people. My suggestion: start over.

A final speculation relating to neuroscience. As with other habits, I’d guess that rule-based morality is driven by the basal ganglia. And, as with other examples of deliberative choice, I’d guess that value-based moral decisions rely more heavily on the frontal lobes. For this we need more science.
