“I, Robot” and the Morals of AI Robots

In the collection of stories, I, Robot, Isaac Asimov raises a few philosophically interesting problems. One problem has to do with programming robots to be moral, and act in such a way as to avoid harming either human beings or themselves. Discuss Asimov’s three rules in the context of this issue, and how Asimov explores the difficulties of such a project in the story “Runaround”.  Do you think finding the correct set of rules is possible?  Explain.

In the collection I, Robot, Isaac Asimov lays out three rules of robotics which all robots in his world must follow. These rules are the foundation of a moral code for robots. The three rules of robotics are the following. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given to it by human beings except where such orders would conflict with the first law. Third, a robot must protect its own existence as long as such protection does not conflict with the first or second laws. I will first explain Asimov’s interpretation of how these rules relate to morality through his story “Runaround.” I will then argue that this set of rules cannot create a perfectly moral world, and finally that there is no possible correct set of moral rules that will always lead to a completely moral world.

“Runaround” is one of nine short stories in I, Robot, the science fiction book by Isaac Asimov that explores various moral questions about robotics, society, and the future. Two engineers, Gregory Powell and Mike Donovan, are trying to harvest selenium on Mercury and reopen the mining stations on the sun side of the planet. They set out to accomplish this with a robot named SPD-13, or Speedy, who is built for the planet’s inhospitable conditions. For humans, the surface is accessible only in special insosuits, which protect against the environment and the sun for up to twenty minutes. The two engineers realize that Speedy has been gone for too long, and upon further inspection they discover that he is circling one of the selenium pools. They find two older-model robots that carry them out to where Speedy is circling, and when they arrive they realize that Speedy is drunk, or the robot equivalent of being drunk: he speaks incoherently, stumbles around, and keeps circling the selenium deposit. This is strange, since by circling Speedy is going against what Donovan ordered earlier, and following orders is the second rule of robotics. Powell and Donovan return to the base because their suits are reaching their limits, and once back they go over the fundamental rules of robotics to figure out what is wrong with Speedy.

They start by restating the three rules of robotics: [1] a robot may not injure a human being or, through inaction, allow a human being to come to harm; [2] a robot must obey the orders given to it by human beings except where such orders would conflict with the first law; [3] a robot must protect its own existence as long as such protection does not conflict with the first or second laws. The rules are ranked in a strict hierarchy: rule one overrides rules two and three, and rule two overrides rule three. Speedy is an expensive robot, so, as Powell and Donovan explain, rule three has been strengthened in his design. Since Donovan did not add any urgency to his order when he instructed Speedy to collect the selenium, the two engineers deduce that rules two and three have reached an equilibrium. Donovan ordered Speedy to get selenium, which engages rule two. There is volcanic activity near the selenium pool that threatens Speedy, which engages rule three. Usually rule two would outweigh rule three and Speedy would get the selenium despite the danger, but in this runaround case rule three has been reinforced. Speedy moves away from the danger; once he is far enough away, rule two takes over again and he moves back toward the selenium pool; when he gets too close, rule three kicks in and pushes him away again, and this continues in an endless cycle. Powell and Donovan solve the runaround by appealing to the first rule: Powell goes out toward Speedy, risking his own life in the heat, and since Powell would die if Speedy did not carry him back to base, the first rule overrides the deadlocked second and third rules and Speedy snaps out of the loop. While there are no explicit critiques of morality or moral theory in “Runaround,” there are many underlying critiques of the idea of a perfectly moral world.
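Although Asimov describes this equilibrium only in prose, it can be pictured as two opposing pressures that balance at a fixed distance from the pool. The toy sketch below is my own illustration, not anything from the story: the function names, weights, and numbers are invented. It treats rule two as a constant pull toward the selenium (set by how urgent the order was) and rule three as a push away from the danger that grows as Speedy gets closer; with a casual order and a strengthened rule three, the distance settles on a ring around the pool, which is exactly the circling Powell and Donovan observe.

```python
# Toy model of the "runaround" equilibrium (illustrative only; all values invented).

def rule_two_pull(order_urgency: float) -> float:
    """Attraction toward the pool: fixed strength set by how urgent the order was."""
    return order_urgency

def rule_three_push(distance: float, self_preservation: float) -> float:
    """Repulsion from the danger zone: grows as Speedy gets closer to the pool."""
    return self_preservation / max(distance, 0.01)

def simulate(order_urgency: float = 1.0, self_preservation: float = 5.0,
             start: float = 10.0, steps: int = 30) -> None:
    """Step Speedy toward or away from the pool depending on which rule 'wins'."""
    distance = start
    for _ in range(steps):
        net = rule_two_pull(order_urgency) - rule_three_push(distance, self_preservation)
        distance -= 0.5 * net  # move closer if rule two wins, retreat if rule three wins
        print(f"distance from pool: {distance:5.2f}")
    # With a weak order and a strengthened rule three, the distance settles near
    # self_preservation / order_urgency, and Speedy simply circles at that radius.

if __name__ == "__main__":
    simulate()
```

In this sketch, raising order_urgency (an urgent command) shrinks the equilibrium radius until Speedy reaches the pool, while raising self_preservation pushes him further out, which mirrors how the balance in the story depends on how the order was given.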

One thing Asimov highlights in this story is that even a set of supposedly perfect moral rules can break down in certain situations: situations involving human error, situations that are complex, and situations with many variables that affect how a given rule is interpreted. When describing Speedy’s malfunction, Asimov likens him to being drunk: he is not thinking straight and does not make decisions the way he normally would. But while humans become drunk through their own actions, Speedy malfunctioned due to external circumstances that could not have been predicted. While this is possibly reading into the text, the difference highlights how complexity can lead to a breakdown of rules that would otherwise produce the correct action. Notably, the runaround situation is also partly Donovan’s fault, since he did not put enough urgency into his order. Human error is pervasive, and even if the three rules of robotics were themselves infallible, human error would still lead to a world that is not perfectly moral.

At the core of this story is the presupposition that a perfectly moral world can be created through a set of rules. Basing morality on rules condenses it into if-then statements, and this is essentially a consequentialist understanding of morality: morality describes how things ought to be, and on a more practical level it answers the question of how we should act in any given situation. Two main issues arise here. First, there is the issue of which goals or desired outcomes we should consider moral. Second, there is the issue of knowing which actions will actually reach that desired outcome. The first is an issue of values; the second is an empirical issue. Both must be resolved for consequentialism to work as a moral theory.

“Runaround” does not directly touch on how values affect morality, but values are a serious problem for consequentialism. The rules of robotics encode three values: human safety above all, obedience to human orders, and self-preservation. If human safety is the highest moral principle, this can lead to some intuitively questionable situations. For example, someone who is already dying, is in extreme pain, and wants to end their life would not be allowed to do so. Also, in cases where people will die no matter what the robot does, how does it decide which group dies? Likely it would make a utilitarian decision that saves the largest number of people. Another variation of this scenario is even more questionable. If a robot holding the values of the three rules of robotics were aware of an impending ethnic genocide, it would be morally required to save those who would be killed. Since inaction would allow the genocide to take place, the robot would be morally required to preemptively kill the nation’s leaders or members of its military, since in the end that would save many more lives. But killing violates the first rule of robotics, so the robot would not be permitted to kill, yet its inaction would lead to even more death. This is a paradox: a situation in which the rules leave no clear moral action.

One could argue that this scenario is overly simplistic and that there could be other ways to prevent the genocide. While this may be true, it raises the empirical issue. There is no way to grasp all the variables that affect how the world changes, and so there is no perfect way to know which actions will lead to a given outcome. We can generalize and understand causality to a certain extent by limiting the variables we consider, but there are always possible confounding variables, which leaves a residual level of uncertainty. In a world of uncertainty there is no way to have a perfect set of moral rules, and consequentialism is merely idealistic. Yet even in a world without the empirical issue, consequentialism falls apart, and there still could not be a perfect set of moral rules.

Even if there were a way to know every effect of every action, on every level of society and on the physical world, differences in values would make a perfectly moral consequentialist world impossible. Each human has a unique set of moral beliefs and, more importantly, a different ranking of which values matter most. If there were a set of rules that best preserved the lives of all humans on the planet, it could still conflict with the worldview of someone who values freedom over the preservation of life. Under a morality of pure preservation, one could imagine that the best way to achieve this would be to minimize risk and have all humans live in a tightly controlled environment, or, taken a step further, in the Matrix.

The logical breakdown of this argument is the following. [P1] One acts morally only if one knows the results of an action and those results are in alignment with the morally desired outcome (or value). [P2] It is impossible to know with certainty the outcomes of many actions, and it is impossible for everyone to align on the morally desired outcome. [C] So, since a perfectly moral world would require everyone to always act morally, it is impossible for there to be a perfectly moral world.

Letting K stand for “the outcomes of actions can be known with certainty,” A for “everyone aligns on the morally desired outcome,” and W for “a perfectly moral world is possible,” the argument has the form:

W → (K ∧ A)   [from P1]

¬K ∧ ¬A   [P2]

∴ ¬W   [C, by modus tollens]
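As a check on the structure, the argument can be sketched in the Lean proof assistant. The labels W, K, and A are just the ones introduced above, and this only shows that the form of the argument is valid, not that its premises are true.

```lean
-- Sketch: the argument above is an instance of modus tollens.
-- W : a perfectly moral world is possible
-- K : the outcomes of actions can be known with certainty
-- A : everyone aligns on the morally desired outcome
theorem no_perfectly_moral_world (W K A : Prop)
    (p1 : W → K ∧ A)   -- P1: a perfectly moral world requires knowledge and alignment
    (p2 : ¬K ∧ ¬A)     -- P2: neither condition can be met
    : ¬W :=            -- C: a perfectly moral world is impossible
  fun hW => p2.1 (p1 hW).1  -- assume W, derive K, contradict ¬K
```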