Isaac Asimov, one of the greatest science fiction authors of all time, wrote stories about numerous futuristic topics, ranging from the colonization of space to the concept of “psychohistory” to the introduction of robots into human society. Robots in particular were a subject Asimov wrote about heavily, although he took a different approach from other science fiction authors of his time. While much of the media of the era depicted robots as machines that would inevitably turn against their creators, Asimov introduced a series of laws that would govern robots’ behavior in society. These are the Three Laws of Robotics.
The Three Laws of Robotics became an important feature of many of Asimov’s works, but they are dealt with mainly in his Robot series, which covers the development of robots in Earth society and their use in the exploration of space. The laws are as follows:
1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Three Laws first appeared in full in the story “Runaround,” published in 1942, although they had been hinted at in earlier stories. Later on, Asimov introduced a Zeroth (or Fourth) Law, which stated:
0.) A robot may not harm humanity, or by inaction, allow humanity to come to harm.
Unlike the first three laws, the Zeroth Law was not introduced until robots were being used to govern entire planets and civilizations. What sets these laws apart from ordinary human laws is that humans can choose whether or not to follow a law, whereas in Asimov’s stories the Three Laws are deliberately hardwired into the robots’ “positronic” brains (essentially the robot’s CPU, which also provides its consciousness). Thus, the robots are designed to be incapable of breaking the Three Laws (and the Zeroth Law).
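The strict hierarchy described above, in which each law yields to the one before it, can be sketched as a simple priority check. The snippet below is a hypothetical toy for illustration only: the `Action` type, its boolean flags, and the `choose` function are invented here, not drawn from Asimov or from any real robotics system.

```python
# Toy sketch (not from Asimov or any real system): the Three Laws plus
# the Zeroth Law encoded as a strict priority ordering. Each candidate
# action gets a tuple of boolean "violation" flags, ordered from the
# Zeroth Law down to the Third; lexicographic comparison of the tuples
# then makes a higher law always outweigh every law below it.

from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False  # Zeroth Law violation
    harms_human: bool = False     # First Law violation
    disobeys_order: bool = False  # Second Law violation
    endangers_self: bool = False  # Third Law violation

def violation_rank(a: Action) -> tuple:
    # False sorts before True, so fewer high-priority violations rank better.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(actions: list) -> Action:
    """Pick the action with the least serious violations under the law hierarchy."""
    return min(actions, key=violation_rank)

# The Second Law says a robot must obey "except where such orders would
# conflict with the First Law": disobeying ranks better than obeying at
# the cost of harming a human.
obey_and_harm = Action(harms_human=True)
refuse_order = Action(disobeys_order=True)
assert choose([obey_and_harm, refuse_order]) is refuse_order
```

Of course, the sketch simply assumes the violation flags are already known; as discussed below, deciding whether an action actually constitutes “harm” is precisely the part critics consider intractable.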
Aside from Isaac Asimov’s novels, the Three Laws of Robotics have been featured in other science fiction media since their creation. For example, the 1956 film Forbidden Planet featured a now-famous automaton named “Robby the Robot,” who was programmed with the Three Laws. The horror films Alien and Aliens also include androids, or synthetic humans (robots designed to look like humans), who are programmed with “Behavioral Inhibitors” that emulate the Three Laws. Nor is the influence limited to film: many other science fiction writers have utilized or modified the Three Laws in their own novels. However, despite their widespread adoption in science fiction, the question remains whether Asimov’s Three Laws would be applicable to robotics in real life.
In Asimov’s Robot series, the Three Laws of Robotics are programmed into the robots’ brains by the fictional corporation U.S. Robots and Mechanical Men, Inc. This is done in an effort to combat “the Frankenstein complex,” the idea that a robot (or any other scientific creation) will inevitably harm those who created it. Although the term “Frankenstein complex” was coined for Asimov’s novels, it has connections to the real world. For example, the Association for the Advancement of Artificial Intelligence (AAAI) suggests that once intelligent, self-aware robots enter society, there will be a great deal of public fear and prejudice toward robots that will have to be overcome.
Even though much of this distrust toward robots and Artificial Intelligence (A.I.) stems from science fiction, such as The Terminator or Call of Duty: Black Ops 2, there is a growing influence from the real world, such as the increasing use of drones, or Unmanned Aerial Vehicles (UAVs), to kill individuals in the Middle East. In one 2007 incident in South Africa, a computerized anti-aircraft cannon malfunctioned, killing nine soldiers and wounding fourteen others. Thus, it would make sense for the manufacturers of robots to implement some sort of safeguard in order to build public trust in robotics.
Regardless, many in the field of A.I. today argue that Isaac Asimov’s Three Laws of Robotics would be unable to guide A.I. development or robots’ actions. The major issue is that the Three Laws are too vague and open-ended to effectively guide a robot’s actions in the field. For example, in order to derive any meaningful guidance from the Three Laws, a robot would need an A.I. at least equal to that of the average human. That is because a robot would have to not only comprehend what constitutes “harm” for a human, but also possess the foresight to determine whether its actions could possibly lead to harm.
This means that the real issue with the Three Laws of Robotics is not whether they could guide robots, but whether they could guide A.I. Many scientists involved in the development of A.I. have also stated that it would be impossible to develop any effective or “safe” laws similar to the Three Laws to govern A.I., because no super-intelligent A.I. (or S.A.I., for Super A.I.) has been developed yet. Laws or protocols could not be developed until after S.A.I. had been created and studied.
Other individuals, such as Ulrike Barthelmess and Ulrich Furbach, argue that humans feel the need for rules such as the Three Laws because of works such as Frankenstein, the Jewish legend of the golem, and Rossum’s Universal Robots (R.U.R.). These early and influential works all carry heavy religious undertones reinforcing the notion that humans should not play God or create life. However, these works depicted the creation of living creatures rather than machines (although R.U.R. depicts its “robots” as manufactured but nonetheless living humans). Barthelmess and Furbach even go back as far as the story of the Greek titan Prometheus, who gives fire to humans and is punished by the gods, showing that the theme of humans being punished for overreaching is thousands of years old. Regardless, the religious tones of such famous works have carried on to today and affect public opinion toward robots and A.I.
However, Barthelmess and Furbach also suggest that the human fear of robots is based not necessarily on the idea that robots will destroy all life, but rather that they will destroy our way of life. For example, in Japan robots are more welcome in the workplace and as aides to humans because of Japan’s rapidly aging population. In the United States, factories and other workplaces are becoming increasingly automated, to the point that driverless cars, such as those being developed by Tesla and Google, are being introduced. It is the arrival of robots in certain aspects of life that unsettles some people and turns them against the idea of robotics, Barthelmess and Furbach argue.
Of course, many argue that programming in rules as simple as Asimov’s Three Laws would eliminate the issue altogether, because a computer, no matter how advanced its A.I., would have no choice but to obey its programming. If it is programmed not to question the Three Laws (or similar rules), then it will not. It then merely becomes a matter of getting the manufacturers of robots to insert such programming. Unfortunately, unlike in Asimov’s novels, where one company manufactures the world’s robots, the real world is much more diverse. For example, a civilian robotics company may implement such rules, but a Predator drone in the U.S. Air Force would have no need for them.
This issue is also addressed in other works of science fiction. For example, the androids in Alien and Aliens have such programming, but it can be overridden. In Alien, the android Ash reveals that his programming (which comes straight from the company that owns and produces him) was “Bring back life form. Priority One. All other priorities rescinded.” This is problematic for the human members of the crew: although they are all employed by the same company, they are not informed of Ash’s programming, or even that Ash is an android.
Aliens addresses a similar issue. The android Bishop states that, since the events of Alien, androids have been upgraded with “Behavioral Inhibitors” to prevent human deaths caused by robots. This is strange, however, as Bishop is built by the same company that built Ash, implying he may very well carry the same classified programming, which could override the Behavioral Inhibitors. Yet unlike Ash, Bishop does not betray his human counterparts, implying either that he cannot override the Behavioral Inhibitors or that similar classified orders were never activated. There is also the issue that Bishop is owned by the U.S. Colonial Marines and even drives their APC and dropships. His behavior would seem to create a contradiction in his programming: he is programmed not to endanger humans, and yet he actively deploys humans into a warzone where they will likely be injured or killed.
Although Isaac Asimov himself stated in multiple interviews that he believed his Three Laws of Robotics could effectively govern robots, the laws themselves appear too general and too open-ended to do so. There is also the rise of A.I. and the many different uses of robots that Asimov was unable to foresee, which have yet to be sufficiently studied. Perhaps someday, when robots take a larger part in society, some form of the Three Laws can be implemented to help bridge the gap between humans and robots.