Three Laws of Robotics

A set of three laws devised by Isaac Asimov, which most robots appearing in his fiction must obey:

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Asimov attributed the Three Laws to John W. Campbell, from a conversation that took place on December 23, 1940. Campbell, however, claimed that Asimov already had the Laws in mind and that they simply needed to be stated explicitly.

Although Asimov pins the Laws' creation on one date, their appearance in his fiction happened over a period of time. Asimov wrote two stories without the Three Laws mentioned explicitly ("Robbie" and "Reason"), though he assumed that robots would have certain inherent safeguards. "Liar!", Asimov's third robot story, makes the first mention of the First Law but none of the others. All three Laws finally appeared together explicitly in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" was updated to acknowledge the Three Laws.

The Three Laws are often used in science fiction novels written by other authors, though tradition holds that only Asimov himself would ever quote the Laws explicitly.

A trilogy set within Asimov's fictional universe was written in the 1990s by Roger MacBride Allen, with the prefix "Isaac Asimov's" on each title (Caliban, Inferno and Utopia). In it, a set of new laws is introduced. According to the introduction of the first book, these were devised by the author in discussion with Asimov himself.

Some amateur roboticists have evidently come to believe that the three laws have a status akin to the laws of physics, that a situation which violates these laws is inherently impossible. This is incorrect, as the laws are quite deliberately hardwired into the positronic brains of Asimov's robots. The robots in Asimov's stories are incapable of knowingly violating the Three Laws, but there is nothing to stop any robot in other stories or in the real world from being constructed without them.

This is in striking contrast to the nature of Asimov's robots. At first the Laws were simply carefully engineered safeguards, but in later stories Asimov makes clear that it would take a significant investment in research to create robots without them, because the Laws had become the mathematical foundation on which all positronic brains were built.

Unfortunately, in the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to easily understand them.

The Three Laws are sometimes seen as a future ideal by those working in artificial intelligence: once an intelligence has reached the stage where it can comprehend these laws, it is truly intelligent.

None of Asimov's robot stories showed the Three Laws of Robotics working flawlessly. On the contrary, they exposed flaws and misconceptions through very serious glitches. Asimov once wondered how he could have drawn so many stories from the few words that made up these Laws. For a few stories, the only solution was to change the Laws themselves. A few examples:

A later law, called the "Zeroth Law" so as not to lose the famous name "The Three Laws of Robotics", was extrapolated. It was supposedly invented by R. Daneel Olivaw and R. Giskard Reventlov in Robots and Empire, but it had been hinted at earlier, in "The Evitable Conflict", by Susan Calvin.

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
A condition stating that the Zeroth Law must not be broken was added to the original Laws.

Several NS-2 robots (Nestor robots) were created with only part of the First Law. It read:

1. A robot may not harm a human being.
While this may have solved the original problem of robots refusing to allow humans to be exposed to radiation even when necessary and properly time-limited, it caused much other trouble, as detailed in "Little Lost Robot".

The Solarians eventually created robots with the Three Laws as normal, but with a warped definition of "human". As in a short story in which robots were made capable of harming aliens, the Solarians told their robots that only people speaking the Solarian language were human. This way their robots had no problem harming non-Solarian human beings (and indeed had specific orders to do so).

The problem of robots considering themselves human has been alluded to many times, and humaniform robots make it more noticeable. Examples are The Robots of Dawn and "The Bicentennial Man".

After a murder on Solaria in The Naked Sun, Elijah Baley claimed that the Laws had been deliberately misrepresented, because robots could unknowingly break any of them.

A parody of the Three Laws was made for Susan Calvin by Gerald Black:

  1. Thou shalt protect the robot with all thy might and all thy heart and all thy soul.
  2. Thou shalt hold the interests of US Robots and Mechanical Men, Inc. holy provided it interfereth not with the First Law.
  3. Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second laws.

Gaia, the planet with a collective intelligence, adopted a law similar to the First as its philosophy:

Gaia may not harm life or, through inaction, allow life to come to harm.

The Laws are not treated as absolutes by advanced robots. In many stories, such as "Runaround", the potentials and severities of all possible actions are weighed, and a robot will break the Laws as little as possible rather than do nothing at all.
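The weighing described above can be sketched as a toy decision rule (purely illustrative: the scoring function, action fields, and severity values here are invented for the example, not drawn from Asimov). Each candidate action is scored by how severely it would violate each Law, and actions are compared lexicographically, so even a small First-Law violation outweighs any lower-Law concern:

```python
# Illustrative sketch only: the Three Laws as a lexicographic priority
# ordering over candidate actions. All field names and severity numbers
# are hypothetical.

def law_violations(action):
    """Return (first, second, third) violation severities, 0.0 = none."""
    return (
        action.get("harm_to_human", 0.0),    # First Law: harm / inaction
        action.get("order_disobeyed", 0.0),  # Second Law: disobedience
        action.get("self_damage", 0.0),      # Third Law: self-preservation
    )

def choose_action(actions):
    """Pick the action whose violation tuple is lexicographically smallest:
    a higher Law's violation always dominates any lower Law's."""
    return min(actions, key=law_violations)

# A robot weighing inaction against a rescue that risks damaging itself:
candidates = [
    {"name": "do nothing", "harm_to_human": 0.9},  # inaction allows harm
    {"name": "rescue", "self_damage": 0.4},        # robot risks itself
]
best = choose_action(candidates)
print(best["name"])  # the rescue wins: the First Law dominates the Third
```

Because Python compares tuples element by element, `min` with this key naturally encodes the Laws' strict precedence while still letting the robot pick the least severe violation within a single Law.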

All Wikipedia text is available under the terms of the GNU Free Documentation License