What are the Three Laws of Robotics?

The laws of robotics are a set of principles or rules intended to serve as a fundamental framework underpinning the behavior of robots designed to have a large degree of autonomy. Although robots of this degree do not yet exist, they are widely expected to emerge as technology advances.

Such robots have also been depicted in science fiction films and are an active topic of research in the fields of robotics and artificial intelligence. The rapidly evolving field of robotics is producing a vast array of devices, from autonomous vacuum cleaners to military drones to entire production lines in factories.

At the same time, artificial intelligence and machine learning increasingly power the software that affects our lives on a daily basis, whether we are searching the internet or being allocated government services.

All of these developments are rapidly leading to a time when robots of all kinds will be widespread in almost every aspect of human society and interactions between humans and robots will increase significantly.

What are the Three Laws of Robotics?

The three laws of robotics are a set of rules devised by the science fiction author Isaac Asimov, and they are also known as Asimov’s laws. They were first introduced in his 1942 short story “Runaround” but are better known from his 1950 collection “I, Robot”.

The three laws in the story are quoted as being from the “Handbook of Robotics, 56th Edition, 2058 AD”. These three laws are:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws form an organizing philosophy and a major theme in all of Isaac Asimov’s robot-based fiction, appearing frequently in his Robot series and in other stories linked to it.
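Although Asimov framed the laws as fiction rather than as an engineering specification, their defining property is a strict priority ordering: the First Law outranks the Second, which outranks the Third. The sketch below is a minimal, purely illustrative Python rendering of that ordering; the Action type, its boolean fields, and the choose_action helper are invented for this example and do not come from any real robot control API.

```python
from typing import Callable, List, NamedTuple


class Action(NamedTuple):
    description: str
    harms_human: bool           # would carrying this out injure a human?
    disobeys_human_order: bool  # does it ignore an order given by a human?
    destroys_robot: bool        # would it sacrifice the robot itself?


def first_law(a: Action) -> bool:
    # First Law: the action must not injure a human being.
    return not a.harms_human


def second_law(a: Action) -> bool:
    # Second Law: the action must not disobey an order given by a human.
    return not a.disobeys_human_order


def third_law(a: Action) -> bool:
    # Third Law: the action must not needlessly sacrifice the robot itself.
    return not a.destroys_robot


# Priority order matters: earlier laws dominate later ones.
LAWS: List[Callable[[Action], bool]] = [first_law, second_law, third_law]


def choose_action(candidates: List[Action]) -> Action:
    """Pick the candidate whose law compliance is lexicographically best:
    satisfying the First Law outweighs the Second, which outweighs the Third."""
    return max(candidates, key=lambda a: tuple(law(a) for law in LAWS))


# Example: ordered to do something harmful, the robot prefers to refuse
# (violating only the Second Law) over obeying (violating the First).
obey = Action("carry out the harmful order", True, False, False)
refuse = Action("refuse the order", False, True, False)
assert choose_action([obey, refuse]) is refuse
```

The lexicographic key is what encodes the hierarchy: a candidate that satisfies a higher-priority law always beats one that does not, regardless of how the lower-priority laws come out.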

Why are These Laws Important?

These laws are important because they serve as a safety measure for humans; they were devised to protect people in their interactions with robots.

Isaac Asimov’s laws are still cited as a template to guide the development of robots. In 2007, the South Korean government proposed a “Robot Ethics Charter” that reflected Asimov’s laws.

Are the Three Laws of Robotics Real?

The three laws of robotics are not real in the sense that no robots have actually been programmed with them. As we move closer to the world envisioned by Isaac Asimov, there is growing debate over whether we need Asimov-like laws to govern the behavior of robots.

Many people argue that there is no need for such laws because robots can only do what they are programmed to do, and that fear of robots destroying humankind is unfounded, rooted in our culture only because science fiction stories and films so often use plots in which robots destroy their creators.

There have, however, been real conflicts between machines and humans. During the Industrial Revolution in Europe, for example, there was a profound fear of machines and their evident ability to disrupt the lives of a great many people.

The fear of machines was so great in those days that a movement arose to destroy weaving machines, and it was not stopped until Parliament made the destruction of machinery a capital crime.

For reasons like these, it is argued that the fear of robots stems not from the prospect of robots taking over and destroying us, but from the possibility that other humans will use them to disrupt humanity’s way of life in uncontrollable ways.

However, Asimov’s laws remain a concept that guides thinking about robot development as technology advances.

Are the Three Laws of Robotics Flawed?

In Isaac Asimov’s books, the three laws are incorporated into all robots and cannot be bypassed; they are intended as a safety measure for humanity. His robot-focused stories often involve robots behaving in counter-intuitive ways as a result of how each robot applies the three laws to the situation it finds itself in.

The original laws are flawed in that they lead robots to act in unusual ways because they do not cover every eventuality or situation. Asimov himself altered the original three laws in his later books to improve the way robots interact with humans and with each other.

In his later books, where robots take responsibility for governing whole planets and human civilizations, Isaac Asimov adds a fourth law, the Zeroth Law, which takes precedence over the original three.

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
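Continuing the hypothetical sketch above, adding the Zeroth Law amounts to placing one more check ahead of the First Law in the priority order, so that harm to humanity as a whole outranks harm to any individual human; the harms_humanity field is again an invented placeholder.

```python
def zeroth_law(a) -> bool:
    # Zeroth Law: the action must not harm humanity as a whole.
    # getattr() keeps the earlier Action type usable without modification.
    return not getattr(a, "harms_humanity", False)


# Prepending rather than appending is the whole point: the Zeroth Law now
# dominates every other law in the lexicographic ordering.
LAWS_WITH_ZEROTH = [zeroth_law, first_law, second_law, third_law]
```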

When Isaac Asimov devised his three laws of robotics, he is believed to have been thinking about androids. He envisioned a world in which robots would act as servants and would therefore need a set of programming rules to prevent them from causing harm to humans.

In the more than 75 years since the publications that feature his ethical guidelines, significant technological advances have taken place, and there is now a different conception of what robots will look like and of how humans will interact with them.

Issues Facing the Laws of Robotics

One of the major issues facing these laws is that today’s robots are far more varied, and in some cases far simpler, than the ones in Asimov’s stories. This raises the question of whether there should be a threshold of complexity below which such rules are not needed.

We have robots programmed to perform a single predetermined task, and it is very hard to believe that a robot vacuum cleaner could have the capability to harm humans or would even require the ability to obey orders.

There are also robots designed to operate in military combat environments, for purposes such as surveillance, bomb disposal, and load carrying. These purposes appear to be in line with Asimov’s laws, as the robots are created to sharply limit the risk to human lives in very dangerous environments.

It can also be assumed that the ultimate military goal in robotics is to create armed robots that can be deployed on the battlefield. In this scenario, the first law of not harming humans or allowing them to come to harm becomes very problematic, because the military protects the lives of civilians and its own soldiers largely by harming its enemies on the battlefield.

A question also arises as to what counts as harming humans. Some robots are designed to help care for humans and to perform certain functions for them, which can lead their owners to develop emotional attachments to them.

This could eventually lead to emotional or psychological harm arising from the robots’ actions, harm that may not become apparent until many years after the human-robot interaction has ended. The same issue applies to simpler systems, such as artificial intelligence that uses machine learning to create music intended to elicit particular human emotions.

In artificial intelligence, the goal is to make robots that can think and act as rationally as humans do, but the emulation of human behavior is not yet well researched and has been explored only in limited areas.

With this in mind, a robot would only be able to operate within a limited sphere, and rational application of the laws would be highly restrictive. Moreover, with current technology, a system that could reason and make decisions based on these laws would require considerable computational power, which is not cost-effective at this time.

Other Variations of the Three Laws of Robotics

Given all the issues facing the laws of robotics, a large number of variations have been proposed by many people. Some of these variations are:

By Roger MacBride Allen

His modification of the laws follows the idea that robots should be partners rather than slaves, and it differs from the originals as follows:

  • The first law is modified to remove the ‘inaction’ clause.
  • The second law is modified to call for cooperation rather than obedience.
  • The third law is modified so that it is not superseded by the second law.
  • He also adds a fourth law that permits the robot to do whatever it likes, as long as this does not conflict with the previous laws.

Other additions to the laws include:

A fourth law, added in 1974 by Lyuben Dilov, which states: “A robot must establish its identity as a robot in all cases.”

A fifth law, added by Nikola Kesarovski, which states: “A robot must know it is a robot.”

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain also jointly published their own set of five laws.

Many other laws and variations have been added to the original laws, and more may follow as the field of robotics advances. Asimov’s laws, however, offer founding principles for anyone looking to create a robotic code today.
