The Laws of Robotics by Isaac Asimov (1920-1992)


Fr. Rogerio Gomez, C.Ss.R., professor at the Alphonsian Academy, continues in this blog entry on the Academy's webpage his reflection on the interesting new field of ethics that has emerged with the development of contemporary science and technology.


In the movie Bicentennial Man, the robot NDR "Andrew" recites the Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. A fourth law, added later by the author as the so-called Zeroth Law, reads: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Would these laws apply in the current context? Asimov had already grasped, if only incipiently, the complexity arising from the actions of robots, and he elaborated these rules with certain actions of such machines in mind: built with a level of artificial intelligence, they could achieve some autonomy, acting independently of their programmer. He thus formulated laws that, taken as a whole, have ethical content. Obviously, times have changed, and the applicability of these laws is very limited, since they are intuitive and drawn from the context of science fiction. Even if they were in force today, like all ethics they would not solve every problem and dilemma, but they do open the discussion in this field.

The Engineering and Physical Sciences Research Council (EPSRC), commenting on Asimov's laws, states: "Although they provide a useful departure point for discussion, Asimov's rules are fictional devices. They were not written to be used in real life and it would not be practical to do so, not least because they simply don't work in practice. (For example, how can a robot know all the possible ways a human might come to harm? How can a robot understand and obey all human orders, when even people get confused about what instructions mean?)

Asimov’s stories also showed that even in a world of intelligent robots, his laws could always be evaded and loopholes found. But finally, and most importantly, Asimov’s laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life, it is the humans who design and use the robots who must be the actual subjects of any law.”

Starting from the reflection prompted by Asimov's laws, their limits, and the need for guiding principles for robot designers, builders, and users, an expert group convened by the EPSRC and the Arts and Humanities Research Council at their joint Robotics Retreat formulated the following principles:

1) Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

2) Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.

3) Robots are products. They should be designed using processes that assure their safety and security.

4) Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.

5) It should be possible to attribute legal responsibility for a robot to a person.

Vincent C. Müller offers an interesting critique of these five principles in his "Legal vs. ethical obligations – a comment on the EPSRC's principles for robotics", reflecting on the need to differentiate legal obligations from ethical requirements. Müller takes up the principles, seeks to restructure them on legal and ethical grounds, and also highlights their limits, since this field of knowledge is recent and open to new ethical and legal issues.

Therefore, the validity of Asimov's laws today lies not in their practical application but in the intuition that, in this new field of knowledge, it is necessary to advance further, preparing ourselves theoretically and technically to formulate principles that take account of both the positive aspects and the advances of this area, so that it may expand while always respecting the principle of acting and living in concord between technology and society.

Fr. Rogério Gomes, C.Ss.R.