Governing Lethal Behavior in Autonomous Robots (May 2009)
Publisher: Chapman & Hall/CRC
ISBN: 978-1-4200-8594-5
Published: 27 May 2009
Pages: 256
Abstract

Drawing from the author's own state-of-the-art research, this book examines the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system in autonomous robot systems, taking into account the Laws of War and Rules of Engagement. It discusses how robots can ultimately be more humane than humans on the battlefield. The author addresses the issue of autonomous robots having the potential to make life-or-death decisions and provides examples that illustrate autonomous systems' ethical use of force. He also includes the opinions of the public, researchers, policymakers, and military personnel on the use of lethality by autonomous systems.

Cited By

  1. Omotoyinbo F (2023). Smart soldiers: towards a more ethical warfare, AI & Society, 38:4, (1485-1491), Online publication date: 1-Aug-2023.
  2. Canavotto I and Horty J Piecemeal Knowledge Acquisition for Computational Normative Reasoning Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, (171-180)
  3. Liu H and Zawieska K (2017). From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence, Ethics and Information Technology, 22:4, (321-333), Online publication date: 1-Dec-2020.
  4. Gunkel D (2017). Mind the gap: responsible robotics and the problem of responsibility, Ethics and Information Technology, 22:4, (307-320), Online publication date: 1-Dec-2020.
  5. Sharkey A (2017). Can we program or train robots to be good?, Ethics and Information Technology, 22:4, (283-295), Online publication date: 1-Dec-2020.
  6. Sharkey A (2019). Autonomous weapons systems, killer robots and human dignity, Ethics and Information Technology, 21:2, (75-87), Online publication date: 1-Jun-2019.
  7. Welsh S Regulating Lethal and Harmful Autonomy Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, (177-180)
  8. Wächter L and Lindner F An Explorative Comparison of Blame Attributions to Companion Robots Across Various Moral Dilemmas Proceedings of the 6th International Conference on Human-Agent Interaction, (269-276)
  9. Berreby F, Bourgne G and Ganascia J Event-Based and Scenario-Based Causality for Computational Ethics Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, (147-155)
  10. Kuipers B (2018). How can we trust a robot?, Communications of the ACM, 61:3, (86-95), Online publication date: 21-Feb-2018.
  11. Yilmaz L, Franco-Watkins A and Kroecker T (2017). Computational models of ethical decision-making, Cognitive Systems Research, 46:C, (61-74), Online publication date: 1-Dec-2017.
  12. Berreby F, Bourgne G and Ganascia J A Declarative Modular Framework for Representing and Applying Ethical Principles Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, (96-104)
  13. Yilmaz L Verification and validation of ethical decision-making in autonomous systems Proceedings of the Symposium on Modeling and Simulation of Complexity in Intelligent, Adaptive and Autonomous Systems, (1-12)
  14. Cointe N, Bonnet G and Boissier O Ethical Judgment of Agents' Behaviors in Multi-Agent Systems Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, (1106-1114)
  15. Malle B, Scheutz M, Forlizzi J and Voiklis J Which Robot Am I Thinking About? The Eleventh ACM/IEEE International Conference on Human Robot Interaction, (125-132)
  16. Björk I and Kavathatzopoulos I (2016). Robots, ethics and language, ACM SIGCAS Computers and Society, 45:3, (270-273), Online publication date: 5-Jan-2016.
  17. Johnson A and Axinn S Acting vs. being moral Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, (1-4)
  18. Bringsjord S, Govindarajulu N, Thero D and Si M Akratic robots and the computational logic thereof Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, (1-8)
  19. Bello P Mechanizing modal psychology Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, (1-8)
  20. Tonkens R (2012). Out of character, Ethics and Information Technology, 14:2, (137-149), Online publication date: 1-Jun-2012.
  21. Hourcade J and Bullock-Rest N HCI for peace Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (443-452)
  22. Lokhorst G (2011). Computational Meta-Ethics, Minds and Machines, 21:2, (261-274), Online publication date: 1-May-2011.
Contributors
  • Georgia Institute of Technology

Reviews

George A. Bekey

Robots have evolved rapidly from science fiction and university laboratories to becoming commonplace in industry, education, entertainment, and the military. As their capabilities and ability to operate autonomously have grown, so has concern about possible dangers from their deployment, especially in military applications where they may be equipped with lethal weapons. At long last, questions on the ethical use of robots are being raised. Until the year 2000, the only significant voice on this matter was that of Bill Joy, the former Chief Scientist of Sun Microsystems. Joy raised questions about the potential dangers of autonomous robots, nanotechnology, and genetic engineering [1]. More recently, Wallach and Allen raised a series of ethical issues in connection with autonomous robots [2]. Singer's Wired for war [3] and Krishnan's Killer robots [4] are expositions of developments in robot technology that may dramatically change the nature of war. Now, Arkin, a leader in robotics research and applications, has written this remarkable book concerning methods of controlling potentially lethal actions by autonomous robots, based on his long experience with military robotics programs.

Arkin's book is based on three fundamental assumptions. First, wars will continue to be waged, as they have from time immemorial. This assumption is clearly based on historical evidence and substantiated by psychological research that documents humanity's simultaneous attraction and aversion to war [5]. Second, it is essential to regulate robots' behavior during wartime to ensure that, at the very least, they conform to the laws of war (the internationally accepted codes of conduct for military personnel, as embedded in the Geneva Conventions) and the rules of engagement (ROE) used by US military services to clarify the laws of war. Third, while it may not be possible to design robots that always behave ethically, they can be built to be more ethical than human soldiers in wartime. These assumptions lead to the notion of "ethically justified lethal behaviors" on the part of military robots. The major portion of the book is devoted to a formalization of this concept, including the development of a software architecture for military robots equipped with lethal weapons, which makes such ethical behaviors possible.

There are certainly people who consider ethical killing an oxymoron, but as Arkin points out, the notion of a just war and the international acceptance of the laws of war emphasize limits on the use of lethal weapons. Thus, as discussed in chapter 7, the laws of war require the following: military actions should attempt to limit casualties among noncombatants; wounded enemy soldiers are to be assisted; and the use of lethal force in response to an attack should be proportional to the scale of the attack. These restrictions on the absolute use of lethal force form the basis of the software architecture proposed by Arkin for governing the behavior of military robots.

The first three chapters describe current military robots equipped with lethal weapons and the failings of human soldiers that may lead to violations of the laws of war. Chapter 4 is a review of philosophical approaches to the question of ethics, and chapter 5 is a fascinating report on a survey undertaken by Arkin's laboratory to find out what people think about robots equipped with lethal weapons. Chapter 6 develops the formalism used as a basis for designing ethical controllers. Chapter 7 is "Specific Issues for Lethality: What to Represent," and chapter 8 is "Representational Choices: How to Represent Ethics in a Lethal Robot." The second half of the book begins with chapters 9 and 10, which discuss architectural considerations and design options for ethical controllers. Chapter 11 presents recent military actions in which lethal force was used and describes applicable ethical considerations. The concluding chapter presents a prototype implementation of a control architecture that includes a number of carefully designed restraints on the robot system, to ensure that its use of lethal force conforms to the ethical guidelines laid out in the laws of war and ROE.

This book is a courageous step in the design of autonomous robotic systems that satisfy a set of ethical constraints. It should be required reading for anyone in the computer science (CS) and engineering communities who is interested in military robots and their deployment. It is easy to condemn the military services for using robots equipped with deadly weapons, but Arkin has taken a much more constructive course of action, by showing us how to control them using ethical guidelines.

Online Computing Reviews Service
