Artificial Intelligence and Fully Autonomous Robotic Weapons: The Legal and Moral Necessity for a Ban

Mary Ellen O'Connell | Robert and Marion Short Professor of Law and Professor of International Peace Studies, Kroc Institute, University of Notre Dame


Artificial intelligence (AI) is now being used to produce fully autonomous robotic weapons. These weapons are the next stage of weapons development following remotely piloted or unmanned systems, commonly known as drones. Unlike drones, AI-enabled weapons will need no human intervention in selecting the individuals or groups to kill or the property to destroy. To protect human rights and to promote peace, an international legal prohibition on these inherently immoral weapons is urgently needed.[1]

Formal discussion and assessment of the need for a ban on autonomous weapons have been underway since 2013.[2] Proponents of AI-enabled weapons see them as an essential addition to the national security of advanced technology states.[3] On this view, if legal or ethical norms impede staying ahead of perceived adversaries in the possession of military arms, those norms must be reinterpreted, modified, or dismissed. The Holy See has led the way in articulating a very different position: weapons that dehumanize are counterproductive to achieving true security.[4] This viewpoint maintains that security depends first and foremost on robust respect for legal principles derived from fundamental moral principles. Such principles are not subject to reinterpretation or modification. The proper role of the military and law enforcement is defending the rule of law, not superseding it. This perspective dates to the emergence of law among the earliest human groups as an alternative to physical force in the ordering of society. The primacy-of-law viewpoint has been shaped by the theory of natural law, which combines legal and moral teaching. It provides the basis for efforts by the majority of states, technologists, civil society movements, and the Vatican to achieve a ban on AI-enabled weapons.[5]

The alternative, arms-race viewpoint has been embedded in the foreign and military policies of advanced technology states since the 1960s. Unlike the primacy-of-law view, the arms-race orientation began as a Western academic idea: “Realism”, which emerged in the late 1930s.[6] Following the invention of the atomic bomb, it took hold, and within twenty years it had become the dominant view in states with major militaries. Realism has now spread far beyond these states.[7] It helps account for why more resources are devoted to weapons development than to green technologies, eradicating poverty, or promoting governance institutions within states and at the international level. Realist thinking also helps explain why three unlikely allies, China, Russia, and the United States, have joined together to prevent a ban on AI-enabled weapons.

The United States government has provided a useful definition of these weapons, which also indicates why developing them is consistent with realist thinking: AI weapons are “a special class of weapon system that uses sensor suites and computer algorithms to independently identify a person or persons and/or objects to kill or destroy without manual human control of the system”.[8] AI weapons go a significant step beyond drones in being programmable to select targets unknown at the time the weapon is deployed. Also, unlike current drones, no human operator need be associated with the weapon after deployment. Weapon systems can roam in search of targets without regard to time, place, or human oversight. Any restrictive parameters included in the original programming can potentially be superseded by the computer-learning algorithm running the system.[9] Currently, computer scientists cannot foresee what decisions AI-enabled weapon systems will reach. Learning programs are consistently characterized as “black boxes” – meaning it is impossible to predict how a computer-learning program will reach decisions, because the process by which massive data inputs are turned into outputs cannot be observed.[10] The algorithms are black boxes even to their creators, who simply cannot map out the decision-making process of these complex networks of artificial neurons in order to predict outcomes.
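The black box point can be made concrete with a toy example. The following sketch, in Python with NumPy, is purely illustrative and drawn from no actual weapon system: it builds a minimal two-layer neural network whose entire decision procedure is a set of numeric weights. Even with every weight visible, the only way to learn what the system will decide in a novel situation is to run it; no individual number corresponds to a legible rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 8 sensor inputs -> 16 hidden units -> 1 "engage" score.
# In a deployed system these weights would come from training on massive data;
# random values suffice here to illustrate the opacity problem.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def decide(sensor_reading: np.ndarray) -> bool:
    """Return the network's engage/do-not-engage decision for one input."""
    hidden = np.tanh(sensor_reading @ W1)  # the nonlinearity entangles all inputs
    score = (hidden @ W2).item()
    return score > 0.0

# The complete decision procedure is 144 visible numbers...
print(W1.size + W2.size)  # 144

# ...yet predicting the decision for a novel input requires executing the
# network. Inspecting the weights yields no human-auditable rule.
novel_situation = rng.normal(size=8)
print(decide(novel_situation))
```

Real learning systems contain millions or billions of such weights, and systems that continue to learn after deployment change them over time, which is why even the programmers cannot foresee outcomes.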

It is assumed in the current debate that AI weapons will deploy missiles and bombs on a legally defined battlefield only. A similar assumption was made about drones, yet weaponized drones have never been reserved for battlefield use, despite the fact that for much of their history they have carried only battlefield munitions. The first weaponized drones carried Hellfire missiles designed for killing tanks, a munition lawful only in a zone of active military hostilities. That first drone, however, was deployed by the CIA in the year 2000 to attempt to assassinate an individual far from any combat zone.[11] Fully autonomous weapons will have even greater capacity for law violation.

International Law on AI Weapons

The drone case provides no assurance that fully autonomous weapons carrying battlefield munitions will be deployed only in armed conflict zones. This is true even before the learning program takes over and makes its own decisions. Thus, the potential scenarios involving AI weapons are myriad, which means the relevant law is far broader than the law of armed conflict. At least four interrelated categories of international law apply: human rights law, law on resort to force (jus ad bellum), law on the conduct of force (jus in bello), and arms control law.

Under human rights law, authorized law enforcement agents may use lethal force to save lives under immediate threat. Unlike during armed conflict, however, there is no tolerance for unintentional deaths. For this reason, police forces may not use bombs or missiles even against heavily armed criminal organizations.[12]

The UN Charter’s principles promoting peace begin with the comprehensive prohibition on resort to armed force in Article 2(4).[13] The Charter’s drafters intended Article 2(4) “to state in the broadest terms an absolute all-inclusive prohibition”.[14] They wanted “no loopholes”.[15] The UN Security Council may authorize the use of armed force when other, nonviolent means to address a threat to the peace, breach of the peace, or act of aggression prove inadequate, as provided for in UN Charter Articles 39-42.[16] Authorizations have slowed since the disaster that followed NATO’s authorized use of force in Libya in 2011.[17] NATO exceeded the Council’s mandate, which led to the collapse of the Libyan government and a civil war for control. China and Russia remain highly critical of NATO’s failure to comply with the Council’s mandate and are cautious about authorizing force. The poor results of armed interventions should give all members of the Council cause for concern about the utility, and therefore the legality, of using military force to try to remedy complex situations of social crisis.

Other than Security Council authorization, the only basis in the Charter for the use of force is Article 51, the right of self-defense.[18] The terms of Article 51 are highly restrictive. Resort to force is permitted “if an armed attack occurs” and only until the Security Council acts. The defending state must promptly report its actions. In addition, restrictions from general law beyond the Charter apply, the most important of which are the principles of attribution, necessity, and proportionality. The principle of attribution mandates that any use of force in self-defense be aimed only at a state responsible for a significant armed attack on the defending state. Even then, a counterattack must be necessary to achieve legitimate defense and must be proportionate to the injury sustained. Defending states may request assistance in collective self-defense, but any such request must come from a government in effective control of the state. “Effective control” is the standard test in international law for identifying an entity that qualifies legally as a state’s government.

If self-defense turns into armed conflict – intense armed fighting of some duration – international law permits the armed forces of a party to the conflict to intentionally target the armed forces of the adversary. All targeting within armed conflict is governed by extensive treaty law. Superior to all of these treaties are the four fundamental principles of international humanitarian law (IHL): distinction, necessity, proportionality, and humanity. These and other in bello rules mean that certain weapons are unlawful to use, such as weapons that are indiscriminate or that cause unnecessary suffering.[19]

During the Cold War the two superpowers tended to justify interventions on the basis of false facts rather than false interpretations of law. With the end of the Cold War, the U.S. began using force in violation of the Charter, at first saying little or nothing, then offering newly invented legal claims. To justify drone strikes beyond armed conflict zones, for example, the U.S. has cycled through four approaches: maintaining secrecy; declaring a “global war on terrorism”; reinterpreting Article 51 to permit current attacks against future potential threats; and claiming that the United States may attack when it deems a state “unable or unwilling” to resolve a terrorist threat. The “unable or unwilling” argument is the weakest of all: it lacks every feature of legality, substituting a purely subjective assessment by Washington for the objective evidence of an armed attack required by Article 51. NATO member states consistently fail to object to this inadequate new attempt to expand the right to use force beyond the restrictions of Charter Article 2(4).[20]

The idea that one state’s subjective claim that another is “unable or unwilling” to control terrorism can justify the use of force is an attempt to stretch the law to meet policy. It is not found in the UN Charter and lacks the objective and generalizable indicia of legal principles. It also overlooks the fact that the prohibition on the use of force and the inherent dignity of all human beings are premised on enduring natural law precepts and substantive principles. The prohibition on force is a peremptory norm of jus cogens, grounded in natural law and not subject to diminution through reinterpretation, new treaties, or new rules of customary law.[21] Such norms endure regardless of technological developments.

Necessity of an AI-Weapons Ban

The law just described aims at protecting the human right to life and preserving peace. It supplies the normative basis for the worldwide movement to ban AI weapons. Opponents of a ban have responded to the movement with various arguments beyond the realist case for staying ahead in the arms race. Two stand out. First, defenders of these weapons argue that computers will follow law and norms more closely than human beings. Second, they argue that, regardless of law and morality, AI weapons are the future; it is better, therefore, to adjust principles to fit the technology than to see them ignored.

With respect to the claim that AI weapons will be superior to human operators, the arguments inevitably fail to engage with the fact that AI learning functions lead to unknowable outcomes. Whether a robot will decide to resort to force in violation of international law cannot be predicted. Moreover, the programming may incorporate flawed views on when resort to force is lawful. The flawed arguments cycled through by the U.S. in its “war on terror” and by Russia with respect to Ukraine are examples.

In addition to the danger of unlawful resort to force, AI weapons must be treated as though they could kill indiscriminately. Programmers simply do not know who will be killed. That is the very definition of indiscriminate killing. One author argues that “the unrestricted employment of a completely unpredictable autonomous weapon system that behaves in entirely unintelligible ways would likely be regarded as universally injudicious and illegal”.[22]

Even if the black box problem can be solved, the missing human conscience is an insurmountable barrier. Archbishop Silvano Tomasi, speaking for the Holy See at the Convention on Certain Conventional Weapons (CCW), emphasized that “decisions over life and death inherently call for human qualities, such as compassion and insight”.[23] He went on to say that although “imperfect human beings may not perfectly apply such qualities in the heat of war, these qualities are neither replaceable nor programmable”.[24] Regardless of how sophisticated computers become at displaying compassion, empathy, altruism, emotion, or other such qualities, the machine will be mimicking these qualities, not experiencing them as only a human can.

A further argument holds that a ban is “pointless”, so states should merely regulate at the margins. A ban is not pointless, however, even if major military and high-tech states ignore it. A ban makes clear that states deploying AI weapons are lawbreakers. Should the CCW not reach a protocol banning AI weapons, a dedicated treaty is needed, like that for nuclear weapons, to educate and to build a coalition against the next generation of AI weapons of mass destruction. Legal regulation of AI-enabled weapons, which would amount to a ban, would require that a human being have full contextual and situational awareness of a specific attack; be able to perceive unexpected changes in circumstances; retain the power to suspend or abort the attack; and have time to deliberate on the significance of the attack.[25]
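Read as a rule, the requirements just listed form a conjunctive test: meaningful human control exists only if every element is present. The sketch below, a hypothetical illustration in Python (the type and field names are my own, not Sharkey’s terminology), makes the conjunction explicit.

```python
from dataclasses import dataclass

@dataclass
class HumanControl:
    """The four elements of meaningful human control drawn from Sharkey's
    guidelines; field names are illustrative, not official terminology."""
    full_contextual_awareness: bool        # awareness of the specific attack's context
    perceives_changed_circumstances: bool  # can notice unexpected changes
    can_suspend_or_abort: bool             # retains power to call off the attack
    time_to_deliberate: bool               # has time to weigh the attack's significance

def attack_permitted(control: HumanControl) -> bool:
    # The criteria are conjunctive: every element must hold, or the attack
    # lacks meaningful human control and may not proceed.
    return all((
        control.full_contextual_awareness,
        control.perceives_changed_circumstances,
        control.can_suspend_or_abort,
        control.time_to_deliberate,
    ))

# A fully autonomous system, with no human in the loop after deployment,
# fails every test by design.
print(attack_permitted(HumanControl(False, False, False, False)))  # False
```

Because a weapon operating without any associated human after deployment can satisfy none of the conditions, regulation on these terms is indeed equivalent to a ban.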

Conclusion

Even Geoffrey Hinton, often called the godfather of AI, supports banning AI weapons.[26] He and other supporters of a ban have ancient, enduring law, grounded in moral principle, on their side. What is needed now are more international lawyers fully committed to teaching, writing, and litigating about the actual law at issue, which also supports a ban. It is time to move away from the trend of interpreting law to fit government policy or preference and to speak, write, and teach about authentic law. This law holds up peace as its goal. It mirrors the aims of Pope St. John XXIII’s encyclical, Pacem in Terris.

 

[1] These remarks draw from the forthcoming article, Mary Ellen O’Connell, Banning Autonomous Weapons: A Legal and Ethical Mandate, Ethics & International Affairs (2023).

[2] Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, U.N. Doc. A/HRC/23/47 (Apr. 9, 2013) (by Christof Heyns).

[3] This view is captured in Congressional Research Service, Defense Primer: U.S. Policy on Lethal Autonomous Weapons Systems (Washington, D.C.: Congressional Research Service, 2022), crsreports.congress.gov/product/pdf/IF/IF11150

[4] See Diego Mauri, The Holy See’s Position on Lethal Autonomous Weapons Systems: An Appraisal through the Lens of the Martens Clause, 11 J. Int’l Hum. Legal Stud. 116 (2020), and most recently, Jennifer Peltz, Vatican Presses World Leaders at UN to Work on Rules for Lethal Autonomous Weapons, AP (Sept. 26, 2023).

[5] Some of the organizations opposed to AI-enabled autonomous weapons are listed on the Stop Killer Robots website under “Our Member Organisations”.

[6] E.H. Carr, The Twenty Years’ Crisis 1919-1939 (1939).

[7] Nicolas Guilhot, After the Enlightenment: Political Realism and International Relations in the Mid-Twentieth Century (2017).

[8] See Congressional Research Service, supra note 3.

[9] See, generally, Markus Wagner, The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapons Systems, 47 Vanderbilt J. of Trans’l L. 1371 (2014).

[10] The UN Institute for Disarmament Research defines a black box as “a system for which we know the inputs and outputs but can’t see the process by which the former turns into the latter”. Arthur Holland Michel, The Black Box, Unlocked: Predictability and Understandability in Military AI, p. III (Geneva: United Nations Institute for Disarmament Research, 2020).

[11] For a discussion of the development and legality of weaponized drone use, see Mary Ellen O’Connell, Game of Drones (review essay of Grégoire Chamayou, A Theory of the Drone (Janet Lloyd trans.); Sikander Ahmed Shah, International Law and Drone Strikes in Pakistan: The Legal and Sociopolitical Aspects; and Chris Woods, Sudden Justice: America’s Secret Drone Wars), 109 Am. J. Int’l L. 889 (2015, published 2016).

[12] United Nations Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, adopted by the Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Havana, Cuba, August 27 to September 7, 1990.

“All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations”. https://www.un.org/en/about-us/un-charter

[14] 6 Documents of the United Nations Conference on International Organization 335 (1945).

[15] Id.

[16] https://www.un.org/en/about-us/un-charter

[17] See UNSC Res. 1973 (March 17, 2011).

“Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security. Measures taken by Members in the exercise of this right of self-defence shall be immediately reported to the Security Council and shall not in any way affect the authority and responsibility of the Security Council under the present Charter to take at any time such action as it deems necessary in order to maintain or restore international peace and security”. https://www.un.org/en/about-us/un-charter. The literature on this provision is extensive; among the most informed discussions is Albrecht Randelzhofer, Article 51, in The United Nations Charter: A Commentary 661-78 (B. Simma et al. eds., 1994).

[19] For an overview of this law, see, The Handbook of International Humanitarian Law (D. Fleck ed., 4th ed. 2021).

[20] The U.S. has used the “unable or unwilling” claim in several letters to the UN Security Council when reporting on uses of military force as required under Article 51. See, e.g., https://www.justsecurity.org/wp-content/uploads/2022/09/8.26.2022-Art.-51-Letter-Syria.pdf

[21] Mary Ellen O’Connell, The Art of Law in the International Community, ch. 2 (2019).

[22] Michel, supra note 10, at 10.

[23] Silvano Tomasi, quoted in Cindy Wooden, “Vatican Official Voices Opposition to Automated Weapons Systems”, Catholic News Service (May 14, 2014). See also Peltz, supra note 4.

[24] Id.

[25] Noel Sharkey, “Guidelines for the Human Control of Weapons Systems” (working paper, International Committee for Robot Arms Control, April 2018).

[26] “It’s a Machine’s World”, On the Media, January 13, 2023.