Foreign Affairs and International Relations

Autonomous Weapons and The Jus Ad Bellum

Dr. Tim McFarland

In this piece, Dr. Tim McFarland examines how the rising use of autonomous weapon systems (AWS), including in ‘self-defence’, is transforming the landscape of geopolitical and military conflict and the body of customary international law that surrounds it, the jus ad bellum, highlighting how AWS have the potential to do more harm than good for international peace.

For approximately seven years, States parties to the Convention on Certain Conventional Weapons (CCW) have been debating a possible regulatory response to one of the most significant current developments in military technology: the rise of increasingly autonomous weapon systems (AWS). AWS are weapon systems of any type (aircraft, maritime or ground vehicles, stationary installations, etc.) which ‘once activated, can select and engage targets without further intervention by a human operator’ to some significant extent, generally by means of a computer-based control system which replaces a human operator. Thanks to a confluence of advances in artificial intelligence, robotics and related technologies, AWS are playing an ever more prominent role in the arsenals of State and non-State actors, a trend which seems certain to continue. The discussions hosted by the UN in connection with the CCW have attempted to address some of the legal questions about AWS, principally matters arising under the jus in bello, the law governing the conduct of armed conflict, but have, unfortunately, excluded many others. This post comments on one under-investigated area, namely the implications of AWS development and use for the jus ad bellum, the field of international law governing the conditions under which States may resort to the use of armed force.

Formally set out in article 2(4) of the UN Charter, the general prohibition on the threat or use of force in international relations is a core value of the international community and is universally accepted as a norm of customary international law. Unfortunately, the UN Charter’s system for ensuring international security was founded on what has been described by at least one author as a ‘false assumption’: that the UN Security Council, particularly its five permanent members, would act in concert to police threats to international peace. The invalidity of that premise became apparent soon after the Charter was signed; in the decades since, in the absence of consistent action by the Security Council, States considering a resort to force have instead relied on various exceptions to the general prohibition. In specific circumstances, States are permitted to make unilateral decisions to resort to force according to the parameters given in article 51 (self-defence), and collective decisions according to articles 52 and 53 (regional arrangements) of the UN Charter. The precise scopes of those exceptions are, of course, hotly debated by States wishing to employ force and by those against whom the force would be directed. Along with the foundational question of what constitutes ‘force’, those exceptions are now the primary focus of developments in the jus ad bellum.

At the same time, the international security and conflict environment has changed radically from that which existed at the time the Charter provisions governing the use of force were formulated. Formally declared wars and large-scale interstate conflicts have largely given way to smaller, low-level conflicts, ‘grey-zone’ activity and non-international and internationalized conflicts; new modes of fighting including cyber-attacks and targeted killings have emerged; new technologies such as remotely piloted aircraft have expanded the battlefield in unprecedented ways; and all this is taking place in a world which is interconnected to a degree never before seen, with communications and surveillance technologies feeding vast volumes of information to any State or non-State actor with the ability to tap into it.

The result of the above is that States grappling with decisions to use force today must contend with a complex, dynamic security environment and an equally complex, dynamic and decentralized legal framework governing their actions.

Against that backdrop, the novel capabilities of AWS, generally unanticipated in the development of existing law, promise to add yet another layer of complexity. Autonomous technologies offer the possibility of delegating to weapon systems not just the act of applying a quantum of lethal force but parts of the decision-making process leading to that act as well as ongoing management of the operation in which it occurs. That in turn may lead to their being used in ways that will further some trends which are currently challenging the efficacy of the international prohibition on the use of force. States need to be very careful about how they integrate AWS into their operations, both for their own benefit, to avoid being accused of violating norms governing the resort to force, and for the greater good, to avoid lending legitimacy to practices which might further weaken the Charter’s general prohibition on the use of force in international relations.

The first concern to note is that the factors which make autonomous technologies appealing in a military context – their capacity for persistence, their suitability for covert, low-level uses of force, the fact that human soldiers need not be put in harm’s way, and so on – are themselves potentially problematic in a jus ad bellum context. They make the resort to force a relatively more viable and attractive means of settling, or discouraging, disputes. Of course, that is true of weapon development in general, but by taking human beings somewhat ‘out of the loop’ and replacing them with machine capabilities, AWS arguably do more to alter the (short term) costs and benefits of employing armed force in situations of tension, instability or crisis.

That is problematic for at least two reasons. The narrower concern is that, depending on one’s interpretation of the jus ad bellum requirement of proportionality, deploying AWS even in a defensive capacity might be seen by other parties as an unwarranted threat to the peace and as justification for an escalatory response. The broader concern arises because the Charter’s prohibition on use of force is not merely a simple, static criterion to be applied in isolation to test the validity of proposed military action. It is part of a larger system aimed at achieving some of the most important purposes of the United Nations: to ‘maintain international peace and security’, to ‘develop friendly relations among nations’, and to ‘achieve international cooperation in solving international problems’, among others. In light of that, advances which make it easier to employ force in international relations must be carefully managed to ensure that their existence and use do not undermine those broader goals, to the detriment of either the States in question or the international community.

Problems may also arise in applying specific rules of the jus ad bellum to factual situations involving AWS. A State considering the use of force in self-defence according to article 51, for example, must make an assessment of the threat it believes to exist and of the degree of harm that may be inflicted if it fails to defend itself. That can be a difficult and error-prone process, particularly where an attack has not yet materialized and the defending State is relying on the controversial and poorly delineated doctrine of anticipatory self-defence. Accordingly, many States have accumulated bodies of knowledge for assessing potential threats based on indicators such as movements of soldiers and materiel. When a State is facing an adversary wielding AWS in place of human soldiers in a crisis situation, the typical indicators of an imminent attack may be missing. The risk of misunderstandings, such as from misinterpreting the behaviour of unfamiliar machines, will be significantly elevated. States operating AWS, even in strictly defensive roles, will need to carefully manage the ways in which their actions are perceived by other States, particularly in situations of tension or instability.

These are just a few of the jus ad bellum concerns resulting from States integrating AWS into their armed forces. They are reasons to be cautious but should not be taken to mean that autonomy in weapon systems is necessarily to be avoided. Rather, they should be seen as reminders for States considering adoption of AWS that the effort to ensure peace in international relations faces many challenges and that use of autonomous technologies for military purposes will create new risks as well as new opportunities. Care will be required in navigating those risks and opportunities, lest the purposes of the United Nations be undermined.


Dr. Tim McFarland is a Research Fellow at the TC Beirne School of Law, University of Queensland. His current research focuses on the legal challenges connected with the defence and security applications of science and technology, with a particular focus on the impact of autonomous systems. His broader research interests include the law of armed conflict and international criminal law. He is the author of Autonomous Weapon Systems and the Law of Armed Conflict (Cambridge University Press, 2020).

Image Credits: Human Rights Watch