Weaponized Artificial Intelligence (AI) and the Laws of Armed Conflict (LOAC) - the RAILE Project

Open Access
Conference Proceedings
Authors: Morgan Broman, Pamela Finckenberg Broman

Abstract: Much has already been written about Artificial Intelligence (AI), robotics and autonomous systems, in particular the increasingly prevalent autonomous vehicles, i.e. cars, trucks, trains and, to a lesser extent, aeroplanes. This article looks at an emerging technology with a fundamental impact on our society: the use of artificial intelligence (AI) in lethal autonomous weapon systems (LAWS) – weaponized AI – as used by the armed forces. It specifically approaches the question of how laws and policy for this particular form of emerging technology – the military application of autonomous weapon systems (AWS) – could be developed. The article focuses on how potential solution(s) may be found rather than on the well-established issues. Currently, there are three main streams in the debate around how to deal with LAWS: the ‘total ban’, the ‘wait and see’ and the ‘pre-emptive legislation’ path. The recent increase in the development of LAWS has led Human Rights Watch (HRW) to take a strong stance against ‘killer robots’, promoting a total ban. This causes its own legal issues already at the first stage – the definition of autonomous weapons – which is inconsistent but often refers to the HRW three-step listing: human-in/on/out-of-the-loop. However, the fact remains that LAWS already exist and continue to be developed, which raises the question of how to deal with them. From a civilian perspective, the initial legal focus has been on liability in relation to accidents. On the military side, international legislation has been, and still is, striving through a series of treaties between states to regulate the behaviour of troops on the field of armed conflict. These treaties, at times referred to as the Laws of Armed Conflict (LOAC) and at times as International Humanitarian Law (IHL), share four fundamental core principles: distinction, proportionality, humanity and military necessity.
With LAWS an unavoidable fact on today’s field of armed conflict, and with rules governing troop behaviour already existing in the form of international treaties, what is the next step? This article presents a short description of each debate stream, utilizing relevant literature on the subject matter, including a selection of arguments raised by prominent authors in the field of AWS and international law. The question for this article is: how do we achieve AWS/AI programming that adheres to the LOAC/IHL’s intentions behind the core principles of distinction, proportionality, humanity and military necessity?

Keywords: Autonomous Weapon System, International Law & Treaties, Compliance, RAILE© Project

DOI: 10.54941/ahfe100871
