COGNITIVE SUBJECTS OR OBJECTS OF MILITARY STRATEGY? THE COMPATIBILITY OF AUTONOMOUS WEAPONS WITH HUMANITARIAN LAW PRINCIPLES
The law
of war does not require weapons to make legal determinations… rather, it is
persons who must comply with the law…[1]
Introduction.
French scholar Jean-Jacques
Rousseau is often quoted for asserting that “war is not a personal conflict
between individuals, but rather a relationship between States, where
individuals become enemies only incidentally; not as human beings or even as
citizens, but solely as soldiers."[2] This statement, which virtually abstracts man from the conflict, is especially relevant in today's context, as we witness the rise of Autonomous Weapons Systems (AWS) as a significant feature of modern warfare. To do other wartime novelties justice,[3] AWS are not the only innovation; be that as it may, they stand out as quite distinctive, especially owing to their complexity.[4]
Without prejudice to any other understanding, we can define autonomous weapon systems as weapon systems that, once activated, can identify, select, and engage targets with lethal force without further operator intervention.[5] These emerging military technologies (and for the purposes of this article, 'emerging' does not necessarily mean novel) can select and engage targets autonomously using robotics, sensors, and Artificial Intelligence (hereinafter referred to as AI). Such weapon systems can, though not unchangeably, act partially or wholly autonomously.[6] I say 'not unchangeably' because such autonomy exists on a spectrum; that is to say, it can be dialed up or down to produce the most suitable effect for the circumstances at hand.[7]
The paper first examines the technological configuration of autonomous weapons and then their functionality. We then look at the challenges posed by such weapons as they pertain to that configuration and functionality. The paper finally gives recommendations on the way forward, together with possible solutions.
The Configurations
of AWS.
For the purposes of this paper, there
are two main approaches for addressing the impact of Autonomous Weapons Systems
(AWS) under International Humanitarian Law (IHL): the techno-configuration
approach, which focuses on the technological aspects of AWS, and the functional
approach, which emphasizes humanitarian concerns like accountability. While the
functional approach is most relevant to this essay's focus on humanitarian
issues, I will first outline the technological aspects of AWS to establish a
background for their operation.
The three main characteristics are:[8]
· AI
· Robotics
· Sensors
1. AI:
Artificial intelligence can be described as a computational technique capable of completing tasks that would otherwise require human intelligence.[9] AI systems can be programmed, by correlation, to think and act like humans. By 'correlation' I mean that the system is trained on data, in a technique called 'machine learning',[10] which is based on statistical pattern recognition and allows predictions to be made for related data. AI systems thus identify correlations between the data they have been fed and the present task using a predictive algorithm. An algorithm is computer code that can be seen as a set of instructions enabling a computer to perform a certain task.[11]
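To make the idea of 'learning by correlation' concrete, here is a minimal, illustrative Python sketch (not drawn from any actual weapon system; the data, features, and labels are entirely invented): a classifier is fitted to labelled examples and then predicts the class of new, statistically similar data.

from sklearn.linear_model import LogisticRegression
import numpy as np

# Toy training data: each row is a feature vector describing an observed
# object (say, size and speed); labels mark the category it belongs to.
X_train = np.array([[2.0, 0.1], [1.8, 0.2], [9.5, 4.0], [10.2, 3.8]])
y_train = np.array([0, 0, 1, 1])  # 0 and 1 are arbitrary class labels

# 'Machine learning' here is statistical pattern recognition: the model
# fits a correlation between features and labels.
model = LogisticRegression().fit(X_train, y_train)

# The learned correlation now yields a prediction for related, unseen data.
print(model.predict(np.array([[9.8, 4.1]])))  # -> [1]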
Human intelligence, unlike AI, relies not only on correlation but also on abstraction and causality.[12] AI systems cannot abstract; they cannot by themselves synthesize data by either generalization or particularization. AI systems do not recognize a target as an object in itself but as an attributive representation of patterns in the data.[13] For example, an AI-powered soap dispenser may fail to recognize hands of different skin tones if it is only trained on images of white hands. Such limitations can lead to serious errors, especially in critical applications like autonomous weapons, which may fail to adhere to international humanitarian law by failing to assess the proportionality of their actions.[14]
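A toy sketch of this failure mode follows, under invented data and thresholds (the single 'brightness' feature merely stands in for skin tone): a model whose positive training examples are all high-brightness has learned no pattern for anything else.

from sklearn.linear_model import LogisticRegression
import numpy as np

# Single feature: image brightness. Label 1 = 'hand present', 0 = 'no hand'.
# Every positive example in training is high-brightness, so brightness
# itself becomes the pattern the model latches onto.
X_train = np.array([[0.90], [0.85], [0.95], [0.10], [0.15], [0.20]])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# A real hand at low brightness falls outside the learned correlation
# and is misclassified as 'no hand' -- the dispenser does not dispense.
print(model.predict(np.array([[0.30]])))  # -> [0]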
2. Robotics:
Robotics technology constitutes the hardware platform for AWS, allowing for the mobility, manipulation, and physical execution of tasks.[15] Examples include autonomous drones and uncrewed combat vehicles, among others.[16]
3. Sensor Technologies:
Turek et al.[17] describe sensor-based targeting systems as systems designed to support the targeting process by detecting and proposing potential targets to human operators, where such systems operate by matching sensor inputs from the environment against encoded profiles of intended target types. Sensors can collect data from the environment, informing the human user of the situation and also identifying potential targets. The data is then processed by AI before triggering the use of force.
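The following is a minimal, hypothetical sketch of such profile matching; the profiles, features, and threshold are invented for illustration, and a real targeting system would be vastly more complex.

import math

# Hypothetical encoded profiles of intended target types; each profile is
# a feature vector, here [length in m, speed in m/s, infrared signature].
PROFILES = {
    "tank": [7.0, 15.0, 0.9],
    "supply_truck": [9.0, 20.0, 0.6],
}

def distance(reading, profile):
    """Euclidean distance between a sensor reading and a stored profile."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reading, profile)))

def propose_targets(reading, max_distance=2.0):
    """Match a sensor reading against the encoded profiles and return
    candidates. The system only *proposes* here; engagement remains a
    human decision."""
    return [name for name, profile in PROFILES.items()
            if distance(reading, profile) <= max_distance]

# Data collected from the environment is matched against encoded profiles
# and the candidate list is surfaced to the human operator.
print(propose_targets([7.2, 14.5, 0.85]))  # -> ['tank']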
Challenges posed by Autonomous Weapons.
These probable challenges relate either to the configuration of the weapons or to their functioning.
A. Configurational challenges with AWS
These are challenges inherent to the technological makeup of autonomous weapons. They include:
1. Black box problem:
The black box problem in AI refers to the opacity and lack of interpretability of AI algorithms, making it challenging to comprehend how these systems generate their conclusions and predictions.[18] Even the designers of the systems may not fully comprehend how some algorithms function.[19] This means that the operator may be unable to predict or explain what effects the system will have in the area of operations. The human user may not understand what might trigger an application of force if they do not know on what characteristics it will be based.[20] This risk is heightened when AI has to deal with new situations; it is impossible for programmers to determine in advance the scope within which the systems will operate.
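A small demonstration of the point, using an ordinary open-source library on invented data: even with complete access to a trained network's parameters, the designer sees only matrices of numbers, not a human-readable rationale.

from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                    # 200 fake 4-feature observations
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)   # the hidden rule to be learned

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

# The only 'explanation' available to the designer: layers of raw weights.
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")

# Nothing in those matrices states the rule 'feature 0 + feature 2 > 1';
# the decision logic is distributed across the numbers, opaque even to
# the developer who built and trained the network.
print(net.predict(np.array([[0.9, 0.1, 0.8, 0.2]])))  # likely [1], but why?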
2. Garbage in, garbage out:
AI is only as intelligent as the data it is trained on.[21] When AI models are trained on datasets embedding negative stereotypes, they will end up reproducing those biases in their behaviour. Were such biases imported into war, the results would most likely be grave.
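In miniature, and with entirely invented features and labels, the mechanism looks like this: a biased labelling rule in the training data is faithfully reproduced by the model.

from sklearn.tree import DecisionTreeClassifier
import numpy as np

rng = np.random.default_rng(1)
# Feature 0: an attribute irrelevant to threat (a mere group marker);
# feature 1: the actual threat indicator.
X = rng.random((300, 2))
# Biased historical labels: anything with group marker > 0.5 was labelled
# a threat, regardless of the real indicator -- garbage in.
y_biased = (X[:, 0] > 0.5).astype(int)

model = DecisionTreeClassifier().fit(X, y_biased)

# Garbage out: an input from the marked group with a zero threat
# indicator is still classified as a threat.
print(model.predict(np.array([[0.9, 0.0]])))  # -> [1]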
3. Misaligned AI systems:
These are AI systems that are actually doing what they are supposed to be doing, and quite efficiently at that, but in unintended ways.[22] The problem is amplified where the system operates in a previously unknown environment: it will produce results that were neither foreseen nor intended.[23] In the case of AWS, there is the possibility that the human user will not have programmed the proper legal judgement for the unforeseen circumstance.
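A toy illustration of such misalignment follows, with all numbers invented: an optimizer that maximizes a proxy objective ('count of flagged objects') satisfies it perfectly, and perfectly contrary to the designer's intent.

import numpy as np

rng = np.random.default_rng(2)
scores = rng.random(1000)  # classifier confidence for 1000 observed objects

def flagged(threshold):
    """The proxy objective as written: number of objects flagged."""
    return int((scores >= threshold).sum())

# 'Optimize' the threshold against the proxy objective...
best = max(np.linspace(0.0, 1.0, 101), key=flagged)
print(best, flagged(best))  # -> 0.0 1000

# The system does exactly what it was told, efficiently: it flags every
# object by driving the threshold to zero. The objective was satisfied;
# the intent behind it was not.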
4. A dearth of data:
Though wars are rather recurrent, we cannot say that all the nuances and subtleties of previous wars will be evident in future ones.[24] AI systems, however, rely on past data on which their algorithms are trained. It is still contested whether there is sufficient data on which AWS can rely in war.[25]
B. Functional challenges of using AWS.
We will deal with the subject under two subtopics:
1. Compatibility of AWS with IHL
2. The question of accountability and human control
1. Compatibility of AWS with IHL
Article 36 of Additional Protocol I to the Geneva Conventions of 1949, regarding new weapons, reads:
"In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party."
This article imposes a duty on states to at least ensure that their weapons meet criteria that will not result in IHL violations.[26] The employment of such weapons should fall within the purview of IHL principles like distinction and proportionality. A weapon that is already unlawful per se, whether by treaty or custom, does not become lawful by reason of its being used for a legitimate purpose.[27] Even the right to self-defense is constrained by humanitarian principles that are inherent to the very concept of self-defense.[28] From the foregoing, it is apparent that the use of AWS, even in the last resort as a means of self-defense, must be compatible with IHL principles. The state is bound to take humanitarian considerations into account when assessing what is necessary and proportionate in the pursuit of legitimate military objectives.
(a) The Principle of Distinction.
The principle of distinction holds that the targets aimed at must be legitimate military objectives and must be capable of identification.[29] Attacks must not be directed against civilian objects.[30] In the case of autonomous weapons, sensors collect data from the surroundings, then AI systems process the data by identifying the pattern between their dataset and the target. As I explained above, AI systems only process data by correlation, not abstraction; thus there is the threat of error. It follows then, as Asif Khan[31] rightfully observes, that AWS might not comply with the principle of distinction: existing machine sensors may be able to identify an object as a human, but they cannot discriminate among humans to the specifications required. AWS cannot fully capture the necessary subtleties of the moment, such as differentiating soldiers who have surrendered, or who are wounded, from those who are still a threat.
(b) The Principle of Military Necessity
The St Petersburg Declaration of 1868 states that "the only legitimate object which States should endeavor to accomplish during war is to weaken the military forces of the enemy."[32] With regard to the principle of military necessity, the first element is the ability to identify the legitimate target, followed by an assessment of whether its destruction would result in a direct and concrete military advantage.[33] AWS, which lack the mental faculty of abstraction, might not be fully equipped to evaluate the situation. Military necessity is quite context-dependent; as noted, AI systems cannot by themselves particularize the general data they have been fed to deal with specific contexts.
(c) The Principle of Precaution.
The principle of precaution requires parties in military operations to consistently protect civilians and civilian objects[34] while assessing whether any potential incidental loss of civilian life or damage is excessive compared to the anticipated military advantage.[35] In complex circumstances, the machine's inability to evaluate a variety of different situations will interfere with its compliance. It would be impossible to programme all foreseeable situations, or to anticipate how exactly such robots would appreciate their specificities before making a precautionary judgment.
(d) The Principle of Proportionality.
This principle mandates that the force used must be to the degree necessary to secure military defeat and the prompt submission of the enemy,[36] prohibiting attacks that may cause civilian harm excessive in relation to the anticipated military advantage.[37] The idea that force is only necessary until the military objective can be secured, for instance the "complete submission of the enemy",[38] might be a rather tenuous maxim to hold for robots, for it is already rather obscure to humans, who possess the faculty of judgment. The US Air Force specifies that "proportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis."[39] It follows that setting up objective criteria of proportionality in the dataset of an autonomous weapon could, at the end of the day, lead to catastrophic results. Robots lack the faculty of causality; thus they may not be able to fully judge the consequences that will directly result from an attack. A robot unable to determine whether a combatant has been neutralized or is feigning injury may, along these lines, shoot the individual again, inflicting excessive harm.[40]
2. Accountability and Human Control.
Responsibility and accountability can never be transferred to the machine.[41] Only individuals can be held accountable for their roles as operators, commanding officers, programmers, engineers, and technicians, while the state also bears responsibility for the development and deployment of autonomous weapon systems (AWS).[42]
Article 8 of the Draft Articles on Responsibility of States for Internationally Wrongful Acts[43] provides that "the conduct of a person or group of persons shall be considered an act of a State under international law if the person or group of persons is, in fact, acting on the instructions of, or under the direction or control of, that State in carrying out the conduct." Whenever, during armed conflict, an autonomous weapon commits a breach, the state that deployed such weapons can be held liable.[44]
It might not be easy to hold an individual responsible for the actions of a robot. For instance, regarding the human user, it is already clear that the autonomy of most weapons will be spectral, that is to say, sometimes fully autonomous, at other times semi-autonomous. Nor can programmers be held completely liable where there is a black-box situation in which they cannot predict all the actions of a weapon. The broad, conveyor-belt nature of AWS development and deployment, which requires them to be handled by several operators dealing with different parts of the procedure, raises the concern of who will actually be the subject of responsibility.[45]
The NGO Human Rights Watch has confirmed "an accountability gap", as "neither criminal law nor civil law guarantees adequate accountability" for those involved with AWS.[46] Finally, as to the accountability of robots themselves, it stands to reason that fully autonomous systems cannot possibly be held liable. However, in the event of errors in machine judgement, there remains uncertainty as to whether engineers, product designers, users, or leadership teams are to be held responsible.[47] The Appeals Chamber in Tadić[48] was of the opinion that the criteria for arriving at state responsibility are the same as those for arriving at individual responsibility. If, then, it is complex to arrive at individual liability in cases of machine malfunction, the same can be expected for state responsibility.
Recommendations
and possible Solutions.
The emergence of autonomous weapons
systems (AWS) after the establishment of International Humanitarian Law (IHL)
does not exempt them from its principles. The International Court of Justice
(ICJ) has affirmed that the humanitarian character of legal principles applies
to all weapons, including future ones.[49] It is essential to codify laws
that specifically address the unique characteristics of AWS. Key
recommendations include ensuring meaningful human control over these systems;[50] as to the black-box issue, prohibiting
opaque computational techniques that hinder operator understanding;[51] and setting definitive limits on
weapon autonomy to align with the operator's intent.
Moreover, the deployment of AWS
should only occur after sufficient machine learning, ensuring no bias is
present. Weapons that require ongoing learning during missions should be banned
to maintain operator understanding. The strengths of AI should complement human
weaknesses, particularly in target profiling, where weapons may assist but should
defer to human judgment before selection. Finally, using multiple sensors and
swarm robotics can enhance risk assessment, allowing for better-informed
decisions by operators through effective data fusion.[52]
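As a purely illustrative sketch of the data-fusion idea (the independence assumption, probabilities, and threshold below are all invented), several sensors' confidence estimates can be combined into a single, better-informed figure that is still referred to the human operator.

def fuse(probabilities, prior=0.5):
    """Naive Bayes fusion of independent sensor probabilities: each
    sensor's reading multiplicatively updates the odds."""
    odds = prior / (1.0 - prior)
    for p in probabilities:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

# Three sensors (say optical, radar, infrared) report the probability
# that an object is a lawful target; fusion yields a stronger estimate
# than any single reading.
readings = [0.7, 0.65, 0.8]
confidence = fuse(readings)
print(round(confidence, 3))  # -> 0.945

# Per the recommendation above, the weapon assists but defers: below a
# demanding threshold the decision is referred to the human operator.
if confidence < 0.99:
    print("Defer to human judgment before target selection.")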
Conclusion.
When assessing Autonomous Weapons
Systems (AWS), we may be influenced by idealism, seeking the perfect combatant
humanity has yet to achieve. Despite the existence of International Humanitarian Law (IHL) since the nineteenth century, humans have often deviated from its norms, and we might even set higher standards for AWS than for human combatants. As long as AWS are not inherently incompatible with IHL, the question should be whether they significantly deviate from established norms. The Third Committee of the Diplomatic Conference,[53] commenting on Article 36 of Additional Protocol I to the Geneva Conventions, put it thus:
“… A State
is not required to foresee or analyze all possible misuses of a weapon, for
almost any weapon can be misused in ways that would be prohibited.”
Foreseeable deviations of AWS,
similar to those of human combatants, should not warrant a complete ban.
Instead, we can cautiously establish stringent but practical limits for their
use within the framework of IHL.
* Patrick Muema is a law student at
the University of Nairobi, Kenya. He is passionate about matters of
International Law and Constitutional Law. He can be contacted at: patrickmuema29@gmail.com.
Patrick was responsible for the conceptualization, research, and drafting of the paper. Soinato contributed to editing and provided feedback on the final version.
*Soinato is a law student at the
University of Nairobi, Kenya.
[1] A submission of the United States
where it insisted that the responsibility lies with the operator of the
autonomous weapons to employ weapons in a discriminate and proportionate
manner. United States of America, ‘Autonomy in Weapon Systems’ (2017) CCW/GGE.1/2017/WP.6 <https://ogc.osd.mil/Portals/99/Law%20of%20War/Practice%20Documents/US%20Working%20Paper%20-%20Autonomy%20in%20Weapon%20Systems%20-%20CCW_GGE.1_2017_WP.6_E.pdf?ver=Vh75581oFwDjfaDK0EE8MQ%3D%3D#:~:text=This%20working%20paper%20seeks%20to,functions%20in%20acquisition%20or%20development%3B%20(> accessed 26 February 2025, paras 12-13.
[2] Quoted in: International Humanitarian Law: Answers to Your Questions (ICRC 2014) <https://www.icrc.org/sites/default/files/external/doc/en/assets/files/other/icrc-002-0703.pdf> accessed 25 February 2025, pg. 6.
[3] Some newer developments include
the use of private military actors, the rise of asymmetrical conflicts as to
the ability of the parties and the rise of cyber-warfare. ‘International Humanitarian Law: A
Comprehensive Introduction | International Committee of the Red Cross’ (ICRC
2022)
<https://www.icrc.org/en/publication/4231-international-humanitarian-law-comprehensive-introduction>
accessed 26 February 2025. 37-43.
[4] Whether autonomous Weapon systems
are actually a novelty might be up for debate. Take, for example, the U.S.
air-dropped torpedo from World War II, which featured passive acoustic homing
capabilities and exhibited autonomous functions. See Robert O. Work, ‘A Short History of
Weapon Systems with Autonomous Functionalities’ (Center for a New American
Security 2021). 5
[5] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 4-8 March and 26-30 August 2024, pg. 2.
[6] Autonomous weapons can be
categorized into three types: Human-in-the-Loop Weapons, which require a
human command to select targets and deploy force; Human-on-the-Loop Weapons,
which operate under human supervision, allowing intervention to override their
actions; and the most concerning, Human-out-of-the-Loop Weapons, which
can independently identify targets and apply force without any human
involvement. Vivek Sehrawat, 'Autonomous Weapon System and Command
Responsibility' (2020) 31 Fla J Int'l L 315, 318
[7] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva (n 5).
[8] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva (n 5) pg. 3.
[9] Daan Kayser and Marius Pletsch, ‘Increasing Complexity: Legal and Moral Implications of Trends in Autonomy in Weapons Systems’ (PAX 2023) p. 11.
[10] Ibid.
[11] Ibid.
[12] Ibid.
[13] Ibid, pg. 25.
[14] Asif Khan, Autonomous Weapons Systems and the Principles of International Humanitarian Law (1st edn, Kindle Direct Publishing 2022) pg. 27.
[15] Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva (n 5) pg. 3.
[16] Taylor Jones, ‘Real-Life Technologies
That Prove Autonomous Weapons Are Already Here’ (Future of Life Institute,
22 November 2021) <https://futureoflife.org/aws/real-life-technologies-that-prove-autonomous-weapons-are-already-here/>
accessed 26 February 2025.
[17] Anna Turek and Richard Moyes, ‘Sensor-Based Targeting Systems: An Option for Regulation’ (Article 36, 2021) pg. 2.
[18] ScaDS_PubRel,
‘Cracking the Code: The Black Box Problem of AI’ (ScaDS.AI, 19 July
2023) <https://scads.ai/cracking-the-code-the-black-box-problem-of-ai/>
accessed 26 February 2025.
[19] Kayser (n 9) pg. 12
[20] In the process of AI learning,
after the AI model has been given the necessary dataset, it can progressively
improve its performance by adjusting its strengths based on the given data; a
sort of autonomous learning. At the end of the day, the features it learns
might not all have been explicitly defined by the developer who will not even
know about these self-learned features. In a simpler demonstration, in a
situation where a model is being trained to recognize particular guns, it may
recognize some complex features without being explicitly programmed to do so.
See Davide Castelvecchi, ‘Can We Open
the Black Box of AI?’ (2016) 538 Nature News 20
<http://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731>
accessed 26 February 2025.
[21] Tim
Jenkins, ‘Garbage in Garbage Out’
<https://www.sandvine.com/blog/garbage-in-garbage-out> accessed 26
February 2025.
[22] Kayser (n 9) pg. 14.
[23] ‘Why
Deep-Learning AIs Are so Easy to Fool’
<https://www.nature.com/articles/d41586-019-03013-5> accessed 26 February
2025.
[24] Kayser (n 9) pg. 14
[25] ‘Automating
the OODA Loop in the Age of Intelligent Machines: Reaffirming the Role of
Humans in Command-and-Control Decision-Making in the Digital Age’
<https://www.tandfonline.com/doi/full/10.1080/14702436.2022.2102486>
accessed 26 February 2025.
[26] Asif Khan, (n 14) pg. 35
[27] The ICJ clearly suggests that if
it had the necessary “elements to enable it to conclude with certainty that the
use of nuclear weapons would necessarily be at variance” with the laws of war,
then it would have certainly found that “recourse to (nuclear) weapons would be
illegal in any circumstance owing to their inherent and total incompatibility
with the law applicable in armed conflict.” Legality of the threat or use of
nuclear weapons (Advisory Opinion) July 8, 1996 <https://www.law.umich.edu/facultyhome/drwcasebook/Documents/Documents/Advisory%20Opinion,%201996%20I.C.J.%20226.pdf> accessed 26 February 2025, para 95.
[28] Military and Paramilitary
Activities in and against Nicaragua (Nicaragua v. United States of America)
(I.C.J. Reports 1986, p. 94, para. 176): "there is a specific rule whereby
self-defense would warrant only measures which are proportional to the armed
attack and necessary to respond to it, a rule well established in customary
international law"
[29] Customary International Humanitarian Law, op cit (note 4), Vol I, Rule 1.
[30] AP I, Art. 48; CIHL, Rules 1 and
7.
[31] Asif Khan, (n 14) pg. 25.
[32] ‘Declaration
Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes
Weight. Saint Petersburg, 29 November / 11 December 1868.’
<https://ihl-databases.icrc.org/en/ihl-treaties/st-petersburg-decl-1868>
accessed 26 February 2025.
[33] Asif Khan (n 14) pg. 32
[34] AP I, Art. 57(1); CIHL, Rule 15.
[35] CIHL, Rule 18
[36]
M. Bothe, ‘‘Legal restraints on targeting: Protection of civilian
population and the changing faces of modern conflicts’’, IYHR, Vol. 31, 2001,
p. 195.
[37] See 3 AP I, Arts 51(5)(b) and
57(2)(a)(iii) and (b); CIHL, Rules 14, 18 and 19.
[38] Rogers posits that, ‘‘[t]he reference
to the complete submission of the enemy, written in the light of the experience
of total war in the Second World War, is probably now obsolete, since war can
have a limited purpose …’’; Rogers, Law on the Battlefield, Manchester
University Press, Manchester and New York, 2004, p. 5
[39] International Operational Law Department, The Judge Advocate General’s Legal Center and School, US Army, Charlottesville, Virginia; LTC Bovarnick, Jeff A. et al., Law of War Deskbook (ed. CAPT Bill, Brian J., 2010) page 155.
[40] Asif Khan (n 14) pg. 32
[41] ‘Ethics
and Autonomous Weapon Systems: An Ethical Basis for Human Control?’
(International Committee of the Red Cross (ICRC) 2018) <https://www.icrc.org/sites/default/files/document/file_list/icrc_ethics_and_autonomous_weapon_systems_report_3_april_2018.pdf>
accessed 26 February 2025. Pages 2 and 11
[42] Vivek Sehrawat, 'Autonomous Weapon System and Command Responsibility' (2020) 31 Fla J Int'l L 315, 323-324.
[43] Draft articles on Responsibility
of States for internationally wrongful acts adopted by the International Law
Commission at its fifty-third session (2001) (extract from the Report of the
International Law Commission on the work of its Fifty-third session, Official
Records of the General Assembly, fifty-sixth session, Supplement No. 10
(A/56/10), chp.IV.E.1)
[44] Asif Khan (n 14) pg. 41.
[45] Ibid pg. 43
[46] Docherty, B., ‘Mind the Gap: The Lack of Accountability for Killer Robots’ (Human Rights Watch, 2015) <https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots#>.
[47] Shayne Longpre, Marcus Storm and Rishi Shah, ‘Lethal Autonomous Weapons Systems & Artificial Intelligence: Trends, Challenges, and Policies’ (2022) 3 MIT Science Policy Review, pg. 50.
[48] ICTY, The Prosecutor v. Dusko
Tadić, IT-94-1-AR72, Appeals Chamber, Decision, 2 October 1995; available on http://www.un.org, para. 103
[49] Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) July 8, 1996, para 86.
[50] On the control, the US Department
of Defense policy states that the weapons must be such as would allow
commanders and operators to exercise appropriate levels of human judgment over
the use of force; United States Department of Defense, Autonomy in Weapon
Systems (DoD Directive 3000.09, November 21, 2012) pg. 3. Sleesman and Huntley observe that the US favors the “Human Judgement” standard over that of
“human control” because as the US has pointed out "an operator might be
able to exercise meaningful control over every aspect of a weapon system, but
if the operator is only reflexively pressing a button to approve strikes
recommended by the weapon system, the operator would be exercising little, if
any, judgment over the use of force." See Richard J. Sleesman & Todd
C. Huntley, 'Lethal Autonomous Weapon Systems: An Overview' (2019) 2019 Army
Law 32, 33-34. See also the United States of America, Human-Machine Interaction
in the Development, Deployment and Use of Emerging Technologies in the Area of
Lethal Autonomous Weapons Systems (2018) CCW/GGE.2/2018/WP.4 <https://ogc.osd.mil/Portals/99/Law%20of%20War%202023/US%20Working%20Paper%20-%20Human-Machine%20Interaction%20in%20the%20Development%20Deployment%20and%20Use%20of%20Emerging%20Technologies%20in%20the%20Area%20of%20LAWS%20-%20CCW_GGE.2_2018_WP.4_E.pdf?ver=XTEzZdrpDipbObK_aY1zPw%3D%3D> accessed 26 February 2025, para 11.
[51] Kayser (n 9) pg. 32
[52] Kayser (n 9) pg. 23, 27.
[53] Report to the Third Committee on the Work of the Working Group, Committee III, Doc No CDDH/III/293, in Levie, Howard S., Protection of War Victims: Protocol I to the 1949 Geneva Conventions (Oceana Publications, 1980, vol 2) page 287.