The rapid advance of automation and artificial intelligence implies that Lethal Autonomous Weapon Systems (hereinafter LAWS) may become a reality in the near future; technology has already reached a point where such systems could be fielded within years, practically if not legally. The stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear weapons. Without human involvement, LAWS search for and engage targets; when those targets are people, the results are lethal. They raise a host of social, ethical, legal, financial, technological and safety concerns.
Most discussion of the moral validity of LAWS concentrates on their wartime application (jus in bello) and on whether such systems can satisfy the standards of proportionality and distinction. This paper suggests, however, that we should also understand how a State's suspected use of such systems in a dispute affects the standards of international law. The argument proceeds in two steps. First, we briefly explore the concept to offer the fullest possible perspective. Second, we argue that assessing LAWS against the principles of international law lets us look at the ultimate implications of war beyond a one-round clash with an opposing army: we must consider not only the theory but also its practice, that is, what the war as a whole would look like and the consequences it would set in motion.
LAWS are defined by the International Committee of the Red Cross (ICRC) as “weapons that independently search for, identify, and attack targets without human intervention.” Because of their high degree of autonomy, they contravene a plethora of international humanitarian principles. International humanitarian law (IHL), which regulates attacks on human beings in times of conflict, contains no specific provisions for such autonomy, though it remains applicable. The Geneva Conventions of 1949, governing conduct in war, require every attack to satisfy these humanitarian standards, and those standards demand qualitative judgements that current AI systems cannot make.
The Martens Clause
The Martens Clause offers significant considerations for States evaluating emerging weapon technologies, notably fully autonomous systems, and provides a normative framework for their assessment. In cases not covered by IHL conventions, the Martens Clause stipulates that the conduct of belligerents remains governed by the principles of the law of nations, as derived from the customs of international law, from the principles of humanity, and from the dictates of the public conscience. The Clause therefore reaches fully autonomous weapon systems even though international law does not expressly cover them. LAWS conflict with both facets of the Martens Clause: the principles of humanity and the dictates of public conscience. In accordance with the Clause, States should enact a pre-emptive ban on the development, manufacture and use of such weapons. Governments have, accordingly, cited non-conformity with the Clause and moral deficiencies among their principal objections to LAWS. By July 2018, 26 States supported a preventive prohibition, while more than 100 nations had called for a legally binding instrument to resolve the issues raised by LAWS. Nearly every State Party to the Convention on Certain Conventional Weapons speaking at the April 2018 meeting emphasised that human control over the use of force must be maintained.
Principle of Humanity
Likewise, having no thoughts or feelings and no capacity for ethical or legal judgment, LAWS would face major barriers to conforming with the principles of humanity. Fair treatment and regard for human dignity and individual rights are values central to those principles. People are moved to treat one another humanely because they feel empathy and understanding for one another. LAWS, being insentient, lack this crucial ability to judge a particular situation and arrive at a sound decision. Instead of judging, these systems rely on pre-programmed algorithms that perform poorly in complicated and unpredictable circumstances, because it would be difficult, if not impossible, to pre-program them with every possible scenario under the legal and ethical norms accepted by humans. This deficiency further cripples their ability to honour the Principle of Distinction set by IHL, making the minimization of harm to civilians a remote prospect.
Principle of Human Dignity
Even if LAWS could passably protect human life, respecting the Principle of Human Dignity would be virtually impossible: unlike humans, these machines cannot comprehend the value of a human life or the gravity of its loss. They would make fateful decisions by algorithm, relegating their human targets to mere objects, and they might be programmed to perform questionable acts, say, to eliminate anyone who exhibits ‘threatening behaviour’, thereby breaching the dictates of humanity in every respect.
Dictates of Public Conscience
Growing indignation about LAWS suggests that this technology also offends the second element of the Martens Clause: the dictates of public conscience. These dictates consist of moral principles grounded in an understanding of right and wrong, and they can be ascertained from the views of the public and of governments. The emerging consensus that some degree of human control must be retained over the operation of such machinery indicates that the public conscience is firmly against LAWS. Their use has been opposed by the public at large, including 20 Nobel Peace Laureates, over 60 NGOs, 160 religious leaders, and 3,000 robotics specialists and scientists, testifying that the use of LAWS is unlawful under the Martens Clause.
Principle of Distinction
During hostilities, IHL's Principle of Distinction, Principle of Proportionality and Principle of Precaution apply. The Principle of Distinction requires LAWS to be able to differentiate between combatants and non-combatants. The principle is enshrined in Articles 48, 51, 52, 53, 54 and 57 of Additional Protocol I to the Geneva Conventions, each of which affirms that parties to a conflict must distinguish clearly between civilians and the military and attack only the latter. Owing to the extensively autonomous nature of LAWS, we assert that such differentiation is challenging for them, especially in counterinsurgency conflicts. They cannot clearly distinguish military personnel from civilians, particularly where combatants wear clothing indistinguishable from that of the general public. The problem is aggravated further because these robots are pre-programmed and human decision-making is absent.
Principle of Proportionality
The Principle of Proportionality, in turn, forbids attacks on legitimate targets where the expected collateral harm to civilians would be excessive. Our assertion is that jus in bello proportionality is context-dependent and that AI is incapable of making these difficult judgemental decisions now and for the near future. LAWS cannot fight proportionately, because the judgement required for these assessments lies beyond such systems' programming and learning abilities. Articles 51(5)(b) and 57(2) of Additional Protocol I mandate a subjective evaluation of the battlefield in order to mitigate harm to civilians. LAWS cannot make that assessment: lacking the ability to weigh the many factors that must be considered before attacking, such as the likely number of civilian casualties and the effect on enemy targets, their use is likely to entail a significantly higher human cost.
Principle of Precaution
Furthermore, the Principle of Precaution must be respected from the initial stage of a weapon's development. It requires any weapon to demonstrate reliability within the confines of an acceptable failure rate, yet what counts as an acceptable failure rate has never been defined. Until a suitable criterion is demarcated, it is only prudent that these deadly weapons not be employed by militaries.
Principle of Military Necessity
Moreover, the Principle of Military Necessity permits only the degree of force appropriate to the legitimate purpose of the confrontation; it forbids the infliction of harm that is needless for the rational purposes of the conflict. Yet LAWS cannot make these decisions. Owing to this incompetence, a robot may, for instance, shoot an already neutralised target a second time, neglecting the necessity concept. If these projected dangers are not recognised, there is a strong possibility that ethical concerns will be sidelined at a critical point in the development of autonomous robots.
Concluding Remarks
Using LAWS even against an unjust aggressor would erode the capacity of the diplomatic community and third-party States to compromise and negotiate peacefully. In particular, the use of LAWS by one State could ignite armed conflict and spur a proliferation of such weapons across the entire international system. International organizations and NGOs, alongside delegates from the fields of disarmament, civil rights, peace, religion, and science and technology, have felt compelled to press for a ban on LAWS, primarily on ethical grounds, labelling them “unconscionable” and “unethical.” Taken together, the factors above make clear that a multitude of political, ethical and legal problems haunt the current state of LAWS.

An evaluation of LAWS under the Martens Clause underlines the need for new legislation that is both precise and robust. Regulation that merely permitted them to exist would not be enough: even if rules limited fully autonomous systems to particular locations or restricted purposes, once LAWS enter national armouries, States that ordinarily abide by IHL could be tempted in the heat of combat, or in trying circumstances, to use these autonomous weapons in ways that escalate the risk of laws-of-war violations. The current regulatory framework thus suffers from major infirmities and fails to provide a holistic regime governing the use of LAWS. It is therefore imperative that, until the gap between AWS and the laws governing them is bridged and a sufficient legal framework is established, States and international organizations impose a pre-emptive ban on LAWS. A preventive ban on the development, manufacture and use of LAWS is a much-needed step to ensure compliance with both human values and the public conscience. As Albert Camus rightly pointed out, technological superiority does not entail success, and technological hubris compounds every danger.
_____________________________________________________________________________________
ABOUT THE AUTHOR(S)

Kabir Jaiswal is a 4th-year law student at National University of Study & Research in Law, Ranchi.

Shruti Singh is a 3rd-year law student at NMIMS School of Law, Mumbai.