THE KARGU-2 AUTONOMOUS ATTACK DRONE: LEGAL & ETHICAL DIMENSIONS

In March 2021, a UN Panel of Experts on Libya reported a possible use of lethal autonomous weapons systems—such as the STM Kargu-2—which “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability” (para 63). The UN report refers to the deployment of this system in the context of the Government of National Accord Affiliated Forces (GNA-AF)—with Turkish military support—launching offensive campaigns against the Haftar Affiliated Forces (HAF) in what appears to be a non-international armed conflict.

The report does not say for certain that human beings were killed by such systems operating without human supervision. Nevertheless, there has recently been an upsurge of media commentary debating whether this use could mark the first human fatality caused by an autonomous robot. From a military point of view, there is great value in determining whether such autonomous weapons systems operated as intended while working offline without human supervision. From a humanitarian perspective, however, the debate over whether autonomous robots caused a human fatality is misguided and misses the questions that actually need to be asked: were the attack drones capable of operating in compliance with the law of armed conflict governing the conduct of hostilities? Were they employed lawfully under the attendant circumstances? Are there any accountability or ethical issues to be resolved?

The report is silent on whether Kargu-2 attack drones were used unlawfully, though it does record various international humanitarian law and human rights law violations elsewhere (paras 32-55). Accordingly, the following analysis addresses legal and ethical considerations that could have been relevant to the deployment of Kargu-2, without prejudice to its legality in the specific context of the conflict in Libya. This will show why the potential human fatality caused by autonomous systems without human supervision at the point of direct engagement with human targets is not an imperative element for humanitarian consideration.

The Kargu-2 and the Law of Targeting

The Kargu-2 is a quadcopter drone built by the Turkish company STM. It uses “machine learning algorithms embedded on the platform,” enabling it to operate autonomously as well as under manual control. Unlike the Bayraktar TB2 or Israel’s Harpy loitering munition, the Kargu-2 is designed as an anti-personnel weapon capable of selecting and engaging human targets based on machine-learning object classification. Although various ammunition options are available, the Kargu-2 attack drone detonates an explosive charge close to the target, minimizing the range of collateral damage.

Because of these characteristics, the focus of humanitarian concern should be the drone’s ability to distinguish legitimate military targets from protected civilians and to direct its attacks against the former in a discriminate manner. The technical difficulties that accompany proportionality assessments when civilian collateral damage is expected are not relevant unless the Kargu-2 is deployed in a heavily populated civilian area.

Weapons that are incapable of making this distinction or of limiting the effects of their attacks are already prohibited under customary international law. Both commanders and operators have an obligation to do everything feasible to verify that targets are legitimate military objectives. Moreover, they must cancel or suspend an attack if it becomes apparent that the target is not a military objective.

The drone’s machine learning-based object classification, using real-time image processing, may go some way toward satisfying the distinction requirement. This is particularly the case if the system was trained to recognize features commonly observed among legitimate military targets (such as uniforms and other visual indicia that signal membership in a military organization or an organized armed group). However, particularly in the context of a non-international armed conflict, members of an organized armed group may be wearing civilian clothing, complicating efforts to visually distinguish them from protected civilians. The same technical challenge applies to civilians engaging in hostile activities, who are legitimate targets for such time as they directly participate in hostilities.

The drone’s image processing for object classification could be combined with various weapons detection capabilities to enhance its accuracy and reliability. A range of detection sensors and software-based algorithms is currently in use or under development. The technical challenge lies in finding sensitivity settings and technological solutions that eliminate the risk of error, such as mistaking farmers holding rifles to defend their land for legitimate military targets. The U.S. National Security Agency’s SKYNET project experimented with machine learning as a method to improve predictive target classification and acquisition. However, target acquisition based on behavioral patterns such as mobile phone use has not yielded the desired results.
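
To make the sensitivity trade-off concrete, the following minimal Python sketch shows how a confidence-threshold gate over classifier outputs might work. It is illustrative only: the class labels, scores, threshold value, and function name are invented and do not describe the Kargu-2’s actual software.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g., "person", "weapon", "military_indicia"
    confidence: float   # classifier score between 0.0 and 1.0

def engagement_cleared(detections: list[Detection], threshold: float = 0.9) -> bool:
    """Hypothetical gate: clear an engagement only if every required cue is
    detected above the confidence threshold; otherwise hold fire."""
    required_cues = {"person", "weapon", "military_indicia"}
    confident_labels = {d.label for d in detections if d.confidence >= threshold}
    return required_cues.issubset(confident_labels)

# A farmer carrying a rifle yields "person" and "weapon" cues but no military
# indicia, so the gate abstains rather than engaging.
farmer = [Detection("person", 0.97), Detection("weapon", 0.95)]
print(engagement_cleared(farmer))  # False

# Lowering the threshold reduces missed targets but increases the risk of
# exactly this kind of misclassification; no single setting removes it.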

Deployment in a Controlled Battlefield Environment

The challenge of distinction can be almost entirely circumvented by limiting the Kargu-2 attack drone’s operating parameters to a particular battlefield environment. Restricting its operation to areas where no civilian presence is expected, and all feasible precautions have been exercised to eliminate the chance of their presence, could resolve problems related to law of war distinction. In such circumstances, commanders can discharge their responsibility to ensure that various law of targeting requirements are complied with even when attack drones operate offline without real-time human supervision.
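
One way to picture such a restriction in software is the minimal sketch below, assuming a commander-defined geographic box and engagement time window. The coordinates, times, and function name are hypothetical and are not drawn from any actual system.

from datetime import datetime, timezone

# Hypothetical cleared engagement area (latitude/longitude bounds) and time
# window, set only after feasible precautions indicate no expected civilian presence.
CLEARED_AREA = {"lat_min": 32.10, "lat_max": 32.20,
                "lon_min": 15.00, "lon_max": 15.15}
WINDOW_START = datetime(2021, 3, 1, 4, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2021, 3, 1, 6, 0, tzinfo=timezone.utc)

def engagement_authorized(lat: float, lon: float, now: datetime) -> bool:
    """Authorize engagement only inside the cleared area and time window;
    anywhere or anytime else, the system must hold fire."""
    inside_area = (CLEARED_AREA["lat_min"] <= lat <= CLEARED_AREA["lat_max"]
                   and CLEARED_AREA["lon_min"] <= lon <= CLEARED_AREA["lon_max"])
    inside_window = WINDOW_START <= now <= WINDOW_END
    return inside_area and inside_window

print(engagement_authorized(32.15, 15.05,
                            datetime(2021, 3, 1, 5, 0, tzinfo=timezone.utc)))  # True
print(engagement_authorized(32.50, 15.05,
                            datetime(2021, 3, 1, 5, 0, tzinfo=timezone.utc)))  # False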

Of course, technical challenges may still arise. Those involved in hostilities could become defenseless because of injury or sickness and may express an intention to surrender. Attacking such persons (recognized as hors de combat) is prohibited under customary international law. The question is whether machine learning-based object classification is sufficiently capable of identifying those who are wounded, sick, or otherwise express an intention to surrender and can autonomously suspend engagement as required by law.

The issue may have been relevant during the deployment of the Kargu-2 in Libya. The UN report noted the drone’s effective use in hunting down retreating HAF forces. In such situations, image processing may be inadequate for detecting and identifying persons who are unconscious or suffering internal injury or illness. On the other hand, abandoning a weapon can be a machine-detectable surrender event, especially when target acquisition relies on weapons detection capabilities. However, weapons detection can be misleading when wounded or incapacitated fighters are unable to jettison their weapons.
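
By way of illustration only, a hold-fire rule keyed to machine-detectable hors de combat cues might look like the sketch below. The cue names are invented, and, as noted above, such a rule would still miss fighters who are physically unable to jettison their weapons.

def continue_engagement(observed_cues: set[str]) -> bool:
    """Hypothetical rule: suspend the engagement if any cue associated with
    hors de combat status is observed; otherwise continue."""
    hors_de_combat_cues = {"weapon_abandoned", "hands_raised", "motionless"}
    return hors_de_combat_cues.isdisjoint(observed_cues)

print(continue_engagement({"armed", "moving"}))                  # True: no surrender cue
print(continue_engagement({"weapon_abandoned", "retreating"}))   # False: suspend

# A wounded fighter who cannot drop a weapon produces none of these cues,
# which is exactly the limitation discussed above.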

The employment of autonomous systems—like the Kargu-2—without appropriate capabilities to identify and spare those who are recognized as hors de combat raises legitimate questions about the systems’ ability to comply with the law of armed conflict. Indeed, using such a system while knowing that it cannot comply with these legal requirements may amount to an order to give no quarter—a refusal to show mercy or clemency by sparing the lives of those who surrender—which is clearly prohibited under customary international law.

Accountability

The use of artificial intelligence to address these technical and operational challenges does not create an accountability gap in the deployment of lethal autonomous weapons systems. States are under a general obligation to respect and ensure respect for the law of armed conflict. A variety of practical measures can be put in place to implement this obligation with the use of artificial intelligence. Military commanders are always accountable for the employment of all means and methods of warfare, including autonomous weapons systems that operate without human supervision.

Human judgment at the point of direct engagement with a human target is not an imperative element of relevant humanitarian considerations for ensuring compliance with the law of armed conflict. Military commanders and operators of autonomous weapons systems can satisfy their legal obligations by exercising feasible precautions to limit the conditions under which such systems operate without human supervision so that the target is engaged in a manner that complies with the law of armed conflict.

International law already prohibits the deployment of lethal autonomous weapons systems that are not capable of complying with these legal requirements. The use of attack drones such as the Kargu-2 cannot circumvent the obligation to comply with the law of armed conflict, whether it is operated with or without human supervision at the point of direct engagement with a human target.

Ethical Considerations

Despite the applicability of the elaborate law of war regime, ethical concerns might also be raised regarding the use of lethal autonomous weapons systems to select and engage human targets without human intervention or supervision. These concerns are not directed at the integrity of the targeting decision or the ability to comply with legal requirements. Rather, ethical concerns appear to take issue with the machine’s lack of ability to account for humanitarian considerations or to respect human dignity.

However, numerous munitions already operate without human intervention or supervision at the point of direct engagement with human targets. Indeed, the search for ways to increase the distance between the attacker and the target, shielding the former from counterattack while retaining the capacity to strike the enemy, has been a major driver of weapons technology. With this greater distance, it has long been difficult to take account of humanitarian considerations or respect human dignity on an individual and subjective basis. This is precisely why the notion of humanity has been incorporated into the law of armed conflict governing the conduct of hostilities: to set out how humanitarian considerations must be balanced against military necessity in an objective manner.

The notion of humanity is an admittedly elusive concept. Why is it less humane to employ lethal autonomous weapons systems capable of selectively engaging legitimate military targets with little or no civilian collateral damage (assuming their accuracy and ability to comply with legal requirements, as discussed above) than to employ long-range artillery that engages targets en masse, causing greater incidental harm? Ultimately, the onus is on those raising ethical concerns to explain the internal logic of humanity as the basis for their claim.

The adoption of a new treaty comprehensively banning lethal autonomous weapons systems does not help the ethical cause either. It is in human nature to exploit technology once invented—the Kargu-2 attack drone is but the latest manifestation of that tendency. Weapons regulation by treaty is effective only when there is a shared political interest among States and the costs associated with developing and using the weapon outweigh the strategic and operational benefits it is expected to bring in meeting the demands of military necessity. This is particularly difficult when the full potential of a new technology has yet to be revealed. Any regulatory attempt driven by fear, the instinctive and primitive response to the unknown, is destined to be short-lived and unsuccessful.

Source: lieber.westpoint.edu
