From Ukraine to Armenia, drones take centre stage in modern warfare



Drones are transforming modern warfare by providing unprecedented levels of precision, surveillance, and operational flexibility. In the Russia-Ukraine conflict, unmanned aerial vehicles (UAVs) have been pivotal for intelligence, surveillance, and reconnaissance (ISR) and kinetic operations, enabling real-time situational awareness and precision strikes. The Nagorno-Karabakh conflict between Armenia and Azerbaijan showcased the strategic impact of UAVs, with Azerbaijan leveraging Turkish Bayraktar TB2 drones for precise, lethal engagements. In the Middle East, various actors have utilised drones for missions ranging from ISR to targeted eliminations.

Drones offer significant cost savings compared to traditional manned aircraft. A RAND Corporation study found that the cost per flight hour of a Reaper drone is approximately $3,624, while an F-16 fighter jet costs about $22,514. This economic advantage allows for prolonged surveillance and strike missions without the financial burden associated with manned aircraft. In the Ukraine-Russia war, FPV (first-person view) drones, which can be assembled for as little as $600 to $1,000, are cheaper still and accessible even to smaller military forces. They are valuable in asymmetric warfare, carrying out high-risk missions without heavy financial or personnel costs, and can evade electronic warfare systems by adjusting their control frequencies.
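A back-of-the-envelope calculation using those per-hour figures makes the gap concrete; the 1,000-hour campaign length below is an illustrative assumption, not a sourced number.

```python
# Illustrative cost comparison using the per-flight-hour figures cited above.
# The 1,000-hour mission length is an arbitrary assumption for the example.
REAPER_PER_HOUR = 3_624    # USD per flight hour (RAND figure cited in the text)
F16_PER_HOUR = 22_514      # USD per flight hour (RAND figure cited in the text)
MISSION_HOURS = 1_000

ratio = F16_PER_HOUR / REAPER_PER_HOUR
savings = (F16_PER_HOUR - REAPER_PER_HOUR) * MISSION_HOURS

print(f"An F-16 costs about {ratio:.1f}x more per flight hour than a Reaper")
print(f"Over {MISSION_HOURS} flight hours, the difference is roughly ${savings:,.0f}")
```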

Advancements in solid-state inertial navigation systems, which are impervious to jamming and signal spoofing, along with the emergence of the Starlink communications system, enable secure control of drones from up to 1,000 kilometres away. These systems ensure reliable drone operation in contested environments, while high-resolution satellite imagery allows targets to be selected to within a few metres. This combination lets drones conduct precise strikes and persistent surveillance, effectively challenging adversaries that rely on conventional strengths such as large armies and fortified positions.

The case for drones in warfare must nonetheless be weighed against the risks. The advent of Artificial Intelligence (AI) in warfare, particularly through the development of Lethal Autonomous Weapon Systems (LAWS), marks a pivotal shift in military strategy and ethics. Unlike traditional drones operated by humans, LAWS can identify, select, and engage targets without human intervention. This raises profound moral and philosophical dilemmas, as autonomous machines, devoid of human empathy and ethical reasoning, could execute missions with lethal precision. The scenario echoes science fiction yet reflects the potential reality of advanced LAWS, underscoring the serious ethical concerns and palpable risks involved in their deployment.

Efforts to regulate LAWS face significant challenges. The United States has proposed international norms for military AI at the United Nations, aiming for a treaty similar to the Nuclear Non-Proliferation Treaty (NPT). However, achieving international consensus is difficult, especially given varying interpretations and interests among countries like the USA and China. The ambiguity in defining LAWS and the blend of autonomous and automated systems complicate legal regulation. Moreover, initiatives like the U.S. Department of State’s “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” offer frameworks emphasizing international law and human oversight but lack specificity in critical areas, hindering their efficacy and applicability.

The move to regulate LAWS offers the first realistic opportunity for states to act on AI weapons, but this is easier said than done. Efforts to control and regulate weapons date back hundreds of years, and today the main international mechanism is the UN Convention on Certain Conventional Weapons (CCW). The CCW has been examining AI-enabled weapons since 2013, but progress has been slow, owing to the need for international consensus and opposition from countries actively developing these technologies. The lack of consensus on what exactly constitutes a lethal autonomous weapon further complicates regulation efforts.

AI weapons, such as autonomous drones and loitering munitions, present substantial tactical and strategic advantages in modern military operations. These systems leverage sophisticated AI algorithms, including deep learning and neural networks, to perform complex tasks autonomously. For instance, autonomous drones can execute reconnaissance, surveillance, and strike missions without human intervention, relying on AI for navigation, target identification, and engagement. Loitering munitions, often referred to as “suicide drones,” can autonomously patrol designated areas, identify targets using onboard sensors, and execute precision strikes. These capabilities enable effective operation in electronically contested environments where traditional communication links may be disrupted by jamming or other electronic countermeasures. The AI systems onboard can make real-time decisions based on sensor inputs, maintaining mission efficacy even when communication with human operators is compromised.
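As a purely conceptual illustration of that kind of onboard autonomy, the sketch below shows the sort of fallback logic a loitering munition might embody when its command link drops; every class name, threshold, and function is a hypothetical placeholder rather than the design of any real system.

```python
# Conceptual sketch of onboard decision logic when the command link is lost.
# All labels, thresholds, and behaviours are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "armoured_vehicle", "civilian_vehicle"
    confidence: float   # classifier confidence from fused onboard sensors

def decide(detection: Detection, link_up: bool) -> str:
    if link_up:
        return "relay to operator"  # defer to the human whenever the link allows
    # Degraded-comms behaviour: act only on high-confidence, pre-authorised classes.
    if detection.label == "armoured_vehicle" and detection.confidence >= 0.95:
        return "engage"
    return "continue loitering"

print(decide(Detection("armoured_vehicle", 0.97), link_up=False))  # engage
print(decide(Detection("civilian_vehicle", 0.99), link_up=False))  # continue loitering
```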

However, the reliability of these AI systems and their potential for errors, particularly in visual image recognition, raise significant ethical and technical concerns. AI weapons rely on sensor fusion, combining data from various sources such as cameras, infrared detectors, and radar to build a comprehensive situational awareness. The image recognition algorithms, often based on convolutional neural networks (CNNs), classify objects and identify targets. Despite advancements, these systems can be susceptible to adversarial attacks where subtle changes to input data can lead to misclassification. For example, slight modifications to an image can cause an AI system to misidentify a civilian vehicle as a military target.
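The sketch below illustrates the idea with the widely studied Fast Gradient Sign Method (FGSM); the model, image, and label are placeholders, and with a trained classifier even a tiny perturbation of this kind is often enough to flip the predicted class.

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that most
# increases the classification loss, producing a human-imperceptible change
# that can alter a CNN's prediction. Model, input, and label are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # untrained stand-in for a target-recognition CNN
model.eval()

def fgsm_perturb(image: torch.Tensor, label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` perturbed in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()   # tiny per-pixel step
    return perturbed.clamp(0.0, 1.0).detach()

clean = torch.rand(1, 3, 224, 224)        # stand-in for a real sensor frame
adv = fgsm_perturb(clean, label=0)
print("clean prediction:    ", model(clean).argmax(1).item())
print("perturbed prediction:", model(adv).argmax(1).item())
```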

The debate over the level of human involvement in the operation of these weapons centres on the concept of a ‘human in the loop’ (HITL). In this model, human operators are integrated into decision-making, providing oversight and final approval before an AI system can execute a lethal action. This approach aims to combine the precision and efficiency of AI with human judgement and ethical considerations. Various configurations exist within this framework, ranging from direct control, where humans intervene in every decision, to supervisory control, where AI handles routine tasks, but humans retain the authority to override decisions.
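The sketch below shows, in hypothetical terms, how such an approval gate might sit between an AI recommendation and any lethal action; the thresholds, data structures, and operator interface are illustrative assumptions, not a description of any fielded system.

```python
# A minimal sketch of a 'human in the loop' approval gate: the autonomous system
# may propose an engagement, but only an explicit human approval releases it.
# All names and thresholds are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    APPROVE = auto()
    REJECT = auto()

@dataclass
class TargetProposal:
    target_id: str
    confidence: float        # classifier confidence from onboard sensors
    location: tuple          # (lat, lon)

def human_review(proposal: TargetProposal) -> Decision:
    # Placeholder for the operator interface (ground-station UI, radio link, etc.).
    answer = input(f"Engage {proposal.target_id} (confidence {proposal.confidence:.2f})? [y/N] ")
    return Decision.APPROVE if answer.strip().lower() == "y" else Decision.REJECT

def engagement_loop(proposal: TargetProposal) -> str:
    # Supervisory control: the AI filters and ranks candidates, but only a human
    # approval releases a weapon; anything else defaults to abort.
    if proposal.confidence < 0.9:
        return "abort: confidence below threshold"
    if human_review(proposal) is Decision.APPROVE:
        return "engage"
    return "abort: operator rejected"

print(engagement_loop(TargetProposal("T-001", 0.96, (48.5, 37.9))))
```

Defaulting to abort on anything other than an explicit approval reflects the supervisory-control end of the spectrum described above, where routine processing is automated but lethal authority remains with the operator.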

Ensuring accountability in the deployment and use of fully autonomous weapons systems is a critical challenge. These systems operate on the basis of complex algorithms and data inputs, making it difficult to attribute responsibility for their actions. In cases of unintended or unlawful harm, determining accountability can be problematic, because traditional accountability mechanisms, which rely on human judgement and decision-making, may not apply directly to autonomous agents. As such, the ongoing discussions at the United Nations and other international forums aim to develop a comprehensive regulatory framework that addresses these challenges. This is a welcome move.
