Lethal Autonomous Weapons Systems: The Pakistani Approach


This insight examines Pakistan's perspective on Lethal Autonomous Weapons Systems (LAWS), highlighting ethical and regulatory challenges. It underscores Pakistan's support for a global LAWS ban at the UN and the significance of strategically adopting AI in non-lethal areas. Additionally, it emphasizes the necessity of international cooperation to prevent unregulated LAWS proliferation.
Dec 11, 2023 · 3-minute read
 
Written By

Ayesha Malik, Visiting Fellow

ayeshamalik@rsilpak.org

“They don't get hungry. They're not afraid. They don't forget orders. They don't care if the guy next to them has just been shot. Will they do a better job than humans? Yes.” These words were spoken by Gordon Johnson of the now-defunct Pentagon Joint Forces Command, regarding the benefits of autonomous weapons systems. The introduction of autonomous weapons to the battlefield appears inevitable; indeed, they were reportedly already used in the Libyan civil war in March 2020, though their mainstream adoption may be incremental. Israel’s current military operation in Gaza offers a glimpse of this trajectory: “Habsora” (which translates to “The Gospel”) is an AI system that generates targets for the IDF. It has done so at such a high rate (contributing to Israel attacking 15,000 targets in Gaza in the first 35 days of Operation Swords of Iron) that it has been described as a ‘mass assassination factory’. When machines take over the business of war, they make wars much worse.

LAWS are weapons systems that think for themselves, in that they select and engage targets without further human intervention. Many AI-enabled weapons systems currently keep humans in the loop to oversee them once deployed; however, that involvement and oversight is likely to diminish over time. While many states, Pakistan included, favour prohibitions and regulations on their use in battle, others (notably the US, Russia and India) are opposed, arguing instead that the existing rules of war are sufficient to regulate AI in armed conflict. The US, Russia, China, Iran, India and Israel are all currently developing LAWS technologies in what many perceive to be a new international arms race. Most worrying for Pakistan is that India has acquired its first AI-enabled swarming drone system, capable of attacking targets up to 50 kilometres away.

UN Secretary-General António Guterres tweeted in March 2019 that “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” A key issue under the laws of war is that as software becomes more complicated, it becomes less predictable. No individual programmer understands or knows the entire piece of software, producing what is called a ‘black box’ effect in which interactions within the system also become unpredictable. This could lead to situations where a LAWS applies force indiscriminately because of a software error; we may not even be able to tell whether the outcome resulted from a software error or from the system’s own learning. Moreover, it would be difficult for machines to apply complex principles such as proportionality and precaution, which are complicated even for humans. Some argue that even if machines are equipped with the ability to tell the difference between civilians and combatants, they will lack the human level of common sense necessary to correctly apply the requirement to distinguish.

In 2017, Russian President Vladimir Putin declared that whoever mastered the field of artificial intelligence “will become ruler of the world”. Some predict that AI technology will be a game-changer in terms of state war-fighting power and military domination, analogous to the change brought about by nuclear weapons and, before that, by gunpowder. The major powers have already committed significant funding and research effort to AI development, believing it will be highly influential in future military conflicts. It is unlikely that they will stop this funding, especially as negotiations to draft laws prohibiting or regulating LAWS have been ongoing for ten years at the UN Group of Governmental Experts (GGE) meetings and remain unfruitful.

States remain reluctant to regulate LAWS because of the benefits of removing humans from the loop, chief among them cost: fielding robots in battle is far cheaper than fielding people. A US soldier costs the state an average of 4 million USD over their lifetime, whereas a robot would cost roughly 10 percent of that and can be discarded when it becomes obsolete. In Iraq and Afghanistan, robots were used to conduct dirty, boring, or dangerous tasks such as disrupting IEDs or conducting surveillance in threatening environments like caves. Machines also offer faster response times, which are indispensable in battle: in a simulated dogfight, an AI fighter pilot beat a human pilot 15 times to 1.

Pakistan has adopted a forceful stance at the GGE in favour of the prohibition and regulation of LAWS. Such a strong approach may have been somewhat premature, especially given the inevitability of India’s development of these weapons systems. Nevertheless, it may be better for Pakistan to continue on its current trajectory and work with like-minded states towards a resolution at the United Nations General Assembly. States opposed to LAWS are now considering how to capture some form of policy coherence in a forum that can advance the global normative and operational framework on LAWS and build momentum towards an international legally binding instrument. Moving to the General Assembly may offer more success than the last ten years in the GGE: the thirty countries currently developing these weapons systems, and so opposed to their regulation, will have less power there to stymie a legal framework.

In the meantime, though, Pakistan should keep a close eye on the states building up their capacity in these weapons systems while maintaining its legal stance. The use of AI systems by Israel in Gaza is emblematic of what the future of warfare may look like: data-driven targeting in which algorithms become assassination factories. Pakistan should also make clear that failure at the General Assembly, or ongoing frustration with the GGE, may lead it to change its approach. Given that India is likely to acquire more AI-enabled drones for surveillance and perhaps more lethal purposes, it is in Pakistan’s best interests to ensure that it too has access to such technology, while maintaining that it is for non-lethal and non-military purposes. Pakistan can then better weigh its options, laying the groundwork now by emphasising that a lack of prohibitions and regulations will lead to future proliferation (which would not, of course, be unlawful).

Disclaimer

The views expressed in this Insight are of the author(s) alone and do not necessarily reflect the policy of NDU.