The Boundaries of Lethal Autonomous Artificial Intelligence Weapons
Kargu-2, reportedly the first AI-powered drone to have hunted human targets without being instructed to do so (pictured at the OSTIM Technopark campus in Ankara, Turkey)

Topic

Artificial Intelligence has been disrupting every sector, and the defense industry is no exception. Digitalization and AI integration in fields such as healthcare, finance, and law increase efficiency and reduce costs without directly putting human lives at risk. Incorporating AI and digital components into weapon systems to make them autonomous is far more dangerous, because programmed algorithms are empowered to make life-and-death decisions about humans. Self-learning capabilities and the ability to scan for specific targets using sensor data, including AI-powered facial recognition, pose significant challenges and have created a sense of alarm in the international community. This paper investigates how the international regulatory framework responds to these problems.

Relevance

In the 21st century, the entire landscape of science, technology, warfare, and, notably, ethics has undergone significant shifts. The paper is both topical and contemporary, enabling a deeper theoretical understanding of the tension between legal frameworks and data-driven weaponized AI. It identifies gaps in the international regulatory landscape and areas that require further exploration. The paper also contributes practical insights that can inform policymakers and shape the future development of these controversial machines, which are empowered to decide who lives and who dies.

Results

Human control and oversight (the human element) are crucial requirements for balancing the legal tension between AI-driven military innovation and the regulatory framework.

Existing international regulations are inadequate to govern these weaponized AI systems, and legislative measures have a long way to go to catch up with the rapid pace of technological progress. Strategies to address these challenges include comprehensive human oversight, legal instruments, and rigorous testing and validation.

Implications for Practitioners

• Develop and enforce an adaptive international regulatory framework that can keep pace with technological advancement.

• Set clear, binding standards for transparency, responsibility, and accountability in the use of weaponized AI.

• Delay any deployment of weaponized AI until human-in-the-loop control, predictable system behavior, clear lines of responsibility and accountability, and thorough testing and validation are assured (see the sketch below).
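
To make the human-in-the-loop requirement concrete, the hypothetical Python sketch below shows one way an authorization gate could be architected: the autonomous component may only nominate targets, and nothing proceeds without an explicit, logged human decision. Every name in it (TargetNomination, HumanAuthorization, request_engagement, the 0.99 confidence threshold) is an illustrative assumption, not drawn from any fielded system or standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch: every class, function, and field name here is
# illustrative and not taken from any real weapon system or library.

@dataclass
class TargetNomination:
    """A machine-generated proposal; never an authorization to act."""
    target_id: str
    sensor_confidence: float  # e.g., a classifier score in [0, 1]
    rationale: str            # human-readable explanation for operator review

@dataclass
class HumanAuthorization:
    operator_id: str
    approved: bool
    timestamp: datetime

def request_engagement(nomination: TargetNomination,
                       authorization: Optional[HumanAuthorization],
                       min_confidence: float = 0.99) -> bool:
    """Proceed only when both the machine and human conditions hold.

    The gate fails closed: low sensor confidence, or missing or
    negative human input, all result in no action.
    """
    if nomination.sensor_confidence < min_confidence:
        return False  # predictability threshold not met
    if authorization is None or not authorization.approved:
        return False  # no human in the loop, so no action
    # The audit record preserves a clear line of responsibility.
    print(f"[AUDIT] {authorization.timestamp.isoformat()} "
          f"operator={authorization.operator_id} target={nomination.target_id}")
    return True

# Usage: the gate refuses to act without explicit human approval.
nom = TargetNomination("T-042", sensor_confidence=0.995, rationale="signature match")
assert request_engagement(nom, authorization=None) is False
approved = HumanAuthorization("op-7", True, datetime.now(timezone.utc))
assert request_engagement(nom, approved) is True
```

The design choice worth noting is that the gate fails closed: the absence of human input is treated the same as refusal, which keeps the line of responsibility with the operator rather than the algorithm.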

Methods

The study uses an empirical qualitative and doctrinal methodology. Military personnel, academics, researchers, intelligence and security experts, computer scientists, and lawyers were purposively sampled for semi-structured interviews to obtain wide-ranging perspectives and well-regarded expertise. Data collection involved gathering qualitative insights through these interviews, which were then analyzed using MAXQDA 24 software and its AI features. The abductive coding process blends inductive and deductive coding methods; its iterative nature made it possible to uncover unexpected patterns and gain deeper insight into the complex phenomena under study.
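
As a rough illustration of how abductive coding blends the two approaches, the hypothetical Python sketch below runs one deductive pass with a predefined codebook and then reacts inductively to an uncoded segment by proposing a new code. The codebook entries, keyword matching, and transcript excerpts are invented for this sketch and do not reproduce MAXQDA's internals or the study's actual data.

```python
# Hypothetical illustration of one abductive coding pass. The codebook
# entries, keyword matching, and transcript excerpts are invented for
# this sketch and do not reflect MAXQDA's internal workings.

codebook = {
    "human_oversight": ["human in the loop", "operator", "oversight"],
    "accountability": ["responsibility", "accountable", "liability"],
}

def code_segment(segment: str, codebook: dict[str, list[str]]) -> list[str]:
    """Deductive pass: tag a transcript segment with predefined codes."""
    text = segment.lower()
    return [code for code, keywords in codebook.items()
            if any(kw in text for kw in keywords)]

transcripts = [
    "Without an operator in the loop, no one is accountable for errors.",
    "Validation of the targeting model was never completed before trials.",
]

for segment in transcripts:
    codes = code_segment(segment, codebook)
    if not codes:
        # Inductive step: an uncoded segment suggests a new candidate code,
        # which joins the codebook for the next iteration of the loop.
        codebook["testing_validation"] = ["validation", "testing"]
        codes = code_segment(segment, codebook)
    print(segment, "->", codes)
```

The iteration mirrors the abductive logic described above: the deductive codebook anchors the analysis, while segments it cannot account for drive the inductive growth of new codes.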