From Kill Switch to Peace Switch: Rethinking AI Governance in African Military Law

The escalating integration of artificial intelligence (AI) into warfare has ignited a fierce global debate, often narrowly confined to the perils of ‘killer robots’, or lethal autonomous weapons systems (LAWS), and calls for outright bans. This framing, while highlighting critical ethical concerns about accountability and human control over life-and-death decisions, risks obscuring AI’s broader, transformative potential and the imperative for more holistic governance. Humans, after all, remain superior to AI in exercising judgment under significant uncertainty. This paper posits that focusing merely on a “kill switch”, that is, the prohibition or stringent control of autonomous weapons, is an insufficient and reactive approach to AI governance. Instead, it advocates a “peace switch” paradigm: a proactive re-evaluation of AI’s diverse applications so that they actively foster international security and stability. Beyond autonomous targeting, AI profoundly enhances intelligence, surveillance, reconnaissance, logistics, and decision support. Drawing on global efforts to regulate military AI, particularly the EU’s human-centric approach, the paper argues that the dual-use nature of AI, where commercial advances significantly outpace military development, necessitates integrated, forward-looking legal frameworks. Although the existing literature rarely addresses African military law directly, the principles explored here offer a crucial foundation for any region seeking to transcend a purely prohibitive stance. Such an approach would proactively embed human-centric ethical guidelines within comprehensive legal and policy frameworks, ensuring that AI’s evolution not only averts catastrophic conflict but actively builds pathways to lasting peace and security.

Keywords: Artificial Intelligence, Humanitarian, Drones, Peace, Law, Military

Introduction

The Harmattan, a relentless wind whipping sand across the vast plains of the Sahel, carries with it a new kind of chill: the chill of uncertainty, a fear that whispers on the edges of the sandstorm, the fear of machines making life-or-death decisions on the battlefield. According to the International Committee of the Red Cross (ICRC), “AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially concerning: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.” Artificial intelligence (AI) is no longer science fiction; it is a specter haunting the war-torn landscapes of Africa. Several African governments, including those of Zimbabwe, Eswatini, Angola, and Mozambique, have invoked national security to deploy surveillance technologies against citizens deemed non-compliant. The danger of mass surveillance is that it creates a climate of fear in which citizens are presumed guilty until proven innocent, ultimately altering the ‘power balance between a state and its citizens.’ In Nigeria, 2.2 billion naira was allocated in the 2018 budget for a “Social Media Mining Suite”, and the military was ordered to watch for anti-government content online. In Libya, lethal autonomous weapons systems have already been used in combat. In Zimbabwe, a controversial, military-driven national facial recognition scheme has raised concerns over the technology’s alleged use as a government surveillance tool. The African Union’s draft continental AI policy does not explicitly address the use of AI by African governments for national security purposes, but it acknowledges that AI could pose hazardous risks.

Potential Benefits of AI in African Militaries

In the realm of peace and security, AI can enable more effective conflict analysis and early warning. It can support peace-making and mediation by addressing information asymmetry. AI-driven technology can also enable state institutions to enhance their capacity for enforcing law and order and fighting criminality, thereby contributing to the security of citizens. AI-driven surveillance and policing platforms are deployed for tracking organized criminal networks and responding to or preventing the activities of terrorist or insurgent groups.
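To make the early-warning idea concrete, the sketch below shows, in miniature, how a statistical model might score a region’s escalation risk from open-source event counts. Everything here is illustrative: the indicator names (protest reports, displacement events), the toy data, and the labels are hypothetical, and real early-warning systems use far richer features and models.

```python
import math

def sigmoid(z):
    """Squash a raw score into a 0-1 risk probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Fit a logistic-regression model (weights + bias) by
    stochastic gradient descent on the log-loss."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the raw score
            for j in range(n_feat):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def escalation_risk(w, b, features):
    """Predicted probability of escalation for one region-week."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

# Toy weekly indicators per region: [protest_reports, displacement_events]
# (hypothetical features and labels, purely illustrative)
X = [[0, 1], [1, 0], [2, 1], [5, 4], [6, 5], [7, 6]]
y = [0, 0, 0, 1, 1, 1]  # 1 = escalation observed the following month

w, b = train_logreg(X, y)
print(escalation_risk(w, b, [6, 5]))  # high-indicator region: risk should be high
print(escalation_risk(w, b, [1, 0]))  # quiet region: risk should be low
```

The point of such a model is not prediction for its own sake but triage: ranking regions by risk so that mediators and peacekeepers can intervene before violence crystallizes, which is precisely the “peace switch” use of AI this paper advocates.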

The Nigerian Navy is embracing AI as it and other emerging technologies are increasingly used in ship construction; the technology has grown in popularity as maritime battlefields become more complex. Chief of the Naval Staff Vice Adm. Emmanuel Ogalla made this point during the presentation of a paper by Navy participants at the National Defence College, titled “Artificial Intelligence and Ship Maintenance: Strategic Options for the Nigerian Navy by 2035.” “The Nigerian Navy must continue to adopt and integrate these technologies in order to maintain a competitive edge during operations,” Ogalla said in a report by the Nigerian newspaper Leadership.

Mindful of the need for guidelines and legal frameworks that promote transparency, accountability, and compliance with human rights in the adoption and use of AI-driven technologies, at least seven African countries (Benin, Egypt, Ghana, Mauritius, Rwanda, Senegal, and Tunisia) have developed national AI programs. While the adoption of such regimes at the national level is important, the governance and regulatory challenges posed by AI-driven technologies exceed the capacity of individual states. One area where AI is already being used, and will undoubtedly spread with unpredictable benefits and possibly much harm, is the many wars and conflicts across the continent.

Risks and Ethical Concerns

AI also carries negative aspects, some of particular concern for Africa. As a general-purpose technology, it is susceptible to being used for harmful ends. Concerns are growing around generative AI’s links to disinformation, cybersecurity threats, hate speech targeting women and minorities, and the fomenting or incitement of violence in times of crisis and conflict. For example, deepfakes built on AI-driven voice and image technologies have been used to impersonate political figures, propagating false information during elections in Nigeria and the ongoing civil war in Sudan.

“AI is already baked in as part of the technology for surveillance platforms or satellite image analysis. One use could be to look for terrorists, by spotting signs of movement such as tyre tracks,” said Nate Allen, a specialist in military use of AI on the continent and an associate professor at the Africa Center for Strategic Studies, Washington. “You don’t need an autonomous F-35 [a state-of-the-art fighter jet] but an autonomous drone that can understand the difference between a tank and a car. What will work best in Africa is everything that is low-cost and relatively easy to use.” Many point to uses such as autonomous weapons systems; drones have already been widely used in conflicts across the continent.

Unprecedented Capabilities

AI systems, leveraging probabilistic reasoning and advanced algorithms, can collapse the Observe-Orient-Decide-Act (OODA) loop to near-instantaneous speeds, creating ‘Hyperwar’ scenarios where machine-led interactions far outpace human cognition. This translates into superior battlefield awareness through the fusion of vast, multi-source sensor data, enabling precise target identification and strategic planning at scales unfathomable to humans. Crucially, AI facilitates a paradigm shift towards “mass over quality”, allowing the deployment of expedient, autonomous robotic systems—from drones to lethal autonomous weapon systems (LAWS)—that operate in hazardous environments with superhuman endurance and speed, drastically reducing human risk and capital investment. Furthermore, AI profoundly bolsters both offensive and defensive cyber warfare, providing tools for more potent attacks and continuously evolving countermeasures, ultimately becoming a critical strategic advantage that fuels a global arms race and fundamentally alters power equations. This technological leap promises a future where wars are fought with enhanced speed, scale, and autonomy, demanding a fundamental recalculation of deterrence and military strength.

International Humanitarian Law and AI

International Humanitarian Law (IHL), the cornerstone of civilized warfare, is in danger of being trampled by the march of machines. Who is accountable for the actions of an autonomous weapon system? How can we ensure respect for proportionality and distinction – principles enshrined in IHL – when the trigger finger belongs to an algorithm? The legal frameworks governing warfare haven’t caught up to this new reality, leaving a dangerous vacuum.

There is some jurisprudence on this issue, though it is scarce. In August 2020, R (Bridges) v Chief Constable of South Wales Police became the first challenge to AI invoking human rights law in the UK. South Wales Police had been trialling live automated facial recognition technology (AFR), comparing CCTV images of people attending public events with images of persons on a database. The Court of Appeal held that there was no proper basis in law for the use of AFR, which therefore breached Article 8 of the European Convention on Human Rights and the Data Protection Act 2018, and that the force had failed to comply with the Public Sector Equality Duty. The judgment temporarily halted South Wales Police’s use of facial recognition technology but left open its reintroduction in the future on a proper legal footing and with due regard to the Public Sector Equality Duty.

Some have raised the nightmarish prospect of swarms of heavily armed automated drones that are supposed to distinguish legitimate targets such as combatants from civilians but in reality rarely discriminate. Optimists, by contrast, have suggested AI could soon be used to predict war and civil unrest, allowing early interventions that could preserve peace or at least mitigate violence. ECOWAS, the West African regional body, is thought to be working towards incorporating AI into such predictive models using intelligence gathered from its various members.

The advent of artificial intelligence, while dramatically reshaping the mechanics of warfare, also holds a profound, albeit nascent, power to tame war’s inherent chaos and ethical dilemmas. Technically, AI’s capacity for rapid, multi-source sensor fusion and probabilistic reasoning promises to collapse the “fog of war”, providing decision-makers with unprecedented situational awareness and potentially leading to more informed and rational strategic choices. This enhanced informational landscape could mitigate uncertainties about intentions and outcomes, a fundamental driver of conflict, and even prompt a “convergence in expectations” among international actors. Furthermore, AI’s potential to bolster cyber defences, improving their speed, scale, and effectiveness, could ultimately tip the long-term balance towards network resilience and protection, eroding attackers’ structural advantages and possibly deterring aggression. Crucially, the ongoing global discourse on “meaningful human control” over autonomous weapon systems, and the accompanying emphasis on ethical guidelines, underscores a collective striving to ensure that human judgment, empathy, and moral considerations remain paramount: an aspiration to a future where technology’s very power helps to constrain the violence it could otherwise amplify, guiding conflict away from its most indiscriminate and dehumanising forms.

Conclusion

South Africa now stands as a continental pioneer, having inaugurated an artificial intelligence (AI) hub dedicated to defense within its Military Academy, one of the newest arms of the AI Institute of South Africa (AIISA). Yet this development is not an isolated leap; it marks a tectonic shift in how African states are beginning to entangle war, peace, and machine logic. The African Union, by stark contrast with the European Union, remains structurally hamstrung: its ability to harmonize policy, legislate enforceable AI standards, or govern the military applications of emerging technologies remains aspirational at best. Even if the AU’s draft continental AI strategy wins parliamentary nods, it will still fall to sovereign states to breathe life into the vision through national frameworks.

The crossroads is technological, philosophical, moral, and historical. Will Africa become a canvas for experimental militarized intelligence, haunted by the “ghosts in the machine,” or will it reimagine itself as a global leader in forging responsible, peace-centered AI governance? These questions are not rhetorical flourishes. They are blueprints awaiting architects.
We may not yet possess definitive answers, but raising such questions is a good start. It is an invitation: to policymakers, to scholars, to African technologists and jurists, to co-create the principles, processes, and peace pathways we do not yet have. For in interrogating the uncertain, we prepare. The future of war and peace in Africa may well be written in algorithms, but the moral code, the ethical north, must be authored by human hands. Africa’s technological awakening must not only be intelligent; it must be wise.

Bibliography

ICRC, ‘Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach’ accessed 27 July 2024
Mare A and Munoriyarwa A, Digital Surveillance in Southern Africa (Palgrave Macmillan 2023)
Anyemedu D, ‘Digital Surveillance in Violation of Human Rights and Data Justice’ accessed 26 July 2024
Hernandez J, ‘A Military Drone with a Mind of Its Own Was Used in Combat, U.N. Says’ (NPR, 1 June 2021) accessed 26 July 2024
Tsani A, ‘Africa’s Push to Regulate AI Starts Now’ (MIT Technology Review, 15 March 2024) accessed 26 July 2024
‘Nigerian Navy to Harness Artificial Intelligence to Strengthen Operations’ accessed 25 July 2024
Davison N, ‘Autonomous Weapon Systems under International Humanitarian Law’ in Perspectives on Lethal Autonomous Weapon Systems (UNODA Occasional Papers No 30, November 2017)
R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058
Equality Act 2010, s 149(1)

Wendo Githaka

I am a lawyer, Certified International Mediator, and researcher with a strong focus on Alternative Justice Systems. As a recent law graduate with a nano degree in software engineering, I bring a multidisciplinary approach to the mediation arena. I have contributed to Access Law Kenya, specializing in human rights advocacy, and participated in the International Mau Mau Conference, showcasing my expertise in historical research. My diverse internship experience, including roles at UN-Habitat, and Community Health Kenya, highlights my commitment towards diversifying impactful legal initiatives. My vision for joining the World Mediation Organization is to contribute to global peacebuilding efforts by leveraging my legal expertise and certified mediation skills to resolve conflicts effectively and equitably. I am passionate about fostering a dialogue that bridges cultural and social divides, informed by my experiences in human rights advocacy, legal research, and community development. Through the WMO, I aim to collaborate with diverse professionals to advance innovative approaches to conflict resolution, promote sustainable solutions, and create spaces for transformative conversations that empower individuals and communities worldwide.

This Post Has 2 Comments

  1. Zachariah Winkler

    Like any technology, AI can be used for both good and evil. Unfortunately, those in power tend to lean towards the latter. As fears of authoritarian surveillance-states grow, I fear the global south will become the first testing grounds for AI overseers flagging individuals as “radicals” for persecution. Very thought-provoking article.

    1. Wendo Githaka

      I agree. The Global South can, however, counter such facilitations through stronger governance frameworks. There is hope.
