Implications of Emerging Technologies on Peace and Security in Africa

Emerging technologies pose new challenges for peace and security in Africa.

Africa Embracing the Fourth Industrial Revolution

Like other continents, Africa has embraced the so-called Fourth Industrial Revolution. Emerging technologies are defined as those that are radical or novel and may have disruptive effects both in the sectors where they are deployed and in society at large.[1] In Rwanda, for example, drones have improved healthcare delivery by transporting blood to remote areas, thus helping to ensure the security of the people. Some of these emerging technologies are still being researched and tested; others have already been deployed, including mixed reality (which merges the virtual and the real world), Augmented Reality (AR) and Virtual Reality (VR), 5G, Artificial Intelligence (AI), and blockchain, among others. Africa is also engaged in technological research, development, and deployment. Some countries on the continent, such as Nigeria and Ethiopia, already host thriving AI hubs. Google has its own AI hub in Ghana, and the United Nations (UN) has an AI centre, the UN Global Pulse lab, in Kampala.

Google has an Artificial Intelligence (AI) hub in Ghana. Cristina Aldehuela/AFP via Getty Images

Unfortunately, development and success come with challenges. Worldwide, we have seen the destructive roles such technologies can play and their negative impact on people. For example, AI-powered technologies have been used to persecute, surveil, and monitor minorities (including China's use of AI to target the Uyghur community), to target specific groups in disinformation operations during elections, and in everyday policing. The security literature has started to explore the implications of these technologies at the level of the state and within institutions, but less so among the population at large. Despite a growing but still scarce body of literature on emerging technologies in relation to peace and security,[2] very little is known about their implications for peace and conflict dynamics, and even less so in the global South, and more specifically in Africa. This article explores significant trends and the implications of emerging technologies for peace and conflict in Africa.

Community Conflicts and Emerging Technologies

As in other parts of the world, social media plays an important role in election dynamics in Africa. Some countries, such as Nigeria, Kenya, Madagascar and Uganda, have experienced disinformation operations instigated by states such as Russia (for example, the campaigns led by Russia's Private Military Company (PMC), the Wagner Group)[3] or by non-state and elusive actors.[4] Some of these operations succeeded in creating turmoil and unrest, raising anxiety and fear among civilians and eroding trust between the population and the authorities. During the 2017 Kenyan elections, people fell victim to a micro-targeting campaign launched by Cambridge Analytica, which targeted Kenyans using their private data (relating to ethnicity, gender, religion, and age). People were exposed to horrific messages, including manipulated accounts of past violence in the country and fearmongering about a future in which Raila Odinga (the opposition candidate) would annihilate certain tribes. Such occurrences can have significant psychological effects on a population as it becomes entrenched in fear and suspicion.

Other cases show the major role of disinformation or misinformation in situations involving killing and community conflict. Where it is difficult to trace the individuals or groups behind such operations, the distribution of content such as images, memes, videos, and even voice messages can incite high levels of violence between communities. In Nigeria, images of corpses in mass graves were used to fuel animosity between Fulani Muslims and Berom Christians, which resulted in violence and killing. Other situations have heightened animosity between communities or among conflicting parties. Recently, in Ethiopia, there was a series of misinformation cases relating to the Tigray conflict in which people shared fabricated content built from images and videos of other conflicts (for example, Nagorno-Karabakh), which could further fuel violence among the conflicting parties.

In Africa, there has been an increase both in the number of social media users and in the roles AI plays on these platforms. In recent years, there has been an alarming rise in destructive AI technologies using a variety of techniques. For example, deepfake technologies employing deep learning create synthetic media, including videos and voices. Other techniques use Generative Adversarial Networks (GANs) to manipulate images, videos, and sounds and superimpose them onto source files, altering the latter in very subtle ways. Deep learning also underpins GPT-3, an autoregressive language model that can generate texts independently. The results are surprising, as many of these texts appear to be written by humans, but they are not.
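To make this concrete, here is a minimal sketch of synthetic text generation with an autoregressive language model. It assumes the Hugging Face transformers library and uses the small, openly available GPT-2 model as a stand-in for GPT-3 (which is not openly accessible); the prompt is invented for illustration.

```python
# Minimal sketch: generating synthetic text with an autoregressive
# language model. GPT-2 stands in here for GPT-3, which is not
# openly available; the prompt is invented for illustration.
from transformers import pipeline

# Load a small, freely available pretrained model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt token by token, each choice
# conditioned on everything generated so far.
prompt = "Reports from the region suggest that"
samples = generator(
    prompt,
    max_length=60,           # stop after roughly 60 tokens
    num_return_sequences=3,  # draw three different continuations
    do_sample=True,          # sample rather than always pick the top token
)

for i, sample in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```

Even this small model produces fluent continuations; larger models are correspondingly harder to distinguish from human writing, which is precisely what makes them attractive for disinformation.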

Fake images are threats in the digital world. Innocent civilians are targeted and are unaware of the impacts of their actions when (re)sharing such content. Reuters/Afolabi Sotunde

We are now hearing more and more about selective editing and shallow or cheap fakes: ‘A “deepfake” is a video that has been altered through some form of machine learning to “hybridize or generate human bodies and faces,” whereas a “cheap fake” is an AV [audiovisual] manipulation created with cheaper, more accessible software (or none at all).’[5] All these fakes are now real threats in the digital world. Malevolent actors in Africa increasingly use them to target innocent civilians, who are unaware of the possible impacts of their actions when (re)sharing this content. They share these fakes with friends, colleagues and loved ones without checking their veracity or origin. Very often, a person will believe content is true because it came from a trusted individual, and may thus automatically reshare it. Some people are not aware that such content was created to achieve particular aims, such as the instigation of collective violence. In Gabon in 2018, a deepfake video of President Ali Bongo was ‘cited as the trigger for an unsuccessful coup by the Gabonese military’.[6]

The power of algorithms can also be seen on the social media platforms civilians use in their daily lives. These platforms all use algorithms that create addictive behaviours, echo chambers, and new dynamics of trust among users. For example, some platforms suggest videos to watch (such as autoplay on YouTube) and groups to join (on Facebook), or recommend popular and trending articles that might contain fake news (on Twitter). This amplifies extremist rhetoric, violence, hatred and discrimination among users, both online and in the real world. The more people repeatedly view, share and watch such content, the more deeply these ideas become anchored in their minds.
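The feedback loop behind such amplification can be illustrated with a toy simulation. The sketch below is a deliberately stylised model, not any platform's actual algorithm: it assumes a small catalogue of items, a slight engagement advantage for 'extreme' content, and a recommender that weights its suggestions by accumulated engagement.

```python
# Toy simulation of an engagement-driven recommender feedback loop.
# Stylised assumptions, not any platform's real algorithm: items are
# simply labelled "extreme" or "moderate", and extreme items enjoy a
# small engagement advantage.
import random

random.seed(42)

# 50 items, of which every fifth (20%) is "extreme".
items = [{"id": i, "extreme": i % 5 == 0, "score": 1.0} for i in range(50)]

def engaged(item):
    # Assumption: provocative content is clicked slightly more often.
    base = 0.55 if item["extreme"] else 0.45
    return random.random() < base

for _ in range(10_000):
    # Recommend proportionally to accumulated engagement score ...
    item = random.choices(items, weights=[it["score"] for it in items])[0]
    # ... and feed each click back into that score (the feedback loop).
    if engaged(item):
        item["score"] += 1.0

total = sum(it["score"] for it in items)
extreme_share = sum(it["score"] for it in items if it["extreme"]) / total
print(f"Extreme items are 20% of the catalogue but capture "
      f"{extreme_share:.0%} of total engagement.")
```

Even with only a small engagement advantage, the rich-get-richer weighting lets the minority of extreme items capture a disproportionate share of exposure over time.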

Biometric technology is used to verify identity at airports or online, as in the case of banks. John Paul Van Wert/Rank Studios

Symbolic Violence and Biometric Technologies

Biometric technologies have been deployed in various areas, such as border control, predictive policing, banking, health, and identification. Biometric technology is used to verify identity at airports or online, as in the case of banks. Authorities all over the world are using biometric data from video surveillance to fight crime. China has become something of a hegemon in this area, with its biometric technologies spreading all around the world, and it has been argued that China is trying to create global AI norms based on its own values. While some solutions have been built locally, Africa has recently been a beta-testing ground for powerful emerging technologies, tried out on civilians who are completely unaware that they are being subjected to the testing or roll-out of these technologies. Beta testing pilots a technology using human participants. For example, through the Belt and Road Initiative, China set up partnerships with African countries to roll out its facial recognition technologies in elections, schools, and other settings. Civilians are not aware that their private data (images and videos of their faces) have been collected for AI-enhanced technologies.
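At its core, facial recognition comes down to comparing numeric 'embedding' vectors computed from face images. The sketch below illustrates that idea only: the embed() function is a hypothetical stand-in for a trained deep network, and the matching threshold is an invented example.

```python
# Minimal sketch of how biometric face verification typically works:
# two face images are mapped to numeric "embedding" vectors and the
# vectors are compared. The embed() stub is hypothetical -- real
# systems run the image through a trained deep network.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: derive a deterministic 128-dim unit vector
    # from the pixel data, so identical images get identical embeddings.
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def same_person(img_a, img_b, threshold=0.6):
    # Cosine similarity between embeddings; above the (invented)
    # threshold the system declares a match. In practice this threshold
    # is a policy choice and a common source of false matches on
    # under-represented groups.
    a, b = embed(img_a), embed(img_b)
    return float(a @ b) >= threshold

# Toy "images": identical pixel data yields identical embeddings here.
img1 = np.ones((64, 64))
img2 = np.ones((64, 64))
img3 = np.zeros((64, 64))
print(same_person(img1, img2))  # True  -- same input, same embedding
print(same_person(img1, img3))  # False -- unrelated embeddings
```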

Biometric technologies contribute to situations in which civilians are subjected to symbolic violence, which manifests, for example, through surveillance capitalism and data colonialism. For vulnerable populations, minorities, or people living in areas where technologies are under-regulated, the collection, sale, and storage of biometric data can be qualified as symbolic violence. This violence is not direct or physical; it is born of the asymmetrical power dynamics at play, in which powerful actors (for example, large tech companies, states, the political elite) control these activities. Weak and vulnerable populations have no control over the kind of data collected about them (for example, ethnicity, race, gender), the purposes for which it will be used (such as policing minorities), how long it will be stored, or who will have access to it.

However, symbolic violence can become direct and physical in the hands of authoritarian and ‘thug’ states. Recently, countries like Ghana, Kenya, South Africa, Uganda, and Zimbabwe have been at the centre of an alarming deployment of facial recognition technologies for mass surveillance. Among the most well-known Chinese companies deploying these technologies is Huawei, which has frequently been accused of spying on the African Union (AU). Some of Huawei’s projects in Africa are implemented within the framework of initiatives such as its safe city programmes. In 2018, as part of the Belt and Road Initiative, the Chinese firm CloudWalk Technology concluded a deal with the Zimbabwean state to launch a large-scale facial recognition programme.[7] Smaller companies, such as Transsion, also offer cheap technologies (smartphones) to civilians and deploy facial recognition technologies that collect data from African people without oversight of the companies’ activities.[8] The gathering and selling of such data is an extremely lucrative business, and marketing companies have had no scruples about using these means to micro-target the local population. These technologies threaten people’s privacy and liberty. More importantly, they are a significant threat to people’s security when deployed in authoritarian and police states. An example is the use of facial recognition in Uganda to identify and track opposition politicians during protest movements.[9] In (post)conflict zones, such occurrences threaten democratic processes and peace.

Somali police officers receive drone training in Mogadishu (25 May 2017). Reuters/Feisal Omar

Colonial Governance Practices and Lethal Autonomous Weapons Systems (LAWS)

The concept of the Foucauldian boomerang effect[10] suggests a particular shift in how technologies are produced and deployed. It refers to ‘colonial’ practices (including ‘colonial models of pacification, militarization and control’)[11] and technologies that are developed and deployed in the global South or in conflict zones to target people in those regions, and are then increasingly deployed in America, Europe, and East Asia. For example, police forces in the North now routinely use Israeli drones in policing activities among civilians; this technology was originally deployed to police Palestinians.

Lethal autonomous weapons can identify, select and engage a target without meaningful human control. Lance Cpl. Jeremy Laboy/Released

Despite such a shift, unfortunately, many of the technologies currently used or beta-tested in wars and conflicts (mostly located in the global South) are still researched, prototyped, and developed in the global North, often without stringent regulation. These colonial practices continue unchallenged. In the postcolonial era, we see such practices transferred, copied, learnt, and diffused in the processes behind the deployment of emerging technologies such as AI as a tool to enhance weapon systems. LAWS are a case in point.

According to the Future of Life Institute, ‘Lethal autonomous weapons are weapon systems that can identify, select and engage a target without meaningful human control.’[12] There were allegations that the killing of the Iranian scientist Mohsen Fakhrizadeh employed an AI-powered weapon,[13] and recently, Israel has acknowledged that AI was central to its operations against Hamas in the Gaza Strip during the latest conflict.[14] However, there has not yet been a single major situation in which AI has been used to target innocent civilians in conflict zones in Africa. Nevertheless, the use of AI during the Israeli-Palestinian conflict is a sign of the future of warfare: LAWS will be used in conflicts and wars, and civilians will suffer as a result of the violence.

It is crucial to delve deeper into the governance of AI, where there has recently been a rise in colonial governance practices. The United States (US) and its allies dominate the governance of AI in the military and defence sector. Within this framework, they are only willing to invite other actors whom they deem deserving of a part in endeavours to regulate military AI (for example, technological powerhouses from the global South, such as India).[15] This transatlantic governance platform aims to develop standards, norms, and regulations grounded in so-called ‘western’ and ‘democratic’ values. Unfortunately, the violent and colonial technologies that will be used in warfare will carry these western values with them and will be adopted and used in wars in the global South, including Africa. African governments, which are supposed to protect African people, are excluded from the spaces where military AI is governed. The only platform many of them are part of is the Group of Governmental Experts on Lethal Autonomous Weapons Systems in Geneva.

South African officials conduct an inspection of damaged telecommunication infrastructure in Durban (15 January 2021). The vandalism of towers belonging to major network companies came in the wake of viral social media posts suggesting a link between 5G and the spread of the COVID-19 virus. Arren Stewart/Gallo Images via Getty Images

Invisible Threats and 5G

Emerging technologies can also be invisible threats that affect the peace of civilians. Disinformation, misinformation, and fake news have been rife in Africa over the last few years, and digital threats are diverse, given the multiplicity of emerging technologies being tested and rolled out. Once people feel distrust and fear, they may engage in actions such as protests, riots and conflicts. During the COVID-19 pandemic, many virus conspiracies were spread in the African cybersphere and thereafter disseminated via offline methods, such as rumour.

In Africa, civilians themselves play important roles in securitising the fifth-generation mobile network (5G). 5G is said to provide high speeds, optimum performance and improved connectivity, yet some African people have joined a global anti-5G conspiracy movement. Conspiracy theories surrounding this technology, such as the claim that it spreads the coronavirus, have caused people to view 5G as a threat that must be controlled and destroyed. In the North, this global movement led to arson attacks, and some engineers received physical and verbal threats. An example of this securitisation of technology, and of the implication of 5G in peace and conflict processes in Africa, is the recent influence of a global online conspiracy theory in South Africa: in January 2021, Vodacom and MTN towers were burnt because of conspiracy theories linking the spread of COVID-19 to 5G. When civilians engage in the destruction of public goods, it is a sign of dissatisfaction and fear. This demonstrates the extent to which the population is vulnerable if drastic measures are not put in place to manage and regulate the deployment of emerging technologies.

The United Nations Multidimensional Integrated Stabilization Mission in the Central African Republic (MINUSCA) tests drones in Bangui, Central African Republic, ahead of the Pope’s visit to the country (24 November 2015). UN Photo

Looking Forward: The Future of Emerging Technologies in African Peace and Conflict Processes

There has been a push towards the implementation of initiatives around cyber peace and security.[16] This concept is still being discussed and defined by scholars, practitioners, and actors from the public and private sectors. Such initiatives could revolve around defining norms of responsible state behaviour; control and governance of the internet based on democratic principles; cyber peacekeeping; arms control, including a ban on offensive cyberweapons and on AI-powered drones facilitating signature and personality strikes; and the use of AI in peacebuilding, conflict resolution and mediation. It is also about the fight against cyberwarfare and cyberconflict, mass surveillance and espionage, disinformation operations, and misinformation, among others. The list is long because technologies in the digital world evolve rapidly, while regulation is slow to catch up. Some of these phenomena are already pervasive in certain countries, and malevolent actors are taking advantage of this to sow discord, disturb the peace, and create conflict and war. For example, in the Middle East, and more specifically in Syria, AI helped Russia and the Assad regime organise and amplify disinformation campaigns that drew attention away from the abuses they had committed, using anti-imperialist messages and feigned denunciations of human rights violations.

The problem with emerging technologies used by malevolent actors to create conflict and insecurity is that, despite their supposedly decentralised, emancipatory, and empowering aspects, they facilitate the exploitation of vulnerabilities and very often target what is anchored deep within people: feelings, identities, the historical past, attachment to loved ones, and anger and frustration. Africa is embracing emerging technologies, but people are already paying a high price in the form of community violence and conflict, an erosion of trust between the people and the authorities during elections and the pandemic, online organised crime, and disinformation operations targeting the vulnerable. Extremist groups such as the Islamic State of Iraq and the Levant (ISIS) seem to have found a new home in Africa; they are known to be tech-savvy and to use digital tools for recruitment. This is all only just starting on the continent.

Societies and individuals that are neither equipped nor ready to confront the threats posed by emerging technologies suffer the most. Little is currently known about the roles emerging technologies play in peace and security in Africa, despite their already pervasive presence in people’s everyday lives. There should be more research into how these technologies co-exist and interact with civilians during times of peace and periods of conflict. Recently, organisations deploying peace operations on the continent or involved in peace processes such as mediation (for example, the UN Department of Peace Operations) have been using AI to support their decision-making processes or to analyse data and forecast occurrences of conflict, such as through sentiment analyses of data collected from open-source platforms. Sentiment analysis, a Natural Language Processing (NLP) technique, identifies and classifies opinions and emotions in data collected from platforms such as Facebook, Twitter, YouTube, and websites. The analyses and models derived from this technique can help predict the occurrence of a crisis or conflict. To what extent do such technologies support peace processes? Do they help resolve conflict?
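As a minimal sketch of what such a pipeline involves, the example below classifies the sentiment of a few invented posts, assuming the Hugging Face transformers library and its default pretrained English sentiment model; operational early-warning systems rely on far larger, multilingual, and carefully curated data.

```python
# Minimal sketch of sentiment analysis over social media text.
# Assumes the Hugging Face transformers library and its default
# pretrained sentiment model; the posts are invented examples of the
# kind an early-warning platform might collect.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

posts = [
    "The election results are fair and the process was peaceful.",
    "They are stealing our votes, we must take to the streets!",
    "Road reopened this morning, markets operating normally.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result carries a label (POSITIVE/NEGATIVE) and a confidence
    # score; aggregated over time and region, shifts in this signal can
    # feed simple conflict-risk indicators.
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```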

While recognising the positive outcomes of these emerging technologies, it is crucial to identify the threats they pose to peace and security. Very often, local governments are complicit in the deployment of colonial technologies; an example mentioned earlier is the deployment of facial recognition on the continent. Technology remains under-regulated in African countries, which gives leeway and creates loopholes for colonial actors to operate without oversight, very often with significant support from local governments. Africa must be brought to the fore in international governance platforms dealing with emerging technologies in the fields of peace and security, and it is vital to insist on establishing further local regulations to protect civilians against such threats.

Finally, given Africa’s colonial past, it is also critical to investigate the roles of these technologies in postcolonial settings. It is no secret that big technology companies and powerful nations resort to colonial practices when deploying technologies, such as AI, in Africa. For example, Africa’s youth and technology talents have been drawn into lucrative jobs carrying out misinformation operations at home and abroad. Such operations manipulate various issues, including colonialism and imperialism, and can involve physical violence. Africa continues to be a victim of rivalries between powerful nations such as the US, France and Russia, and the cyber domain has been central to the tactics they employ. The recent disinformation operations which Russia led against France in the Central African Republic (CAR) are proof of this: these campaigns targeted France’s presence in the CAR and led to violent street protests involving the local population. In view of all of this, many issues require further inquiry into the implications of such technologies for peace and security on the continent.

Dr Velomahanina Tahinjanahary Razakamaharavo is a Scientific Collaborator at Université Catholique de Louvain, Louvain-la-Neuve, Belgium.


Endnotes

[1] Rotolo, Daniele, Hicks, Diana and Martin, Ben (2015) ‘What is an Emerging Technology?’, Research Policy, 1 December, 44(10), pp. 1827–1843. Available at: <http://www.sciencedirect.com/science/article/pii/S0048733315001031>

[2] Garcia, Denise (2018) ‘Lethal Artificial Intelligence and Change: The Future of International Peace and Security’, International Studies Review, 1 June, 20(2), pp. 334–41. Available at: <https://doi.org/10.1093/isr/viy029>  

[3] The Wagner Group is a Russian PMC supported by President Vladimir Putin. It has conducted disinformation operations targeting elections in Africa, using strategies including fake Facebook profiles of political parties and news pages, the creation of think tanks and local newspapers, and so on.

[4] Knight, Tessa (2021) ‘Social Media Disinformation Campaign Targets Ugandan Presidential Election’, Daily Maverick, 12 January. Available at: <https://www.dailymaverick.co.za/article/2021-01-12-social-media-disinformation-campaign-targets-ugandan-presidential-election> [Accessed 23 January 2021].

[5] Paris, Britt and Donovan, Joan (2019) ‘Deepfakes and Cheap Fakes’, Data & Society. Available at: <https://datasociety.net/library/deepfakes-and-cheap-fakes> [Accessed 23 January 2021].

[6] Westerlund, Mika (2019) ‘The Emergence of Deepfake Technology: A Review’, Technology Innovation Management Review, 9(11).

[7] Chutel, Lynsey (2018) ‘Zimbabwe Introducing a Mass Facial Recognition Project with Chinese AI Company CloudWalk’, Quartz Africa. Available at: <https://qz.com/africa/1287675/china-is-exporting-facial-recognition-to-africa-ensuring-ai-dominance-through-diversity> [Accessed 23 January 2021].

[8] Hawkins, Amy (2018) ‘Beijing’s Big Brother Tech Needs African Faces’, Foreign Policy. Available at: <https://foreignpolicy.com/2018/07/24/beijings-big-brother-tech-needs-african-faces>

[9] Wilson, Tom and Murgia, Madhumita (2019) ‘Uganda Confirms use of Huawei Facial Recognition Cameras’, The Financial Times, 20 August. Available at: <https://www.ft.com/content/e20580de-c35f-11e9-a8e9-296ca66511c9>

[10] Graham, Stephen (2009) ‘Cities as Battlespace: The New Military Urbanism’, City, 13(4), pp. 383–402. 

[11] Ibid.

[12] Future of Life Institute (n.d.) ‘Lethal Autonomous Weapons Systems’. Available at: <https://futureoflife.org/lethal-autonomous-weapons-systems> [Accessed 23 January 2021].

[13] Kleinman, Zoe (2020) ‘Mohsen Fakhrizadeh: “Machine-gun with AI” Used to Kill Iran Scientist’, BBC News. Available at: <https://www.bbc.com/news/world-middle-east-55214359>

[14] Ahronheim, Anna (2021) ‘Israel’s Operation against Hamas was the World’s First AI war’, The Jerusalem Post, 27 May. Available at: <https://www.jpost.com/arab-israeli-conflict/gaza-news/guardian-of-the-walls-the-first-ai-war-669371>

[15] Freedberg, Sydney (2020) ‘Military AI Coalition of 13 Countries Meets on Ethics’, Breaking Defense, 16 September. Available at: <https://breakingdefense.com/2020/09/13-nations-meet-on-ethics-for-military-ai> [Accessed 23 January 2021].

[16] Inversini, Reto (2020) ‘Cyber Peace: And How it can be Achieved’, The Ethics of Cybersecurity, pp. 259–276. 
