Is There a Place for AI in Peace and Conflict Resolution?

Photo Credit: AFPC


Artificial Intelligence (AI) can influence the speed, scale and complexity of decision-making in peace and conflict settings. AI and other forms of computational support are already being used in war to support threat detection and target identification, and to manage autonomous weapon systems such as drones. Less is known about the potential and risks of using AI in peacemaking, peacekeeping and conflict resolution.

AI is especially suited to processing large amounts of data and synthesising information in response to user queries. It can also generate scenarios, assess probabilities and even make policy recommendations. However, users should understand that AI tools are neither neutral nor designed to provide the most accurate or comprehensive information available.

The AI tools that are commercially available have been developed by private-sector firms in the North Atlantic region. Most are based on Large Language Models (LLMs) that have been trained on data that reflect the political, economic and cultural biases and prejudices of the industrialised societies in which they were developed. This is why the Global Digital Compact was adopted as part of the Pact for the Future in 2024, with a particular focus on closing the digital divide and ensuring that all states and stakeholders participate in the governance, development and use of artificial intelligence. Similarly, the African Union recognises the potential and risks of AI in the area of peace and security. Among its initiatives, it has established an Advisory Group on Artificial Intelligence to help inform how it can make use of AI in ways that are reliable and inclusive.

The goal of AI is not to benefit humanity, or even to provide users with the best possible evidence-based information. Its primary purpose is to generate profit for its developers and owners. Processing time is expensive, and when using the free versions of AI tools, it is important to understand that the algorithm is optimised to deliver responses using minimal processing power and limited data. The result is a product designed to encourage continued use rather than accuracy. Users should therefore avoid assuming that the information provided is quality assured or evidence-based. Instead, they should understand that they are using a product that has drawn on a thin layer of unverified data to produce outputs that appear credible, but are not always reliable.


What most of us find impressive is the speed with which AI is able to put together an answer to a question about a complex issue. For example, when I asked one commonly used AI tool if there is a place for AI in peace and conflict resolution, it responded within seconds with an impressively structured answer listing potential uses of AI, and ended with a balanced conclusion cautioning that, whilst AI can play a meaningful and multifaceted role in peace and conflict resolution, its use also raises important ethical and practical challenges. In contrast, it took me a few hours to write this blog to answer the same question.

One difference is that whilst I took care to present a well-reasoned argument based on the best possible peer-reviewed research, the AI produced an output that was well structured but not transparent about its sources. When asked about the data on which its answer was based, the AI tool explained that the answer drew on general knowledge and a synthesis of a wide range of sources, as recent as 2024, used during its training. It then offered to do a live web search and, with further prompting, generated a list of the 10 most cited articles on the topic. The lesson is thus to critically engage with the AI tool until one has clarity about the validity of the data.

More concerning than AI’s lack of transparency about its sources is a growing body of evidence that AI tools sometimes generate false or imaginary data, a phenomenon known as ‘hallucination.’ When Tobias Ide asked AI tools for academic sources related to a climate security topic, the answers not only contained errors; the AI tools also invented non-existent articles and fabricated authors. Florian Krampe argues that the problem lies in our tendency to think of AI as a brilliant librarian, when in fact it behaves more like a novelist. It produces a compelling and plausible-sounding story, but it has no inherent commitment to factual accuracy. This is an important warning. As users, we need to understand the added value and shortcomings of the tools we are using. We need to understand that AI tools are not designed to be truthful; they are designed to synthesise data. They do not take the validity of the data into consideration unless prompted to do so. In fact, they may even fabricate data to fit a pattern without being transparent about it.

This is especially concerning when using AI in the peace and conflict resolution field, where trust is a critically important ingredient and where mistakes can cost lives. The lessons outlined earlier suggest that AI’s role in peace and conflict resolution should be to enhance either the process or the information, but it should never replace human decision-making and human interaction. An AI tool can, for example, support a peace process by enabling a much larger number of people to engage relatively safely in a dialogue process using a digital platform. However, whilst such tools can augment a dialogue process, they can never replace the importance of human interaction, relationships and trust-building. We cannot automate the role human relationships play in peace processes. This distinction between enhancement and automation is critical for understanding the potential and risks of using AI in peace and conflict resolution.


Yes, there is a role for AI in peace and conflict resolution. AI is potentially excellent at synthesising large amounts of data, for example for early warning and foresight, remote surveillance of ceasefires, and detecting misinformation or hate speech. However, it is essential to critically assess and quality-control the sources, the information selected and the results generated by AI tools, while remaining aware of their limitations. Developing AI literacy among peacebuilders and policymakers is equally important to ensure that these technologies are used in the most responsible manner. One effective way to achieve this is to employ adaptive peacebuilding and anticipatory governance approaches, applying AI in ways that are sensitive to context, responsive to changing dynamics, and continuously informed by feedback from those directly involved in the process. In this way we can ensure that humans remain at the centre of, and in control of, all aspects of the peace process.

Cedric de Coning is a senior advisor for ACCORD and a research professor at the Norwegian Institute of International Affairs (NUPI). He is a principal investigator in the peace and security cluster of the Norwegian Center for Trustworthy AI.

Article by:

Cedric de Coning
Senior Advisor and Chief Editor of the COVID-19 Conflict & Resilience Monitor

