
From Promise to Prospects: Artificial Intelligence and Inclusive Peacemaking

Using digital technologies in strategic communications and outreach

Andreas T. Hirblinger

Andreas T. Hirblinger is a postdoctoral researcher at the Centre on Conflict, Development and Peacebuilding, Graduate Institute of International and Development Studies, Geneva. His research explores the effects of the increasing use of digital technologies by conflict parties, conflict stakeholders and those who aim to make or build peace. 

The Trend Towards Digital Inclusion

There are moments when a change in the environment challenges established approaches and innovation becomes key. The COVID-19 pandemic, which forced large parts of the global workforce – including those who make and build peace – into new ways of meeting and collaborating online, has been such a moment. Of course, many efforts to prevent conflict and make or build peace already relied to some extent on a diverse set of digital technologies, with efforts to promote “ICT4Peace” dating back to the beginning of the century (Hattotuwa 2004). Yet, the core practices of making and building peace continued to be viewed as “human-centred” – and many professionals stuck to the idea that technology should play only a marginal role in facilitating peace processes (Lanz and Eleiba 2018).

However, international efforts to build peace are commonly guided by yet another important “soft” norm, namely that peace processes should be inclusive by taking into account the views and needs of all conflict stakeholders, not only those of powerful conflict parties (Hellmüller 2019). The pandemic seems to have tilted the balance between these two principles. Today, there is a considerable number of cases in which mediators employ digital technologies to seek more broad-based participation, for example by running consultations on messaging apps or by developing new crowdsourcing tools to collect data about rumours.

In fact, “digital inclusion” can serve a broad array of strategic purposes – from strengthening the legitimacy of processes and outcomes, to empowering particular stakeholders, to protecting vulnerable groups (Hirblinger 2020). At the same time, there are growing concerns about the new exclusions and hierarchies that result from this reliance on digital and internet-based technologies. This is especially true in conflict contexts characterised by limited connectivity and digital literacy, which make it difficult or impossible for many to make their voice heard.

Artificial Intelligence and Conflict Analysis

Digital technologies are vehicles to gather and share information about what those affected by conflict want or need, thus bringing their “voice” into the conversation. Yet, they also offer new opportunities to automate conflict analysis through Artificial Intelligence (AI) (Höne 2019). This potential is increasingly explored for tasks that require processing large amounts of data – such as analysing (social) media content to understand a population’s perceptions or preferences, or calculating conflict risks emerging from climate change. AI-based technologies can make conflict analysis more efficient and afford new methods for dealing with the vast amounts of data that are now produced in the context of armed conflicts and peace processes – partly as a result of efforts to enhance participation through technology. Despite this potential, the use of AI-based methods may stand in tension with the goal of making peace processes more inclusive. For instance, AI enables new approaches to governing populations without involving them in democratic processes (Helbing et al. 2019). However, political decisions produced by AI systems tend to be perceived as illegitimate (Starke and Lünich 2020) – and the same is likely the case for conflict resolution options.

It is important to note that AI methods cannot produce meaningful results without keeping humans “in the loop” (Wang 2019). Where AI-based analysis is used without participation, it may also produce poor outputs. Many of the available methods, such as sentiment analysis, were developed by private sector companies to monetize data extracted, for example, from social media. These methods do not require the active participation of users: they do not collect data that users intentionally provide to inform a process, but data that users produce in the course of their everyday online interactions. Such data should not be mistaken for “voice”: while we may have agreed to social media platforms collecting and using our data by accepting their Terms of Service, we remain unaware of when and how exactly this data is used. We do not provide this data intentionally; it is used indirectly to produce correlations between what we do and what we want.

The limitations of this approach become visible, for instance, when we are confronted with targeted online advertisements for products that don’t quite fit what we want or need. Of course, those who use digital technology in peacemaking are unlikely to experiment with suggesting peace agreement provisions based on user preferences inferred from Facebook posts or Tweets. In fact, the most promising explorations of AI for conflict analysis combine active participation, for instance through online focus groups, with AI tools such as sentiment analysis (Warrell 2020).
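To make this concrete, the following is a minimal sketch of the automated half of such a pairing: off-the-shelf sentiment analysis applied to responses that participants have actively and knowingly contributed, for instance in an online focus group. It assumes the open-source Hugging Face transformers library; the sample responses are invented for illustration.

```python
# Minimal sketch: sentiment analysis over responses gathered through
# active participation (e.g., an online focus group), not scraped data.
# Assumes the open-source `transformers` library; the sample responses
# below are invented for illustration.
from transformers import pipeline

# A general-purpose sentiment model; a real deployment would need a
# model vetted for the languages and domain of the peace process.
classifier = pipeline("sentiment-analysis")

focus_group_responses = [
    "The ceasefire has made daily life in our town noticeably safer.",
    "We were never consulted about the resettlement plan.",
    "Aid distribution favours one community over the other.",
]

for response, result in zip(focus_group_responses,
                            classifier(focus_group_responses)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {response}")
```

The point of such a pairing is that the automated step only summarises what participants have chosen to say, rather than substituting inferred preferences for voice.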

Perils: Bias, Politics and Complexity

Yet, there are further challenges in reconciling the analytical tools offered on the digital marketplace with the requirements of peacemaking. For instance, machine learning struggles to make sense of semantically complex text, such as opinions and arguments. Existing tools and methods are strong at detecting patterns in relatively simple data formats but often cannot adequately represent the complexity of peace processes and the plurality of conflict party perspectives on them. They also often produce results with relatively low reliability.

We also know that algorithmic models developed through machine learning may discriminate against particular population groups by rendering them more or less visible in the analysis (Alake 2020). Moreover, voice is potentially expressed across a diversity of languages, but existing Natural Language Processing (NLP) models only work well in a few – thus making less visible those parts of the population that can only participate in local languages.
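One way this exclusion can at least be made measurable: before running a model that only handles certain languages, detect the language of each submission and count how many voices would be silently dropped. The sketch below assumes the open-source langdetect package; the sample messages are invented for illustration.

```python
# Sketch: quantify how much input an English-only NLP model would
# silently exclude. Uses the open-source `langdetect` package; note
# that detection on very short texts is itself unreliable. The sample
# messages are invented for illustration.
from collections import Counter
from langdetect import detect

messages = [
    "The market reopened after the local agreement.",
    "Les déplacés ne peuvent toujours pas rentrer chez eux.",
    "Mazungumzo ya amani yameanza tena mjini.",
    "لا يزال الطريق إلى المدينة مغلقًا",
]

counts = Counter(detect(m) for m in messages)
supported = {"en"}  # languages the downstream model actually handles
excluded = sum(n for lang, n in counts.items() if lang not in supported)

print(counts)
print(f"{excluded}/{len(messages)} messages would be invisible "
      "to an English-only analysis")
```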

We also need to be concerned with how the choice of AI models and the availability of data limit the range of questions that can be asked and answered. While big data sets now exist on armed conflict events, such as battle deaths, it is often difficult to obtain sufficient amounts of data on more intricate, sensitive or informal properties of a conflict, such as the population’s perceptions of conflict parties or gender-based violence.

Finally, research on the use of data for political decision-making, including data generated with the help of AI, points to the risk of data becoming politicized, particularly in contexts characterized by disasters or violence (Burns 2018). This means that we should think about how mediators can produce and use AI outputs in ways that are accepted by the conflict parties and can be put to constructive use.

Prospects for Inclusive Peacemaking AI

This means that AI has the potential to undermine participatory processes. But even if it does not, it will likely change how we make sense of armed conflicts and how we go about developing options for conflict resolution. Of course, the effects of technology on the dynamics of peacebuilding are not set in stone (or in code, for that matter). We know that technologies used in peacebuilding co-evolve with the particular considerations, agendas and objectives to which they are applied. As a result, there are avenues for making AI-based technologies fit the approaches and normative commitments of those who intend to use them. There are three ways in which the peace mediation community should think about making the use of AI-based technologies more participatory:

Inclusive AI-Design

Most AI-based methods relevant for conflict analysis will be derived from existing applications developed outside the field of peace mediation. The availability of these methods determines the scope of possible applications, such as sentiment analysis or object recognition, and therefore potentially shapes our understanding of what conflict analysis is and how it can be done. It is crucial to counter this trend with a demand-driven development of methods. This may be achieved, among other ways, by involving insider mediators in design processes, so that methods are tailored to a given peacemaking context. For instance, in a context such as South Sudan, where local conflicts over natural resources and cattle raiding play an important role in conflict dynamics, could we develop tools that capture such dynamics, to understand their relationship with environmental factors and events in the formal peace process?
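As a purely hypothetical sketch of what such a demand-driven starting point might look like, consider a simple keyword-based tagger whose event categories and vocabularies would be co-designed with insider mediators rather than inherited from generic models. All categories and keywords below are invented for illustration.

```python
# Hypothetical sketch: a first iteration of a demand-driven tagger.
# The event categories and keyword lists are invented here; in a real
# design process they would be defined with insider mediators who know
# which local dynamics matter.
EVENT_CATEGORIES = {
    "cattle_raiding": ["cattle", "raid", "livestock", "herd"],
    "resource_dispute": ["grazing", "water point", "borehole", "farmland"],
}

def tag_report(text: str) -> list[str]:
    """Return the locally defined categories a field report touches on."""
    lowered = text.lower()
    return [
        category
        for category, keywords in EVENT_CATEGORIES.items()
        if any(keyword in lowered for keyword in keywords)
    ]

print(tag_report("Armed youth raided cattle near the dry-season grazing land."))
# -> ['cattle_raiding', 'resource_dispute']
```

A keyword list is obviously crude, but it makes the design choices legible: anyone in the process can read, question and amend what the tool counts as a relevant event.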

Inclusive Data Collection

A related challenge pertains to the collection of data used both for the training of algorithms and for the actual machine-supported conflict analysis. Efforts to develop community-based indicators have demonstrated new ways to determine the risks and likelihood of violence based on local perceptions and may provide insights into how to choose relevant indicators and data sources (Firchow 2018). Once these are identified, it is equally important to make sure that the data collection does not go unnoticed. Where conflict-affected populations are involved in the provision of data, such as through surveys or crowdsourcing platforms, they receive the important signal that their voice is being “heard”. This may raise expectations about change, which need to be carefully managed. Yet, inclusive approaches to data collection also make sure that populations stay engaged and that they feel that they are part of the political change that may result from the peace process. They make data collection part of larger social and political change processes that actively involve those affected by conflict.
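A minimal sketch of how such community-based indicators might be aggregated into a reviewable signal follows. The indicator names, scales and responses are invented for illustration, in the spirit of the community-defined measures described by Firchow (2018).

```python
# Sketch: aggregating community-defined indicators from one survey
# round into a simple, reviewable summary. Indicator names, the 0-4
# scale and the responses are invented for illustration; in practice
# the indicators would be chosen by the communities themselves.
from statistics import mean

# Each respondent scores locally chosen indicators from 0 (worst) to 4 (best).
survey_round = [
    {"children_walk_to_school_safely": 3, "night_travel_between_villages": 1},
    {"children_walk_to_school_safely": 2, "night_travel_between_villages": 0},
    {"children_walk_to_school_safely": 3, "night_travel_between_villages": 2},
]

for indicator in survey_round[0]:
    average = mean(respondent[indicator] for respondent in survey_round)
    print(f"{indicator}: {average:.1f} / 4")
```

Keeping the aggregation this simple is itself a design choice: communities who provided the scores can see exactly how their answers become the headline figure.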

Inclusive Data Analysis

Finally, once mediators work with machine-generated data, conflict parties and other stakeholders may question the results of the analysis and indeed the legitimacy of the methods behind it. This risk is heightened by the fact that AI methods are often difficult for a lay audience to understand, and some remain black-boxed even for those who design and implement them. Conflict party representatives might also perceive the use of machine-generated data as a threat to their role as primary representatives of their constituents’ interests. The relevance and effectiveness of AI-based methods therefore depends on the parties’ constructive engagement with them. This could be achieved through participatory approaches to data analysis, in which representatives of conflict parties are involved in the review of machine operations and their outputs. Doing so requires transparent and explainable inference methods that help us understand how AI-based tools arrive at certain conclusions. Finally, it will be important to make data actionable and increase the chances that it will actually support the search for a political settlement. For instance, in joint data review workshops, mediators, conflict parties and other stakeholders could jointly analyse the data and discuss its implications and the possible way forward.
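As one illustration of what “transparent and explainable inference” could mean in practice, the sketch below uses a deliberately simple linear model whose weights can be inspected and contested in a joint review session. The indicators and data are invented for illustration, and a real system would require agreed data sources and proper validation.

```python
# Sketch: a deliberately transparent risk model whose reasoning can be
# put on the table in a joint data-review workshop. Features, data and
# labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["hate_speech_volume", "displacement_reports", "market_disruptions"]
X = np.array([
    [0.1, 0.0, 0.2],
    [0.7, 0.4, 0.6],
    [0.2, 0.1, 0.1],
    [0.9, 0.8, 0.7],
    [0.3, 0.2, 0.4],
    [0.8, 0.6, 0.9],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = escalation observed the following month

model = LogisticRegression().fit(X, y)

# The coefficients show how each indicator pushes the risk estimate up
# or down - something conflict parties can contest or contextualise.
for name, coefficient in zip(features, model.coef_[0]):
    print(f"{name}: {coefficient:+.2f}")
```

Unlike a black-boxed model, every number this sketch produces can be traced back to an indicator that the parties have agreed to measure, which is precisely what a joint review workshop would need.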

There is new momentum behind the use of digital technologies to prevent conflict and to make or build peace. Artificial Intelligence, in particular, is taking on a more prominent role, not least because it offers the promise of effectively analysing the large amounts of data produced in the context of peace processes. However, AI can also be employed in ways that reinforce exclusion, disempower those affected by conflict, and produce results with little relevance. Balancing the promises and perils of AI may be achieved through inclusive AI design, as well as inclusive data collection and data analysis.



REFERENCES

  • Alake, Richmond. 2020. ‘Algorithm Bias In Artificial Intelligence Needs To Be Discussed (And Addressed)’. Medium. 28 April 2020. https://towardsdatascience.com/algorithm-bias-in-artificial-intelligence-needs-to-be-discussed-and-….
  • Burns, Ryan. 2018. ‘Datafying Disaster: Institutional Framings of Data Production Following Superstorm Sandy’. Annals of the American Association of Geographers 108 (2): 569–78. https://doi.org/10.1080/24694452.2017.1402673.
  • Firchow, Pamina. 2018. Reclaiming Everyday Peace: Local Voices in Measurement and Evaluation after War. Cambridge; New York: Cambridge University Press.
  • Hattotuwa, Sanjana. 2004. ‘Untying the Gordian Knot: ICT for Conflict Transformation and Peacebuilding’. Geneva: ICT4Peace Foundation.
  • Helbing, Dirk, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter. 2019. ‘Will Democracy Survive Big Data and Artificial Intelligence?’ In Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution, edited by Dirk Helbing, 73–98. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-90869-4_7.
  • Hellmüller, Sara. 2019. ‘Beyond Buzzwords: Civil Society Inclusion in Mediation’. In Conflict Intervention and Transformation: Theory and Practice, edited by Ho-Won Jeong, 47–63. Maryland: Rowman & Littlefield.
  • Hirblinger, Andreas T. 2020. ‘Digital Inclusion in Mediated Peace Processes: How Technology Can Enhance Participation’. https://www.usip.org/publications/2020/09/digital-inclusion-mediated-peace-processes-how-technology….
  • Höne, Katharina. 2019. ‘Mediation and Artificial Intelligence: Notes on the Future of International Conflict Resolution’. Geneva: DiploFoundation. https://www.diplomacy.edu/sites/default/files/Mediation_and_AI.pdf.
  • Lanz, David, and Ahmed Eleiba. 2018. ‘The Good, the Bad and the Ugly: Social Media and Peace Mediation’. Information Systems Frontiers 20 (3): 419–23.
  • Starke, Christopher, and Marco Lünich. 2020. ‘Artificial Intelligence for Political Decision-Making in the European Union: Effects on Citizens’ Perceptions of Input, Throughput, and Output Legitimacy’. Data & Policy 2. https://doi.org/10.1017/dap.2020.19.
  • Wang, Ge. 2019. ‘Humans in the Loop: The Design of Interactive AI Systems’. Human-Centered Artificial Intelligence. Stanford University. 2019. https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems.
  • Warrell, Helen. 2020. ‘UN Tries Mass Polling to Build Peace Deals’. Financial Times, 20 February 2020. https://www.ft.com/content/9da969d8-518e-11ea-8841-482eed0038b1.