OpenAlex · Updated hourly · Last updated: 21 Mar 2026, 07:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Editorial: The future of research on artificial intelligence in conflict management

2026 · 1 citation · International Journal of Conflict Management · Open Access

1 citation · 1 author · 2026

Abstract

The unprecedented development and rapid growth in popularity of ChatGPT models have led the world to recognize AI’s importance, relevance and utility. Driven by the confluence of increased big data availability, advances in natural language processing (NLP) and enhanced computing power, AI has generated levels of excitement and investment that few technological innovations have achieved.

Scholars in many fields are beginning to integrate AI to assist and augment their research. Within the field of conflict management, research on AI is emerging. As is often the case, researchers trained in one discipline are reluctant to adopt and integrate new paradigms from other fields. However, AI has great potential to drive significant advances in our understanding of conflict management. Therefore, this editorial helps to bridge the gap between AI and conflict management research by providing an explication of key AI paradigms and illustrating how they can be usefully adopted and integrated into our field. The result is a vision for a better future in which conflict management scholarship advances more rapidly and in more profound and sophisticated ways.

Based on a careful review of the literature in AI, conflict management and related fields, five AI paradigms were identified: Machine Learning, Generative AI, AI Agents, Agentic AI and Physical AI (Hofmann and Kruhse-Lehtonen, 2025). Each of these five paradigms presents a critical area for integrating AI into conflict management research. Each paradigm is discussed below, including definitions, an overview of mechanisms, examples and future research questions.

Machine Learning (ML) is a process by which systems analyze data and learn how to improve (Cummins and Jensen, 2024). ML goes beyond aggregating data; it also learns from data. This paradigm differs from prior efforts to create computer-based expert systems designed by humans to facilitate decision-making and prediction.
Instead, ML identifies and learns patterns in the data to make predictions. ML can use unsupervised, supervised or reinforcement learning. After learning, ML enables data-driven prediction. Instead of being given rules of reasoning and knowledge by humans, the system is instructed to discover its own rules by identifying patterns and correlations in large data sets. It often creates its own rules to make predictions that can be applied to new cases. ML has become increasingly valuable and important because of the growing accessibility of big data, large language models (LLMs), advances in NLP, the sophistication of software and the rapid growth of computing speed and capacity.

For example, some recent research indicates that ML can predict whether a conflict will be violent or non-violent (Hundt et al., 2025; Lata and Garg, 2023). However, we should recognize that ML may not lead to the more sophisticated and nuanced judgments, creativity and human intuitions that experienced conflict management scholars and practitioners possess. We cannot always rely on ML to properly analyze and assess conflicts. Therefore, future research should examine the risks of using ML for conflict management while integrating human insight and oversight (Cummins and Jensen, 2024).

This leads to several new and interesting research questions. For example, ML uses historical LLM training data to generate rules that summarize historical data on conflicts. Will this impair the identification of potentially novel trajectories and solutions for future conflicts? Can ML techniques accurately predict conflict styles and outcomes in volatile and changing environments? Are there biases built into ML models that could impair the effectiveness of conflict management for some groups or in some contexts?
These are just a few examples of how scholars could integrate the ML paradigm to conduct studies that would generate significant new insights in conflict management.

Generative AI is a process in which systems analyze large data sets to learn and generate new content (Feuerriegel et al., 2024). Using NLP, Generative AI creates new text or images in response to prompts, based on patterns it has learned. Unlike humans or robots, which have access to the real world, Generative AI tends to operate in digital environments. There are many well-known Generative AI applications, such as ChatGPT, Gemini, Claude, Perplexity, Grok and DeepSeek.

Prior research has suggested that in customer relationship management systems, Generative AI can convert verbal conversations into text, analyze the text and provide real-time suggestions for de-escalating conflicts (Hsu and Chaudhary, 2023). Future research could investigate whether Generative AI can effectively assist and augment the analysis of other types of conflicts, match conflict patterns with effective conflict strategies and propose solutions. This could include drafting the text of offers and counter-offers in negotiations. Future research could also examine collaboration with, and the augmentation of, human negotiators by gathering and analyzing vast amounts of relevant data, using that data to predict counterparty reactions and to project multiple, alternative and practical strategic negotiation scenarios. These kinds of complex Generative AI analyses may exceed humans’ capacity to generate in a timely manner. Therefore, can Generative AI augment human negotiators? Research could also examine whether Generative AI chatbots could exhibit structural characteristics that either escalate or reduce conflict intensity, enabling them to facilitate effective conflict management.

AI Agents are systems that do more than analyze data, predict and create new content. AI Agents act, not just generate.
They take action based on the data and what has been learned (Russell and Norvig, 2022). AI Agents not only generate content but also act independently on data, using multiple tools integrated via Application Programming Interfaces (APIs). However, AI Agents execute only tasks authorized within a defined scope.

Thus, unlike Generative AI, AI Agents can collect and generate their own data. AI Agents do not need to wait for user prompts; they can operate independently. AI Agents can then assume one or more of several distinct, increasingly sophisticated roles (Hofmann and Kruhse-Lehtonen, 2025). These roles range from simple to complex and can be labelled Assistant, Analyst, Tasker, Orchestrator and Guardian (Hofmann and Kruhse-Lehtonen, 2025). Assistants perform basic tasks. Analysts evaluate, forecast and recommend. Taskers execute some limited actions via tools or APIs. Orchestrators plan and execute more complex multi-step workflows across a system or delegate them to subagents. Guardians audit, evaluate, monitor and enforce policies.

For example, scholars have investigated agent-based simulations of negotiations, demonstrating the potential usefulness of AI Agents acting as helpful assistants (Lempp, 2020). AI Agents could also schedule people, call meetings and use APIs. AI can provide real-time advice by acting as conflict coaches (Brigg et al., 2025). AI Agents can act as members of negotiation teams (Dennis et al., 2023).

However, several more intriguing research questions go beyond AI Agents acting as assistants. Research could also examine how AI Agents can augment conflict management. For example, can AI Agents act independently as Analysts to accurately identify potential escalation points, classify dispute types and make valid recommendations for conflict strategies?
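The five roles above form an ordered hierarchy from simple to complex, which can be summarized in a short sketch. The numeric ordering and the sample autonomy policy are illustrative assumptions for this sketch, not part of the Hofmann and Kruhse-Lehtonen (2025) taxonomy.

```python
# Hypothetical sketch of the five increasingly sophisticated AI Agent roles,
# ordered from simple to complex. The ordering and the autonomy policy below
# are illustrative assumptions only.
from enum import IntEnum

class AgentRole(IntEnum):
    ASSISTANT = 1     # performs basic tasks
    ANALYST = 2       # evaluates, forecasts and recommends
    TASKER = 3        # executes limited actions via tools or APIs
    ORCHESTRATOR = 4  # plans multi-step workflows, delegates to subagents
    GUARDIAN = 5      # audits, monitors and enforces policies

def may_act_without_prompt(role: AgentRole) -> bool:
    """Illustrative governance policy: only Taskers and above may take
    actions without waiting for a user prompt."""
    return role >= AgentRole.TASKER
```

Encoding the roles this way makes the human-in-the-loop questions raised below concrete: the autonomy boundary is a single, auditable policy function rather than an implicit assumption.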
Using AI Agents could be important because they can work 24/7, perform more complex analyses and complete tasks more rapidly than humans, who may suffer fatigue from ongoing conflicts. Could AI Agents, working as Analysts, evaluate communications from the negotiation counterparty to identify potentially unethical tactics, such as misrepresentation or false expressions of emotion, and suggest how to respond? Also, to what degree should AI Agents, acting as Taskers, be allowed to take actions without direct human-in-the-loop approval? How can these systems be designed to ensure safety? Can customer service AI Agents handle conflicts with customers more effectively, and how much and what types of human intervention are necessary? How should negotiators effectively allocate tasks and responsibilities between humans and AI Agents in negotiations? How can the different roles of AI Agents be most effectively used in conflict management? Can AI Agents, acting as part of negotiation teams, help improve conflict management within and between teams?

There are also important value-based research questions. For example, how can we embed human values, such as ethical reasoning in conflict management situations, into AI Agents? What are the dangers to humans, and what guardrails should be embedded in AI Agents? Could humans choose to use AI Agents to avoid conflicts with other humans? Are AI Agents, because they do not feel emotions, more or less effective at managing conflicts than humans? What measurable governance frameworks and guardrails are needed to enable sufficient levels of trust when deploying AI Agents involved in conflicts?

Agentic AI aggregates multiple, potentially collaborative AI Agents within a system (Nisa et al., 2025). Multiple agents interact and collaborate by sharing context, dividing complex goals into subtasks and dynamically adapting.
In this way, Agentic AI integrates the actions of multiple AI Agents, much as social interactions occur between groups or teams of humans with different types of expertise (Nisa et al., 2025). Agentic AI systems could effectively model Agent negotiator personality, empathy, transparency and competency, thereby affecting negotiation outcomes (Cohen et al., 2025). In more complex negotiations involving multiple parties, there has been some evidence of more efficient conflict resolution when using AI (Aydoğan et al., 2021).

Building on these results, future research could address several intriguing questions about the structure of Agentic AI in conflict management. For example, could one AI Agent, acting as an Orchestrator, structure requests into several tasks assigned to other AI Agents, each specializing in specific areas, such as analyzing competitors’ prices, comparing company prices and predicting the counterparty’s Best Alternative to a Negotiated Agreement (BATNA)? Could Agentic AI facilitate predicting outcomes from various first offers and sequences of offers and counteroffers? Could Agentic AI systems then present these to humans, ask for feedback and refinement, and synthesize the human and AI predictions?

Could Agentic AI augment conflict management, thereby increasing the likelihood of integrative and collaborative agreements? Could AI Agents, acting as Orchestrators of multiple AI Agents, help model complex multi-party disputes with numerous stakeholders? Could trust in Agentic AI systems during negotiations be enhanced by adding an Output Validation Agent that checks responses to prompts before they are delivered to users?

Other research questions could focus on the processes and outcomes of Agentic AI for conflict management. For example, could multiple AI Agents simulate the interactions of various stakeholders’ preferences?
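The Orchestrator structure just described, in which one coordinating agent decomposes a request into specialized subtasks and synthesizes the results, can be outlined as a minimal sketch. All agent names, tasks and returned figures here are hypothetical stand-ins; this is not a real agent framework.

```python
# Hypothetical sketch of the Orchestrator pattern in Agentic AI: one
# coordinating agent delegates a negotiation-preparation request to
# specialist agents and merges their findings. All agents are stand-in
# functions with invented example data.

def competitor_price_agent(request):
    # Specialist: analyze competitors' prices (figures are illustrative).
    return {"competitor_prices": [98.0, 101.5, 99.0]}

def company_price_agent(request):
    # Specialist: report the company's own price position.
    return {"company_price": 100.0}

def batna_agent(request):
    # Specialist: predict the counterparty's Best Alternative to a
    # Negotiated Agreement (BATNA).
    return {"predicted_batna": 97.5}

class Orchestrator:
    def __init__(self, specialists):
        self.specialists = specialists  # name -> specialist agent function

    def handle(self, request):
        # Delegate the request to every specialist and merge their findings
        # into one briefing for the human negotiator.
        findings = {}
        for name, agent in self.specialists.items():
            findings.update(agent(request))
        return findings

orchestrator = Orchestrator({
    "competitors": competitor_price_agent,
    "company": company_price_agent,
    "batna": batna_agent,
})
report = orchestrator.handle("prepare supplier negotiation")
```

An Output Validation Agent of the kind suggested above would slot in as one more step in `handle`, checking `findings` before the report is delivered to the user.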
Could Agentic AI generate possible resolutions that human mediators could use to enhance the effectiveness of third-party conflict resolution? Could Agentic AI be effective as a mediator? To what degree should there be human oversight of Agentic AI actions? Could conflicts between AI Agents systemically escalate or de-escalate, and are such dynamics the same as or different from those in conflicts between humans?

Research could also examine ethical questions about the use of Agentic AI. These questions could focus on disclosure, fairness and social responsibility. For example, if one party in a negotiation is using Agentic AI, should they disclose it to the counterparty? Should parties reveal the AI-predicted BATNAs of the other party? If one party uses Agentic AI and the other does not, does this create an unfair competitive advantage in negotiation, and would it lead to outcomes that favor one side over the other? Could Agentic AI include an AI Agent that specializes in analyzing the impact of negotiations on social issues, such as the achievement of the United Nations Sustainable Development Goals, including peace, justice and strong institutions?

Also, research could consider the scaling of, and the tools within, Agentic AI systems. Suppose an Agentic AI system is structured to facilitate effective negotiations in low-value or low-stakes talks. Can the same Agentic AI structure be effectively scaled to higher-value or higher-stakes negotiations? Are there advantages or disadvantages to combining different layered technologies in the tech stack for managing conflicts with Agentic AI, for example, Google Cloud, Microsoft Azure and Copilot Studio, NLP tools, Python and so forth?

Research could also focus on the effectiveness of going beyond established LLMs.
For example, could Agentic AI facilitate real-time knowledge management by consolidating insights from current news reports and proprietary or paywalled data to support contract negotiations, integrating retrieval-augmented generation with chat functionality? Can user interfaces help humans, AI Agents or Agentic AI in conflict management create and execute negotiation plans, consult multiple agents, review the results and confer with users to refine and improve the process and output?

Physical AI operates directly in the real world, interfaces with it and integrates with digital AI systems. Physical AI can gather information from, and act in, the physical world (Miriyev and Kovač, 2020; Sitti, 2021). Whereas traditional AI operates in the digital realm, Physical AI directly interfaces with the tangible world. These interactions include Physical AI’s ability to continually and spontaneously perceive, learn, remember, decide, act and adapt (Sitti, 2021). Examples of Physical AI include autonomous vehicles, drones, extended reality systems and smart glasses (Yoo et al., 2025).

There has been very little research on the intersection of Physical AI and conflict management. However, many interesting questions remain for future investigation. Some of these research questions focus on processes and outcomes.

Process-focused research could examine the extent to which latency, that is, the time lag between current events, the analysis of a conflict and the provision of recommended conflict strategies, impairs the effectiveness of AI-augmented human conflict management when integrated with Physical AI. Does the de-humanization of conflicts managed with Physical AI increase conflict intensity, why, and what can be done to humanize Physical AI to reduce it?
Can robots interact effectively with humans in conflict situations, or to what extent do humans need to exercise control over machines?

Outcome-focused research could examine whether human–robot interactions are more or less likely to result in conflicts, and why. Can Physical AI perceive and act on human emotions, such as empathy, and express them effectively to humans in conflict situations, thereby improving outcomes? Can embodied agents, that is, human-like robots, use nonverbal behaviors, such as facial expressions and body movements, to facilitate the de-escalation of conflict? Can the speech characteristics (prosody) of embodied agents, for example, inflection, speed, tone and volume, be as effective as, or more effective than, those used by humans in negotiation and conflict management? Can human–robot interactions in conflict situations be designed to act as Guardians to avoid social inequalities and also facilitate social justice and the achievement of sustainable development goals?

This editorial offers a structured framework that can encourage and facilitate innovative future research on AI and conflict management. Five AI paradigms for researching AI and conflict management were defined and compared: ML, Generative AI, AI Agents, Agentic AI and Physical AI. Additionally, this editorial identified roles AI can play in conflict management: Assistant, Analyst, Tasker, Orchestrator and Guardian. Using these paradigms and roles, several intriguing questions for future research were identified. This framework could inspire more insightful and profound research on AI and conflict management in the future.

Topics

Conflict Management and Negotiation · Artificial Intelligence in Healthcare and Education · Innovation, Sustainability, Human-Machine Systems