
The Rise of AI in Dispute Preparation: What Modern Mediators Need to Know

  • Writer: The DRA Team
  • Feb 27
  • 4 min read

Artificial intelligence is no longer a novelty in dispute resolution. Increasingly, parties are turning to AI tools such as ChatGPT, Google Gemini and Microsoft Copilot as part of their dispute preparation, before they ever speak to a mediator or lawyer.


They are asking:

  • “Do I have a strong legal claim?”

  • “How much compensation should I expect?”

  • “What would a judge likely decide?”

  • “How should I negotiate?”


For many, AI is now the first port of call. That shift carries both opportunity and risk for the mediation profession.


Why Parties Are Using AI in Dispute Preparation

From a party’s perspective, AI offers:

  • Immediate answers

  • Low or no cost

  • Perceived neutrality

  • A sense of control


For unrepresented individuals especially, AI feels empowering. It can explain legal terms, suggest negotiation strategies, and provide structured arguments. In theory, this should support informed participation in mediation.


In practice, the reality is more complex.


The Core Risk: The Quality of the Prompt

AI output is only as good as the information it receives.


Most parties:

  • Omit critical facts

  • Present one-sided narratives

  • Fail to recognise legal nuances

  • Do not understand what information is legally relevant


A trained solicitor or mediator instinctively probes:

  • What happened before that?

  • What evidence supports this?

  • What is the other party likely to say?

  • What risks are you overlooking?


AI does not ask follow-up questions unless prompted correctly. It fills gaps confidently. That confidence can create false certainty.


A party may arrive at mediation believing:

  • Their legal position is stronger than it is

  • A particular outcome is “standard”

  • The other side is acting unlawfully

  • Court would deliver a specific award


These expectations may be built on incomplete or misframed prompts.


The Dangers for Mediation


1. Inflated or Distorted Expectations

If a party has asked, “What compensation should I receive for unfair dismissal?” without detailing evidential weaknesses or procedural history, the answer they receive may appear authoritative yet lack essential context.


By the time mediation begins, that AI-generated figure or “range” may feel like a benchmark rather than an untested estimate.


2. Entrenchment

AI often structures arguments persuasively. A party may come armed with:

  • Bullet-pointed legal arguments

  • Draft position statements

  • Predicted court outcomes


While preparation is positive, rigidity is not. Mediation requires flexibility. AI-generated certainty can harden positions prematurely.


3. Misinformation Risk

Large language models do not provide legal advice in the regulated sense. They synthesise patterns from their training data. In doing so, they can:

  • Generalise across jurisdictions

  • Oversimplify complex areas

  • Miss procedural constraints

  • Overstate likely remedies


Without professional oversight, parties may unknowingly rely on flawed guidance.


Consequences for Mediators

Modern mediators must now assume that AI may have shaped:

  • A party’s understanding of their rights

  • Their valuation of the claim

  • Their expectations of settlement

  • Their negotiation tactics


This does not make AI the problem. It makes unexamined reliance the issue.

Ignoring AI’s influence risks:

  • Late-stage breakdowns

  • Surprise shifts in negotiation stance

  • Disillusionment with the process

  • Allegations that mediation “failed” to deliver what was expected


Best Practice for Mediators


1. Ask the Question Early

During intake or pre-mediation calls, consider asking neutrally:

“Have you used any online tools, including AI, to explore your position or possible outcomes?”

This is not accusatory. It is diagnostic.


If the answer is yes, follow up with:

  • What questions did you ask?

  • What assumptions did you include?

  • What conclusions did you draw?


This helps surface hidden anchors.


2. Explore the Underlying Assumptions

If a party references an expected outcome:

  • “Help me understand how you arrived at that figure.”

  • “What factors did you include?”

  • “What might a court consider that hasn’t been discussed?”


You are not challenging them. You are expanding their analysis.


The goal is not to discredit AI. It is to contextualise it.


3. Reality-Check Without Undermining Confidence

Where expectations appear unrealistic:

  • Introduce uncertainty carefully.

  • Use conditional language.

  • Frame mediation as risk management, not prediction.


For example:

“AI tools can be helpful starting points, but outcomes often depend heavily on evidential detail and judicial discretion. Let’s explore both best-case and risk-case scenarios.”

This keeps the process respectful.


4. Encourage Professional Sense-Checking

Lawyers and advisers must also adapt.


Clients increasingly arrive having:

  • Drafted their own legal analysis

  • Estimated damages

  • Researched case law through AI


Professional advisers should:

  • Verify factual assumptions

  • Check jurisdictional accuracy

  • Stress-test predictions

  • Correct overconfidence


Where reliance on AI appears high, mediators may gently encourage parties to seek clarification from their professional advisers.


5. Adjust Process Management

Where AI influence is evident, mediators may need to:

  • Spend more time on expectation alignment

  • Use structured risk analysis exercises

  • Explore BATNA/WATNA more explicitly

  • Break down “likely court outcome” into variables


This slows the process slightly, but prevents derailment later.


Opportunities Within the Challenge

AI is not purely a threat. It can:

  • Improve baseline understanding

  • Help parties articulate issues

  • Reduce informational imbalance

  • Encourage early preparation


Well-informed parties can engage more meaningfully.


The issue is not use — it is unexamined use.


Ethical and Professional Considerations

Mediators are not required to police AI usage. However, professional standards increasingly demand:

  • Awareness of technological influence

  • Competence in managing digital-era risks

  • Clear explanation of mediation’s purpose


Mediation is not a prediction service. It is a facilitated negotiation process grounded in autonomy and risk assessment.


If AI has shifted a party’s mindset from negotiation to validation, the mediator must gently recalibrate that orientation.


Practical Questions for Mediators to Add to Their Toolkit

Consider integrating questions such as:

  • “What research have you done about your position?”

  • “What assumptions are you relying on?”

  • “What would change your assessment?”

  • “If the outcome differed from what you expect, what would that mean for you?”

  • “How certain are you about the legal advice you’ve received?”


These questions surface certainty levels and expose fragile reasoning without confrontation.


The Modern Reality

Non-human input is now a routine part of dispute preparation.

Some parties will have consulted:

  • AI platforms

  • Online legal forums

  • Automated claim calculators

  • Social media advice


The mediator’s role remains the same: facilitate informed, voluntary agreement.


But the pathway has evolved.


Understanding how AI shapes expectations is now part of professional competence. Those who ignore this shift risk increased impasse. Those who adapt can use it as a diagnostic tool to deepen preparation, improve reality testing, and strengthen outcomes.


The question is no longer whether parties are using AI.


It is whether mediators are prepared for the consequences when they do.
