Gowling WLG | Jonathan Chamberlain

Ireland | United Kingdom

Believe it or not, Industrial Tribunals (as they were then called) were supposed never to have lawyers in them. The idea was there would be quick and simple resolution of workplace disputes on the basis of robust common sense. Fast forward 50 years and it hasn’t quite worked out like that.

The process though is very much supposed to support litigants in person, on both sides – although it’s usually the claimant who is unrepresented.

What it’s very much not designed for though is generative AI. Which is not helping.

Generative AI is really good at drafting communications which look and sound professional. They may even be right, in some places. Great, you might think: now everyone can have a lawyer. Except it’s a really, really bad lawyer: one with tremendous confidence but little understanding, and who doesn’t mind telling outright lies.

It is fast becoming the bane of Respondent practice, i.e. what Gowling does. You can nearly always spot when ChatGPT has been lending a hand. Points tend to come in threes; buzzwords are used; the sense of outraged dignity is almost palpable, save that it is exactly the same as a dozen other examples.

There isn’t though any real understanding of the law or practice. More is always to follow, apparently. It all has to be worked through patiently: by Respondents, their representatives and ultimately by the Tribunals. In Ireland, they appear to have had enough.

First, some basics.

THE GRIEVANCE PROCESS

In order to raise a grievance against an employer in the UK, employees follow their company’s internal grievance procedure, which often includes raising a formal written complaint. This written grievance sets out the factual details of an employee’s complaints and, at times, the relevant law.

Using AI to draft grievance complaints and submissions presents a myriad of issues, not only for the employers who review the submissions but also when the complaints are escalated to tribunal level.

THE DANGERS OF AI-GENERATED GRIEVANCES

AI-drafted grievances differ from their human-produced counterparts, and present employers with the following risks:

  • False Legal References – AI-drafted complaints can contain references to phantom legal cases, or false reports of case outcomes. AI also tends to insert more legal references than a human would, where grievances were previously centred on factual process.
  • Lack of Reality – when grievances are written by an AI tool, as opposed to the employee who experienced the situation firsthand, there is a risk that the description of a situation is exaggerated or detached from reality.
  • Optimism – AI models are trained to provide assistance and ‘helpful’ responses, and employees using them in the grievance process may therefore receive overly optimistic advice on the merits or strategy of their case, which inflates their expectations.
  • Data Sharing – employees using public AI models to draft may be inputting confidential information about their employment into the AI model. This risks releasing the sensitive information to a public network.

GUIDANCE FROM THE IRISH WORKPLACE RELATIONS COMMISSION

A recent case heard before the Irish Workplace Relations Commission (the “WRC”) dealt with this issue head-on, where an employee was believed to have used AI to assist in generating their complaint submission. The approach taken is eminently sensible and practical.

When considering the use of AI itself, the Adjudication Officer found no issue. However, because of the errors in the complainant’s submissions that resulted from AI involvement, the Officer offered the following warning:

“Parties are reminded that their submissions must be relevant and accurate and do not set out to mislead either the other party or the Adjudication Officer.”

The WRC have also set out in their guidance that the use of AI in submissions has the potential to impede cases, by undermining a complainant’s arguments and credibility and by causing delays where corrections must be made.

The WRC recommend that employees:

  • Double check all legal references and content which they’ve included in their complaints – checking that the references exist and are correct;
  • Understand their submissions and are able to fully explain everything included within them;
  • Avoid using confidential, personal or sensitive information when using public AI tools to generate a submission; and,
  • Avoid relying on AI for advice on, or strategies for a case.

So, although AI is permitted as a tool for preparing complaints, employees and employers should heed the above guidance and warnings. This guidance is a valuable reminder that AI should not be used as a substitute for employees preparing and understanding their cases themselves.

I should love some Presidential Guidance along the same lines, and maybe something on the ACAS website too. ChatGPT isn’t always wrong, but the way it’s being used at the moment in UK tribunals and employee dispute resolution generally is most unhelpful in an already overloaded system.

This article first appeared on Lexology.