Imagine this – it’s a week out from trial and you’re meeting with your star witness about the incident report that they created following an accident at work. You ask them to confirm that they created that report immediately after they witnessed the accident and that it contains their contemporaneous recollection of the events. And then they say “oh, I actually just jotted down some notes, scanned them into ChatGPT and then it wrote up a report that I emailed to my boss. I assumed it was accurate.”
Or you’re in court, tendering cash flow records without a witness, seeking to prove the records under the “business records” rule of evidence. Opposing counsel jumps up to object on the basis that the records are stamped as having been generated using AI, and therefore need to be proven by a witness.
These are not just lessons in properly proofing witnesses and preparing your case well before trial; they are also good examples of the kinds of issues we are likely to face when analysing evidence in a world of AI generated content.
AI is useful for producing records that are time consuming or repetitive to create, and for communications where the author might benefit from assistance with spelling, grammar or tone. Businesses outside the insurance industry are already using AI to create business records such as invoices and receipts, cash flow forecasts, bank reconciliations, compliance reports, incident reports, meeting minutes and clinical notes.
The question that arises is not whether these records are admissible as evidence, but what weight a Court will give them if the record produced does not contain the original words of its author.
Pursuant to section 79C of the Evidence Act 1906 (WA) (Evidence Act), a statement in a document is admissible if made by a “qualified person”, or derived from information they provided, or from a device designed to record or measure information. Genuine business records are generally admissible, often without the author needing to be called as a witness. This overcomes the “hearsay rule” but does not resolve how much weight a Court will attach to that record.
But will AI generated records meet the criteria for a “genuine business record”?
Other jurisdictions have taken varying approaches to regulating the use of generative AI in the legal profession. New South Wales (NSW) has taken the strongest stance on the use of generative AI in proceedings. The NSW Courts forbid the use of AI to generate the contents of affidavits, witness statements and character references, but allow it to be used for “preparatory purposes”. The NSW Courts have also implemented a “blanket” prohibition on the use of generative AI (without leave) in the preparation of expert evidence.
However, WA Courts have not yet provided significant commentary or guidance on the use of generative AI. This leaves questions about the admissibility of AI generated documents as evidence largely unexplored.
The Courts have long accepted that business records are admissible evidence, but their weight depends on reliability and the factual circumstances.
So how do AI generated documents change the evidentiary landscape?
Traditional software and devices accepted by Courts and considered under s 79C typically record data exactly as entered, without interpretation. The shift with generative AI lies in the fact that it does not just record information, it can summarise, rephrase or infer meaning. This introduces a layer of interpretation between the raw data and its final output.
The absence of a human author may not prevent admissibility, but it does introduce uncertainty about the source, intent and accuracy of the information – all of which will affect the weight the evidence carries.
Without corroborating evidence or transparency as to how the evidence was created, AI generated documents will be viewed with caution, because the Court cannot be confident that the output accurately reflects the original information entered into the program, or the intent behind it.
Anyone who might receive AI generated content as “evidence” in relation to a claim (eg lawyers, claims consultants) should be aware that the information is only as credible as the audit trail or independent verification that supports it. It will not always be reliable evidence.
Documents may not always be marked as being AI generated. It is therefore increasingly important to ask questions about how records have been generated, to confirm whether the purported author of a record did in fact author the record and can verify that the content is accurate.
The risk posed by AI generated records is therefore the potential lack of credibility of those records. This applies not only to the insurance profession, but to any profession that relies on records for decision making, assessments or reporting.
To mitigate the risks associated with the use of generative AI for record keeping, it is important to consider how records are created, who can verify their accuracy, and whether an audit trail exists.
As AI continues to spread across the legal landscape, its role in our profession is inevitable. However, it is crucial to recognise that AI cannot replace the credibility and judgement of a human witness, or of human created evidence.
Courts in Western Australia are yet to provide definitive guidance on the admissibility of generative AI evidence. We must therefore exercise caution, striking a balance between efficiency on the one hand and the integrity of records and the reliability of the evidence they may provide on the other.
---
This article was written by Chiara Benino, Solicitor, Insurance & Risk.