
Hey, Siri…Did I just commit perjury?

17 Sep 2025

Alerts
Insurance & Risk Management

Imagine this – it’s a week out from trial and you’re meeting with your star witness about the incident report that they created following an accident at work. You ask them to confirm that they created that report immediately after they witnessed the accident and that it contains their contemporaneous recollection of the events. And then they say “oh, I actually just jotted down some notes, scanned them into ChatGPT and then it wrote up a report that I emailed to my boss. I assumed it was accurate.”

Or you’re in court, tendering cash flow records without a witness, seeking to prove the records under the “business records” rule of evidence. And opposing counsel jumps up to object on the basis that those records are stamped as having been generated using AI, and therefore need to be proven by a witness.

These scenarios are not just lessons in properly proofing witnesses and preparing your case well before trial; they are also good examples of the kinds of issues we are likely to face when analysing evidence in a world of AI generated content.

AI is useful for producing records that are time consuming or repetitive to prepare, or communications where the author might benefit from assistance with spelling, grammar or tone. Businesses outside of the insurance industry are using AI to create business records such as invoices and receipts, cash flow forecasts, bank reconciliations, compliance reports, incident reports, meeting minutes and clinical notes.

The question that arises is not whether these records are admissible as evidence, but what weight a Court will give them if the record produced is not in a form that contains the original words of the author.

Back to Basics – the Evidence Act 

Pursuant to section 79C of the Evidence Act 1906 (WA) (Evidence Act), a statement in a document is admissible if made by a “qualified person”, or derived from information they provided, or from a device designed to record or measure information. Genuine business records are generally admissible, often without the author needing to be called as a witness. This overcomes the “hearsay rule” but does not resolve how much weight a Court will attach to that record. 

But will AI generated records meet the criteria for a “genuine business record”?

Weight vs Admissibility

Other jurisdictions have taken varying approaches to regulating the use of generative AI in the legal profession. New South Wales (NSW) has taken the strongest stance on the use of generative AI in proceedings. It forbids the use of AI to generate the contents of affidavits, witness statements and character references, but allows AI to be used for “preparatory purposes”. NSW Courts have also implemented a “blanket” proscription on the use of generative AI (without leave) in the preparation of expert evidence.

However, WA Courts have not yet provided significant commentary or guidance on the use of generative AI. This leaves questions about the admissibility of AI generated documents as evidence largely unexplored.  

The Courts have long accepted that business records are admissible evidence, but their weight depends on reliability and the factual circumstances:

  1. Australian Securities & Investments Commission v Rich [2005] NSWSC 417 – The Court emphasised that even admissible documents carry little weight if the person responsible for them cannot attest to their authorship. The reliability and weight given to the documents will often depend on factors such as the source, how they were created and whether the author’s identity and authority are clear. The Court made clear that admissibility does not guarantee any particular weight will be attached.
  2. Commissioner of the Australian Federal Police v Zhang (Ruling No 2) [2015] VSC 437 - The Court stated that whilst the Evidence Act allows for broad admissibility of business records, Courts must scrutinise the reliability and authenticity of such records and not draw inferences in the absence of corroborating evidence.
  3. Aqua-Marine Marketing Pty Ltd v Pacific Reef Fisheries (Aust) Pty Ltd (No 4) [2011] FCA 578 – The Federal Court stressed the importance of reliability and the regularity of record keeping in determining the weight a Court will give business records.
  4. Collopy v Commonwealth Bank of Australia [2019] WASCA 97 – the Evidence Act was interpreted as permitting business records to be admitted even if not made by someone with direct knowledge. It was held that, provided the record is genuine and can be authenticated by someone familiar with the business’s processes, it will be admissible.
These decisions reflect a consistent judicial approach: admissibility is broad, but the weight given to records depends on the document’s reliability and its connection to the author.

Where AI Changes the Landscape 

So how do AI generated documents change the evidentiary landscape? 

Traditional software and devices accepted by Courts and considered under s 79C typically record data exactly as entered, without interpretation. The shift with generative AI lies in the fact that it does not just record information, it can summarise, rephrase or infer meaning. This introduces a layer of interpretation between the raw data and its final output.

The absence of a human author may not prevent admissibility, but it does introduce uncertainty about the source, intent and accuracy of the information – all of which will affect the weight the evidence carries.

Without corroborating evidence or transparency as to how the evidence was created, AI generated documents will be viewed with caution, because the Court cannot be confident that the output accurately reflects the original information entered into the program and its intended meaning.

Anyone who might receive AI generated content as “evidence” in relation to a claim (eg lawyers, claims consultants) should be aware that the information is only as credible as the audit trail or independent verification behind it. It will not always be reliable evidence.

Documents may not always be marked as being AI generated. It is therefore increasingly important to ask questions about how records have been generated, to confirm whether the purported author of a record did in fact author the record and can verify that the content is accurate. 

Steps for Maintaining Credibility for AI Generated Records 

The risk posed by AI generated records is therefore the potential lack of credibility of those records. This applies not only to the insurance profession, but to any profession that relies on records for decision making, assessments or reporting.

To mitigate risks associated with the use of generative AI for record keeping, it is important to consider:

  1. Identifying that a document has been generated using AI – if records are generated using an AI tool, ensure the record is marked in a way that identifies that it was generated using AI and which tool was used.
  2. Explicit adoption of AI generated content – AI generated documents should be reviewed and explicitly endorsed as accurate reflections of the knowledge, observations or decisions that were input to create the document. For example – a teacher signing off on a report, a doctor preparing patient notes or an HR manager in a performance review should be able to confirm that the records reflect their input and actual judgement.
  3. Retain original source material – Where possible, it is important to retain copies of the original documents or material from which the AI tool has drawn its conclusions.
  4. Preserve metadata and maintain transparency – maintain logs showing how documents were created, including which AI tools were used, the data provided, and the changes made and how the data was validated.
  5. Policies and training – Policies and training should be developed and implemented that define acceptable AI usage, including human verification steps. 

Conclusion 

As AI continues to spread across the legal landscape, its role in our profession is inevitable. However, it is crucial to recognise that AI cannot replace the credibility and judgement of a human witness, or of human created evidence.

Courts in Western Australia are yet to provide definitive guidance on the admissibility of generative AI evidence. We must therefore exercise caution, balancing efficiency against the integrity of records and the reliability of the evidence they may provide.

---

This article was written by Chiara Benino, Solicitor Insurance & Risk.


Relevant Contacts

ERICA THUIJS

Partner | Insurance & Risk
