There are some great ways in which Artificial Intelligence (AI) can help personal injury lawyers run their practices and serve their clients.
But there are major pitfalls to leaning too heavily on AI.
Lawyers who use artificial intelligence the wrong way will be punished. I am reminded of the decision in Ko v. Li, 2025 ONSC 2965 (CanLII), where a lawyer's factum cited several non-existent, fabricated precedent cases. In her oral argument in open court, she then relied on two of those non-existent cases.
The problem came to light during the hearing, when the judge could not find any online record of the cases the lawyer relied upon as precedents.
When questioned by the judge, the lawyer could not say whether her factum had been prepared using generative artificial intelligence, or whether the cases she listed in it and relied upon orally were “hallucinations” fabricated by an AI platform.
This is a bad use of AI. There is no substitute for a lawyer reviewing the work being submitted to the Court. The lawyer is ultimately responsible: “The Buck Stops Here.”
Lawyers can use AI to prepare all sorts of pleadings and documents, but if they don’t check the work before it is submitted, they do so at their own peril.