Lawyer Blames AI “Robo-Lawyer” for Inventing Murder Case Evidence, Sparks Courtroom Chaos

A high-profile murder trial in Australia was thrown into disarray when a senior defense lawyer submitted AI-generated legal arguments packed with fake quotes and imaginary case law.

The bizarre blunder, courtesy of an unchecked “robo-lawyer,” delayed proceedings, triggered a judicial dressing-down, and forced the attorney into a public apology. The mishap in the Supreme Court of Victoria is one in a series of AI-driven legal blunders in courtrooms around the globe, and a reminder that AI-generated information needs to be checked before anyone relies on it.

Delays and embarrassment

Imagine being a lawyer and submitting arguments that are completely false, then having to admit the filing was built on incorrect information sourced through AI. That is exactly what defense lawyer Rishi Nathwani had to do. Nathwani, who holds the prestigious legal title of King’s Counsel, took full responsibility for the blunder.
“We are deeply sorry and embarrassed for what occurred,” Nathwani said.

The blunder delayed by a full day the resolution of a case involving an underage client charged with murder; the defense argued he was not guilty of the crime because of mental impairment. Justice James Elliott had hoped to resolve the case sooner. “At the risk of understatement, the manner in which these events have unfolded is unsatisfactory. The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice,” Elliott said.

How were the fake submissions discovered?

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent citations attributed to the Supreme Court. Elliott’s associate couldn’t find the cited cases and asked the defense lawyers to provide copies, which revealed that the material was fictitious.
In this case, the defense lawyers assumed the information was accurate without checking it, and the prosecutors did the same. If not for Elliott’s associate, the false information would have gone unchallenged in the case.
Justice Elliott expressly pointed to the court’s guidelines for using AI: it can serve as a tool, but the product of that use must be thoroughly verified.

Not the only case of AI in the legal system

The Australian case reflects carelessness by the legal teams on both sides, but it is far from the only legal case in which AI-generated information has turned out to be false.
In a case in the United States in 2023, a federal judge imposed $5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their fictitious research in an aviation injury claim. The sanctions would have been harsher had the lawyers not apologized and taken remedial steps to correct their actions.

How can you avoid similar problems?

In both of these court cases, AI-generated information was submitted without verification. If you’ve ever used ChatGPT, you may have noticed the line at the bottom of the results warning that the program can make mistakes. Regardless of the AI tool being used, it’s always a good idea to verify information before moving forward.

Here are a few tips that could help:

Ask for sources

AI tools can provide sources to show where their information came from, letting you verify the claim and cite that source in your own content and documentation. If the tool can’t provide a source, find one yourself before moving forward.

Limit what you ask AI for

Instead of asking an AI system to produce finished content or arguments, or to do your job for you, limit what you ask of it. Requesting ideas works well; handing over a full document or research project does not. When writing, an AI tool can suggest headlines or strong keywords, but it shouldn’t write the document or story itself.

Choose tools that align with your business needs

Not every AI tool is built or programmed the same way. These tools are programs designed to produce specific kinds of results: some are great for manufacturing, while others are meant for the arts and humanities. Knowing a tool’s limits is key to getting the results you want.

Establish policies and rules for AI usage

AI tools are great for some aspects of your business, but if an AI system could do everything a person can do, why would you need a staff? Make sure your staff understands this and adheres to the rules and policies you establish for AI use. Had such policies been in place and followed, the lawyers in the cases above would have been spared their public embarrassment.
There’s nothing wrong with using AI to help you get work done and generate ideas, but AI isn’t flawless. Check the results for accuracy before you submit them. How do you use AI in your personal and work life?
