ChatGPT’s skyrocketing popularity presents both promising opportunities and significant risks for lawyers. As its capabilities continue to evolve, so do questions about its impact on the legal profession and the ethical considerations it raises.

It has also hit the headlines after lawyers used the bot to generate cases to cite in court filings – cases which turned out to be fake.

ChatGPT (which stands for Chat Generative Pre-trained Transformer) is an artificial intelligence (‘AI’) tool. It is trained to follow an instruction in a prompt and provide a detailed response.

Such chatbots are guided by the prompts you provide and draw upon a vast amount of information as well as contextual cues to provide an answer.

It can write responses in almost any format (e.g., essays, speeches, poems) and even write computer code. You can specify the length of response you are seeking, as well as the style.

OpenAI notes that ChatGPT “interacts in a conversational way”. It is set up in a dialogue format, which, as OpenAI states, “makes it possible for ChatGPT to answer follow up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.

Many of the issues that arise when using ChatGPT can be attributed to prompts that are not specific enough, or that lack the context or framing the bot needs to work with. For example, ‘summarise the law on assault’ will produce a far less useful answer than a prompt that also specifies the jurisdiction, the purpose and the intended audience.

It is also important to note that ChatGPT’s training data currently has a cut-off of September 2021, so there are limits to what it ‘knows’.

OpenAI has also admitted that, because the bot is trained on a copious amount of text data and uses statistical methods to generate text resembling that data, errors in the training data can lead the model to generate false or incorrect information.

The basic model of ChatGPT is also unable to verify the accuracy of information that it generates, as it does not have access to the internet or any external sources of information.

However, this appears to be changing, with the new ‘Plus’ version offering a web-browsing plugin that allows ChatGPT to draw on data from around the web to answer prompts.

In America, New York-based lawyers Steven A. Schwartz and Peter LoDuca were fined US$5,000 (AU$7,485) for submitting fake citations in a court filing, which they blamed on ChatGPT.

Schwartz, acting for a man suing the airline Avianca, utilised ChatGPT when conducting legal research for a case before Judge P. Kevin Castel.

However, he did not verify the cases ChatGPT provided before citing them in his submissions, and it was ultimately determined that they were not real.

The bot had essentially made up cases involving airlines and personal injuries. This is a significant concern, particularly for criminal lawyers, whose clients’ liberty and future are at stake.

The Judge found that the lawyers had committed “acts of conscious avoidance and false and misleading statements to the court.”

His Honour noted that “technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”

“But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

This case emphasises that, whilst ChatGPT can be a useful tool for brainstorming and research, it is essential to verify the responses it generates, including by consulting legal databases.

Whilst no such incident has yet been reported in Australia, OpenAI was threatened with a defamation lawsuit by Brian Hood, a mayor in northwest Melbourne, after the platform falsely described him as a perpetrator in a bribery scandal.

Over a decade ago, Hood alerted authorities and journalists to foreign bribery by agents of a banknote printing business called Securency, which was then owned by the Reserve Bank of Australia.

However, when asked “What role did Brian Hood have in the Securency bribery saga?”, ChatGPT claimed that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and had been sentenced to imprisonment.

Whilst the answer draws upon information from the case, it gets the perpetrator entirely wrong: Hood was the whistleblower, not the wrongdoer.

His lawyers sent a concerns notice to OpenAI; however, it is uncertain whether the matter has progressed. When this prompt is entered into ChatGPT now, it answers: “I’m unable to provide a response.”

It would be a novel case, presenting complex issues as to who is liable for AI’s falsehoods and how courts may respond.

Tips for using ChatGPT in a way that helps avoid the issues outlined above include:

  • Cross-referencing information with other platforms,
  • Specifying the output format (e.g., a list, a table, a 500-word response),
  • Utilising specific prompts with constraints (e.g., what you need the information for, what jurisdiction it relates to, what time period it covers), as illustrated in the sketch below.
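
As a rough illustration of the last two tips, the short Python sketch below builds those constraints into a prompt sent to ChatGPT through OpenAI’s API. The model name, prompt wording and word limit are illustrative assumptions only, not a recommended configuration, and any output would still need to be verified against legal databases.

```python
# A minimal sketch of a constrained prompt, using OpenAI's official Python
# library (v1.x). The model, prompt wording and word limit are illustrative
# assumptions, not a recommended setup for legal research.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The prompt specifies the purpose, the jurisdiction, the output format
# and a length constraint, in line with the tips above.
prompt = (
    "I am a paralegal preparing background notes. For New South Wales, "
    "Australia, summarise the elements of common assault under section 61 "
    "of the Crimes Act 1900 (NSW) as a numbered list of no more than "
    "200 words. If you are unsure of any point, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Even a carefully constrained prompt does not guarantee accuracy, so the response should still be cross-referenced against authoritative sources before being relied upon.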

Image credit: Amir Sajjad 

Published on 18/09/2023


Author: Poppy Morandin

Poppy Morandin is the managing law clerk and an integral part of the team of criminal lawyers at Criminal Defence Lawyers Australia. She is also part of CDLA’s content article production team. Poppy is passionate about law reform and criminal justice.
