WhatsNew2Day – Latest News And Breaking Headlines
It is one of the most crucial moments in any court case: the moment a criminal may learn that he will spend the rest of his life behind bars.
But despite the importance of the judge’s ruling, it is being offloaded, at least in part, to ChatGPT.
Judges in England and Wales will be able to use the AI chatbot to help them draft their legal rulings, the Telegraph reports.
This is despite the fact that ChatGPT is prone to inventing false cases and the tool even admits that it “may make mistakes” on its home page.
ChatGPT, already described by a British judge as “very useful”, is increasingly infiltrating the legal industry, raising concerns among some experts.
The new official guidance from the Judicial Office, issued to thousands of judges, notes that AI can be used to summarize large amounts of text or in administrative tasks.
The guidance treats these as basic tasks, but states that more substantive parts of the process, such as conducting legal research or performing legal analysis, should not be offloaded to chatbots.
According to Master of the Rolls Sir Geoffrey Vos, AI “offers significant opportunities to develop a better, faster and more cost-effective digital justice system.”
“Technology will only advance and the judiciary has to understand what is happening,” he said.
This is despite Sir Geoffrey also acknowledging that the technology is prone to fabricating cases and could end up being widely used by members of the public when filing legal claims.
“Judges, like everyone else, need to be acutely aware that AI can give both inaccurate and accurate answers,” Sir Geoffrey added.
Judges have also been warned about signs that an AI chatbot may have prepared legal arguments.
ChatGPT has been put to a remarkable range of uses, from writing essays and computer code to suggesting medications and even holding philosophical conversations.
Sir Geoffrey, Head of Civil Justice in England and Wales, said the guidance was the first of its kind in the jurisdiction.
He told reporters at a briefing before the guidance was published that AI “offers great opportunities for the justice system,” according to Reuters.
“Because it’s so new, we need to make sure judges at all levels understand what it does, how it does it and what it can’t do,” he added.
Santiago Paz, an associate at the Dentons law firm, has urged the responsible use of ChatGPT by lawyers.
“While ChatGPT’s answers may seem convincing, the truth is that the platform’s capabilities are still very limited,” he said.
“Lawyers should be aware that ChatGPT is not a legal expert.”
Jaeger Glucina, chief of staff at legal technology firm Luminance, said generative AI models like ChatGPT “cannot be considered a source of facts.”
“Rather, they should be regarded as a knowledgeable friend and not an expert in a particular field,” he told MailOnline.
“The Judicial Office has done well to recognize this by pointing out the effectiveness of ChatGPT for simple text-based tasks, such as producing summaries, while cautioning against its use for more specialized work.”
One British judge has already described ChatGPT as “very useful”, having admitted to using it when drafting a recent Court of Appeal ruling.
Lord Justice Birss said he used the chatbot when he was summarizing an area of law with which he was already familiar.
And a Colombian judge went even further by using ChatGPT to make his decision, in what was a legal first.
Chatbots like ChatGPT and Google’s Bard, its main competitor, are useful for looking up simple facts, but over-reliance on the technology can backfire on users.
Earlier this year, a New York lawyer got into trouble for submitting an error-ridden brief he had drafted using ChatGPT.
Steven Schwartz presented a 10-page brief containing at least six completely fictitious cases, as part of a lawsuit against the airline Avianca.
Schwartz said he “greatly regrets” having relied on the bot and was “unaware of the possibility that its content could be false.”
Other AI tools besides ChatGPT have been used in the legal industry, but not without controversy.
Also this year, two AIs, created by legal technology firm Luminance, successfully negotiated a contract without any human involvement.
The AIs discussed the details of an actual confidentiality agreement between the company and proSapient, one of Luminance’s clients.
The world’s first “robot lawyer” also found itself in legal trouble after being sued for operating without a law degree.
AI-powered app DoNotPay faces accusations that it is “impersonating a licensed professional” in a class-action case brought by US law firm Edelson.
However, DoNotPay founder Joshua Browder says the claims are “baseless.”