As the world continues to embrace digital transformation, the legal sector is not immune to the sweeping changes. Artificial intelligence (AI) has shown promising potential to streamline legal processes, making the field more accessible and efficient. However, as the saying goes, "with great power comes great responsibility". Case in point: ChatGPT, an AI-powered chatbot developed by OpenAI. While it has become a popular tool for tasks ranging from drafting legal letters to providing general advice, it has also opened the door to unexpected challenges, including what the legal community now calls AI "hallucinations".
Imagine the scenario: a lawyer, trusting in the power of AI, uses ChatGPT to prepare for a case. The chatbot serves up case precedents that seem to align perfectly with his argument. Fast forward to the courtroom, where the lawyer presents his arguments backed by these precedents. But there's a catch: the cases don't exist. They are the product of AI hallucinations, a term for situations in which an AI system generates incorrect or non-existent information.
This is not a hypothetical scenario but a real incident that unfolded recently. The lawyer in question, Steven Schwartz, admitted to using ChatGPT in a federal court case against Avianca Airlines on behalf of his client, Roberto Mata. Assured by ChatGPT that the cases were real, Schwartz cited at least six of them as precedent. The court, however, found these cases to be entirely fictional, with "bogus judicial decisions, bogus quotes and bogus internal citations".
Schwartz, who had learned about AI technology from his college-age children and was seduced by articles about the benefits of AI in professional settings, confessed in a June 8 filing that he was "mortified" upon learning about the false cases. He stated that he had used the tool under the impression that it was a search engine and not a generative language-processing tool, highlighting a common misconception about the nature of AI and its capabilities.
The use of ChatGPT came to light after Avianca's lawyers could not locate court records for some of the cases Mata's team had cited, an unprecedented situation for the court. The court is now considering sanctions against Schwartz and Mata's other attorney, Peter LoDuca, for citing non-existent cases.
Despite the backlash, Schwartz's team argued that sanctions would serve no purpose, as the incident had already turned Schwartz and his firm into "the poster children for the perils of dabbling with new technology". They stated that their lesson had been learned, pointing to the inherent risks and potential consequences of using new technology without fully understanding its workings.
The widespread adoption of AI-powered tools like ChatGPT has raised concerns about their capabilities. From students using AI to do homework to politicians using artificially created images in campaign ads, the technology's ability to alter images or draft text is advancing beyond the human ability to detect fakes, fostering an environment of distrust. Chatbots like ChatGPT are particularly risky because they can present fabricated information as confidently as fact, as Schwartz experienced first-hand.
AI platforms are trained on large collections of existing images or text and generate answers by drawing on patterns in that training data. However, they can conflate information or fabricate answers outright. The issue is not limited to ChatGPT: other platforms, such as Microsoft's Bing chatbot and Google's Bard, have shown similar shortcomings, some faring better than others. Yet they all have one thing in common: they have been described as "fuzzy, in more ways than one", which underscores the ambiguity and potential inaccuracy of their outputs.
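To build intuition for why a model trained on patterns rather than facts can emit citations that look real but aren't, consider a deliberately simplified toy sketch. This is purely illustrative, not how ChatGPT works internally, and every name and number in it is invented: it recombines citation-shaped fragments into fluent output with no underlying record of any actual case.

```python
import random

# Toy illustration: a "generator" that has absorbed the surface shape of
# legal citations (party names, reporters, years) but stores no facts.
# Recombining fragments yields plausible-looking citations, none of which
# is guaranteed to refer to a real case. All names here are fictitious.
PLAINTIFFS = ["Smith", "Jones", "Rivera", "Okafor"]
DEFENDANTS = ["Acme Airlines", "Globex Corp", "Initech", "Umbrella Ltd"]
REPORTERS = ["F.3d", "F. Supp. 2d", "F.4th"]

def generate_citation(rng: random.Random) -> str:
    """Compose a citation-shaped string from memorized fragments."""
    return (f"{rng.choice(PLAINTIFFS)} v. {rng.choice(DEFENDANTS)}, "
            f"{rng.randint(100, 999)} {rng.choice(REPORTERS)} "
            f"{rng.randint(1, 1500)} ({rng.randint(1990, 2022)})")

if __name__ == "__main__":
    rng = random.Random()
    for _ in range(3):
        # Each line reads like a valid citation, yet nothing checks that
        # the case exists -- the fluency is exactly what misleads readers.
        print(generate_citation(rng))
```

The point of the sketch is that well-formed output and true output are different properties: the generator optimizes only the former, which is why every citation a chatbot produces must be verified against a primary source.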
In the legal field, the implications of these inaccuracies can be severe, as the Schwartz case shows. The incident has brought the issue of "unauthorized practice of law" to the forefront. In a similar vein, New York-based startup DoNotPay created an AI-based method for people to contest traffic tickets: the user would wear smart glasses that fed them AI-generated lines to say in court. Before the technology could be used in a courtroom, however, its creator received threats from multiple bar associations over "unauthorized practice of law".
The rise of AI in legal practice has undeniably opened up new avenues for efficiency and accessibility, but it also comes with its own set of challenges. As the Schwartz case illustrates, while AI can facilitate certain aspects of legal practice, it is not a substitute for the nuanced understanding and professional judgment that a human lawyer can provide.
The use of AI in the legal field is a rapidly evolving landscape, and it is crucial for professionals in the field to stay updated. To help navigate this complex terrain, we've curated a comprehensive guide on using ChatGPT for legal advice, including practical prompts and example situations. The guide also highlights potential pitfalls to avoid, to ensure users make the most out of AI in a safe and responsible manner.
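One practical pattern along those lines is to bake a verification reminder directly into the prompt itself. The sketch below shows a hypothetical prompt template (the wording and the `build_prompt` helper are our own illustration, not from any official guide); note that instructing a model not to invent authorities reduces but does not eliminate the risk, so every cited case must still be checked against a primary source.

```python
# Hypothetical prompt template for AI-assisted legal research drafts.
# The explicit verification instruction does NOT guarantee accuracy:
# the human user remains responsible for confirming every authority.
PROMPT_TEMPLATE = (
    "Draft an argument on the following issue: {issue}. "
    "For every case you cite, give the full citation and state explicitly "
    "whether you are certain the case exists. Do not invent authorities."
)

def build_prompt(issue: str) -> str:
    """Fill the template with the user's legal issue."""
    return PROMPT_TEMPLATE.format(issue=issue)

if __name__ == "__main__":
    print(build_prompt("liability for delayed international flights"))
```

Treating the model's answer as a starting point for research, rather than as research itself, is the design choice this template encodes.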
While it's exciting to envision a future where AI significantly reduces the workload of legal practitioners and democratizes access to legal advice, we must proceed with caution. It's essential to remember that systems like ChatGPT are tools that should aid human decision-making, not replace it.
Check out our new section on ChatGPT and Legal Advice for a deep dive into the world of AI-powered legal counsel. It's a must-read for anyone interested in exploring the intersections of law and AI.
At the end of the day, technology is just a tool. It's up to us, the users, to wield it responsibly and ethically. As we continue to integrate AI into our professional lives, let's remember the words often attributed to Albert Einstein: "The measure of intelligence is the ability to change". With every technological advancement comes the need for change: in our understanding, our approach, and our ethics. Let's embrace this change, armed with knowledge and a healthy dose of caution.