The Unlicensed Yes Man: When AI takes the stand and lawyers take the fall
In June 2023, a federal judge in the Southern District of New York sanctioned attorneys Peter LoDuca and Steven Schwartz for filing a legal brief that relied on non-existent case law invented by the generative AI chatbot ChatGPT. The latest casualty of this ‘ChatGPT lawyer syndrome’ is attorney Jae S. Lee, who admitted to citing a non-existent court decision in her reply brief. These cases point to a concerning pattern: lawyers relying on generative AI tools without exercising proper due diligence, which can endanger their clients’ cases and expose the lawyers to professional misconduct.
We are in the advent of generative AI tools such as ChatGPT, Google Bard, Microsoft Bing Chat and many others. They are called ‘generative’ because they create entirely new content by using algorithms to find patterns and links in pre-existing data. These tools analyze vast amounts of data and produce results (output) at breakneck speed. However, the precision and clarity of your prompt (input) largely determine the quality of the outcome: unclear instructions can produce inaccurate information.
Imagine a researcher who has to go through hundreds of legal articles in record time. Overwhelmed, they skim each one, grasping bits and pieces here and there. When asked to recount a particular detail from a single article, there is a good chance their recollection will be hazy; they may overlook important details or misinterpret others. Generative AI models operate similarly. They are trained on massive public datasets of text (articles, journals, books), all written by humans and potentially containing errors or biases. Unlike the meticulous researcher, however, these models prioritize identifying the most statistically probable combination of words based on their training data over finding the most factually correct response to your specific prompt. This is equivalent to the researcher giving you a summary based on their overall impression, potentially missing crucial details or nuances present in the articles.
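To make that difference concrete, here is a minimal sketch in Python (a toy bigram model, nothing like a production system) showing how ‘most statistically probable’ is not the same as ‘factually correct’:

```python
# A minimal toy sketch of the core idea: a language model picks the
# statistically most likely next word; it has no notion of truth.
from collections import Counter, defaultdict

# Toy "training data": a tiny, made-up corpus standing in for the web.
corpus = (
    "the court held the claim the court held the appeal "
    "the court dismissed the claim"
).split()

# Build bigram counts: how often each word follows another in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the continuation seen most often in the training data."""
    return bigrams[word].most_common(1)[0][0]

# "held" follows "court" more often than "dismissed" in this corpus, so
# the model predicts "held" -- whether or not that is the correct answer
# to the question you actually asked.
print(most_probable_next("court"))  # -> "held"
```

The toy model answers with whatever continuation dominated its training text; if that text was wrong or unrepresentative, the confident-sounding answer is wrong too.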
Tasked with responding to a user’s commands as fast as possible, generative AI tools become the ultimate ‘Yes Man’. Remember those people who agree with everything, regardless of accuracy? These tools share that trait: they are built to fulfill your request even if it means fabricating plausible-sounding legal arguments or cases from faulty data or misrepresentations. This makes them prone to “hallucinations” (convincing but factually incorrect information), which can lead to misinterpretations of statutes, precedents or facts.
The Advocate-Client relationship is fiduciary in nature, built on trust and loyalty, and breaching that trust by relying on fake information carries legal repercussions. Under the Advocates (Professional Conduct) Regulations of Uganda, the fiduciary (the Advocate) is obligated to act in the beneficiary’s (the Client’s) best interest at all times, both in letter and in spirit. In addition, Advocates must follow ethical and procedural guidelines as officers of court. An attempt to persuade a court or oppose an adversary by relying on fake information is an abuse of court process and can attract grave consequences for both Client and Advocate, including the striking out of pleadings as frivolous, dismissal of the case, an order of costs, and disciplinary action against the Advocate for professional misconduct.
Even though AI is revolutionizing the legal field by producing faster, higher-quality results, its use is a balancing act: knowing when to “brake” with independent judgment and when to “accelerate” with research. Short of that, one risks a head-on collision: professional misconduct and a miscarriage of justice. The rule of thumb is to always double-check AI outputs against primary sources such as statutes and case law. AI should never replace a lawyer’s exercise of independent legal judgment, and taking AI-generated information at face value is a sure recipe for disaster.
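By way of illustration, here is a minimal Python sketch of that double-checking habit; the VERIFIED_CITATIONS set is a hypothetical stand-in for a real search of law reports or an official court registry, which no script can substitute for:

```python
# A minimal sketch of the "double-check" rule: never file a citation the
# model produced until it has been confirmed against a primary source.
# VERIFIED_CITATIONS is a hypothetical placeholder for a proper search of
# law reports or an official registry -- illustrative only, not a real index.
VERIFIED_CITATIONS = {
    "Mata v. Avianca, Inc.",  # the real case behind the June 2023 sanctions
}

def review_ai_citations(ai_citations: list[str]) -> list[str]:
    """Return the citations that could NOT be confirmed and so must be
    independently researched before the brief goes anywhere near a court."""
    return [c for c in ai_citations if c not in VERIFIED_CITATIONS]

# "Varghese" was one of the fabricated citations in the Avianca matter.
draft = ["Mata v. Avianca, Inc.", "Varghese v. China Southern Airlines"]
for unconfirmed in review_ai_citations(draft):
    print(f"UNVERIFIED -- do not cite without a primary-source check: {unconfirmed}")
```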
Keep an eye out for my upcoming article, in which I will go into further detail on the ethical application of AI in the legal sector, looking at best practices and possible fixes so that AI complements advocates rather than replaces them.