The Dangerous Allure of AI-Generated Legal Citations: Understanding the Mavundla Case – Or How Not to Let ChatGPT Play Judge
The recent judgment in Mavundla v MEC Department of Co-Operative Government and Traditional Affairs and Others (7940/2024P) [2025] ZAKZPHC 2 (8 January 2025) has sent shockwaves through the South African legal community. In what could be dubbed “The Case of the Mysteriously Multiplying Precedents,” the profession witnessed a fascinating debacle that proves sometimes truth is stranger than AI-generated fiction. The Mavundla case has become a masterclass in what happens when artificial intelligence meets artificial jurisprudence, and spoiler alert: it is not a legal love story.
Picture this: a candidate legal practitioner confidently presents citations to the High Court, blissfully unaware that most of her cited cases have about as much reality to them as a unicorn in a law library. Of nine cases referenced, seven turned out to be completely fictional – making this perhaps the most creative piece of legal writing since Deon Meyer’s last novel, though considerably less intentionally so.
The situation reached peak absurdity when the court decided to fact-check one of these phantom citations using ChatGPT itself. In a moment of what can only be described as digital déjà vu, the AI confidently confirmed the existence of these non-existent cases and even helpfully elaborated on their imaginary holdings. It is rather like asking your imaginary friend to verify that your other imaginary friend is real – technically consistent, but not exactly helpful.
But beneath the amusing surface lies a serious warning about the dangers of untrained AI in legal practice. As Judge Bezuidenhout discovered, these were not just simple errors – these were elaborately constructed legal fictions complete with believable citations, plausible-sounding principles, and even fictional judges (Judge JMS Van D Wessels might be disappointed to learn they never actually existed).
The court’s reaction was a masterpiece of judicial restraint. When confronted with the phantom cases, the legal team’s explanations evolved from “it is in the law journals” to “Google could not find it” to what amounts to “the dog ate my case law.” One can almost imagine the judge’s internal monologue: “I have heard of judge-made law, but this is ridiculous.”
The consequences, however, were no laughing matter. The judgment serves as a sobering reminder that while AI might be clever enough to write a convincing legal fiction, it is not yet sophisticated enough to replace good old-fashioned legal research, unless it is trained on specific data and prevented from wandering off to make new law. As Associate Professor M van Eck pointed out, with considerably less humour than this article but significantly more gravitas, this represents a fundamental breach of legal practitioners’ ethical duties.
The case provides a valuable lesson for all legal practitioners: while AI might promise to make legal research as easy as asking Siri for directions, letting it write your legal citations is about as wise as letting a ChatGPT bot represent you in court. The legal profession has enough drama without adding science fiction to the mix.
While the Mavundla case might provide a few chuckles in legal circles, it serves as a serious reminder that in law, as in life, if something seems too good to be true – like magically generating perfect case law to support your arguments – it probably is.
Beyond Simple Mistakes: How AI Hallucinations Can Undermine Legal Professional Ethics
In the world of legal practice, where precision can mean the difference between justice served and justice denied, the Mavundla case presents a fascinating and troubling example of how AI hallucinations can create a cascade of ethical breaches that go far beyond simple citation errors.
At its core, an AI hallucination occurs when an artificial intelligence system generates content that appears plausible but is entirely fabricated. In the legal context, this becomes particularly dangerous because these hallucinations do not present as obvious errors – they manifest as complete, coherent, and convincing legal fictions. Consider how in the Mavundla case, the AI did not just invent case names; it created entire legal principles that seemed perfectly reasonable. For instance, the fictional case “Hassan v Coetzee” came complete with a citation, a court, a year, and a principle about corporate communication that, while entirely fabricated, sounded completely plausible within South African corporate law.
What makes these hallucinations particularly insidious is their compounding nature. When the court experimented with ChatGPT, asking about one of the non-existent cases, the AI not only confirmed the case’s existence but elaborated on its principles – essentially building one fiction on top of another. This creates what we might call a “hallucination loop,” where each AI verification adds another layer of apparent authenticity to the original fiction.
The ethical implications run deeper than mere academic concern. Legal practitioners have a fundamental duty to act as officers of the court, bound by what Judge Bezuidenhout emphasised as the “duty to be honest and act with integrity.” When AI hallucinations enter legal submissions, they create a form of unintentional deception that is particularly difficult to detect and correct. Unlike a simple misquoted case or an outdated citation, these hallucinations can create entire lines of false legal reasoning that could, if unchecked, influence judicial decision-making.
The Mavundla case also highlights a critical point about professional supervision. When the candidate legal practitioner claimed to have found these cases in “law journals,” it revealed a breakdown in the supervision chain that should have caught these errors. This demonstrates how AI hallucinations can exploit gaps in professional oversight, particularly when supervisors might assume that digital research tools are inherently reliable.
What is particularly concerning is how these hallucinations can undermine the very foundation of legal reasoning. In common law systems, where precedent plays a crucial role, introducing fictional cases is not just an academic error – it is a form of legal pollution that could theoretically influence future judgments if not caught and corrected. The fact that these hallucinations can be so convincing makes them particularly dangerous in a profession that relies heavily on the accurate transmission of legal principles through case law.
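The core lesson of the “hallucination loop” is that a citation must be checked against an independent, authoritative source, never by re-asking the system that generated it. A minimal sketch in Python illustrates the principle; the case names and the `verified_citations` set below are hypothetical stand-ins for a curated database of real judgments, not an actual legal data source:

```python
# Sketch: verify citations against an independent, trusted dataset,
# rather than re-querying the model that produced them.
# The citation strings and verified set here are illustrative only.

verified_citations = {
    "Mavundla v MEC Department of Co-Operative Government and Traditional "
    "Affairs and Others (7940/2024P) [2025] ZAKZPHC 2",
}

def is_verified(citation: str) -> bool:
    """Return True only if the citation appears in the trusted dataset."""
    return citation in verified_citations

submitted = [
    "Mavundla v MEC Department of Co-Operative Government and Traditional "
    "Affairs and Others (7940/2024P) [2025] ZAKZPHC 2",
    "Hassan v Coetzee",  # the fabricated case discussed in the judgment
]

for citation in submitted:
    status = "verified" if is_verified(citation) else "UNVERIFIED - check manually"
    print(f"{citation}: {status}")
```

The design point is simply that verification and generation must be separated: a lookup against a verified corpus can only say “yes” for cases that actually exist, whereas a generative model can confidently affirm its own inventions.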
The Ripple Effects: How False Citations Impact Court Resources and Legal Credibility
The Mavundla case offers a compelling illustration of how AI-generated false citations can create a cascade of consequences that ripple through the entire legal system, affecting everything from court resources to the fundamental credibility of legal institutions.
Consider first the immediate impact on judicial resources. When Judge Bezuidenhout discovered potential issues with the citations, it triggered a series of additional court appearances and investigations that consumed valuable court time. The judge had to reconvene the court multiple times, specifically on 20 and 25 September 2024, to address these citation issues. This was not merely an administrative inconvenience – it represented a significant diversion of judicial resources that could have been devoted to other cases.
The strain on research resources was equally significant. The judge had to deploy two law researchers at the Pietermaritzburg High Court to verify the citations, effectively doubling the research burden for what should have been routine case verification. This reveals how false citations do not just waste time – they actively multiply the workload of court staff who must thoroughly investigate each questionable reference.
Perhaps more concerning is the erosion of trust between the court and legal practitioners. As the judgment notes, courts traditionally operate on the principle that they can “take counsel at their word.” When attorneys present cases as authority, there is a tacit understanding that these citations have been verified. The introduction of AI-generated false citations threatens this fundamental trust relationship. As Judge Bezuidenhout noted, this trust is not merely a professional courtesy but a cornerstone of efficient legal proceedings.
The damage to professional reputations extends beyond individual practitioners. The case required the judge to refer the matter to the Legal Practice Council, potentially affecting not just the attorneys involved but also creating ripple effects throughout the profession. Young practitioners might now face increased scrutiny of their research, while senior attorneys might need to implement more rigorous verification processes, adding time and cost to legal proceedings.
The financial impact is also significant. The court had to make specific cost orders, including requiring the attorneys to pay costs de bonis propriis (from their own pocket) for the additional court appearances. This demonstrates how false citations can create unexpected financial burdens that ultimately increase the cost of legal services.
Legal AI in South Africa: Opportunities and Challenges
The integration of domain-specific legal artificial intelligence (AI) into the South African legal system holds immense promise for improving the accessibility, efficiency, and reliability of legal services. However, the adoption of such technology must be carefully balanced with ethical considerations and an understanding of the unique challenges posed by South Africa’s legal and social framework.
Legal AI offers the potential to bridge the access-to-justice gap by reducing the cost and complexity of obtaining legal assistance. For many South Africans, access to justice remains hindered by financial constraints and the lack of affordable legal services. AI-powered tools, specifically trained on South African case law, statutes, and legal principles, could empower individuals to navigate legal processes independently. Such tools can assist with document preparation, provide guidance on procedural requirements, and enhance public understanding of legal rights.
The efficiency gains presented by legal AI are equally transformative. By automating routine tasks like case law research, statutory analysis, and document drafting, legal practitioners can allocate more time to strategic matters and client interaction. This enhanced efficiency not only benefits practitioners but also helps courts manage caseloads more effectively, potentially alleviating some of the backlogs that plague the judicial system.
Legal AI also offers opportunities to address language barriers within South Africa’s multilingual society. By leveraging natural language processing capabilities, AI tools can translate legal texts and judgments into multiple official languages, making legal information accessible to a broader audience. This advancement could play a crucial role in fostering inclusivity and ensuring that justice is not impeded by language constraints.
However, the adoption of legal AI also presents challenges that must be addressed. One critical issue is the training of AI systems on verified and contextually relevant legal datasets. South Africa’s legal framework, shaped by its unique history and constitutional values, demands a nuanced understanding that generic AI tools often lack. Domain-specific AI must be trained exclusively on South African case law, statutes, and legal commentary to ensure accuracy and contextual relevance. This requires significant investment in curating high-quality legal datasets and maintaining them through regular updates.
The digital divide poses another significant challenge. While legal AI has the potential to democratise legal services, its benefits may not reach marginalised communities without adequate digital infrastructure and education. Bridging this divide requires a concerted effort to expand internet access, improve digital literacy, and ensure that AI tools are user-friendly and accessible.
The Mavundla case underscores the importance of ethical compliance in the implementation of AI tools. The risks of relying on general-purpose AI for legal research, including the potential for “hallucinated” cases and fabricated legal principles, demonstrate the need for rigorous safeguards. Systems like Legal Genius, specifically designed for the South African legal context, provide a promising solution by ensuring reliability and accuracy through the use of verified datasets.
Legal Genius, South Africa’s first domain-specific legal AI platform, exemplifies the transformative potential of properly implemented legal AI. By training exclusively on verified South African judgments, statutes, and commentary, it avoids the risks associated with generic AI systems while delivering efficiency and accessibility. Legal Genius demonstrates how AI can enhance legal research, improve workflows, and bridge the gap in access to justice, provided it is integrated responsibly and ethically.
While trained AI offers numerous benefits, it is not a replacement for human lawyers. Legal practice requires judgment, empathy, and ethical decision-making—qualities that remain uniquely human. Instead, AI serves as a powerful tool that enhances lawyers’ capabilities, allowing them to focus on the strategic and interpersonal aspects of their work.
In conclusion, the implementation of domain-specific legal AI in South Africa presents an exciting opportunity to revolutionise the legal profession while addressing systemic challenges in access to justice. These advancements, if accompanied by robust safeguards and a commitment to inclusivity, can harness the transformative power of AI to build a legal system that is more efficient, accessible, and equitable. By adopting tools like Legal Genius, the South African legal community can lead the way in demonstrating the responsible use of AI in legal practice.
Written by Bertus Preller, a Family Law and Divorce Law attorney and Mediator at Maurice Phillips Wisenberg in Cape Town and founder of DivorceOnline and iANC. This blog is managed by SplashLaw; for more information on Family Law, read more here.