The Supreme Court of India warned against careless dependence on artificial intelligence after fake precedents and misquoted judgments surfaced in court filings, stressing judicial integrity and professional responsibility. Chief Justice of India Surya Kant observed that unchecked AI use in pleadings risks the credibility of the Bar, urging lawyers to verify every authority they cite.

NEW DELHI: The Supreme Court of India raised critical concerns about the increasing reliance on AI for drafting legal documents, following instances of fake precedents and misquoted judgments surfacing in court. These developments highlight a need for a more cautious approach to technology in legal practice.
Chief Justice of India (CJI) Surya Kant pointed out a troubling trend among legal professionals who are increasingly turning to AI tools for drafting pleadings and legal documents.
Emphasizing the potential ramifications of such practices, he stated, “We have received disturbing information that certain members of the Bar have begun relying on artificial intelligence tools for drafting pleadings.”
The rising use of these AI-powered tools raises pressing questions about accuracy, authenticity, and the ethical use of technology in legal practice.
Justice B.V. Nagarathna elaborated on instances where incorrect or fictitious legal citations have raised significant concerns. One particularly egregious example was a fictitious case titled Mercy v. Mankind, which, upon investigation, proved to be non-existent. The incident underscores a grave risk: unverified reliance on AI tools can compromise the integrity of judicial processes. Artificial intelligence can generate text that appears authoritative but lacks any foundation in reality, undermining the credibility of legal arguments built upon it.
Justice Nagarathna remarked, “There was even a matter where a case titled Mercy v. Mankind was cited, a judgment that does not exist at all.”
Such reliance on non-existent cases not only misguides judicial proceedings but also poses serious ethical concerns for lawyers who may inadvertently mislead the court.
CJI Surya Kant also referred to another troubling incident that took place in a case before Justice Dipankar Datta. In that instance, numerous precedents were cited, but none could be verified upon closer examination.
He explained, “A similar situation arose in another case before Justice Dipankar Datta. Several precedents were cited, but upon verification, none of them were found to exist.”
This scenario raises significant alarm bells regarding the rigor of legal scholarship and research. The efficacy of AI tools is often contingent upon the data they are trained on. If the underlying datasets contain inaccuracies or if users do not verify the AI-generated citations, the legal community risks becoming overly reliant on flawed information.
In some cases, legitimate Supreme Court decisions were invoked, but the arguments presented as coming from those judgments were entirely fabricated.
Justice Nagarathna noted, “In some instances, actual Supreme Court decisions are cited. However, the passages attributed to those judgments are not to be found in the text at all.”
This phenomenon indicates a severe lapse in diligence among legal practitioners using AI tools. It also raises questions about the eventual impact of these practices on legal education and training. Future lawyers must be equipped to critically evaluate sources and ensure that their legal arguments are grounded in verified information.
The emergence of AI technology in the legal sector holds promise for efficient research and drafting, but it also requires a balanced approach. Legal professionals can leverage AI tools to streamline mundane tasks, enhance research efficiency, and analyze large datasets. However, as the Supreme Court has rightly pointed out, the use of these tools should not replace rigorous legal research and ethical obligations.
It is essential to remember that while AI can assist in improving productivity, it cannot replace the nuanced understanding of law, critical thinking, and professional ethics that human lawyers bring to the table.
The ethical considerations surrounding the use of AI in legal drafting are manifold. Lawyers have a duty to provide accurate and truthful information to the court. By relying on potentially inaccurate AI-generated content, attorneys could inadvertently cause harm not only to their cases but also to the legal system as a whole. This raises critical questions about accountability and responsibility in an age increasingly dominated by technology.
As technology evolves, so too should the ethical guidelines governing its use. The Bar needs to engage in open discussions about how AI can be integrated effectively without compromising the legal profession’s integrity. It is vital to create a framework that encourages the responsible use of AI while safeguarding against misuse.
Given the recent concerns raised by the Supreme Court, there is an urgent need for enhanced training and education around the use of AI tools in law. Legal education must adapt to incorporate lessons on technology and its ethical implications.
The Supreme Court’s warnings about the misuse of AI in legal drafting highlight an urgent need for awareness and action within the legal community. While AI tools can enhance efficiency, they must be approached with caution. The integrity of the legal profession relies on accurate information and ethical practices, underscoring the importance of critical scrutiny in an era increasingly influenced by technology.