
Key Highlights
- Judicial Alarm: Chief Justice Surya Kant described the use of AI for drafting petitions as “absolutely uncalled for” and a “dangerous trend.”
- Fictitious Citations: Justice BV Nagarathna identified a non-existent case titled “Mercy vs. Mankind” that was cited in a recent filing.
- Systemic Burden: The bench warned that “AI hallucinations” are creating an immense verification burden for judges and court staff.
- Decline in Drafting: Justice Joymalya Bagchi lamented the decay of traditional legal drafting, noting that petitions now often consist of unverified, AI-generated blocks of text.
- Irony of Timing: The warning came as the Global AI Impact Summit 2026 was underway at Bharat Mandapam, a short distance from the court.
In a significant observation on February 17, 2026, the Supreme Court of India expressed deep concern over the growing reliance on artificial intelligence by members of the Bar for drafting legal pleadings. A bench comprising Chief Justice of India (CJI) Surya Kant, Justice BV Nagarathna, and Justice Joymalya Bagchi characterized the unverified use of generative AI as a “dangerous trend” that threatens the integrity of the judicial process.
The issue came to light during the hearing of a Public Interest Litigation (PIL) filed by academician Roop Rekha Verma concerning guidelines for political speeches. During the proceedings, the bench noted that pleadings increasingly contain “hallucinated” information, where AI tools invent legal precedents to fill gaps in arguments.
The “Mercy vs. Mankind” Mystery
Justice BV Nagarathna highlighted a particularly jarring instance where a petition cited a case titled “Mercy vs. Mankind.” Upon verification, it was discovered that no such judgment exists in Indian or international law. “We are alarmed to reflect that some lawyers have started using AI to draft. It is absolutely uncalled for,” the CJI remarked, emphasizing that the court has been presented with “shocking” instances of fabricated law.
The bench further noted that even when lawyers cite genuine Supreme Court cases, the specific paragraphs or conclusions quoted are often entirely made up by AI models. Justice Nagarathna added that such practices force judges to spend valuable time cross-referencing every single quote, a task that has become an “additional burden” on an already overworked judiciary.
Background: Previous Lapses and the Decline of Craft
This is not an isolated incident. CJI Surya Kant recalled a high-profile corporate battle heard earlier in Justice Dipankar Datta’s court, where a litigant submitted a rejoinder containing hundreds of fabricated case laws. In that instance, senior advocates expressed deep embarrassment, calling it a “calculated effort” to mislead the court using technology.
Justice Joymalya Bagchi took the opportunity to lament the broader decline in the “art of legal drafting.” He observed that modern Special Leave Petitions (SLPs) often lack original articulation, appearing instead as a patchwork of lengthy quotations and AI-generated summaries. He contrasted this with the precision and conciseness of legendary advocates from previous generations, whose pleadings were grounded in verified research and human reasoning.
Ethics in the Age of AI
The Supreme Court’s stance serves as a stern reminder to the legal fraternity that while AI can assist in research, the final responsibility for accuracy lies with the human practitioner. The court indicated that such negligence could in future be treated as a serious professional lapse, potentially inviting disciplinary action or the imposition of costs.
The irony of the situation was not lost on observers, as the “Global AI Impact Summit 2026” was simultaneously taking place at Bharat Mandapam, less than a kilometer away. While the summit discussed AI as a tool for national progress, the Supreme Court’s courtroom served as a cautionary stage for the technology’s potential to undermine the foundations of truth and justice.