The Supreme Court of India has raised a red flag over the “menace” of litigants and legal professionals citing non-existent court judgments generated by Artificial Intelligence (AI) tools. A bench comprising Justices Rajesh Bindal and Vijay Bishnoi observed that the practice is becoming rampant not only in Indian courts but worldwide, posing a significant challenge to the integrity of judicial proceedings.
The observations came as the top court heard a plea filed by a company director challenging certain remarks made by the Bombay High Court. While the Supreme Court agreed to expunge the specific remarks as a “matter of indulgence,” it used the occasion to issue a stern warning about the misuse of technology in the courtroom.
The matter originated in a case before the Bombay High Court, where the court noticed discrepancies in written submissions filed in February and April 2025. The High Court noted that the submissions appeared to have been generated using AI tools such as ChatGPT, citing several “give-away features,” including distinctive formatting styles, green-box tick-marks, and repetitive language.
Most notably, the High Court pointed to a reference to an alleged case law titled “Jyoti w/o Dinesh Tulsiani Vs. Elegant Associates.” Upon investigation, the court and its law clerks found that no such judgment existed.
“Neither citation is given nor a copy of judgement is supplied by the respondent,” the High Court had remarked. “This court and its law clerks were at pains to find out this caselaw but could not find. This has resulted in waste of precious judicial time.”
While the Supreme Court bench expunged the High Court’s specific remarks against the appellant, it endorsed the underlying concern about AI-generated misinformation.
“The fact remains that this menace is rampant in all courts now, not only in India rather throughout the world,” the bench stated. “Everyone needs to be careful about this. In fact, this court is already seized of this matter on judicial side.”
The bench emphasized that while AI tools can aid legal research, ultimate responsibility for the accuracy of the material rests with the parties. The Bombay High Court had previously observed that there is a “great responsibility on the parties to cross-verify the references and the materials generated” by AI tools before presenting them as legal authorities.
Legal experts note that the “hallucination” of facts and citations by Large Language Models (LLMs) is a well-documented technical limitation. When such fabricated citations enter a court of law, however, they can obstruct justice and waste judicial resources. The Supreme Court’s observation underscores a growing need for formal protocols or guidelines governing the use of AI in legal drafting and research.