Curb the “unregulated” use of artificial intelligence

AI Hallucination: A ‘legal’ menace

(Prof Madabhushi Sridhar Acharyulu)

The Supreme Court, on December 5, 2025, refused to entertain a PIL seeking to curb the “unregulated” use of artificial intelligence. Lawyers and judges deliberated about machine learning in the judicial system and the impact of AI on justice.

Non-existent ‘facts’ and ‘judgment’!

The case originated in the Bombay High Court, in a dispute over the eviction of a residential flat under the Maharashtra Rent Control Act, 1999.

Are the parties ‘real’? The dispute involved a flat owner (Deepak Shivkumar Bahry) and a tenant company, Heart & Soul Entertainment Ltd, represented by its director, Mohammed Yasin. 

Are the submissions ‘real’? Between February and April 2025, the respondent (Yasin) filed written submissions seeking relief from eviction. 

The AI “Markers”: Bombay High Court Justice MM Sathaye noticed several “give-away” features in the documents: 

  • Green-box tick-marks and distinctive bullet points (standard UI elements from AI chat interfaces).
  • Repetitive phrasing and a generic structure typical of LLM output.

The “Phantom” Case: The most critical error was the citation of Jyoti w/o Dinesh Tulsiani Vs. Elegant Associates. When the judge and his law clerks attempted to verify the case, they found it did not exist in any legal database or reporter. 

The Outcome: The Bombay High Court imposed a fine of ₹50,000 on the litigant for “dumping” unverified material on the court. While the Supreme Court later expunged some of the High Court’s harshest remarks about the individual, it used the opportunity to label such AI misuse a global “menace.” 

A dangerous incident in courts

This was a dangerous incident involving artificial intelligence and justice, touching our legal system and our lawyers alike. The “Jyoti w/o Dinesh Tulsiani Vs. Elegant Associates” citation is a unique case because it is a textbook example of the danger of using AI in the courtroom without human oversight.

Here is a breakdown of how it happened, the technical “trap” behind it, and its impact on the legal system. This time, the vehicle was a Public Interest Litigation (PIL) on the ‘use or otherwise’ of artificial intelligence. The system has long understood the significance of the PIL; this one was aimed at a new technology called ‘AI’.

A ‘legal’ menace and “AI Hallucination”

This is a landmark moment for the Indian judiciary, highlighting a phenomenon often called “AI hallucination”. When the Supreme Court refers to this as a “menace,” it is addressing a growing global crisis in which the convenience of Large Language Models (LLMs) clashes with the rigid requirements of legal accuracy.

LLMs have no “source of truth”

To understand why this happens, one must understand that Large Language Models (LLMs) like ChatGPT are not search engines; they are probabilistic text predictors.

  • What is an LLM? It is an AI model trained on massive datasets to predict the next most likely “token” (a word or word-fragment) in a sequence. It has learned the patterns of legal writing, but it has no “source of truth” such as a verified database.
  • The Hallucination Trap: This occurs when the model generates text that is grammatically perfect and stylistically authoritative but factually non-existent.
  • Logic vs. Fact: If an LLM is asked for a “precedent regarding tenant rights in Maharashtra,” it knows that Indian citations usually look like [Name] vs [Name], [Year] [Volume] [Reporter] [Page]. It may “invent” a case that fits this pattern perfectly, because its goal is to produce a response that looks like a correct answer, not to verify that the case actually exists.

LLMs predict the next most likely word in a sequence. In a legal context, if you ask for a case on a niche topic, the model might “hallucinate” a perfect precedent because its training data suggests that this is how a citation should look, even if that specific case was never argued. The toy sketch below illustrates the point: the format is learned, but the facts are not.
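To make the mechanism concrete, here is a purely illustrative Python sketch. Every name, reporter, and year in it is invented, and the empty “database” stands in for the source of truth an LLM does not have; it mimics only the shape of the problem, not any real model:

```python
# A toy sketch of pattern-without-truth generation. All names, reporters
# and years are invented for illustration; no real model or legal
# database is involved.
import random

# Surface patterns a model might absorb from real legal text: what a
# citation *looks like*, with no record of which combinations exist.
PARTY_NAMES = ["Jyoti Tulsiani", "Elegant Associates",
               "Deepak Bahry", "Heart & Soul Entertainment"]
REPORTERS = ["AIR", "SCC", "Bom CR"]  # citation-style abbreviations

def generate_citation() -> str:
    """Sample a citation-shaped string: every part is plausible, but the
    combination as a whole has never been checked against anything."""
    a, b = random.sample(PARTY_NAMES, 2)
    return (f"{a} Vs. {b}, {random.randint(1990, 2024)} "
            f"{random.choice(REPORTERS)} {random.randint(1, 999)}")

# The "source of truth" an LLM lacks: here, deliberately empty.
KNOWN_CASES: set[str] = set()

if __name__ == "__main__":
    fake = generate_citation()
    print(fake)                  # reads like a real precedent
    print(fake in KNOWN_CASES)   # False: stylistically perfect, factually absent
```

The sketch makes one point: formatting fluency and factual existence are independent, and that gap is exactly what human verification must close.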

The Supreme Court bench explained:

“As a matter of indulgence, we expunge the remarks made in the aforesaid paragraph. However, the fact remains that this menace is rampant in all courts now, not only in India but throughout the world. Everyone needs to be careful about this. In fact, this court is already seized of this matter on the judicial side”.

The High Court had noted in its order that the appellant’s submissions were generated using ChatGPT, including a judgment with a citation that does not exist in the real world.

The “Silent” Verdict

The judiciary’s stance is clear: AI can be a co-pilot, but never the pilot. Using AI for research is “welcome” for efficiency, but filing unverified AI-generated text is increasingly being viewed as a form of professional misconduct or contempt of court. 

The “Jyoti Tulsiani” case serves as a warning that in the age of AI, the old legal maxim remains more relevant than ever: Caveat Emptor (Buyer Beware), or in this case, Caveat Advocatus (Lawyer Beware).

At most, AI is a powerful co-pilot, but it should never be the pilot, especially when “precious judicial time” and a client’s legal rights are on the line. Or should courts move toward a mandatory “AI Disclosure” certificate for all legal filings to prevent this?

CJI Surya Kant said: “Someone with sincere intentions is most welcome to give us suggestions. You can mail them to us”.

Sensing the mood of the bench, senior advocate Anupam Lal Das, appearing for petitioner Kartikeya Rawal, sought permission to withdraw the petition, which was allowed.

Key “Give-Aways” Noted by the Court

The Bombay High Court pointed out several “tells” that signaled the use of AI in the filed submissions; a toy screening sketch follows the list:

  • Green-box tick-marks: Standard UI elements often copied directly from a chat interface.
  • Specific bullet-point styles: The distinct, structured formatting common to AI responses.
  • Repetitive phrasing: AI tends to reiterate the prompt’s constraints or use “looping” logic.
  • Phantom Citations: The “smoking gun”—citing cases like Jyoti w/o Dinesh Tulsiani Vs. Elegant Associates, which does not exist in any official law reporter.
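For illustration only, a crude automated screen for the first three “tells” might look like the Python toy below. The marker glyphs and the repetition threshold are assumptions, and this is not the court’s method; a flag means “inspect further,” never “AI-generated”:

```python
# A toy screen for textual "tells" of pasted chat output: tick-mark
# glyphs and verbatim-repeated sentences. Thresholds are illustrative.
from collections import Counter

CHAT_UI_MARKERS = ["✅", "☑"]  # tick glyphs often pasted from chat interfaces

def flag_ai_tells(text: str) -> list[str]:
    """Return warnings for the 'tells' above; a flag means
    'look closer', not 'AI-generated'."""
    warnings = []
    for marker in CHAT_UI_MARKERS:
        if marker in text:
            warnings.append(f"Chat-interface glyph found: {marker!r}")
    # Repetitive phrasing: the same sentence recurring verbatim.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    for sentence, count in Counter(sentences).items():
        if count >= 3:  # arbitrary illustrative threshold
            warnings.append(f"Sentence repeated {count} times: {sentence[:40]!r}")
    return warnings

if __name__ == "__main__":
    draft = "✅ Noted. " + "The tenant is protected. " * 3
    for warning in flag_ai_tells(draft):
        print(warning)
```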

AI’s capacity to fabricate: The Global Context

India isn’t alone in this. The episode mirrors the infamous Mata v. Avianca case in the United States (2023), where a lawyer was sanctioned after using ChatGPT to find precedents (previous judgments) and submitting six non-existent cases. The lawyer’s defense was that he never imagined the AI could fabricate cases.

Best Practices for Legal Professionals

The court isn’t banning AI; it’s demanding accountability. If you use AI for research, you must do the following (a minimal verification sketch appears after the list):

  1. Verify via Official Databases: Cross-reference every citation on platforms like SCC Online, Manupatra, or Westlaw.
  2. Human-in-the-Loop: Use AI for drafting or brainstorming, but the final legal “sanity check” must be performed by a qualified human.
  3. Disclose Usage: Some jurisdictions are now requiring lawyers to file a certificate stating whether AI was used and that all citations have been verified.
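As a minimal sketch of points 1 and 2, assume a hypothetical local index of verified citations; the regex and the lookup function below are illustrative stand-ins, not the API of SCC Online, Manupatra, or Westlaw:

```python
# A minimal sketch of practices 1 and 2: extract citation-shaped strings
# from a draft and refuse to file until each one is verified against a
# trusted index. The regex and the in-memory index are illustrative
# assumptions, not any real service's API.
import re

# Matches "<Capitalised words> Vs. <Capitalised words>" (illustrative only).
CITATION_RE = re.compile(
    r"(?:[A-Z][A-Za-z&']*\s)+[Vv]s\.?\s[A-Z][A-Za-z&']*(?:\s[A-Z][A-Za-z&']*)*"
)

def lookup_in_trusted_index(citation: str, verified_index: set[str]) -> bool:
    """Stand-in for a query to an official reporter or legal database."""
    return citation in verified_index

def vet_submission(draft: str, verified_index: set[str]) -> list[str]:
    """Return every citation that could NOT be verified; the filing should
    stop until a qualified human resolves each one."""
    return [c for c in CITATION_RE.findall(draft)
            if not lookup_in_trusted_index(c, verified_index)]

if __name__ == "__main__":
    draft = "Reliance is placed on Jyoti Tulsiani Vs. Elegant Associates."
    unverified = vet_submission(draft, verified_index=set())
    if unverified:
        print("Do not file. Unverified citations:", unverified)
```

Anything the check flags goes back to a qualified human before filing, which is the “human-in-the-loop” that point 2 demands.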

The Effect on Justice

The Supreme Court’s alarm stems from the fact that legal “hallucinations” are not just technical glitches; they are fundamental threats to the adjudicatory process. Consider the impact on the legal system:

Judicial Efficiency: Our Courts, already burdened with backlogs, must now waste “precious judicial time” (as the HC put it) chasing ghost citations.

Integrity of Process: The legal system relies on the Rule of Law, which is built on verified precedents. Allowing fake cases into the record compromises the accuracy of judgments.

Professional Ethics: It shifts the burden of research from the lawyer to the judge. The court noted that lawyers have a “great responsibility” to cross-verify everything.

Risk of Miscarriage: If a judge (or a less experienced lawyer) fails to catch a hallucinated citation, a person could lose their property, liberty, or rights based on a law that doesn’t exist.

A lesson for the Bar and the Bench

While acknowledging the concerns, the Chief Justice of India stated that this is a lesson for both the Bar and the judges. He said:

  1. “We use it in a very over-conscious manner, and we do not want this to overpower our judicial decision-making.”
  2. Judges must cross-check. This is part of the judicial academy curriculum and is taken care of.
  3. AI may assist with judicial tasks, but it cannot replace or influence judicial reasoning.

The episode casts a duty on lawyers and judges alike to verify AI-generated case law, a duty that judicial academies can take up through lessons to train the Bar and the Bench.

(The author is a former Central Information Commissioner)

Courtesy: The Hans India
