Chief Justice of India B.R. Gavai acknowledged AI misuse, remarking, “We’ve seen our morphed pictures too,” as the Supreme Court heard a PIL seeking guidelines to regulate the use of Generative AI in India’s judiciary.
NEW DELHI: The Supreme Court of India on Monday heard a public interest litigation (PIL) seeking directions to the Union government to frame guidelines or a policy for regulating the use of Artificial Intelligence (AI) and Generative AI (GenAI) in the Indian judiciary.
During the hearing, Chief Justice of India (CJI) B.R. Gavai acknowledged the growing misuse of AI technologies, noting humorously,
“Yes, yes, we have seen our morphed pictures too!”
The remark came as the Bench decided to list the matter for hearing after two weeks.
ALSO READ: PIL in Supreme Court Seeks AI Regulatory Framework to Tackle Deepfakes and Impersonations
What the Petition Seeks
The PIL, filed by advocate Kartikeya Rawal, calls for the formulation of a comprehensive legislative and policy framework governing the use of Generative Artificial Intelligence in judicial and quasi-judicial institutions.
According to the plea, GenAI, unlike traditional AI, is capable of generating entirely new data, including non-existent case laws, which could create ambiguity and misinformation within the legal system.
“The characteristic of GenAI being a black box and having opaqueness has the possibility of creating ambiguity in the legal system followed in India,”
the petition stated.
The petition highlights that GenAI systems can “hallucinate”, a term used to describe instances where AI generates false or fabricated information. In a legal context, this could result in fake precedents, biased reasoning, and erroneous case laws.
“Such arbitrariness is a clear violation of Article 14,”
the plea argues, referring to the right to equality under the Indian Constitution.
ALSO READ: Let AI Assist, But Judges and Lawyers Must Be the Final Arbiters: Justice Surya Kant
The petitioner emphasized that the quality and transparency of training data are crucial in determining the reliability of AI outputs. If the data contains biases or discriminatory patterns, GenAI systems could unintentionally replicate and amplify social prejudices.
The PIL also highlights the ethical and legal challenges posed by the rapid integration of AI tools in judicial processes. The plea warns that AI bias could lead to discrimination against marginalized communities, and urges that data ownership and accountability mechanisms be made transparent.
“AI integrated into the judiciary should have data that is free from bias, and data ownership must be transparent enough to ensure stakeholders’ liability,”
the petition stated.
Supreme Court’s Observations
CJI Gavai, while acknowledging the potential misuse of AI, appeared to take the matter in stride. His remark about morphed images of judges underscored the real-world risks of deepfakes and misinformation made possible by generative AI technologies.
After briefly hearing submissions, the Bench adjourned the matter for two weeks, indicating that it will be taken up for further consideration.

