The Punjab and Haryana High Court has directed judicial officers not to use Artificial Intelligence (AI) tools such as ChatGPT, Gemini, Microsoft Copilot, and Meta AI for drafting judgments or conducting legal research, warning that any violation “will be viewed seriously.”
The directive, to be circulated to district and sessions judges across the two states and the Union Territory, states,
“Hon’ble the Chief Justice has been pleased to ask you to direct the Judicial Officers working under your control not to use Artificial Intelligence tools including but not limited to Chat GPT, Gemini, Copilot, Meta etc. for writing of judgements and legal research. Any violation of these instructions will be viewed seriously.”
This makes the Punjab and Haryana High Court the second high court to curb AI use in the judiciary. Earlier, on April 4, the Gujarat High Court issued a comprehensive policy that barred AI from adjudicatory tasks while permitting limited supportive uses.
That policy forbids AI involvement in decision-making, judicial reasoning, drafting of orders or judgments, bail and sentencing considerations, or any substantive adjudicatory activity.
It emphasises that “AI should be used to improve the speed and quality of justice delivery, rather than as a replacement for judicial reasoning.”
The administrative notice from the Chandigarh-based high court follows recent judicial warnings against integrating AI into adjudication too hastily. Speaking at a North Zone-I Regional Conference on “Advancing Rule of Law through Technology: Challenges & Opportunities,” Justice Ashwani Kumar Mishra cautioned against adopting AI in judicial decision-making without a strong legislative and institutional framework, noting systemic risks.
He argued that incorporating AI directly into adjudication raises grave concerns, especially because subordinate courts often follow precedents set by higher courts,
“We have to have a very strong note of caution… When we endorse a particular viewpoint, the lower judiciary also starts following it. Then this becomes a very serious problem.”
Justice Mishra also observed that the current legal infrastructure is not prepared for AI to perform central judicial functions,
“The minute we start taking it in the judicial dispensation itself, we are in for a very, very serious situation, a crisis of sorts.”
The administrative order reflects these reservations, adopting a cautious stance that allows for technological assistance in support roles while keeping core judicial decision-making insulated from unregulated AI use.

