The Centre has amended the IT Rules to regulate AI-generated content and shorten takedown timelines for unlawful material. Notified by MeitY on February 10, the new provisions will take effect from February 20, 2026.
The Union Government announced amendments requiring that photorealistic AI-generated content be clearly labeled, alongside significantly reducing the time allowed for the removal of illegal material, including non-consensual deepfakes.
These changes, which amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are set to take effect on February 20, 2026.
Under the revised regulations, social media platforms will have just 2 to 3 hours to remove certain types of unlawful content, a substantial reduction from the previous 24-to-36-hour window.
Content deemed illegal by a court or an appropriate government must be removed within 3 hours, while particularly sensitive material, including non-consensual nudity and deepfakes, is to be taken down within 2 hours.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated content as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or a real-world event.”
A senior government official stated on February 10, 2026, that the rules will also include exceptions for automatic enhancements commonly applied by smartphone cameras. The final definition is narrower than what was proposed in a draft released in October 2025.
Social media platforms will need to obtain disclosures from users if their content is AI-generated. If such a disclosure is absent for synthetically generated content, firms must either label the content proactively or remove it in cases of non-consensual deepfakes.
The amended rules require that AI-generated imagery be labeled “prominently.” The October draft had mandated that such a disclosure cover at least 10% of the imagery, but platforms were granted more flexibility after lobbying against that specific threshold.
Similar to the existing IT Rules, non-compliance with these regulations could result in the loss of safe harbour, which is the legal principle that protects sites allowing user-generated content from being held liable like publishers of traditional media.
The rules indicate that if it is established that a social media intermediary knowingly allowed, or failed to act on, synthetically generated information in violation of these rules, it will be deemed to have neglected its duty of care.
Additionally, these amendments partially reverse an October 2025 regulation that limited each state to a single officer authorised to issue takedown orders; states may now appoint multiple officers to handle the administrative demands of larger populations.