
India Cracks Down on Deepfakes: Decoding the February 2026 IT Rules Amendment

  • reetika72

In a decisive move to combat the rising tide of artificial intelligence misuse, the Ministry of Electronics and Information Technology (MeitY) notified significant amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 on February 10, 2026.


The primary driver of this sweeping amendment is the urgent need to regulate "Synthetically Generated Information", commonly known as deepfakes. As AI tools become more accessible, the risk of realistic fake content being used for financial fraud, misinformation, and non-consensual deepfake pornography has escalated. The 2026 Amendment imposes strict obligations on social media platforms to label AI content, verify user declarations, and act on grievances on sharply compressed timelines.



Here is a breakdown of the 5 key changes every user and tech company must know.


1. Defining "Synthetically Generated Information"


For the first time, the IT Rules explicitly define synthetic content. It covers audio, visual, or audio-visual information created or modified by computer resources that appears to be real or depicts a person/event in a way that is indistinguishable from reality.


What is NOT considered a Deepfake? To protect creators and legitimate editing, the rules clarify that routine enhancements are exempt. "Synthetically generated information" does not include:


  • Colour adjustment, noise reduction, or formatting.

  • The use of templates or conceptual content for education/training.

  • Modifications solely for improving accessibility or translation.


2. Mandatory Labelling and Watermarking


Intermediaries (platforms like X, Facebook, Instagram) are now legally bound to ensure AI content is identifiable.


  • Visual Labels: Any AI-generated content must be "prominently labelled" in a way that is easily noticeable to the viewer.

  • Audio Disclosures: AI-generated audio must begin with an audio disclosure alerting the listener.

  • Digital Watermarking: Beyond visible labels, platforms must embed permanent metadata or unique identifiers into the file to track the computer resource used to create it. Platforms are prohibited from allowing users to remove these labels or metadata.
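The watermarking duty is technology-neutral, but the underlying idea can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example (not the C2PA standard or any scheme the rules mandate): it binds a hash of the media bytes to the identifier of the generating tool and signs the record, so that stripping or altering the metadata becomes detectable. The key and function names are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key -- in practice this would live in a
# hardware security module, not in source code.
PLATFORM_KEY = b"demo-signing-key"

def make_provenance_record(content: bytes, generator_id: str) -> dict:
    """Build a tamper-evident provenance record for synthetic media.

    Binds a hash of the content to the identifier of the computer
    resource (AI tool) that produced it.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator_id,
        "synthetic": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and has not been altered."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

media = b"...AI-generated video bytes..."
rec = make_provenance_record(media, generator_id="example-gen-model-v1")
assert verify_record(media, rec)          # intact record verifies
assert not verify_record(b"edited", rec)  # tampered content fails
```

Real deployments would embed such a record inside the media file itself (e.g. in image or video metadata containers) rather than carrying it alongside, but the tamper-evidence principle is the same.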


3. Stricter Rules for Big Tech (Significant Social Media Intermediaries)


Significant Social Media Intermediaries (SSMIs), platforms with large user bases, face additional compliance obligations. They must now:


  • Require User Declarations: Before a user uploads content, they must declare if the information is synthetically generated.

  • Verify Accuracy: The platform must use automated tools to verify this declaration.

  • Force Labelling: If the content is confirmed as synthetic, the platform must ensure the appropriate label is displayed.
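The declare → verify → label sequence is essentially a content-ingestion pipeline. Here is a minimal sketch of that flow; the detection function is a placeholder standing in for the platform's actual automated tooling, and none of the names below come from the rules themselves.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content: str
    user_declared_synthetic: bool

def looks_synthetic(content: str) -> bool:
    # Placeholder for the platform's automated verification tools;
    # a real SSMI would run ML-based deepfake detectors here.
    return "[AI]" in content

def ingest(upload: Upload) -> str:
    """Declare -> verify -> label, mirroring the SSMI obligations above."""
    if upload.user_declared_synthetic or looks_synthetic(upload.content):
        return "synthetically generated"  # label must be displayed
    return "unlabelled"

print(ingest(Upload("[AI] generated clip", user_declared_synthetic=False)))  # synthetically generated
print(ingest(Upload("holiday photo", user_declared_synthetic=False)))        # unlabelled
```

Note that the label is applied when either signal fires: a truthful user declaration suffices, but the automated check also catches undeclared synthetic content.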


4. Drastically Reduced Response Times


The amendment signals the end of long waiting periods for content moderation. The government has tightened the clock on how fast platforms must react to illegal content:


  • 3 Hours for Unlawful Content: Upon receiving actual knowledge (via court order or government notification) of unlawful content affecting national security or public order, intermediaries must remove it within 3 hours (previously 36 hours).

  • 2 Hours for Intimate/Deepfake Images: Complaints regarding nudity, sexual acts, or impersonation (including morphed images) must be acted upon within 2 hours (previously 24 hours).

  • 7 Days for Grievances: The Grievance Officer must now resolve user complaints within 7 days (previously 15 days).
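These statutory clocks translate directly into service-level deadlines in a moderation queue. A minimal sketch of how a platform might compute them (the category keys are my own labels, not terms from the rules):

```python
from datetime import datetime, timedelta

# Response deadlines under the 2026 Amendment (previous limits in comments).
DEADLINES = {
    "unlawful_content": timedelta(hours=3),     # was 36 hours
    "intimate_or_morphed": timedelta(hours=2),  # was 24 hours
    "grievance": timedelta(days=7),             # was 15 days
}

def due_by(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    return received_at + DEADLINES[category]

notice = datetime(2026, 2, 15, 9, 0)
print(due_by("unlawful_content", notice))  # 2026-02-15 12:00:00
```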


5. Penalties and User Warnings


Intermediaries are now required to update their Terms of Service to explicitly warn users about the legal consequences of deepfakes. Platforms must inform users that creating harmful synthetic content—such as non-consensual intimate imagery or fraud—can lead to imprisonment under the Bharatiya Nyaya Sanhita, 2023 and other laws.


Users who violate these rules face:

  • Immediate account suspension or termination.

  • Reporting to law enforcement agencies.


Conclusion


The February 2026 Amendment represents a paradigm shift in Indian internet regulation. By mandating technical provenance (watermarking) and forcing rapid takedowns, the government aims to restore trust in digital media. For social media companies, the era of passive neutrality is over; they must now actively verify the authenticity of the content they host.

 
 
 
