**India orders social media platforms to remove deepfakes faster** (TechCrunch)

📂 **Category**: AI, Government & Policy, Social, deepfakes, Exclusive, India IT rules, takedowns
India has ordered social media platforms to step up oversight of deepfakes and other AI-generated impersonations, while significantly shortening the time allowed to comply with takedown orders. It’s a move that could reshape how global technology companies moderate content in one of the world’s largest and fastest-growing internet markets.
The changes, published on Tuesday as amendments to India’s IT Rules 2021, bring deepfakes under a formal regulatory framework, mandating the labeling and traceability of synthetic audio and visual content. They also shorten compliance timelines for platforms, including a three-hour deadline for formal takedown orders and a two-hour window for some urgent user complaints.
India’s importance as a digital market raises the stakes of the new rules. With over a billion internet users and a predominantly young population, the South Asian country is a critical market for platforms like Meta and YouTube, making it likely that compliance measures adopted in India will shape global product and moderation practices.
Under the revised rules, social media platforms that allow users to upload or share audio and video content must require disclosures about whether the material was artificially created, deploy tools to verify those claims, and ensure deepfakes are clearly labeled and accompanied by traceable source data.
Certain categories of synthetic content – including deceptive impersonation, non-consensual intimate images, and material associated with serious crimes – are prohibited entirely in the rules. Non-compliance, especially in cases reported by authorities or users, could expose companies to greater legal liability by jeopardizing safe harbor protections under Indian law.
Meeting these obligations will rely heavily on automated systems. Platforms are expected to deploy technical tools to verify user disclosures, identify and classify deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.
“The revised IT rules represent a more calibrated approach to regulating AI-generated deepfakes,” said Rohit Kumar, co-founder of New Delhi-based political consulting firm The Quantum Hub. “Significantly compressed grievance timelines – such as two- to three-hour takedown windows – will materially increase compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbor protections.”
The rules now focus on AI-generated audio and visual content rather than all online information, while carving out exceptions for routine, cosmetic, or efficiency-related uses of AI, said Aparajita Rana, partner at AZB & Partners, a leading Indian corporate law firm.
“However, the law still requires intermediaries to remove content upon becoming aware or receiving actual knowledge, and that too within three hours,” Rana said, cautioning that such a short window deviates from well-established principles of freedom of expression. She added that labeling requirements will apply across formats to limit the spread of child sexual abuse material and deceptive content.
The Internet Freedom Foundation, a New Delhi-based digital advocacy organization, said the rules risk accelerating censorship by significantly compressing takedown timelines, leaving little room for human review and pushing platforms toward automated takedowns. In a statement published on X, the group also raised concerns about expanding categories of prohibited content and provisions that allow platforms to disclose user identities to private sector complainants without judicial oversight.
“These extremely short timelines eliminate any meaningful human review,” the group said, warning that the changes could undermine protections for free expression and due process.
Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the Indian government appears to have taken on board proposals to narrow the scope of information covered – focusing on AI-generated audiovisual content rather than all online material – other recommendations have not been adopted. The scale of the changes between the draft and final rules warrants another round of consultations to give companies clearer guidance on compliance expectations, the sources said.
Government removal powers have already been a point of contention in India. Social media platforms and civil society groups have long criticized the breadth and vagueness of content removal orders, and Elon Musk’s X even challenged New Delhi in court over directives to block or remove posts, arguing they amounted to overreach and lacked adequate safeguards.
Meta, Google, Snap, X and India’s Ministry of Information Technology did not respond to requests for comment.
The latest changes come just months after the Indian government, in October 2025, reduced the number of officials authorized to request the removal of content from the internet, in response to X’s legal challenge over the scope and transparency of the government’s removal powers.
The revised rules will go into effect on February 20, giving platforms little time to adjust their compliance systems. The rollout coincides with India hosting the AI Impact Summit in New Delhi from February 16-20, which is expected to attract top global technology executives and policymakers to the country.
🕒 **Posted on**: February 11, 2026
