MeitY’s New AI Rulebook Signals a Regulatory Shift India’s Tech Industry Cannot Ignore

MeitY’s 2026 AI rules introduce stricter regulations on synthetic media and intermediary obligations in India.

Law Firm - Khaitan & Co
By: Harsh Walia
By: Khyati Goel
Update: 2026-03-25 05:45 GMT




Introduction

Since late 2023, the Ministry of Electronics and Information Technology’s (MeitY) approach to misinformation, deepfakes and AI-generated content has evolved from advisory-based enforcement of existing intermediary obligations to an explicit regulatory framework for synthetic media. The clearest expression of that shift is the recent amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Guidelines), brought into force from 20 February 2026 (Amendment). The Amendment introduces a dedicated framework for “synthetically generated information” and imposes specific due diligence obligations where intermediaries enable or facilitate the creation and dissemination of such content.

Evolving trend

The evolution is visible in materials issued by MeitY since November 2023. An initial press release focused on misinformation and deepfakes within the existing intermediary due diligence framework. It required significant social media intermediaries (SSMIs) to identify such content, prevent users from hosting it, and remove reported content within timelines prescribed under the Intermediary Guidelines. The December 2023 advisory extended this compliance focus to all intermediaries and platforms, while strengthening expectations around grievance redressal.


At this stage, MeitY’s approach remained anchored in intermediary law. The March 2024 AI advisories marked a key shift. MeitY advised that intermediaries and platforms must ensure that AI models, LLMs and generative AI tools deployed on their computer resources do not enable unlawful content or violations of applicable law. Under-tested or unreliable foundational models were to be made available to Indian users only with clear labelling of potential fallibility. Synthetic text, audio and audio-visual content capable of misuse as misinformation or deepfakes was to be labelled or embedded with permanent metadata or identifiers. This was the first time MeitY expressly addressed AI systems rather than only intermediaries.

Advisories issued through 2024, 2025 and early 2026 follow a consistent pattern. MeitY identifies harmful content categories (for example, hoax bomb threats, violence-related material, obscene content, or distorted AI-generated religious content), reiterates due diligence obligations under the IT Act and Intermediary Guidelines, cautions that section 79 safe harbour may be lost for non-compliance, and emphasises prompt takedown, user controls, grievance mechanisms and cooperation with authorities. Over time, the language has shifted from “proactive” monitoring to mandating “technology-based measures” and “algorithmic safeguards” against unlawful content.


The Final Amendment

The Amendment gives the trend in relation to synthetic media a formal legal basis. First, it inserts definitions for “audio, visual or audio-visual information” and “synthetically generated information” (SGI). SGI is defined as audio, visual or audio-visual information that is artificially or algorithmically created, generated or modified using a computer resource in a manner that makes it appear real, authentic or true, such that it may reasonably be perceived as indistinguishable from a natural person or a real-world event. At the same time, the rules exclude routine or good-faith editing, formatting, enhancement, transcription, accessibility and translation functions, provided they do not materially alter or misrepresent the substance or meaning of the underlying content.

Secondly, the Amendment clarifies that references to “information” in relevant unlawful-content provisions include SGI. This is an important drafting clarification because it expressly places synthetic media within the scope of the Intermediary Guidelines where unlawful acts are concerned.

Thirdly, the Amendment inserts a new rule that applies where an intermediary offers a computer resource that may enable, permit or facilitate the creation, generation, modification, publication or sharing of SGI. Such intermediaries must deploy reasonable and appropriate technical measures, including automated tools or other suitable mechanisms, to prevent unlawful SGI such as paedophilic content, non-consensual intimate imagery, obscene or pornographic content, false documents or records, content relating to explosives, arms, etc. In doing so, MeitY effectively defines a new category of intermediaries, distinct from the existing categories of social media intermediaries, SSMIs and online gaming intermediaries.

Where SGI is not prohibited, the rules require it to be clearly and prominently labelled with permanent metadata or a unique identifier, to identify the intermediary computer resource used to create, generate, modify or alter it. The rules also state that intermediaries must not enable the modification, suppression or removal of such labels and metadata.
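The labelling obligation is format-agnostic: the rules require permanent metadata or a unique identifier tying SGI back to the generating resource, but prescribe no schema. A minimal, purely illustrative Python sketch of what such a provenance label could contain follows; the field names, the tool identifier and the choice of SHA-256 hashing are assumptions for illustration, not anything mandated by the Amendment.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sgi_label(content: bytes, tool_id: str) -> dict:
    """Build a hypothetical provenance label for synthetically generated
    information (SGI). All field names are illustrative assumptions; the
    amended rules require labelling/metadata but do not fix a format."""
    return {
        "sgi": True,                 # declares the content as SGI
        "tool_id": tool_id,          # identifies the generating computer resource
        "content_sha256": hashlib.sha256(content).hexdigest(),  # content fingerprint
        "created_utc": datetime.now(timezone.utc).isoformat(),  # generation timestamp
    }

label = make_sgi_label(b"example synthetic audio bytes", "example-gen-tool/1.0")
print(json.dumps(label, indent=2))
```

In practice such a record would be embedded in the file itself (for example via image or audio metadata fields, or a content-provenance standard such as C2PA) so that it travels with the content, consistent with the rule that labels and metadata must not be removable.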

The Amendment also tightens obligations for SSMIs by requiring such intermediaries to (i) obtain user declarations as to the nature of the content (whether SGI or not) before allowing content to be displayed, uploaded or published, (ii) deploy technical measures to verify the accuracy of that declaration, and (iii) ensure clear labelling where content is confirmed to be SGI. The amended provision therefore replaces the earlier “endeavour” language with an obligation to “deploy appropriate technical measures”.

Industry impact

The impact on Indian industry will be significant. Social media platforms, AI image and video tools, voice synthesis services and other intermediaries that facilitate synthetic media will need to revisit content moderation systems, user disclosures, metadata and provenance capabilities, complaint-handling workflows and evidence-preservation processes. The shorter compliance timelines under the amended Rules, such as three hours for takedown upon actual knowledge and two hours for complaints concerning nudity, sexual content, impersonation or morphed material, will increase operational pressure considerably.

For larger firms, the challenge will be one of scale and implementation. For smaller Indian startups, the challenge may be more structural: building technical safeguards, labelling systems and traceability measures into products from the outset. At the same time, the amendments may also create demand for compliance tools relating to provenance, watermarking, synthetic media detection and rapid grievance management.

Conclusion

The regulatory trend since 2023 is now clear: MeitY began by treating deepfakes and AI-generated misinformation as part of existing intermediary regulation, then progressed to advisories pushing platforms towards stronger user warnings, faster takedowns, proactive monitoring and technical safeguards.

The Amendment converts that trajectory into express legal obligations for intermediaries that enable or facilitate synthetic media. The result is not a standalone AI statute, but the regulatory focus has become more technology-specific, even though it still operates through the intermediary governance architecture.

Would the next step be a full-fledged AI statute?

Disclaimer – The views expressed in this article are the personal views of the authors and are purely informative in nature.


By: Harsh Walia

Harsh is a leading Indian Telecom, Media, Technology (TMT) and Data Privacy lawyer with over 22 years of experience advising global technology companies, digital businesses, and multinational clients on complex India-facing legal and regulatory matters. He advises on regulatory, contentious, transactional, and strategic compliance issues across the TMT sector, with particular strength in telecom, technology regulation, data protection, privacy governance, and cross-border data flows. He is regularly sought out for advice on the Digital Personal Data Protection Act, 2023 (DPDP Act), including implementation, breach response, consent frameworks, and international data transfer issues. Harsh has been invited by the Government of India to participate in consultations on the DPDP Act and its Rules, and has represented major technology clients before government and regulatory authorities on significant policy and compliance issues. He is also a regular speaker at leading industry platforms such as NASSCOM, FICCI, and ACTO, and serves on digital economy and industry-focused committees.

By: Khyati Goel

Khyati is a Senior Associate with the firm, specializing in the Technology, Data Protection, Media, and Telecommunications (TMT) practice group. She brings extensive experience in advising clients on a wide range of regulatory and compliance matters, including information technology-enabled services, data storage and processing agreements, cloud service arrangements, and cross-border data transfers. Khyati also provides strategic counsel on compliance by AI models, cybersecurity breach reporting obligations, e-commerce platform operations, and other cutting-edge issues in the digital economy. Her expertise ensures clients navigate complex legal landscapes with confidence and precision.
