Meity’s New AI Rulebook Signals a Regulatory Shift India’s Tech Industry Cannot Ignore
The regulatory trend since 2023 is now clear: MeitY began by treating deepfakes and AI-generated misinformation as part of existing intermediary regulation, then used advisories to push platforms towards stronger user warnings, faster takedowns, proactive monitoring and technical safeguards
Introduction
Since late 2023, the Ministry of Electronics and Information Technology’s (MeitY) approach to misinformation, deepfakes and AI-generated content has evolved from advisory-based enforcement of existing intermediary obligations to an explicit regulatory framework for synthetic media. The clearest expression of that shift is the recent amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (Intermediary Guidelines), brought into force from 20 February 2026 (Amendment). The Amendment introduces a dedicated framework for “synthetically generated information” and imposes specific due diligence obligations where intermediaries enable or facilitate the creation and dissemination of such content.
Evolving trend
The evolution is visible in materials issued by MeitY since November 2023. An initial press release focused on misinformation and deepfakes within the existing intermediary due diligence framework. It required significant social media intermediaries (SSMIs) to identify such content, prevent users from hosting it, and remove reported content within timelines prescribed under the Intermediary Guidelines. The December 2023 advisory extended this compliance focus to all intermediaries and platforms, while strengthening expectations around grievance redressal.
At this stage, MeitY’s approach remained anchored in intermediary law. The March 2024 AI advisories marked a key shift. MeitY advised that intermediaries and platforms must ensure that AI models, LLMs and generative AI tools deployed on their computer resources do not enable unlawful content or violations of applicable law. Under-tested or unreliable foundational models were to be made available to Indian users only with clear labelling of potential fallibility. Synthetic text, audio and audio-visual content capable of misuse as misinformation or deepfakes was to be labelled or embedded with permanent metadata or identifiers. This was the first time MeitY expressly addressed AI systems rather than only intermediaries.
Advisories issued through 2024, 2025 and early 2026 follow a consistent pattern. MeitY identifies harmful content categories (for example, hoax bomb threats, violence-related material, obscene content, or distorted AI-generated religious content), reiterates due diligence obligations under the IT Act and Intermediary Guidelines, cautions that section 79 safe harbour may be lost for non-compliance, and emphasises prompt takedown, user controls, grievance mechanisms and cooperation with authorities. Over time, the language has shifted from “proactive” monitoring to mandating “technology-based measures” and “algorithmic safeguards” against unlawful content.
The Final Amendment
The Amendment gives the trend in relation to synthetic media a formal legal basis. Firstly, it inserts definitions for “audio, visual or audio-visual information” and “synthetically generated information” (SGI). SGI is defined as audio, visual or audio-visual information that is artificially or algorithmically created, generated or modified using a computer resource in a manner that appears real, authentic or true, and is likely to be perceived as indistinguishable from a natural person or a real-world event. At the same time, the rules exclude routine or good-faith editing, formatting, enhancement, transcription, accessibility and translation functions, provided they do not materially alter or misrepresent the substance or meaning of the underlying content.
Secondly, the Amendment clarifies that references to “information” in relevant unlawful-content provisions include SGI. This is an important drafting clarification because it expressly places synthetic media within the scope of the Intermediary Guidelines where unlawful acts are concerned.
Thirdly, the Amendment inserts a new rule that applies where an intermediary offers a computer resource that may enable, permit or facilitate the creation, generation, modification, publication or sharing of SGI. Such intermediaries must deploy reasonable and appropriate technical measures, including automated tools or other suitable mechanisms, to prevent unlawful SGI such as paedophilic content, non-consensual intimate imagery, obscene or pornographic content, false documents or records, and content relating to explosives, arms and the like. In doing so, MeitY effectively defines a new category of intermediaries alongside the existing social media intermediaries, SSMIs and online gaming intermediaries.
Where SGI is not prohibited, the rules require it to be clearly and prominently labelled with permanent metadata or a unique identifier, to identify the intermediary computer resource used to create, generate, modify or alter it. The rules also state that intermediaries must not enable the modification, suppression or removal of such labels and metadata.
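For engineering teams, the labelling obligation raises a practical question: how does one attach a "permanent" identifier that binds the SGI label to the content and identifies the generating resource? A minimal sketch, assuming a JSON manifest signed with an HMAC (the field names, key handling and manifest format are illustrative assumptions, not anything prescribed by the rules):

```python
# Hypothetical sketch: a tamper-evident SGI provenance record attached to
# content. Field names ("label", "origin") and the demo key are assumptions.
import hashlib
import hmac
import json

PLATFORM_KEY = b"demo-signing-key"  # in practice, a securely managed secret

def make_sgi_manifest(content: bytes, origin: str) -> str:
    """Build a signed JSON manifest labelling the content as SGI."""
    record = {
        "label": "synthetically-generated-information",
        "origin": origin,  # identifies the computer resource that made it
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds label to bytes
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps(record, sort_keys=True)

def verify_sgi_manifest(content: bytes, manifest_json: str) -> bool:
    """Check the manifest matches the content and the platform's signature."""
    record = json.loads(manifest_json)
    sig = record.pop("signature", "")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and record.get("content_sha256") == hashlib.sha256(content).hexdigest())

if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    manifest = make_sgi_manifest(media, "example-genai-tool/v1")
    print(verify_sgi_manifest(media, manifest))        # True
    print(verify_sgi_manifest(b"edited", manifest))    # False: label stripped/content altered
```

Because the signature covers a hash of the content, any attempt to reuse the label on altered material fails verification, which is one way to approximate the rule's prohibition on modifying or suppressing labels.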
The Amendment also tightens obligations for SSMIs, requiring them to (i) obtain user declarations as to the nature of the content (whether or not it is SGI) before allowing it to be displayed, uploaded or published, (ii) deploy technical measures to verify the accuracy of that declaration, and (iii) ensure clear labelling where content is confirmed to be SGI. The amended provision therefore replaces the earlier “endeavour” language with an obligation to “deploy appropriate technical measures”.
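The declaration-plus-verification sequence the provision describes could be sketched as a simple upload pipeline. The detector below is a stand-in assumption; real SSMIs would plug in an actual synthetic-media classifier:

```python
# Hypothetical sketch of the SSMI upload flow: collect a user declaration,
# run a (stubbed) technical check, and label content confirmed as SGI.
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    declared_sgi: bool   # the user's declaration before display/publication
    label: str = ""

def looks_synthetic(content: bytes) -> bool:
    """Placeholder for a real synthetic-media detector (assumption)."""
    return content.startswith(b"SGI:")

def process_upload(upload: Upload) -> Upload:
    """Label content as SGI if declared or detected, before it is displayed."""
    if upload.declared_sgi or looks_synthetic(upload.content):
        upload.label = "synthetically-generated-information"
    return upload

if __name__ == "__main__":
    declared = process_upload(Upload(b"frame-data", declared_sgi=True))
    caught = process_upload(Upload(b"SGI:frame-data", declared_sgi=False))
    print(declared.label, caught.label)
```

The point of the two-step design is that an inaccurate declaration does not defeat labelling: the technical check runs regardless of what the user declares.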
Industry impact
The impact on Indian industry will be significant. Social media platforms, AI image and video tools, voice synthesis services and other intermediaries that facilitate synthetic media will need to revisit content moderation systems, user disclosures, metadata and provenance capabilities, complaint-handling workflows and evidence-preservation processes. The shorter compliance timelines under the amended Rules, such as three hours for takedown upon actual knowledge and two hours for complaints concerning nudity, sexual content, impersonation or morphed material, will increase operational pressure considerably.
For larger firms, the challenge will be one of scale and implementation. For smaller Indian startups, the challenge may be more structural: building technical safeguards, labelling systems and traceability measures into products from the outset. At the same time, the amendments may also create demand for compliance tools relating to provenance, watermarking, synthetic media detection and rapid grievance management.
Conclusion
The regulatory trend since 2023 is now clear: MeitY began by treating deepfakes and AI-generated misinformation as part of existing intermediary regulation, then used advisories to push platforms towards stronger user warnings, faster takedowns, proactive monitoring and technical safeguards.
The Amendment converts that trajectory into express legal obligations for intermediaries that enable or facilitate synthetic media. The result is not a standalone AI statute, but the regulatory focus has become distinctly more technology-specific, even though it still operates through the intermediary governance architecture.
Would the next step be a full-fledged AI statute?
Disclaimer – The views expressed in this article are the personal views of the authors and are purely informative in nature.