New Delhi: The government on Wednesday proposed changes to IT rules, mandating the clear labelling of AI-generated content and increasing the accountability of large platforms like Facebook and YouTube for verifying and flagging synthetic information to curb user harm from deepfakes and misinformation.
The IT ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create “convincing falsehoods”, where such content can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The proposed amendments to IT rules provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendment mandates labelling, visibility, and metadata embedding for synthetically generated or modified information to distinguish such content from authentic media; stakeholder comments on the draft have been sought by November 6, 2025. The stricter rules would also increase the accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.
The draft rules require platforms to label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the initial 10 per cent of the duration of an audio clip.
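As a rough illustration of the arithmetic behind that requirement, the minimal sketch below shows how a platform might compute the minimum label footprint for an image frame or the labelled opening segment of an audio clip. The function names, thresholds, and example dimensions are assumptions made for illustration only; they are not prescribed by the draft rules.

```python
# Hypothetical sketch of the draft's 10 per cent labelling requirement.
# Names and defaults are illustrative assumptions, not part of the draft text.

def min_visual_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum on-screen area (in pixels) a visual label would need to cover."""
    return int(width_px * height_px * coverage)

def audio_label_window(duration_s: float, fraction: float = 0.10) -> float:
    """Length of the initial audio segment (in seconds) that would carry the notice."""
    return duration_s * fraction

if __name__ == "__main__":
    # A 1080p frame: the label would need to cover at least 207,360 pixels.
    print(min_visual_label_area(1920, 1080))
    # A 90-second clip: the notice would cover the first 9 seconds.
    print(audio_label_window(90.0))
```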
The draft also requires significant social media platforms to obtain a user declaration on whether uploaded information is synthetically generated, deploy reasonable and proportionate technical measures to verify such declarations, and ensure that AI-generated information is clearly labelled or accompanied by a notice indicating the same. It further prohibits intermediaries from modifying, suppressing, or removing such labels or identifiers.
“In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society...people using some prominent person’s image, which then affects their personal lives, and privacy...Steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing,” IT Minister Ashwini Vaishnaw said, adding that mandatory labelling and visibility will enable clear distinctions between synthetic and authentic content.
Once the rules are finalised, any compliance failure could mean loss of the safe harbour protection that large platforms currently enjoy.
With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly, the ministry said.
Accordingly, the IT ministry has prepared draft amendments to the IT Rules, 2021, aimed at strengthening due diligence obligations for intermediaries, particularly significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that makes it appear reasonably authentic or true.
A note by the ministry said that policymakers, globally and in India, are increasingly concerned about fabricated or synthetic images, videos, and audio clips (deepfakes) that are indistinguishable from real content and are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, and commit fraud or impersonation for financial gain.
The move assumes significance as India is among the top markets for global social media platforms.