By OUR CORRESPONDENT
New Delhi, India – The Indian government has tightened its regulatory oversight of digital platforms, introducing stringent new rules aimed at curbing the spread of deepfakes and misinformation. Under the amended Information Technology Rules, social media companies are now legally required to remove flagged unlawful content within three hours, a sharp reduction from the previous thirty-six-hour window.
These regulations, which are set to come into effect on February 20, place a heightened burden of accountability on tech giants such as Meta, X, and Google. Beyond the expedited takedown timelines, platforms must ensure that all synthetically generated information is accompanied by prominent labels and embedded metadata. This measure is designed to provide transparency, allowing users to immediately distinguish between authentic media and content created or altered using artificial intelligence.
The Ministry of Electronics and Information Technology has also mandated that platforms obtain formal declarations from users when they upload AI-generated material. To enforce compliance, companies are expected to deploy automated tools capable of detecting deceptive or sexually explicit synthetic content. Failure to adhere to these new standards could result in platforms losing their ‘safe harbour’ status, potentially leaving them legally liable for the user-generated content they host.
While the government maintains that these steps are essential for digital safety and the integrity of the information ecosystem, industry experts have raised concerns regarding the practicalities of such rapid compliance. Critics suggest that the three-hour deadline may lead to over-censorship as platforms rush to remove content to avoid legal repercussions.
DW