Deepfakes and Digital Dilemmas: India's Unfolding Battle Against AI Misuse
By Nishadil · October 24, 2025
It started, really, with a jolt, didn’t it? That now-infamous deepfake of actor Rashmika Mandanna that exploded across the internet last November. It wasn’t just a simple hoax, you see; it was a chilling glimpse into a digital future that feels, for many of us, already a little too close for comfort.
And honestly, it wasn’t an isolated incident either; others like Katrina Kaif and even the legendary Sachin Tendulkar have found themselves unwitting victims in this insidious new landscape.
For a while now, there's been this quiet hum, a growing unease about the darker corners of Artificial Intelligence.
But that Mandanna deepfake, well, it was a thunderclap. It forced a conversation, pushing the issue from tech news sidebars right into the glaring spotlight. And, quite rightly, the Indian government took notice. The reaction was swift, a stern warning from Union Minister Rajeev Chandrasekhar echoing through the digital corridors: platforms, he essentially said, simply cannot turn a blind eye to such dangerous content.
In truth, India has been grappling with AI misuse for a couple of years now.
Our existing IT Rules of 2021 already lay down a foundational expectation: platforms must remove unlawful content within a tight 36-hour window. But the rapid, almost dizzying evolution of AI, particularly generative AI that can whip up hyper-realistic fakes in moments, means the old rules, while a start, are perhaps a tad behind the curve.
So, what happened? Well, a crucial amendment was quietly introduced, shifting the onus, requiring platforms to make "reasonable efforts" to ensure such content doesn’t even appear in the first place. A subtle change, perhaps, but a significant one in the grand scheme of things, moving from reactive cleanup to proactive prevention.
But what exactly is a deepfake in the eyes of the law? Essentially, it's any content created or significantly altered using computational techniques that distort reality, designed to impersonate someone or, even worse, mislead users.
And the stakes, as we’re learning, are incredibly high. The government isn’t just tinkering around the edges anymore. There's a much bigger piece of legislation on the horizon: the Digital India Act (DIA).
This new act is poised to replace the two-decade-old IT Act of 2000 and, in effect, the current IT Rules.
It's a complete overhaul, you could say, designed specifically to tackle the complexities of our digital age. One of its cornerstones? A strong emphasis on intermediary accountability and, crucially, a new 'duty of care' for these platforms. This isn’t just about penalties, though those are certainly on the table—we’re talking potential imprisonment and hefty fines.
It’s about cultivating a digital environment where platforms are not just hosts, but guardians.
And yet, the responsibility doesn't rest solely with the government or the tech giants. We, the users, also have a vital role to play. If you spot something suspicious, something that feels off or looks like a deepfake, reporting it isn't just a suggestion; it's a shared responsibility to protect our collective digital space.
Because, let’s be honest, in this interconnected world, misinformation can spread like wildfire, causing real harm.
India, for its part, has ambitious plans. It envisions itself as a global leader in AI innovation. But how can you achieve that if the very foundations of trust and safety are constantly undermined? It’s a delicate tightrope walk: fostering innovation while building robust defenses against misuse.
This two-year fight, intensifying with each new deepfake revelation, is far from over. It’s a continuous, evolving battle against an invisible enemy, but one that India seems increasingly determined to win, step by thoughtful step.
Disclaimer: This article was generated in part using artificial intelligence and may contain errors or omissions. The content is provided for informational purposes only and does not constitute professional advice. We make no representations or warranties regarding its accuracy, completeness, or reliability. Readers are advised to verify the information independently before relying on it.