AI-Generated Casteist Video Targeting CJI Sparks Legal Action: Social Media User Booked in Mumbai
- Nishadil
- October 10, 2025

In a significant development highlighting the growing concerns around AI misuse and online hate speech, a social media user has been booked by Mumbai police for allegedly creating and disseminating an AI-generated video with casteist remarks aimed at the Chief Justice of India (CJI) D.Y. Chandrachud.
This incident underscores the urgent need for stringent regulations and vigilant monitoring of digital content, particularly in the realm of deepfake technology.
The FIR was registered following a complaint lodged by the IT cell of a prominent political party, which flagged the video's inflammatory content.
The video, manipulated using Artificial Intelligence, featured a distorted voice and imagery designed to mimic the CJI, while delivering derogatory and caste-based comments. This malicious act not only attempts to defame a high constitutional authority but also seeks to sow discord and promote discrimination within society.
Authorities have invoked several sections of the Indian Penal Code (IPC) and the Information Technology (IT) Act in the case.
These include Section 153A (promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony), Section 505(2) (statements creating or promoting enmity, hatred or ill-will between classes), and Section 67 of the IT Act (publishing or transmitting obscene material in electronic form).
The serious nature of the charges reflects the gravity with which law enforcement views such digital offenses.
Investigators are now actively working to trace the originator of the video and gather technical evidence. This involves collaborating with social media platforms to identify the user responsible for creating and first uploading the content, as well as those who further propagated it.
The challenge lies in navigating the complexities of digital forensics and ensuring accountability in an increasingly decentralized online landscape.
This incident serves as a stark reminder of the ethical quandaries posed by advanced AI technologies. While AI offers immense potential for progress, its misuse can have severe societal implications, ranging from defamation and misinformation to incitement of hatred and electoral manipulation.
Experts are calling for robust frameworks, including AI literacy programs, platform accountability, and clear legal guidelines, to combat the rise of malicious AI-generated content.
The case is ongoing, and further details are expected as the investigation progresses. It is anticipated that this booking will send a strong message to individuals attempting to exploit technology for harmful purposes, reaffirming the commitment of legal authorities to uphold social harmony and protect public figures from targeted online attacks.
Disclaimer: This article was generated in part using artificial intelligence and may contain errors or omissions. The content is provided for informational purposes only and does not constitute professional advice. We make no representations or warranties regarding its accuracy, completeness, or reliability. Readers are advised to verify the information independently before relying on it.