The misuse of artificial intelligence continues to challenge ethical and legal boundaries, with one of its most alarming manifestations being deepfake pornography. To counter this growing threat, the US Senate recently passed the bipartisan ‘Take It Down’ Act, which criminalises the publication of non-consensual intimate imagery, including AI-generated deepfake content. The Bill, which mandates social media platforms and websites to remove such content within 48 hours of notification from victims, now moves to the House for consideration.
It is important to understand the technology behind deepfake pornography, its rise, dangers, how individuals can protect themselves, and what to do if photos or videos are being misused.
To get insights about the issue, we spoke with Abhivardhan, chairperson and managing trustee, Indian Society of Artificial Intelligence and Law, and Lohit Matani, Nagpur’s deputy commissioner of police (cyber).
In 2024, deepfakes took centre stage in India after morphed images of Bollywood celebrities Rashmika Mandanna, Alia Bhatt, and Katrina Kaif spread like wildfire on social media. However, deepfakes are not limited to celebrities; the technology is increasingly being used to target ordinary people, making deepfake pornography a widespread concern.
According to social media analytics firm Twicsy, 84 per cent of social media influencers have fallen victim to deepfake pornography. What began with hyper-realistic AI-generated celebrity images has now transitioned into scams and blackmail targeting ordinary people.
Take the case of Sanket (name changed). Early last year, he clicked on a seemingly harmless advertisement on a social media platform. What followed was a nightmare—scammers with access to his public pictures began blackmailing him with deepfake intimate images, threatening to distribute them unless he paid up.
Sanket initially ignored the threats, turning off his phone to avoid incessant WhatsApp calls. But the harassment escalated when fake Facebook profiles began sharing doctored images of him in the comments section of his posts. He deleted his Facebook profile and changed his number. Fearing judgment and distrust from law enforcement, Sanket chose not to report the incident to the cybercrime cell. “They would have found faults in my social media usage, would have judged me for clicking on an inappropriate advertisement, and I believe nothing would have changed,” he told indianexpress.com.
With better awareness, Sanket could have averted the emotional and social toll of such a traumatic experience.
Creating deepfake pornography has become alarmingly easy, with more than a hundred “nudify” websites, software and applications available online. These tools allow even users with minimal technical skills to create deepfakes. A simple search will lead users to these websites.
While the “rules” on such websites say only those above 18 years of age are allowed, no proof or verification is required, much like on porn sites. And although these websites warn users not to use others’ images without their permission, it is obvious that users are not there to create pornographic images of themselves.
According to Abhivardhan, cases of deepfakes in India have grown at an alarming pace—not just in political campaigns but also in non-political contexts. “According to a report, deepfake online content increased by 900 per cent between 2019 and 2020 globally, and some researchers predict that up to 90 per cent of online content could be made synthetically by 2026. More recent analysis shows that Indian social media users are frequently exposed to AI-altered media,” he said.
Modern AI deepfakes are powered by Generative Adversarial Networks (GANs), which use two machine-learning models: a “generator” that creates synthetic media and a “discriminator” that tries to detect whether it is fake. Through repeated attempts, the generator refines its output until the visual or audio content is convincingly realistic. This continuous feedback loop between the generator and discriminator polishes the output until it is often indistinguishable from genuine footage, which is what makes these deepfakes so deceptive. Advanced AI can capture minute details: facial expressions, lighting nuances, voice tones, and more.
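The generator–discriminator feedback loop described above can be sketched with a deliberately simplified toy. Real GANs train two deep neural networks against each other; in this illustration the “generator” is a single number and the “discriminator” a hand-written scoring function, both assumptions made purely to show the mechanics:

```python
import random

# Toy sketch of the GAN feedback loop. Real GANs train two neural
# networks; here the "generator" is one parameter and the
# "discriminator" a fixed scoring function, purely for illustration.

def discriminator(x, real_mean=5.0):
    """Score how 'real' a sample looks: 1.0 at the real data's mean,
    falling towards 0 as the sample drifts away."""
    return max(0.0, 1.0 - abs(x - real_mean) / 5.0)

def train_generator(steps=500, lr=0.1, seed=0):
    rng = random.Random(seed)
    g = 0.0  # the generator's only parameter: the value it outputs
    for _ in range(steps):
        fake = g + rng.gauss(0, 0.1)  # generator produces a noisy sample
        # Estimate which direction raises the discriminator's score...
        grad = (discriminator(fake + 0.01) - discriminator(fake - 0.01)) / 0.02
        g += lr * grad  # ...and nudge the generator that way
    return g

print(train_generator())  # ends up near the real data's mean of 5.0
```

The same dynamic, scaled up to millions of parameters and image pixels, is what lets a GAN's output fool both the discriminator and human viewers.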
“At present, India lacks a comprehensive legal framework specifically tailored to AI-generated content or deepfakes. AI platforms generally operate in a grey area—so long as they adhere to India’s Information Technology (IT) Act, 2000, and avoid hosting or distributing explicit illegal content, they may remain technically within legal limits. However, the law is still evolving,” Abhivardhan said.
While the Bharatiya Nyaya Sanhita (BNS), 2023, includes provisions for defamation, fraud, and identity theft, deepfakes and non-consensual AI-generated images and videos are not explicitly addressed. The same is true of the IT Act, 2000.
Recent advisories from the Ministry of Electronics and Information Technology have faced criticism for being vague and non-binding. Civil remedies under privacy and defamation laws also remain limited in scope.
Preserve evidence: Capture screenshots, note URLs and timestamps of the manipulated content.
Report to authorities: File a complaint with the cybercrime cell via the Ministry of Home Affairs portal or at a local police station. Section 66E (violation of privacy) of the IT Act or relevant provisions of the BNS can be invoked.
Request platform takedowns: Most social media platforms have policies against non-consensual intimate images or manipulated content. Victims can file a takedown request with evidence.
Seek legal help: A legal notice or cease-and-desist letter might be needed if the perpetrator or platform is unresponsive.
NGOs and legal aid programmes: While India does not have a specific deepfake support organisation, certain digital rights groups, cyber law clinics, and pro bono legal initiatives may help navigate the process.
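The evidence-preservation step above can be supported by a simple script that records a URL, a UTC timestamp, and a SHA-256 digest of each saved screenshot; the digest lets a victim later show that the saved file has not been altered since capture. A minimal sketch (the field names and example URL are hypothetical):

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(url: str, screenshot_bytes: bytes) -> dict:
    """Bundle a URL, a UTC timestamp, and a SHA-256 digest of a screenshot.

    The digest can later demonstrate that the saved image is unchanged
    since the moment it was recorded.
    """
    return {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(screenshot_bytes).hexdigest(),
    }

record = evidence_record("https://example.com/post/123", b"<png bytes>")
print(json.dumps(record, indent=2))
```

Keeping such records alongside the original files makes the later complaint and takedown requests easier to substantiate.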
According to Matani, “The primary motive when a victim files a complaint regarding morphed intimate images is to take down the image from the platform where it has been uploaded or shared. The steps are to identify the account, delete the media, and identify and nab the accused.”
In cases involving sharing of such manipulated data on WhatsApp, “it becomes important to delete the images from the devices of the people”, said Matani. “If the pictures are shared on WhatsApp groups, the admins are treated as accused in the case.
The accused are booked under a combination of sections of the Information Technology Act, 2000, and the Bharatiya Nyaya Sanhita, 2023. In cases of child pornography, the accused are booked under relevant sections of the Protection of Children from Sexual Offences Act (POCSO),” he said.
Abhivardhan listed some preventive measures for social media users:
📌Restrict privacy settings: Limit who can view or download your content.
📌Avoid sharing sensitive data: Remove geotags, personal identifiers, or any data that can be used to create convincing forgeries.
📌Two-factor authentication: Enable it to reduce account hacking risks.
📌Use watermarking: Subtle watermarking of personal images can deter misuse. While this may not be foolproof, it adds a barrier.
📌Be vigilant: Regularly review tagged photos and unknown friend requests to catch unauthorised usage.
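The watermarking tip can be illustrated with a toy least-significant-bit (LSB) scheme that hides an ownership string in decoded pixel data. This is only a sketch under assumptions: real watermarking tools work on encoded image files and use far more robust embedding, and the pixel list and message here are invented for the example.

```python
def embed_watermark(pixels, message):
    """Toy LSB watermark: hide message bits in the blue channel's lowest bit.

    `pixels` is a flat list of (r, g, b) tuples, i.e. an image already
    decoded to raw RGB; purely an illustration of the idea.
    """
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        r, g, b = out[i]
        out[i] = (r, g, (b & ~1) | bit)  # overwrite blue LSB with one bit
    return out

def extract_watermark(pixels, length):
    """Read `length` bytes back out of the blue-channel LSBs."""
    bits = [b & 1 for (_, _, b) in pixels[: length * 8]]
    data = bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )
    return data.decode()

img = [(120, 80, 200)] * 100          # a dummy 100-pixel "image"
marked = embed_watermark(img, "owner:me")
print(extract_watermark(marked, len("owner:me")))  # prints "owner:me"
```

An invisible mark like this survives casual copying and can help prove an image's origin, though cropping or re-encoding can destroy it, which is why it is a deterrent rather than a guarantee.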
Deepfake technology poses unique challenges to privacy and security. Beyond its misuse in pornography, it enables sophisticated fraud, such as voice cloning for identity theft.
Abhivardhan underscores the shared responsibility of governments, tech platforms, and citizens to address this issue. While AI-driven innovations hold promise, their misuse must be curtailed to protect personal rights and maintain public trust in digital spaces.