How Should Deepfakes Be Regulated in India?

Rahul Kumar
Intern - Policy & Advocacy, CyberPeace
PUBLISHED ON
Dec 12, 2025

Introduction

The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person's facial features, voice, or actions with a level of fidelity that makes it difficult for the average individual to distinguish real from fake. The technology's creative potential is genuinely valuable. However, its misuse has escalated rapidly in recent years, threatening privacy and dignity and facilitating the spread of misinformation and disinformation. Its real-world consequences include the manipulation of elections, threats to national security, and the erosion of trust in society.

Why India Needs Deepfake Regulation

Deepfake regulation is urgently needed in India, as evidenced by the recent Rashmika Mandanna incident, in which a deepfake video of the actress went viral and caused nationwide outrage. It was one of the first widely publicised cases in India in which an individual's face was superimposed onto another woman's body in a viral deepfake video, fooling many viewers and angering those who discovered they had been deceived. The incident even prompted law enforcement agencies to issue public warnings about the dangers of manipulated media.

This was not an isolated incident; many influencers, actors, public leaders and ordinary citizens have fallen victim to deepfake pornography, voice-cloning scams, financial fraud, and other malicious uses of deepfake technology. The rapid proliferation of deepfakes is outpacing lawmakers' efforts to regulate them. In this context, a Private Member's Bill was introduced by an individual MP in the Lok Sabha during its Winter Session. Although Private Member's Bills have historically had a low rate of success in being passed into law, they give the government an opportunity to take notice of and respond to emerging issues; indeed, they have been the catalyst for government action on many important matters and have provided an avenue for parliamentary discussion and future policy-making. The introduction of this Bill underscores the public concern surrounding digital impersonation and signals that Parliament acknowledges deepfakes as a significant problem in need of a legislative framework.

Key Features Proposed by the New Deepfake Regulation Bill

The proposed legislation aims to create a strong legal structure around the creation, distribution and use of deepfake content in India. Its core proposals are:

1. Prior Consent Requirement: Individuals must give written approval before their faces, images, likenesses or voices are used to produce or distribute deepfake media or other digital representations of them. This is intended to protect women, celebrities, minors and everyday citizens against the use of their identities in deepfakes created to harm their reputations or to harass them.

2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal consequences for creating or sharing deepfake media with the intent to cause harm, whether by defaming, harassing, impersonating, deceiving or manipulating another person. It also addresses the use of deepfakes for financial fraud, political misinformation, election interference and explicit AI-generated media.

3. Establishment of a Deepfake Task Force: The task force would examine the potential impact of deepfakes on national security, elections, public order, public safety and privacy. It would work with academic institutions, AI research labs and technology companies to develop advanced deepfake detection tools and establish best practices for the safe and responsible use of generative AI.

4. Creation of a Deepfake Detection and Awareness Fund: The fund would support the development of deepfake detection tools, build the capacity of law enforcement agencies to investigate cybercrime, promote public awareness of deepfakes through national campaigns, and finance research on artificial intelligence safety and misinformation.

How Other Countries Are Handling Deepfakes

1. United States
Several states in the United States, including California and Texas, have enacted laws prohibiting the use of politically deceptive deepfakes during elections. The federal government is also developing regulations requiring AI-generated content to be clearly labelled, and social media platforms are being encouraged to require users to disclose deepfakes.

2. United Kingdom
In the United Kingdom, creating or distributing intimate deepfake images without consent is a criminal offence, and violators face jail time. The Online Safety Act places accountability on digital platforms, requiring them to identify, remove and prevent harmful synthetic content, which makes their role in curating safe online environments all the more important.

3. European Union
The EU AI Act governs deepfakes by requiring that AI-generated or manipulated content be clearly labelled as such. Failure to label exposes an offending party to potentially severe regulatory consequences; any platform wishing to do business in the EU should therefore assess the risks associated with deepfakes and adhere strictly to the EU's transparency requirements for manipulated media.

4. China
China has some of the most rigorous deepfake regulations in the world. All AI-manipulated media must carry a visible watermark, users must verify their identities before being allowed to use advanced AI tools, and online platforms are legally required to take proactive measures to identify and remove synthetic material from circulation.

Conclusion

Deepfake technology has the potential to be one of the most powerful (and most dangerous) innovations of AI. Incidents such as the one involving Rashmika Mandanna, along with the global proliferation of deepfake abuse, demonstrate how easily truth can be distorted in the digital realm. The new Private Member's Bill introduced in India seeks to provide a comprehensive framework to address these abuses, built on prior consent, penalties that actually work, technical preparedness, and public education and awareness. With other nations moving towards increased regulation of AI, proposals such as this offer India a path towards leadership in responsible digital governance.

References 

  1. https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
  2. https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
  3. https://www.bbc.com/news/world-asia-india-67305557
  4. https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
  5. https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/ 
  6. https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent
  7. https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/ 
  8. https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/
