#FactCheck: No, Iran’s Supreme Leader Mojtaba Khamenei Is Not Dead—Viral Video Debunked
Executive Summary
A video circulating on social media claims that Iran’s new Supreme Leader, Mojtaba Khamenei, has passed away, with users attributing the claim to American sources. However, research by CyberPeace found the claim to be false: Mojtaba Khamenei is alive and in good health.
Claim
A Facebook user shared the viral video, claiming that Iran’s new Supreme Leader Mojtaba Khamenei had died.

Fact Check
To verify the claim, we conducted keyword searches on Google but found no credible media reports confirming his death. Further research led us to a report published on April 10, 2026, by ABP News. According to the report, amid discussions around a ceasefire, Mojtaba Khamenei issued a statement saying that Iran does not seek war with the United States or Israel, but as a nation, it must defend its rights.

Additionally, the image used in the viral video was analyzed using the AI detection tool HIVE Moderation. The results indicated a 99% probability that the image is AI-generated.
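To illustrate how such a probability score might feed into a moderation workflow, the sketch below gates an upload on a detector's AI-generation score. Everything here is hypothetical: `detect_ai_probability` stands in for whatever classifier a platform integrates (the check above used Hive Moderation), and the 0.90 threshold is an assumed policy value, not one taken from the tool.

```python
from typing import Callable

# Assumed policy threshold for flagging, not a value from any real service.
FLAG_THRESHOLD = 0.90

def moderate_upload(image_bytes: bytes,
                    detect_ai_probability: Callable[[bytes], float]) -> str:
    """Flag an image for human review when the detector scores it as
    likely AI-generated; otherwise allow it through."""
    score = detect_ai_probability(image_bytes)
    return "flag_for_review" if score >= FLAG_THRESHOLD else "allow"
```

In practice the detector would be a network call to a classification service; here it is passed in as a plain function so the gating logic stays testable.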

Conclusion
The viral claim is false and misleading. There is no credible evidence to suggest that Mojtaba Khamenei has died. On the contrary, recent verified reports confirm that he is alive and has even issued public statements on ongoing geopolitical developments. The widespread circulation of this claim appears to be driven by misinformation, amplified through social media without verification. The use of AI-generated visuals further adds to the confusion, making the content appear authentic at first glance.
Introduction
In India, children's rights with regard to the protection of their personal data are enshrined in the Digital Personal Data Protection Act, 2023 (DPDP Act), India's newly enacted digital personal data protection law. The DPDP Act requires verifiable consent of parents or legal guardians for the processing of children's personal data; processing without such consent constitutes a violation of the Act. Under Section 2(f) of the DPDP Act, a “child” means an individual who has not completed the age of eighteen years.
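The Section 2(f) age test reduces to a simple date calculation. The sketch below is illustrative only; the function name and the 29 February handling are implementation choices, not drawn from the Act.

```python
from datetime import date

def is_child(date_of_birth: date, on_date: date) -> bool:
    """Section 2(f) sketch: True while the person has not completed
    eighteen years of age as of `on_date`."""
    try:
        eighteenth_birthday = date_of_birth.replace(year=date_of_birth.year + 18)
    except ValueError:
        # Born on 29 February of a leap year: treat 1 March as the
        # date on which eighteen years are completed.
        eighteenth_birthday = date(date_of_birth.year + 18, 3, 1)
    return on_date < eighteenth_birthday
```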
Section 9 under the DPDP Act, 2023
With reference to the collection of children's data, Section 9 of the DPDP Act, 2023 provides that for children below 18 years of age, consent from parents or legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or of a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or the lawful guardian. Section 9 aims to create a safer online environment for children by limiting the exploitation of their data for commercial or other purposes. By virtue of this section, parents and guardians have greater control over their children's data and privacy, and are empowered to make choices about how they manage their children's online activities and the permissions they grant to various online services.
Section 9(3) specifies that a Data Fiduciary shall not undertake tracking or behavioural monitoring of children, or targeted advertising directed at children. However, Section 9(5) provides room for exemption from this prohibition: it empowers the Central Government to notify exemptions for specific Data Fiduciaries or data processors from the behavioural-tracking and targeted-advertising prohibition under the DPDP Rules, which are yet to be announced.
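The two obligations described above can be sketched as simple gate functions. This is only an illustrative model of Sections 9(1), 9(3), and 9(5), assuming hypothetical flags a platform might maintain; none of the names come from the Act or the yet-to-be-notified Rules.

```python
from dataclasses import dataclass

@dataclass
class DataPrincipal:
    is_child: bool                               # under 18 per Section 2(f)
    has_verifiable_guardian_consent: bool = False

def may_process_personal_data(user: DataPrincipal) -> bool:
    """Section 9(1) sketch: a child's personal data may be processed
    only with verifiable consent of a parent or lawful guardian."""
    return (not user.is_child) or user.has_verifiable_guardian_consent

def may_track_or_target(user: DataPrincipal,
                        government_exemption: bool = False) -> bool:
    """Section 9(3) sketch: no behavioural tracking or targeted
    advertising for children, unless the Central Government notifies
    an exemption (Section 9(5))."""
    return (not user.is_child) or government_exemption
```

Note that guardian consent under 9(1) does not lift the 9(3) prohibition in this model; only a government-notified exemption does, mirroring the structure of the provision.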
Impact on social media platforms
Social media companies are raising concerns about Section 9 of the DPDP Act and the upcoming DPDP Rules. Section 9 prohibits behavioural tracking of children and targeted advertising directed at them on digital platforms. By prohibiting intermediaries from tracking a child's internet activities and serving targeted advertising, the law aims to preserve children's privacy. However, social media corporations contend that this limitation adversely affects the efficacy of safety measures intended to safeguard young users, arguing that monitoring specific user signals, including those from minors, is necessary for the safety measures designed for them to work.
Social media companies assert that tracking teenagers' behaviour is essential for safeguarding them from predators and harmful interactions. They believe that a complete ban on behavioural tracking is counterproductive to the government's objective of protecting children. The scope to grant exemptions leaves the door open for further advocacy on this issue. Hence, coordination with the concerned ministry and relevant stakeholders is needed to find a balanced approach that maintains both privacy and safety for young users.
Furthermore, the impact on social media platforms extends to the user experience and to the operational costs of implementing the changes these regulations require, including significant changes to platforms' algorithms and data-handling processes. Implementing robust age verification systems to identify young users and protect their data will be a technically challenging step for platforms of all scales. Ensuring that children’s data is not used for targeted advertising or behavioural monitoring also requires sophisticated data management systems. The blanket ban on targeted advertising and behavioural tracking may also affect the personalisation of content for young users, which may reduce their engagement with the platform.
For globally operating platforms, aligning their practices with the DPDP Act in India while also complying with data protection laws in other countries (such as the GDPR in Europe or COPPA in the US) can be complex and resource-intensive. Platforms might choose to implement uniform global policies for simplicity, which could impact their operations in regions not governed by similar laws. Competitive dynamics may also shift: smaller or niche platforms that cater specifically to children and comply with these regulations may gain an edge, and there may be a drive towards developing new, compliant ways of monetising user interactions that do not rely on behavioural tracking.
CyberPeace Policy Recommendations
A balanced strategy should give weight both to the contentions of social media companies and to the protection of children's personal information. Instead of a blanket ban, platforms can be obliged to follow transparent advertising practices, ensuring that children are not exposed to misleading or manipulative marketing techniques. Self-regulation can be encouraged to support ethical behaviour, accountability, and the safety of young users' personal information through the platform's own practices. Additionally, verifiable consent requirements should be framed in a practical manner, with platforms having a say in designing the verification mechanism. Ultimately, behavioural tracking and targeted advertising should not be allowed to affect children's well-being, safety, or data protection in any way.
Final Words
Under Section 9 of the DPDP Act, the prohibition of behavioural tracking and targeted advertising when processing children's personal data will compel social media platforms to overhaul their data collection and advertising practices, ensuring compliance with stricter privacy regulations. The legislative intent behind this provision is to strengthen the protection of children's digital personal data and privacy. As children are particularly vulnerable to digital threats due to their still-evolving maturity and cognitive capacities, protecting their privacy is a priority: children often do not yet possess the discernment and caution required to navigate the Internet safely. At the same time, a balanced approach needs to be adopted that maintains both privacy and safety for young users.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.firstpost.com/tech/as-govt-of-india-starts-preparing-rules-for-dpdp-act-social-media-platforms-worried-13789134.html
- https://www.business-standard.com/industry/news/social-media-platforms-worry-new-data-law-could-affect-child-safety-ads-124070400673_1.html

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images or videos to this end.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the algorithms behind these technologies have become more sophisticated, allowing more seamless and realistic manipulations. Additionally, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion purposes.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen the legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content. Ensure adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulated content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable due to their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
The debate between free speech and social responsibility is one of the oldest, longest-running debates in history. Free speech is considered to be at the heart of every democracy. It is considered the “mother” of all other freedoms, enshrined in Article 19(1)(a) of the Indian Constitution under Part III: Fundamental Rights. It takes various shapes and forms according to the sociopolitical context of society. Evelyn Beatrice Hall, a prominent English writer of the 19th century, captured the spirit of every democracy when she wrote, "I disapprove of what you say, but I will defend to the death your right to say it." The drastic misuse of social media to disseminate propaganda and fake news makes it a marketplace of half-baked truth, the antithesis of what early philosophers dreamed of for a democratic modern age. Lose the ethics, and there you have it: the modern conceptualisation of freedom of speech and expression in the digital age. The right to freedom of speech and expression is one of the most fundamental rights, but its exercise is not unfettered, and certain limits are placed upon it under Art. 19(2). Every right comes with a corresponding duty, and the exercise of this freedom also puts the citizenry under the responsibility not to violate the rights of others and not to use the media to demean any other person.
Social Media: The New Public Square or a Weaponised Echo Chamber
In India, Art. 19(1)(a) of the Constitution guarantees the right to freedom of speech and expression, but it is not absolute. Under Art. 19(2), this right is subject to reasonable restrictions in the interest of public order, decency, morality, and national security. This is construed as a freedom for every individual to freely express their opinions, but not to incite violence, spread falsehoods, or harm others’ dignity. Unfortunately, the boundaries between these are increasingly blurred.
The dissemination of unfiltered media and the strangulation of innocence by pushing often vulgar and obscene content down the throats of individuals, without verifying the age and gender profile of the social media user, is a big farce in the name of free speech and a conscious attempt by the intermediaries and social media platforms such as Facebook, Instagram, Threads, etc., to wriggle out of their responsibility. A prime example is when Meta’s Mark Zuckerberg, on 7th January 2025, gave a statement asserting less intervention into what people find on its social media platforms as the new “best practice”. While less interference might have worked in a generation that merely operated on the differing, dissenting, and raw ideas bred by the minds of different individuals, it is not the case for this day and age. There has been a significant rise in cases where social media platforms have been used as a battleground for disputes, spreading communal violence, misinformation, and disinformation.
There is no debate about the fact that social media platforms have fostered global expression, making the world a global village and bringing everyone together. On the other hand, the platforms have become the epicentre of computer-based crimes, where children and teenagers often become prey to these crimes, cyberbullying, and cyberstalking.
Rising Importance of Platform Accountability
The most pertinent question to be asked with a conscious mind is whether an unregulated media is a reflection of the freedom of speech given to us by our Constitution under Article 19(1)(a), or whether free speech is just a garb used by big stakeholders while we are all victims of an impending infodemic and of AI algorithms. As per reports that surfaced during the Covid-19 pandemic, India saw a dramatic 214% rise in false information. Another report, the UNESCO-Ipsos survey, revealed that 85% of Indian respondents encounter online hate speech, with around 64% pointing to social media as a primary source.
While the focus on platform accountability is critical, it is equally important to recognise that the right to free speech is not absolute. Therefore, users also bear a constitutional responsibility while exercising this right. Free expression in a democratic society must be accompanied by civic digital behaviour, which includes refraining from spreading hate speech, misinformation, or engaging in harmful conduct online. The most recent example of this is the case of Ranveer Gautam Allahabadia vs. UOI (popularly known as the “Latent Case”); the court came down heavily on the hosts and makers of the show and made its position crystal clear by stating, “there is nothing like a fundamental right on platter...the fundamental rights are all followed by a duty...unless those people understand duty, there is no [...] deal with that kind of elements...if somebody wants to enjoy fundamental rights, this country gives a guarantee to enjoy, but guarantee is with a duty so that guarantee will involve performing that duty also”.
The Way Forward: CyberPeace Suggests
In order to realise the true benefits of the rights we are provided, especially the one in discussion, i.e., freedom of speech and expression, the government, the designated intermediaries, and the regulators have to prepare two roadmaps: one for “Platform Accountability” and one for “User Accountability”. Regulators with reasonable foresight should conduct algorithm risk audits, a technique for making algorithms and their effects on content feeds visible. Such audits can be an effective, objective way to compare how algorithms automatically push different content to different users in unfair or unbalanced ways. As for user accountability, digital literacy is the way forward, ensuring that social media remains a marketplace of ideas and does not become a minefield of misfires.