#FactCheck - AI-Generated Video Falsely Claims Death of Iran’s Supreme Leader
Executive Summary
Claims circulated on social media that Iran’s Supreme Leader Ayatollah Ali Khamenei had been killed in a major attack carried out by Israel and the United States, with some posts asserting that Iranian state media confirmed his death early Sunday morning. Amid these claims, a video showing a body trapped under debris is being widely shared, with users claiming the body seen in the footage is that of Ayatollah Ali Khamenei. However, research conducted by CyberPeace found the viral claim to be false: the video is not authentic but AI-generated.
Claim:
On March 1, 2026, an Instagram user shared the viral video with the caption: “Shaheed Ayatollah Sayyid Ali Hosseini Khamenei — Neither fled nor hid in a bunker, embraced death like a brave man.” The link to the post and its archived version are provided below along with a screenshot.

Fact Check:
Upon closely examining the viral video, we noticed several visual irregularities and technical inconsistencies, which raised suspicion about its authenticity. We then scanned the video using the AI detection tool Hive Moderation. The results indicated that approximately 83 percent of the content showed signs of being AI-generated.

To further verify the claim, we also analyzed the video using another AI detection tool, WasItAI. The findings similarly suggested that the video was generated using artificial intelligence.
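As a rough illustration of how results from multiple detectors can be combined, a clip might be treated as likely AI-generated only when independent tools agree above a confidence threshold. The function below is a hypothetical sketch, not the actual workflow or APIs of Hive Moderation or WasItAI; the detector names, score values, and 0.8 threshold are all illustrative assumptions:

```python
# Hypothetical sketch: flag a video as likely AI-generated only when enough
# independent detectors agree. Names, scores, and the 0.8 threshold are
# illustrative, not the actual tools' outputs or APIs.
def likely_ai_generated(scores, threshold=0.8, min_agreeing=2):
    """scores: mapping of detector name -> probability the content is AI-made.
    Returns (flagged, list of detectors that exceeded the threshold)."""
    agreeing = [name for name, p in scores.items() if p >= threshold]
    return len(agreeing) >= min_agreeing, agreeing

# Example: two detectors both report high AI-generation confidence.
flagged, which = likely_ai_generated({"hive": 0.83, "wasitai": 0.90})
print(flagged, which)
```

In practice such scores are only one signal; the visual inspection and cross-verification described above remain essential.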

Conclusion:
Our research establishes that the viral video is not real. It has been artificially generated using AI and is being shared with misleading claims.
Related Blogs
Introduction
In the sprawling and ever-evolving landscape of cybercrime, phishing links, phoney emails, and dubious investment offers are no longer the only tools used by scammers. Cybercriminals are becoming skilled at exploiting commonplace digital behaviours, undermining trust, and turning popular features of our most essential apps into weapons. The most recent advisory on “WhatsApp account renting” from the National Cybercrime Threat Analytics Unit (NCTAU) of the Indian Cyber Crime Coordination Centre (I4C) has revealed a fast-expanding international threat. The scam uses QR codes to trick users into connecting their WhatsApp accounts to fraudulent sites under the guise of a “quick income” opportunity. What initially appears innocuous becomes a tool for criminals to take control of accounts and use them for illicit purposes.
The Global Rise of Cyber Mule Networks
In cybercrime networks, the word “mule” initially referred to a bank account used, knowingly or often unknowingly, to transfer or “launder” money obtained from fraud and other illegal activities. As this form of cybercrime has evolved, the term “cyber mule” now refers to individuals who knowingly or unknowingly allow their digital identities, devices, or bank accounts to be used for illegal activity.
Various cybersecurity companies, as well as Europol and Interpol, have frequently cautioned that criminals are increasingly recruiting digital mules, often under guises such as the following:
- Work-from-home Offers
- Streams of passive income
- Monetisation of social media
- Roles for verification assistants
- Apps that earn commissions
Earlier versions of the scheme involved money transfers through personal bank accounts. The trend is reported to be changing: criminals now want your digital identity, not just your money.
In parts of Southeast Asia and Africa, scammers frequently “rent” victims’ Facebook, LINE, Telegram, and WeChat accounts to conduct impersonation frauds or support criminal operations. The WhatsApp variant now making its way to India is a logical progression, enabled by the widely used WhatsApp Web linked-device capability.
How the WhatsApp Account Renting Scam Works
I4C’s advisory dated 15th October 2025 highlights a sophisticated yet psychologically simple scheme that exploits trust, curiosity, and the illusion of easy income. The scam’s lifecycle is as follows:
1. The Hook: “Automatically Earn Passive Income”
Threat actors claim users can earn daily rewards by connecting their WhatsApp accounts to a new “partner platform” in their polished and professional Instagram and Facebook ads.
This strategy imitates international scam factories in Cambodia and Myanmar, where victims are lured into investment schemes or bogus tasks by social media advertisements.
2. The Redirect: Rogue APKs & Fake Websites
When victims click on the advertisement, they are sent to:
- Fake earnings dashboards
- Untrustworthy websites that imitate authentic financial interfaces
- Instructions for installing Android APKs from sources other than the Play Store

These APKs often carry spyware or remote-access malware.
3. The Trap: Scanning a QR Code
The user is asked to scan a QR code through WhatsApp’s “Linked Devices” feature, which is normally used for WhatsApp Web.
The moment the QR code is scanned, the con artist obtains complete session access to the victim’s WhatsApp account, without ever touching the victim’s phone.
Threat actors are able to:
- Transmit and receive messages
- Get access to contact lists
- Participate in or start groups
- Assume the victim’s identity
- Conduct frauds using their identities
4. The Illusion: A Multi-Level Commission Structure
A pyramid-style earnings model is displayed to maintain credibility:
- 10% of direct invites
- 5% of secondary invites
- 2% of tertiary invites
These figures are designed to encourage victims to recruit more users, increasing the number of compromised WhatsApp accounts.
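The tiered payout described above can be sketched with a short, purely illustrative calculation. Only the 10%/5%/2% rates come from the advisory; the function name and the notional “earnings” figures below are hypothetical:

```python
# Illustrative sketch of the pyramid-style commission model described in the
# advisory: 10% on direct invites, 5% on secondary, 2% on tertiary.
# The input "earnings" figures are hypothetical.
TIER_RATES = {1: 0.10, 2: 0.05, 3: 0.02}  # commission rate per referral tier

def commission(earnings_by_tier):
    """Total commission credited to a recruiter, given the notional
    earnings produced by invitees at each tier (1 = direct)."""
    return sum(TIER_RATES.get(tier, 0.0) * amount
               for tier, amount in earnings_by_tier.items())

# Example: direct invites "earn" 1000 units, secondary 500, tertiary 200,
# i.e. 1000*0.10 + 500*0.05 + 200*0.02 = 129 units of commission.
print(commission({1: 1000, 2: 500, 3: 200}))
```

The arithmetic makes the incentive plain: every new recruit at any tier adds to the payout, so the structure exists to grow the pool of compromised accounts, not to pay anyone sustainably.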
5. The Misuse: “Mule WhatsApp Accounts”
The victim’s account becomes a digital mule once it is connected, allowing fraudsters to:
- Start UPI fraud and phishing
- Distribute harmful links
- Impersonate the victim to scam their contacts
- Participate in bulk messaging campaigns
- Recruit additional mule accounts
Precautions Issued by I4C
I4C has advised citizens to take the following precautions:
- Renting or linking your WhatsApp account for money can expose you to criminal charges or similar consequences
- Avoid installing APKs from unofficial app stores
- Be wary of advertisements that promise automatic revenue, referral bonuses, or passive income
- Regularly check linked devices on WhatsApp: Settings → Linked Devices
- Use WhatsApp’s Official support page to report hacked accounts or impersonation: https://www.whatsapp.com/contact/forms/1534459096974129
- Report financial fraud immediately by calling 1930 or visiting cybercrime.gov.in
CyberPeace Outlook
The WhatsApp account renting fraud is not an isolated phenomenon; rather, it is the latest mutation of a global cybercrime apparatus that feeds on social engineering, digital identity theft, and international mule networks. Its simplicity makes it especially hazardous: all it takes to take over your digital life is a single QR code scan. I4C’s timely warning is an important reminder that, in the digital world, easy money is nearly always a trap, and that if we let our guard down, even our most trusted platforms can become attack surfaces. Cyber hygiene is now a must to protect our identities, data, and communities. Stay informed, and stay safe.
References
- https://www.cnbctv18.com/personal-finance/mule-account-fraud-on-the-rise-what-it-is-and-how-to-stay-safe-19662507.htm
- https://i4c.mha.gov.in/theme/resources/advisories/Mule%20Whatsapp%20V1.4.pdf

Introduction
The Government of India has initiated a cybercrime crackdown that, as of February 2025, has resulted in the blocking of 781,000 SIM cards and 208,469 IMEI (International Mobile Equipment Identity) numbers associated with digital fraud. The data was released in a written response by the Union Minister of State for Home Affairs, Bandi Sanjay Kumar, to a query presented in the Lok Sabha. A significant jump from the 669,000 SIM cards blocked in the previous year, the figures show that efforts to combat digital fraud are in full swing as cases continue to rise. For its part, the Indian Cyber Crime Coordination Centre (I4C) is proactively blocking suspicious accounts on other platforms, including WhatsApp accounts (83,668) and Skype IDs (3,962), helping to eliminate identified threat actors.
Increasing Digital Fraud And The Current Combative Measures
According to data tabled by the Ministry of Finance in the Rajya Sabha, the first 10 months of the financial year 2024-25 recorded around 2.4 million incidents of digital financial fraud, involving an amount of Rs. 4,245 crore. Apart from the evident financial loss, such incidents also take an emotional toll, as people are targeted regardless of background and age, leaving everyone equally vulnerable. To address this growing problem, various government departments have introduced dedicated measures to combat and reduce such incidents. Some notable initiatives are as follows:
- The Citizen Financial Cyber Fraud Reporting and Management System- This includes reporting cybercrimes through the nationwide toll-free number (1930) and registration on the National Cyber Crime Reporting Portal. A victim of digital fraud can call the toll-free number and describe the details of the incident, which helps in the investigation. After reporting the incident, the complainant receives a generated login ID/acknowledgement number that they can use for further reference.
- International Incoming Spoofed Calls Prevention System- A mechanism developed to counter fraudulent calls that appear to originate from within India but are actually made from international locations. The system prevents misuse of the Calling Line Identity (CLI), which is manipulated to deceive recipients and carry out financial crimes such as digital arrests. In coordination with the Department of Telecommunications (DoT), private telecom service providers (TSPs) are being encouraged to screen such calls on their ILD (International Long Distance) networks. Airtel, for its part, has recently begun labelling such numbers as international calls.
- Chakshu Facility at Sanchar Saathi platform- A citizen-centric initiative, created by the Department of Telecommunications, to empower mobile subscribers. It focuses on reporting unsolicited commercial communication (spam messages) and reporting suspected fraudulent communication. (https://sancharsaathi.gov.in/).
- Aadhaar-based verification of SIM cards- A directive issued by the Prime Minister's Office to the Department of Telecommunications mandates an Aadhaar-based biometric verification for the issuance of new SIM cards. This has been done so in an effort to prevent fraud and cybercrime through mobile connections obtained using fake documents. Legal action against non-compliant retailers in the form of FIRs is also being taken.
On the part of the public, awareness of the following steps could encourage them on how to deal with such situations:
- Awareness regarding types of crimes and the tell-tale signs of a criminal’s modus operandi: General awareness and a cautious approach to how such crimes take place can help people prepare for and respond to malicious scams. Important warning signs include pressuring the victim into immediate action, insistence on video calls, and threats of arrest in case of non-compliance. It is also important to note that no official authority has any legal power to conduct a “digital” or online arrest.
- Knowing the support channels: Awareness regarding reporting mechanisms and cyber safety hygiene tips can help in building cyber resilience amongst netizens.
Conclusion
As cybercrooks continue to find new ways of duping people of their hard-earned money, both the government and netizens must work to combat such crimes and raise awareness at both the systemic and public levels. Rapid developments in AI, deepfakes, and other technologies often leave the public unable to assess the veracity of a source, making them susceptible to such crimes. A cautious yet proactive approach is the need of the hour.
References
- https://mobileidworld.com/india-blocks-781000-sim-cards-in-major-cybercrime-crackdown/
- https://www.storyboard18.com/how-it-works/over-83k-whatsapp-accounts-used-for-digital-arrest-blocked-home-ministry-60292.htm
- https://www.business-standard.com/finance/news/digital-financial-frauds-touch-rs-4-245-crore-in-the-apr-jan-period-of-fy25-125032001214_1.html
- https://www.business-standard.com/india-news/govt-blocked-781k-sims-3k-skype-ids-83k-whatsapp-accounts-till-feb-125032500965_1.html
- https://pib.gov.in/PressReleasePage.aspx?PRID=2042130
- https://mobileidworld.com/india-mandates-aadhaar-biometric-verification-for-new-sim-cards-to-combat-fraud/
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2067113
Introduction
The rapid advancement of technology, including generative AI, offers immense benefits but also raises concerns about misuse. The Internet Watch Foundation reported that, as of July 2024, over 3,500 new AI-generated child sexual abuse images appeared on the dark web. The UK’s National Crime Agency records 800 monthly arrests for online child threats and estimates 840,000 adults as potential offenders. In response, the UK is introducing legislation to criminalise AI-generated child exploitation imagery, which will be a part of the Crime and Policing Bill when it comes to parliament in the next few weeks, aligning with global AI regulations like the EU AI Act and the US AI Initiative Act. This policy shift strengthens efforts to combat online child exploitation and sets a global precedent for responsible AI governance.
Current Legal Landscape and the Policy Gap
The UK’s Online Safety Act 2023 aims to combat CSAM and deepfake pornography by holding social media and search platforms accountable for user safety. It mandates these platforms to prevent children from accessing harmful content, remove illegal material, and offer clear reporting mechanisms. For adults, major platforms must be transparent about harmful content policies and provide users control over what they see.
However, the Act has notable limitations, including concerns over content moderation overreach, potential censorship of legitimate debates, and challenges in defining "harmful" content. It may disproportionately impact smaller platforms and raise concerns about protecting journalistic content and politically significant discussions. While intended to enhance online safety, these challenges highlight the complexities of balancing regulation with digital rights and free expression.
The Proposed Criminalisation of AI-Generated Sexual Abuse Content
The proposed law by the UK criminalises the creation, distribution, and possession of AI-generated CSAM and deepfake pornography. It mandates enforcement agencies and digital platforms to identify and remove such content, with penalties for non-compliance. Perpetrators may face up to two years in prison for taking intimate images without consent or installing equipment to facilitate such offences. Currently, sharing or threatening to share intimate images, including deepfakes, is an offence under the Sexual Offences Act 2003, amended by the Online Safety Act 2023. The government plans to repeal certain voyeurism offences, replacing them with broader provisions covering unauthorised intimate recordings. This aligns with its September 2024 decision to classify sharing intimate images as a priority offence under the Online Safety Act, reinforcing its commitment to balancing free expression with harm prevention.
Implications for AI Regulation and Platform Responsibility
The UK's move aligns with its AI Safety Summit commitments, placing responsibility on platforms to remove AI-generated sexual abuse content or face Ofcom enforcement. The Crime and Policing Bill is expected to tighten AI regulations, requiring developers to integrate safeguards against misuse, and the licensing frameworks may enforce ethical AI standards, restricting access to synthetic media tools. Given AI-generated abuse's cross-border nature, enforcement will necessitate global cooperation with platforms, law enforcement, and regulators. Bilateral and multilateral agreements could help harmonise legal frameworks, enabling swift content takedown, evidence sharing, and extradition of offenders, strengthening international efforts against AI-enabled exploitation.
Conclusion and Policy Recommendations
The Crime and Policing Bill marks a crucial step in criminalising AI-generated CSAM and deepfake pornography, strengthening online safety and platform accountability. However, balancing digital rights and enforcement remains a challenge. For effective implementation, industry cooperation is essential, with platforms integrating detection tools and transparent reporting systems. AI ethics frameworks should prevent misuse while allowing innovation, and victim support mechanisms must be prioritised. Given AI-driven abuse's global nature, international regulatory alignment is key for harmonised laws, evidence sharing, and cross-border enforcement. This legislation sets a global precedent, emphasising proactive regulation to ensure digital safety, ethical AI development, and the protection of human dignity.
References
- https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
- https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/
- https://www.financialexpress.com/life/technology-uk-set-to-ban-ai-tools-for-creating-child-sexual-abuse-images-with-new-laws-3735296/
- https://www.gov.uk/government/publications/national-crime-agency-annual-report-and-accounts-2023-to-2024/national-crime-agency-annual-report-and-accounts-2023-to-2024-accessible#part-1--performance-report