#FactCheck - AI-Generated Video Falsely Linked to Protests in Iran
Amid protests against rising inflation in Iran, a video is being widely shared on social media showing people gathering in the streets at night, holding up mobile phone flashlights. The video is being circulated with the claim that it shows recent protests in Iran. CyberPeace Foundation’s research found that the video being shared as visuals from the ongoing protests is not real. Our investigation revealed that the viral video is AI-generated and has no connection with actual events on the ground.
Claim
On January 11, 2026, an Instagram user shared the video with a caption written in Spanish. The English translation of the caption reads: “The Iranian government shut down the lights of protesters, but that did not stop them from remaining on the streets demanding that the Ayatollahs step down from power.” The post link, its archived version, and screenshots can be seen below: https://www.instagram.com/p/DTXqzayjqFz/

FactCheck:
To verify the claim, we extracted keyframes from the viral video and conducted a Google reverse image search. During this process, we found the same video uploaded on Instagram on January 11, 2026. In that post, the user explicitly stated that the video was created using AI. The caption reads that the streetlights were turned off to hide the scale of the protests, but people used their phone lights to show their presence, adding:
“I created this video using AI, inspired by tonight’s protests (January 10, 2026) in Tehran, Iran.” Link to the post and screenshot can be seen below: https://www.instagram.com/p/DTWXsHajNvl/

To further verify the authenticity of the video, we scanned it using multiple AI detection tools. Hive Moderation flagged the video as 97 percent AI-generated.
We also scanned the video using another AI detection tool, Wasitai, which likewise identified the video as AI-generated.


Conclusion
Our investigation confirms that the video being shared as footage from protests in Iran is not real. The viral video has been created using artificial intelligence and is being falsely linked to the ongoing protests. The claim circulating on social media is false and misleading.
Related Blogs

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with a level of fidelity to which the average individual may not be able to distinguish real from fake. The ability to provide creative solutions is indeed a beneficial aspect of this technology. However, its misuse has been rapidly escalating over recent years. This creates threats to privacy and dignity, and facilitates the creation of dis/misinformation. Its real-world consequences are the manipulation of elections, national security threats, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the recent Rashmika Mandanna incident, in which a deepfake of the actress caused a scandal throughout the country. In that viral video, among the first in India to circulate on this scale, the actress's face was superimposed on the body of another woman, fooling many viewers and creating outrage among those who were deceived. The incident even led law enforcement agencies to issue public warnings about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders and common people have fallen victim to deepfake pornography, deepfake speech scams, defraudations, and other malicious uses of deepfake technology. The rapid proliferation of deepfake technology is outpacing any efforts by lawmakers to regulate its widespread use. In this regard, a Private Member’s Bill was introduced in the Lok Sabha in its Winter Session. This proposal was presented to the Lok Sabha as an individual MP's Private Member's Bill. Even though these have had a low rate of success in being passed into law historically, they do provide an opportunity for the government to take notice of and respond to emerging issues. In fact, Private Member's Bills have been the catalyst for government action on many important matters and have also provided an avenue for parliamentary discussion and future policy creation. The introduction of this Bill demonstrates the importance of addressing the public concern surrounding digital impersonation and demonstrates that the Parliament acknowledges digital deepfakes to be a significant concern and, therefore, in need of a legislative framework to combat them.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution and use of deepfake content in India. Its four core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before deepfake media using their face, image, likeness, or voice is produced or distributed. This aims to protect women, celebrities, minors, and everyday citizens against the use of their identities to harm their reputations or to harass them through deepfakes.
2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal penalties for creating or sharing deepfake media, particularly when it is intended to defame, harass, impersonate, deceive, or manipulate another person. It also addresses financially fraudulent uses of deepfakes, political misinformation, election interference, and explicit AI-generated media.
3. Establishment of a Deepfake Task Force: To look at the potential impact of deepfakes on national security, elections and public order, as well as on public safety and privacy. This group will work with academic institutions, AI research labs and technology companies to create advanced tools for the detection of deepfakes and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: To assist with the development of tools for detecting deepfakes, increasing the capacity of law enforcement agencies to investigate cybercrime, promoting public awareness of deepfakes through national campaigns, and funding research on artificial intelligence safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Several US states, including California and Texas, have enacted laws prohibiting the use of politically deceptive deepfakes during elections. The federal government is also developing regulations requiring that AI-generated content be clearly labelled, and social media platforms are being encouraged to require users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China
China has among the most rigorous deepfake regulations in the world. All AI-manipulated media must carry a visible watermark, users must verify their identities before being allowed to use advanced AI tools, and online platforms are legally required to take proactive measures to identify and remove synthetic material from circulation.
Conclusion
Deepfake technology has the potential to be one of the most powerful, and most dangerous, innovations of AI. Incidents such as the one involving Rashmika Mandanna, along with the global proliferation of deepfake abuse, demonstrate how easily truth can be altered in the digital realm. The new Private Member's Bill introduced in India seeks to provide a comprehensive framework to address these abuses, built on prior consent, penalties that actually work, technical preparedness, and public education and awareness. As other nations move towards increased regulation of AI technology, proposals such as this chart a course for India to become a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent.
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/

Introduction
In the sprawling online world, trusted relationships are frequently taken advantage of by cybercriminals seeking to penetrate guarded systems. The Watering Hole Attack is one advanced method, which focuses on a user’s ecosystem by compromising the genuine sites they often use. This attack method is different from phishing or direct attacks as it quietly exploits the everyday browsing of the target to serve malicious content. The quiet and exact nature of watering hole attacks makes them prevalent amongst Advanced Persistent Threat (APT) groups, especially in conjunction with state-sponsored cyber-espionage operations.
What Qualifies as a Watering Hole Attack?
A Watering Hole Attack compromises and infects a trusted website, one used by a particular organization or community, such as a specific industry sector. The name is an analogy to predators that wait by the water’s edge for prey to come and drink. Attackers inject malicious code, such as an exploit kit or malware loader, into websites that are popular with their victims, who are then infected when they unknowingly visit those sites. This serves as a gateway for attackers to infiltrate corporate systems, harvest credentials, and pivot across internal networks.
How Watering Hole Attacks Unfold
The attack lifecycle usually progresses as follows:
- Reconnaissance - Attackers gather intelligence on the websites frequented by the target audience, including specialized communities, partner websites, or local news sites.
- Website Exploitation - By exploiting outdated CMS software and insecure plugins, attackers gain access to the target website and insert malicious code such as JavaScript or iframe redirections.
- Delivery and Exploitation - The visitor’s browser executes the malicious code injected into the page. The code might include a redirection payload which sends the user to an exploit kit that checks the user’s browser, plugins, operating system, and other components for vulnerabilities.
- Infection and Persistence - The infected system receives malware such as RATs, keyloggers, or backdoors, which enable lateral movement and long-term persistence within the organisation for espionage.
- Command and Control (C2) - For further instructions, additional payload delivery, and stolen data retrieval, infected devices connect to servers managed by the attackers.
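To make the website-exploitation step above concrete, the following is a minimal Python sketch of how a defender might scan a page's HTML for externally sourced script or iframe tags falling outside an allowlist of trusted domains. All domain names here are hypothetical, and a real scanner would need far richer heuristics:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class InjectionScanner(HTMLParser):
    """Flags script/iframe sources that point outside an allowlist of trusted domains."""

    def __init__(self, trusted_domains):
        super().__init__()
        self.trusted = set(trusted_domains)
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname
        # An external resource loaded from an unknown domain deserves a closer look.
        if host and host not in self.trusted:
            self.suspicious.append((tag, src))

# Hypothetical page: one legitimate script plus an injected iframe redirection.
html = ('<html><script src="https://cdn.example.com/app.js"></script>'
        '<iframe src="https://evil-redirect.example.net/kit"></iframe></html>')
scanner = InjectionScanner(trusted_domains={"cdn.example.com"})
scanner.feed(html)
print(scanner.suspicious)  # flags only the iframe pointing at the unknown domain
```

In practice, defenders would run such a check continuously against their own pages to detect the injected redirections described above before visitors do.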
Key Features of Watering Hole Attacks
- Indirect Approach: Instead of going after the main target, attackers focus on sites that the main target trusts.
- Supply-Chain-Like Impact: An infected industry portal can affect many companies at the same time.
- Low Profile: It is difficult to identify since the traffic comes from real websites.
- Advanced Customization: Exploit kits are known to specialize in making custom payloads for specific browsers or OS versions to increase the chance of success.
Why Are These Attacks Dangerous?
Watering hole attacks shift the battlefield in web-based cyber warfare. They bypass firewalls, email shields, and other perimeter defences because they operate over traffic to and from real, trusted websites. When the attacks work as intended, the following consequences can be expected:
- Stealing Credentials: Including privileged accounts and VPN credentials.
- Espionage: Theft of intellectual property, defense blueprints, or government confidential information.
- Supply Chain Attacks: Resulting in a series of infections among related companies.
- Zero-Day Exploits: Including automated attacks using zero-day exploits for full damage.
Incidents of Primary Concern
The implications of watering hole attacks have been felt in the real world for quite some time. In 2019, for example, a well-known VoIP firm’s site was compromised and used to spread data-stealing malware to its users. Likewise, in 2014, the Operation Snowman campaign, which appears to have had a state-backed origin, attempted to infect users of a U.S. veterans’ portal in order to reach visitors from government, defense, and related fields. Rounding out the list, in 2021, cybercriminals attacked regional publications focused on energy, using them to spread malware to company officials and engineers working on critical infrastructure and to steal data from their systems. These incidents show the widespread and dangerous impact of watering hole attacks in the world of cybersecurity.
Detection Issues
Due to the following reasons, traditional approaches to security fail to detect watering hole attacks:
- Use of Authentic Websites: Attacks involving trusted and popular domains evade detection via blacklisting.
- Encrypted Traffic: Delivering payloads over HTTPS conceals malicious scripts from being inspected at the network level.
- Fileless Methods: Using in-memory execution is a modern campaign technique, and detection based on signatures is futile.
Mitigation Strategies
To effectively neutralize the threat of watering hole attacks, an organization should implement a defense-in-depth strategy that incorporates the following elements:
- Patch Management and Hardening -
- Conduct routine updates on operating systems, web browsers, and extensions to eliminate exploit opportunities.
- Either remove or reduce the use of high-risk elements such as Flash and Java, if feasible.
- Network Segmentation - Minimize lateral movement by isolating critical systems from the general user network.
- Behavioral Analytics - Deploy Endpoint Detection and Response (EDR) tools to monitor unusual process behavior, for example script execution or dubious outbound connections.
- DNS Filtering and Web Isolation - Implement DNS-layer security to deny access to known malicious domains and use browser isolation for dangerous sites.
- Threat Intelligence Integration - Track watering hole threats and campaigns for indicators of compromise (IoCs) on advisories and threat feeds.
- Multi-Layer Email and Web Security - Use web gateways integrated with dynamic content scanning, heuristic analysis, and sandboxing.
- Zero Trust Architecture - Apply least privilege access, require device attestation, and continuous authentication for accessing sensitive resources.
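As an illustration of the DNS filtering element listed above, here is a minimal Python sketch of a blocklist lookup that also catches subdomains of known-bad domains. The blocklist entries are hypothetical placeholders for real indicator-of-compromise feeds:

```python
# Hypothetical IoC domains; a real deployment would pull these from threat feeds.
BLOCKLIST = {"evil-redirect.example.net", "exploit-kit.example.org"}

def resolve_policy(domain: str) -> str:
    """Return 'block' for known-bad domains and their subdomains, else 'allow'."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the full domain and every parent suffix against the blocklist,
    # so tracker.evil-redirect.example.net is caught by its parent entry.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "block"
    return "allow"

print(resolve_policy("tracker.evil-redirect.example.net"))  # block
print(resolve_policy("cdn.example.com"))                    # allow
```

A resolver applying this policy simply refuses to answer queries for blocked names, cutting off the redirection chain before the exploit kit is ever reached.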
Incident Response Best Practices
- Forensic Analysis: Check affected endpoints for any mechanisms set up for persistence and communication with C2 servers.
- Log Review: Look through proxy, DNS, and firewall logs to detect suspicious traffic.
- Threat Hunting: Search your environment for known Indicators of Compromise (IoCs) related to recent watering hole attacks.
- User Awareness Training: Help employees understand the dangers related to visiting external industry websites and promote safe browsing practices.
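The log review and threat hunting steps above can be sketched as a simple pass over DNS or proxy logs, counting how often each client contacted a known-bad domain. The log format and IoC domain below are illustrative assumptions, not a standard:

```python
import re
from collections import Counter

# Hypothetical indicator from a threat feed.
IOC_DOMAINS = {"evil-redirect.example.net"}

# Assumed log format: "<timestamp> <client-ip> <queried-domain>".
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<client>\S+) (?P<domain>\S+)$")

def hunt(log_lines, iocs):
    """Count, per client IP, how often it contacted a known-bad domain."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line.strip())
        if m and m.group("domain") in iocs:
            hits[m.group("client")] += 1
    return hits

logs = [
    "2026-01-11T10:00:01 10.0.0.5 cdn.example.com",
    "2026-01-11T10:00:02 10.0.0.7 evil-redirect.example.net",
    "2026-01-11T10:05:09 10.0.0.7 evil-redirect.example.net",
]
print(hunt(logs, IOC_DOMAINS))  # Counter({'10.0.0.7': 2})
```

Any client with hits is a candidate for the forensic analysis step: it has visited an infected site and may be carrying a persistence mechanism.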
The Immediate Need for Action
The adoption of cloud computing and remote working models has significantly increased the attack surface for watering hole attacks. High-trust sectors such as healthcare are increasingly targeted by nation-state groups and cybercrime gangs using this technique. Not taking action may lead to data leaks, legal fines, and supply chain breaches that damage the trustworthiness and operational capacity of the enterprise.
Conclusion
Watering hole attacks demonstrate how attackers have evolved from broad phishing campaigns to highly targeted, trust-based attacks. Protecting against them requires a zero-trust mindset, adaptive defences, and continuous monitoring across multiple layers of security. By integrating proactive threat intelligence, detection technologies, and advanced response measures, organizations can turn this silent threat from a lurking predator into a manageable risk.
References
- https://www.fortinet.com/resources/cyberglossary/watering-hole-attack
- https://en.wikipedia.org/wiki/Watering_hole_attack
- https://www.proofpoint.com/us/threat-reference/watering-hole
- https://www.techtarget.com/searchsecurity/definition/watering-hole-attack

Executive Summary
A video circulating on social media shows Dr. Vikas Divyakirti speaking during a podcast, where he is heard saying, “Those who cannot even memorise and speak four sentences are considered the greatest in India.” Several users are sharing the clip claiming that the remark was aimed at Narendra Modi. However, research by CyberPeace found the claim to be misleading. The viral clip has been edited and shared out of context: in the original video, Divyakirti made the remarks in reference to film stars, not the Prime Minister.
Claim
On Facebook, a user shared the viral clip with an English caption alleging that Divyakirti criticised Modi, saying he cannot speak without a teleprompter or scripted interviews and has built a false image of greatness.

Similarly, another user shared the video on X, suggesting that people who cannot speak without a teleprompter are still considered great in India, indirectly linking the remark to Modi.

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. This led us to the original video uploaded on the official YouTube channel of Raj Shamani.

At around the 3:55 mark, the same clip can be seen. During the conversation, Shamani asks whether building a larger-than-life perception actually benefits an individual. Responding to this, Dr. Vikas Divyakirti explains that film stars often have an exaggerated public image. He notes that many of the dialogues they are praised for are not written by them, but by others, and some even rely on teleprompters while speaking. He further adds that there are individuals who cannot even memorise and deliver four sentences or think independently, yet are regarded as great in India. He also mentions that many social media personalities use teleprompters, but audiences remain unaware and assume they possess exceptional knowledge.
Conclusion
The viral claim is misleading. The video has been edited and shared out of context. Dr. Vikas Divyakirti was referring to film stars and social media personalities, not Narendra Modi.