#FactCheck: AI Video Falsely Shows Iran Destroying Israeli Military Base
Executive Summary
Amid the ongoing conflict involving the US, Israel, and Iran in West Asia, a video showing destroyed aircraft at an airport is going viral on social media. The clip is being shared with the claim that it shows an Israeli military base destroyed in an Iranian attack. However, research by CyberPeace found that the viral video is not real but AI-generated.
Claim:
An Instagram user, “sakirali8064”, shared the video on March 22, 2026, claiming that Iran had demonstrated its military strength by deploying advanced missiles capable of long-range precision strikes. The video also carries a “Breaking News” overlay stating: “Iran attack Israel military base… the entire base destroyed.”
Post link and archive link:

Fact Check:
To verify the claim, we extracted keyframes from the viral clip and conducted a reverse image search using Google Lens. We found a longer version of the same video posted on March 5, 2026, by a Facebook user named “With INC,” where it was also falsely linked to an Iranian attack on Israel’s Ben Gurion Airport.

Upon closely examining the video, we observed inconsistencies such as fire changing positions unnaturally, which raised suspicion of AI manipulation. We then analyzed the video using Hive Moderation, which indicated a probability of over 99% that the content is AI-generated.

Additionally, analysis using Tencent’s “Zhuque AI” detection tool suggested more than 78% likelihood of the video being AI-generated.

Conclusion:
The viral video claiming that an Iranian attack destroyed an Israeli military base is AI-generated and misleading. While Iran has claimed to have targeted Israel’s Ben Gurion International Airport using drones, the viral footage does not depict a real event.

Introduction
Search Engine Optimisation (SEO) is the process of improving a website's visibility on search engine platforms like Google, Microsoft Bing, etc. There is an implicit understanding that the links appearing at the top of search results are the more popular information sources and, hence, deemed more trustworthy. This trust, however, is being misused by threat actors through a process called SEO poisoning.
SEO poisoning is a method threat actors use to manipulate search engine rankings so that their desired link or web page appears at the top of results. The end goal is to lure users into clicking and downloading malware, presented in the guise of legitimate marketing or even as a valid search result.
An active example of attempts at SEO poisoning has been discussed in a report by the Hindustan Times on 11th November, 2024. It highlights that using certain keywords could make a user more susceptible to hacking. Hackers are now targeting people who enter specific words or specific combinations in search engines. According to the report, users who looked up and clicked on links at the top related to the search query “Are Bengal cats legal in Australia?” had details regarding their personal information posted online soon after.
SEO Poisoning - Modus Operandi Of Attack
There are certain tactics used by attackers in SEO poisoning, including:
- Keyword Stuffing: This method involves overloading a webpage with keywords, often irrelevant to its actual content, which helps the false website rank higher.
- Typosquatting: This method involves creating domain names or links similar to popular, trusted websites. Without scrutiny before clicking, a user may download malware from what they believed was a legitimate site.
- Cloaking: This method shows different content to the search engine and to the user. While the search engine sees what it assumes to be a legitimate website, the user is exposed to harmful content.
- Private Link Networks: Threat actors create a group of unrelated websites to inflate the number of referral links, enabling their pages to rank higher on search engines.
- Article Spinning: This method imitates content from pre-existing, legitimate websites with minor changes, giving search engine crawlers the impression of original content.
- Sneaky Redirects: This method redirects users, without their knowledge, to malicious websites instead of the ones they intended to visit.
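Typosquatting, in particular, can be screened for programmatically. Below is a minimal sketch, not a production defence: it flags domains that sit within a small edit distance of a hypothetical allow-list of trusted names (the list and function names here are illustrative, not part of any real tool):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Illustrative allow-list; a real deployment would use the organisation's own list.
TRUSTED = {"netflix.com", "google.com", "microsoft.com"}

def looks_typosquatted(domain: str, max_dist: int = 2) -> bool:
    """Flag domains close to, but not equal to, a trusted name."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in TRUSTED)

print(looks_typosquatted("netfllx.com"))  # one letter off netflix.com: flagged
print(looks_typosquatted("netflix.com"))  # exact match: not flagged
```

A small edit-distance threshold catches single-character swaps and omissions, the most common typosquatting tricks, while an exact match is deliberately excluded so legitimate domains are never flagged.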
CyberPeace Recommendations
- Employee Security Awareness Training: Security awareness training can familiarise employees with SEO poisoning tactics, encouraging them to spot inconsistencies early and to alert the security team promptly.
- Tool Usage: Companies can use Digital Risk Monitoring tools to catch instances of typosquatting. Endpoint Detection and Response (EDR) tools also help monitor client history and assess user activity during security breaches to trace the source of an affected file.
- Internal Security Measures: Refer to lists of Indicators of Compromise (IoCs); these include URL lists that flag websites exhibiting suspicious behaviour and can be used to exercise caution. Deploying Web Application Firewalls (WAFs) to detect and mitigate malicious traffic is also helpful.
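An IoC URL list can be applied with a simple hostname lookup. The sketch below assumes a hypothetical in-memory IoC set; in practice these entries would be loaded from a threat-intelligence feed, and the domains shown are purely illustrative:

```python
from urllib.parse import urlparse

# Illustrative IoC entries; a real system would ingest these from a curated feed.
IOC_DOMAINS = {"account-details.com", "login-verify-support.net"}

def is_flagged(url: str) -> bool:
    """Return True if the URL's hostname appears in the IoC list."""
    host = (urlparse(url).hostname or "").lower()
    return host in IOC_DOMAINS

print(is_flagged("https://account-details.com/restore"))  # True
print(is_flagged("https://www.example.com/login"))        # False
```

Matching on the parsed hostname, rather than substring-matching the whole URL, avoids false positives when an IoC domain merely appears in a path or query string.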
Conclusion
The nature of SEO poisoning is such that it inherently promotes the spread of misinformation and facilitates cyberattacks. Misrepresenting the legitimacy of links and the content they display, in order to lure users into clicking on them, puts personal information under threat. Since people trust their favoured search engines and awareness of such tactics is limited, one must exercise caution when clicking links that appear popular, even when they are served by trusted search engines.
References
- https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cyber-attack/what-is-seo-poisoning/
- https://www.vectra.ai/topics/seo-poisoning
- https://www.techtarget.com/whatis/definition/search-poisoning
- https://www.blackberry.com/us/en/solutions/endpoint-security/ransomware-protection/seo-poisoning
- https://www.coalitioninc.com/blog/seo-poisoning-attacks
- https://www.sciencedirect.com/science/article/abs/pii/S0160791X24000186
- https://www.repindia.com/blog/secure-your-organisation-from-seo-poisoning-and-malvertising-threats/
- https://www.hindustantimes.com/technology/typing-these-6-words-on-google-could-make-you-a-target-for-hackers-101731286153415.html

Introduction
Misinformation is no longer a challenge limited to major global platforms or widely spoken languages. In India and many other countries, false information is increasingly disseminated in local and vernacular languages, allowing it to reach communities more directly and intimately. While regional-language content has played a crucial role in expanding access to information, it has also become a powerful vehicle for misinformation spread by bad actors, and such content is often harder to detect and counter. The challenge of local-language misinformation is not merely digital in nature; it is deeply social, cultural, and shaped by specific local contexts.
Why Local-Language Misinformation Is More Impactful
A person’s mother tongue can be a highly effective medium for misinformation because it carries emotional resonance and a sense of authenticity. Information that aligns with an individual’s linguistic and cultural background is often trusted the most. When false narratives are framed using familiar expressions, local references, or community-specific concerns, they are more readily accepted and shared more widely.
Misinformation in a language like English, which is more heavily moderated, does not usually have the same impact as content in vernacular languages. In the latter case, such content tends to circulate within closed networks such as family WhatsApp groups, regional Facebook pages, local YouTube channels, and community forums. These spaces are often perceived as safe or trusted, which lowers scepticism and encourages the spread of unverified information.
The Role of Digital Platforms and Algorithms
Although social media platforms have expanded access to regional-language content, moderation mechanisms have not kept pace. Automated content moderation systems are frequently trained mainly on dominant languages, and so fail to detect vernacular speech, slang, dialects, and code-mixing.
This results in an enforcement gap, where misinformation in local languages:
- Escapes automated fact-checking tools
- Is subject to slower human moderation
- Is less likely to be reported or flagged
- Circulates unchecked for longer periods than anticipated
The problem is further magnified by algorithmic amplification. Content that triggers strong emotional reactions, such as fear, anger, pride, or outrage, has a higher chance of being promoted, irrespective of its truthfulness. In regional contexts, such content can quickly sway public opinion within closely knit communities.
Forms of Vernacular Misinformation
Local-language misinformation appears in various forms:
- Health misinformation, such as panic remedies, vaccine myths, and misleading medical advice
- Political misinformation, often tied to regional identity, local grievances, or community narratives
- Disaster rumours that are hard to control and can spread hatred during floods, earthquakes, or other public emergencies
- Economic and financial frauds perpetrated in the local dialect while impersonating authorities or trusted institutions
- Cultural and religious falsehoods that exploit deeply held beliefs
The regional character of such misinformation makes it very difficult to correct, because fact-checks published in other languages may never reach the affected audience.
Community-Level Consequences
The effect of misinformation in local languages goes beyond misleading individuals. It can also:
- Erode trust in public institutions
- Fuel social polarisation and communal strife
- Obstruct public health measures
- Shape grassroots decision-making in elections
- Exploit digitally less literate and vulnerable populations
In many scenarios, the damage is not instant but cumulative, gradually shifting perceptions and reinforcing false worldviews.
Why Countering Vernacular Misinformation Is Difficult
Multiple structural layers make it difficult to respond effectively:
- Variety of Languages: India alone has a vast number of languages and dialects, which are very hard to monitor universally.
- Culturally Aware Systems: Local languages often carry culturally rooted meanings, such as sarcasm or historical references, that automated systems cannot interpret correctly.
- Reporting Not Common: Users may not recognise misinformation, or may be reluctant to flag content shared by trusted members of their community.
- Insufficient Fact-Checking Capacity: Fact-checking organisations often lack the resources to operate effectively across many languages.
Building a Community-Centric Response
Overcoming misinformation in local languages needs a community-driven resilience approach rather than a purely platform-centric one. Some key actions are:
- Boosting Digital Literacy: Regional-language awareness campaigns can equip users to question, verify, and pause before sharing content.
- Facilitating Local Fact-Checkers: Local journalists, educators, and NGOs are key players in providing the context needed for verification.
- Accountability of Platforms: Technology companies must support moderation in multiple languages, hire local experts, and implement transparent enforcement mechanisms.
- Contemplating Policy and Governance: Regulatory frameworks should facilitate proactive risk assessment while safeguarding the right to free expression.
- Establishing Trusted Local Intermediaries: Community leaders, health workers, teachers, and local organisations can help counter misinformation within the networks where they are trusted.
The Way Forward
Misinformation in local languages is not a minor concern; it is an issue that directly affects the future of digital trust. As the number of users accessing the internet through local language interfaces continues to grow, the volume and influence of regional content will also increase. If measures do not include all language groups, misinformation will remain least corrected and most influential at the community level, where it is also the hardest to identify and address.
This problem persists only as long as the power of language goes unrecognised. Protecting the quality of information in local languages is therefore necessary not only for digital safety but also for social cohesion, democratic participation, and public well-being.
Conclusion
Vernacular content can be very powerful in how it informs, includes, and empowers; left unmonitored, it has the same potential to mislead, divide, and harm. Mis- and disinformation in local languages calls for cooperation among platforms, regulators, NGOs, and the communities involved. A resilient digital ecosystem must speak all languages, not only to communicate but also to protect.

Introduction
Netflix is no stranger to its subscribers being targeted by SMS and email-led phishing campaigns. But the most recent campaign has been deployed at a global scale, affecting paid users in as many as 23 countries according to cybersecurity firm Bitdefender. In this particular campaign, attackers are using the carrot-and-stick tactic of either creating a false sense of urgency or promising rewards to steal financial information and Netflix credentials. For example, users may be contacted via SMS and told that their account is being suspended due to payment failures. A fake website may be shared through a link, encouraging the individual to share sensitive information to restore their account. Once this information has been input, it is now accessible to the attackers. This can create significant stress and even financial loss for its users. Thus, they are encouraged to develop the necessary skills to recognize and respond to these threats effectively.
How The Netflix Scam Works
Users are typically contacted through SMS. Bitdefender reports that these messages may look something like this:
"NETFLIX: There was an issue processing your payment. To keep your services active, please sign in and confirm your details at: https://account-details[.]com"
On clicking the link, the victim is directed to a website designed to mimic an authentic user experience interface, containing Netflix’s logo, color scheme, and grammatically-correct text. The website uses this interface to encourage the victim to divulge sensitive personal information, such as account credentials and payment details. Since this is a phishing website, the user’s personal information becomes accessible to the attacker as soon as it is entered. This information is then sold individually or in bundles on the dark web.
Practical Steps to Stay Safe
- Know Netflix’s Customer Interface: According to Netflix, it will never ask users to share personal information including credit or debit card numbers, bank account details, and Netflix passwords. It will also never ask for payment through a third-party vendor or website.
- Verify Authenticity: Do not open links from unknown sources sent by email or SMS. If unsure, access Netflix directly by typing the URL into the browser instead of clicking links in emails or texts. If a link has already been opened, do not enter any information.
- Use Netflix’s Official Support Channels: Confirm any suspicious communication through Netflix’s verified help page or app. Write to phishing@netflix.com with any complaints about such an issue.
- Contact Your Financial Institution: If you have entered your personal information into a phishing website, you should immediately reach out to your bank to block your card and change your Netflix password. Contact the authorities via www.cybercrime.gov.in or by calling the helpline at 1930 in case of loss of funds.
- Use Strong Passwords and Enable MFA/2FA: Users are advised to use a unique, strong password with multiple characters. Enable Multi-Factor Authentication or Two Factor Authentication to your accounts, if available, to add an extra level of security.
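The "verify authenticity" step above can also be sketched programmatically. This is a hedged, minimal check (function name and example URLs are illustrative) that a link's hostname genuinely belongs to the expected domain, which defeats lookalike tricks such as embedding the brand name as a subdomain of an attacker-controlled site:

```python
from urllib.parse import urlparse

def belongs_to(url: str, legit_domain: str) -> bool:
    """True only if the hostname is the legitimate domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == legit_domain or host.endswith("." + legit_domain)

# Phishing-style link from the campaign vs. the real site:
print(belongs_to("https://account-details.com/login", "netflix.com"))       # False
print(belongs_to("https://www.netflix.com/login", "netflix.com"))           # True
print(belongs_to("https://netflix.com.evil.example/login", "netflix.com"))  # False
```

The third case is the important one: the brand name appears at the start of the hostname, which fools a casual glance, but the registered domain is actually `evil.example`, so a suffix check on the parsed hostname correctly rejects it.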
Conclusion
Phishing campaigns which are designed to gather customer data through fraudulent means often involve sending links to as many users as possible, with the aim of monetizing stolen information. Attackers exploit user trust in online platforms to steal sensitive personal information, making such campaigns more sophisticated as highlighted above. This underscores the need for users of online platforms to practice good cyber hygiene by verifying information, learning to detect suspicious information and ignoring it, and staying aware of the types of online fraud they may be exposed to.
Sources
- https://www.bitdefender.com/en-gb/blog/hotforsecurity/netflix-scam-stay-safe
- https://help.netflix.com/en/node/65674
- https://timesofindia.indiatimes.com/technology/tech-news/netflix-users-beware-this-netflix-subscription-scam-is-active-in-23-countries-how-to-spot-one-and-stay-safe/articleshow/115820070.cms