# Fact Check: Viral Smoke Video Is Not From Israel-Iran Conflict, But Mexico Casino Fire
Executive Summary
Amid heightened tensions following Israeli and US actions against Iran, a video is being widely shared on social media. The footage shows thick black smoke rising into the sky, suggesting a major explosion or attack. However, research conducted by CyberPeace found the viral claim to be misleading. The video is not recent and has no connection to the current Israel-Iran tensions: the footage is nearly a year old and shows a fire at a casino in Mexico, now being shared out of context.
Claim
Users circulating the video claim that it shows an attack on Tel Aviv, Israel. On March 1, 2026, a user on X shared the clip with the caption, “Iran has drained the oil out of Tel Aviv,” implying a devastating retaliatory strike.

Fact Check:
To verify the authenticity of the video, we extracted key frames and performed a reverse image search using Google Lens. During the search, we found the same visuals in a Spanish media report published on January 16, 2025. This confirmed that the video predates the ongoing geopolitical developments.
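The frame-matching step above depends on recognising near-duplicate images. Google Lens uses proprietary matching, but the underlying principle can be illustrated with a simple difference hash (dHash): reduce a frame to a tiny grayscale grid, record whether each pixel is brighter than its right-hand neighbour, and treat frames whose bit patterns differ in only a few positions as matches. The sketch below is an illustration of that general technique, not the tool we used; the 9×8 grids stand in for real downscaled video frames.

```python
def dhash(pixels):
    """Compute a 64-bit difference hash from a 9x8 grayscale grid.

    pixels: list of 8 rows, each containing 9 grayscale values (0-255).
    Each bit records whether a pixel is brighter than its right neighbour,
    giving 8 bits per row and 64 bits in total.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Synthetic "frames": a gradient, a uniformly brightened copy of it,
# and an unrelated reversed gradient.
frame_a = [[(r * 9 + c) for c in range(9)] for r in range(8)]
frame_b = [[v + 10 for v in row] for row in frame_a]          # same scene, brighter
frame_c = [[255 - (r * 9 + c) * 3 for c in range(9)] for r in range(8)]  # different scene

# A brightness change leaves the hash unchanged (distance 0), while an
# unrelated frame lands far away; a threshold of ~10 bits out of 64 is
# a common rule of thumb for declaring a match.
print(hamming(dhash(frame_a), dhash(frame_b)))  # 0
print(hamming(dhash(frame_a), dhash(frame_c)))  # 64
```

Because each bit encodes only a local brightness *comparison*, the hash is robust to re-encoding, resizing and brightness shifts, which is why recycled footage can be traced even after social-media compression.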

According to the report, the footage shows a fire at the Royal Park Casino located inside the Cinépolis plaza in Culiacán, Mexico. Local outlet Meganoticias Culiacán reported on X that the casino was “completely burned down.” The structure reportedly collapsed following the blaze, and emergency responders confirmed that several people were injured. Further keyword searches led us to the same footage on the official YouTube channel of Milenio, uploaded on January 17, 2025. The report clearly states that the fire occurred at the Royal Yacht Casino in Mexico and is unrelated to any recent military developments.

Conclusion
Evidence gathered during our research clearly establishes that the viral video is not related to any missile attack by Iran on Israel. The claim is false. The footage is from a fire incident at a casino in Mexico and is being misleadingly shared in the context of current international tensions, potentially creating unnecessary panic and confusion.
Related Blogs
Executive Summary
The U.S. Department of Justice recently released nearly three million pages of documents, along with thousands of videos and photographs, related to its investigation into convicted offender Jeffrey Epstein. Meanwhile, a video showing a massive crowd protesting on a street is going viral on social media. The video, which had earlier circulated with false claims linking it to anti-government protests in Iran, is now being shared by users who claim the protest took place in the United States after the release of the Epstein files. Research by CyberPeace found the viral claim to be false: the video does not show a real protest in the United States and was generated using artificial intelligence (AI).
Claim:
An Instagram user uploaded the viral video on February 9, 2026, with the caption: “After Epstein files released in America. All eyes on America.”
- https://www.instagram.com/reel/DUjLe-XE5lA
- https://ghostarchive.org/archive/tkP6W

Fact Check:
To verify the claim, we first conducted a reverse search of the viral video using Google Lens. The same video was found posted on January 10, 2026, by an Instagram account named “elnaz555,” where it was shared in the context of recent protests in Iran. The post also mentioned that the video was created using AI.

Based on this lead, we further analyzed a higher-quality version of the viral video using Hive Moderation, a tool used to detect AI-generated images and videos. The analysis indicated a 97.9% probability that the video was generated using artificial intelligence. The research clearly shows that the video is not authentic and has been falsely linked to protests in the United States after the release of the Epstein files.
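Detection tools like the one used above return a probability rather than a yes/no answer, so the score has to be interpreted against a threshold. The sketch below shows one way such a score, like the 97.9% figure reported here, might be consumed programmatically. The response shape, field name and thresholds are illustrative assumptions, not Hive Moderation's documented schema.

```python
def classify_ai_probability(response: dict, threshold: float = 0.9) -> str:
    """Map a detector's probability score to a human-readable verdict.

    The "ai_generated_probability" key is a hypothetical field name used
    for illustration; real detection APIs define their own schemas.
    """
    score = response.get("ai_generated_probability")
    if score is None:
        return "no score available"
    if score >= threshold:
        return f"likely AI-generated ({score:.1%} confidence)"
    if score >= 0.5:
        return "possibly AI-generated; needs manual review"
    return "likely authentic"

# Example mirroring the fact-check's finding:
print(classify_ai_probability({"ai_generated_probability": 0.979}))
# likely AI-generated (97.9% confidence)
```

The middle band matters in practice: mid-range scores should trigger human review rather than an automatic verdict, since detectors can misfire on heavily compressed or re-uploaded footage.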

Conclusion:
The claim circulating on social media is false. The viral video allegedly showing protests in the United States following the release of the Epstein files is AI-generated and not related to any real event.

Introduction
The digital expanse of the metaverse has recently come under scrutiny following a disturbing incident. In a digital realm crafted for connection and exploration, a 16-year-old girl's avatar fell victim to an assault that has ignited ethico-legal and societal debate. The incident, in which the teenager was raped through her digital avatar by a group of users in the metaverse, is a stark reminder that the cyberverse, for all the possibilities and experiences it offers, also presents glaring challenges that require serious consideration.
This incident raises a critical question: can virtual experiences inflict genuine psychological trauma? The case of the 16-year-old girl highlights the strong emotional repercussions of illicit virtual actions. While her physical self was unharmed, the digital assault can leave lasting scars on her psyche. This in turn raises questions about the ethical implications of virtual interactions and the responsibility of service providers to protect users' well-being on their platforms.
The Judicial Quagmire
The digital nature of these assaults raises the complex jurisdictional questions that pervade cyber offences. We are still novices at navigating a digital labyrinth in which avatars transcend borders with a click of a mouse. The current legal structure is not equipped to tackle virtual crimes, and urgent reform is needed. Policymakers and legal professionals must first define virtual offences, with clear jurisdictional boundaries, so that justice is not hampered by geographical restrictions.
Meta’s Accountability
Meta, the platform on which this gruesome incident occurred, finds itself at an ethical crossroads. The company had implemented safeguards, yet they proved futile in preventing such a harrowing act. The incident raises broader questions about the role and responsibilities of tech juggernauts, chief among them: how can a company strike a balance between innovation and the protection of its users?
The Tightrope of Ethics
The metaverse is an epitome of innovation, yet this harrowing incident highlights a fundamental ethical contention. The real challenge is to harness the power of virtual reality while addressing the risks of digital hostility. As society grapples with this conundrum, stakeholders must work in tandem to formulate robust and effective legal structures that protect the rights and well-being of users. Balancing technological development against ethical challenges will require collective effort.
Reflections of Society
Beyond legal and ethical considerations, this act calls for wider societal reflection. It emphasises the pressing need for a cultural shift towards empathy, digital civility and respect. As we tread deeper into the virtual realm, we must cultivate an ethos that upholds dignity in both the digital and the real world. Such a shift is only possible through awareness campaigns, educational initiatives and strong community engagement that foster a culture of respect and responsibility.
Safer and Ethical Way Forward
A multidimensional approach is essential to address the complicated challenges cyber violence poses. Several measures can pave the way for safer cyberspace for netizens.
- Legislative Reforms - There is an urgent need to revamp legislative frameworks to effectively address the complexities of new and emerging virtual offences. Tech companies must collaborate with governments to formulate best practices and develop standard security measures that prioritise user protection.
- Public Awareness and Engagement - Public awareness campaigns that educate users on cyber resilience, ethics, digital detox and responsible online behaviour play a critical role in making netizens vigilant against cyber hostilities and able to help fellow netizens in distress. Civil society organisations and think tanks such as CyberPeace Foundation are pioneers of cyber safety campaigns in the country, working in tandem with governments across the globe to curb cyber hostilities.
- Interdisciplinary Research - Policymakers should delve deeper into the ethical, psychological and societal ramifications of digital interactions. A multidisciplinary approach to research is crucial for formulating evidence-based policy.
Conclusion
The digital gang rape is a wake-up call, demanding bold measures to confront the intricate legal, societal and ethical pitfalls of the metaverse. As we navigate the digital labyrinth, our collective decisions will shape the metaverse's future. By nurturing a culture of empathy, responsibility and innovation, we can forge a path that honours the dignity of netizens, upholds ethical principles and fosters a vibrant and safe cyberverse. In this pivotal moment, ethical vigilance, diligence and active collaboration are indispensable.
References:
- https://www.thehindu.com/sci-tech/technology/virtual-gang-rape-reported-in-the-metaverse-probe-underway/article67705164.ece
- https://thesouthfirst.com/news/teen-uk-girl-virtually-gang-raped-in-metaverse-are-indian-laws-equipped-to-handle-similar-cases/

Introduction
Artificial intelligence is often hailed as a democratiser of knowledge, opportunity and skill. It promises better diagnostics, personalised learning and productivity gains that boost economies and could help lift millions of people out of poverty. However, this may be an incomplete picture. A 2025 report by the United Nations Development Programme (UNDP), The Next Great Divergence: Why AI May Widen Inequality Between Countries, tells a more complex tale. It cautions that, unless deliberate interventions are made, AI will not alleviate inequality between countries but will instead concentrate benefits in already advantaged economies and increase risks in more vulnerable ones.
Two Gaps, One Crisis
AI is not going to create a level playing field: it has been injected into a world where there is unprecedented inequality. The report outlines two structural asymmetries that will influence the ways in which its effects manifest: a capability gap and a vulnerability gap.
Countries with high connectivity, skills, compute and regulatory capacity will be positioned to capture a larger share of the AI dividend. Others will be exposed to greater risks: job losses, information exclusion, misinformation, and the indirect consequences of rising energy and water demand.
At the centre of this transition is the Asia-Pacific region, home to more than 55 per cent of the world's population. More than half of global AI users are now located in the region, but starting positions differ sharply. Nations such as Singapore and South Korea are already investing heavily in AI infrastructure, while others are still striving to provide basic broadband. In some high-income economies, two out of three people already use AI tools; in most low-income countries, usage is far lower. These figures matter because they depict not only a technology gap but a structural difference in who controls AI and who is controlled by it.
When Inequality Becomes a Trust Problem
Any trusted technological system is based on three tenets: transparency, fairness and accountability. AI inequality negatively impacts all three.
Governments with limited technical capability often deploy imported AI systems with little transparency about how they operate, how they were built, or what biases they carry. Citizens cannot meaningfully trust decision-making systems that are black boxes, especially when domestic institutions lack the expertise to question them.
Data exclusion also undermines fairness. AI systems trained on datasets that under-represent rural populations, linguistic minorities and women will systematically produce poorer results for those groups. Because South Asian women are far less likely than men to own a smartphone, they are under-represented in digital data, and consequently in any AI system trained on such data.
Safety Risks Are Not Evenly Distributed
The lack of trust has a direct safety dimension. Countries with less robust information ecosystems, for example, are more exposed to AI-generated misinformation that can distort public discourse, sway elections and incite violence. They also have the weakest capacity to detect, label or counter such content.
The same is true of labour markets. The very technologies that promise productivity gains can accelerate marginalisation, destabilise governance and deepen human insecurity, especially for workers in the informal economy with weak social protections. The UNDP report points out that female employment is disproportionately exposed to disruption by AI, adding a gendered dimension to an already unequal situation.
Infrastructure risks are skewed as well. Large AI systems can place disproportionately high energy and water demands on countries that host data infrastructure without delivering an equivalent economic return. The environmental cost is local while the profits flow elsewhere: the dangers of AI spread downwards, and the advantages flow upwards.
The Governance Gap and Regulatory Arbitrage
Governance is perhaps the most important aspect. Only a few states presently have comprehensive AI regulatory frameworks. This creates a patchy landscape in which safety standards differ dramatically and companies have an incentive to deploy systems in jurisdictions with weaker regulation.
The main driver is a lack of capability. As Philip Schellekens, chief economist of the UNDP in Asia and the Pacific, puts it, countries that invest in skills, computing power and well-run governance structures will gain; the rest will be left far behind.
This divergence has ramifications beyond individual nations. When the same international platforms subject users in different regions to widely different standards of safety and equity, the notion of uniform digital norms becomes unsustainable. Confidence in AI systems erodes not only locally but globally.
Way Forward
The UNDP report makes clear that divergence is not inevitable. Averting it, however, requires treating AI governance as a development problem rather than a technology problem.
Governance capacity must be built, not presumed. This means helping countries establish regulatory systems and institutional capacity, and facilitating cross-border collaboration on standards. It may also mean treating some AI capabilities as public goods, with shared models and open standards that prevent a few firms or states from becoming too powerful.
The UNDP articulates the point simply: in the end, the world's people, not machines, must decide which technologies should be given priority and how best to use them.
Conclusion
AI inequality is often framed as an economic divergence story. But its implications run deeper. It reshapes who is protected, who is visible in data, and who has the power to challenge harmful outcomes. The risk is not just that some countries fall behind economically. It is that the global digital ecosystem fragments into zones of high trust and low trust, high protection and low protection. The choices made now will determine which path prevails. AI can reinforce existing divides or help bridge them.
But that outcome will not be decided by the technology itself. It will be decided by how societies choose to distribute access, power, and responsibility in the systems they build.
References
- https://www.undp.org/sites/g/files/zskgke326/files/2025-12/undp-rbap-the-next-great-divergence_1.pdf
- https://www.undp.org/asia-pacific/press-releases/ai-risks-sparking-new-era-divergence-development-gaps-between-countries-widen-undp-report-finds
- https://www.undp.org/asia-pacific/blog/next-great-divergence-how-ai-could-split-world-again-if-we-dont-intervene
- https://www.aljazeera.com/news/2025/12/2/ai-threatens-to-widen-inequality-among-states-un
- https://www.undp.org/asia-pacific/next-great-divergence
- https://www.eco-business.com/press-releases/ai-risks-spark-new-era-of-divergence-as-development-gaps-widen-undp-report/