#FactCheck: Viral Video of Fuel Tank Blaze at UAE's Al Hamriyah Port Portrayed as Russia-Ukraine Conflict
Executive Summary:
A viral video showing flames and thick smoke from large fuel tanks has been shared widely on social media. Many claimed it showed a recent Russian missile attack on a fuel depot in Ukraine. However, our research found that the video is not related to the Russia-Ukraine conflict. It actually shows a fire that happened at Al Hamriyah Port in Sharjah, United Arab Emirates, on May 31, 2025. The confusion was likely caused by a lack of context and misleading captions.

Claim:
The circulating claim suggests that Russia deliberately bombed Ukraine's fuel reserves and that the viral video shows evidence of the bombing. The posts claim the fuel depot was destroyed purposefully during military operations, implying an escalation of violence. This narrative is intended to stoke emotions and reinforce fears related to the war.

Fact Check:
After conducting a reverse image search on key frames of the viral video, we found that the footage is actually from Al Hamriyah Port, UAE, not from the Russia-Ukraine conflict. Further research showed that the same visuals were published by regional news outlets in the UAE, including Gulf News and Khaleej Times, which reported on a massive fire at Al Hamriyah Port on 31 May 2025.
According to those reports, a fire broke out at a fuel storage facility in Al Hamriyah Port, UAE. Fortunately, no casualties were reported. Fire management services responded promptly and brought the situation under control.
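The key-frame step described above is how fact-checkers typically begin: extract a handful of representative frames from the clip, then run each through a reverse image search engine (Google Lens, TinEye, Yandex). Below is a minimal sketch of the frame-selection heuristic only, using a simple inter-frame pixel difference; the function name and threshold are illustrative, not any specific tool's API:

```python
import numpy as np

def select_key_frames(frames, threshold=25.0):
    """Keep indices of frames that differ noticeably from the last kept frame.

    frames: iterable of HxW (or HxWx3) uint8 arrays, e.g. decoded video frames.
    threshold: mean absolute pixel difference required to keep a frame.
    """
    kept = []
    last = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.abs(f - last).mean() > threshold:
            kept.append(i)  # scene changed enough: keep this frame
            last = f
    return kept

# Synthetic demo: three static "scenes" of 10 identical frames each.
scenes = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 100, 200)]
frames = [s for s in scenes for _ in range(10)]
print(select_key_frames(frames))  # one index per scene change: [0, 10, 20]
```

In practice the kept frames would be saved as images and uploaded to one or more reverse image search engines; matches against older news coverage are what expose recycled footage.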


Conclusion:
The belief that the viral video is evidence of a Russian strike in Ukraine is incorrect. The video actually shows a fire at a commercial port in the UAE. Sharing misleading footage like this distorts reality and incites fear based on falsehoods. It is a reminder that not all viral media is what it appears to be: viewers should take the time to verify the source and context of content before accepting or reposting it. In this instance, the claim is false and misleading.
- Claim: Fresh attack in Ukraine! Russian military strikes again!
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

March 3rd 2023, New Delhi: If you have received any message that contains a link asking users to download an application to avail Income Tax Refund or KYC benefits with the name of Income Tax Department or reputed Banks, Beware!
CyberPeace Foundation and Autobot Infosec Private Limited along with the academic partners under CyberPeace Center of Excellence (CCoE) recently conducted five different studies on phishing campaigns that have been circulating on the internet by using misleading tactics to convince users to install malicious applications on their devices. The first campaign impersonates the Income Tax Department, while the rest of the campaigns impersonate ICICI Bank, State Bank of India, IDFC Bank and Axis bank respectively. The phishing campaigns aim to trick users into divulging their personal and financial information.
After a detailed study, the research team found that:
- All campaigns pose as offers from reputed entities but are hosted on third-party domains instead of the official websites of the Income Tax Department or the respective banks, raising suspicion.
- The applications request several device permissions, and some even ask users to grant full control of the device. Allowing such access could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications.
- Cybercriminals created malicious applications using icons that closely resemble those of legitimate entities with the intention of enticing users into downloading the malicious applications.
- The applications collect users' personal and banking information. Falling into this type of trap could lead to significant financial losses.
- While investigating the application impersonating the Income Tax Department, the research team found that it sends HTTP traffic to a remote server acting as its Command and Control (CnC/C2).
- Customers who wish to avail benefits or refunds from their banks download such apps believing they will help, often unaware that the app may be fraudulent.
“The research highlights the importance of being vigilant while browsing the internet and not falling prey to such phishing attacks. It is crucial to be cautious when clicking on links or downloading attachments from unknown sources, as they may contain malware that can harm the device or compromise the data,” a CyberPeace spokesperson added.
In addition, in an earlier report released last month, the same research team had drawn attention to WhatsApp messages masquerading as an offer from Tanishq Jewellers, with links luring unsuspecting users with the promise of free Valentine's Day presents making the rounds on the app.
CyberPeace Advisory:
- The Research team recommends that people should avoid opening such messages sent via social platforms. One must always think before clicking on such links, or downloading any attachments from unauthorised sources.
- Avoid downloading applications from third-party sources; use only the official app store. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for developers and review each app before it is published.
- Even if you download the application from an authorised source, check the app’s permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it’s best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the device, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and banking applications, and could lead to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing messages.
- Never share or forward fake messages containing links on any social platform without proper verification.
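The permission advice above can be made concrete. On Android, `aapt dump permissions app.apk` (from the SDK build-tools) lists what an APK requests before you install it. The sketch below triages such a list; the high-risk set is an illustrative subset of permissions commonly abused by banking malware, not an exhaustive policy:

```python
# Flag high-risk Android permissions in an APK's requested-permission list.
# HIGH_RISK is an illustrative subset, not an exhaustive policy.
HIGH_RISK = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
    "android.permission.SYSTEM_ALERT_WINDOW",
}

def triage(requested):
    """Return the sorted subset of requested permissions considered high-risk."""
    return sorted(set(requested) & HIGH_RISK)

# Example: permissions a fake "tax refund" app might request.
requested = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
]
print(triage(requested))
```

A legitimate refund or KYC app has no business reading SMS or binding an accessibility service; any hit from a triage like this is reason enough to abandon the install.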

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and companies ranging from small financial firms to giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake report estimates that at least 98 per cent of all deepfakes are porn and 99 per cent of the victims are women. A Harvard University study refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews of victims of deepfake porn last year, the study said 63 percent of participants talked about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms were encountered hosting deepfake porn of stars ranging from Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna to TV actors and influencers such as Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, even taking requests for celebrities. To encourage creators further, the platforms enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and celebrities are not the only targets: ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutes and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register/partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees/stakeholders, which potentially violates the platform's terms of use that ban the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders are extremely difficult for law enforcement agencies, as this is a borderless crime that can involve several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government also recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnav is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/
Introduction
With the advent of the internet, the world revealed the promise of boundless connection and the ability to bridge vast distances with a single click. However, as we wade through the complex layers of the digital age, we find ourselves facing a paradoxical realm where anonymity offers both liberation and a potential for unforeseen dangers. Omegle, a chat and video messaging platform, epitomizes this modern conundrum. Launched over a decade ago in 2009, it burgeoned into a popular avenue for digital interaction, especially amidst the heightened need for human connection spurred by the COVID-19 pandemic's social distancing requirements. Yet this seemingly benign tool of camaraderie tragically doubled as a contemporary incarnation of Pandora's box, unleashing untold risks upon the online privacy and security landscape. Omegle has now shut down permanently after 14 years of service.
The Rise of Omegle
The foundations of this nebulous virtual dominion can be traced back to the very architecture of Omegle. Introduced to the world as a simple, anonymous chat service, Omegle has since evolved, encapsulating the essence of unpredictable human interaction. Users enter this digital arena, often with the innocent desire to alleviate the pangs of isolation or simply to satiate curiosity; yet they remain blissfully unaware of the potential cybersecurity maelstrom that awaits them.
As we commence a thorough inquiry into the psyche of Omegle's vast user base, we observe a digital diaspora with staggering figures. The platform, in May 2022, counted 51.7 million unique visitors, a testament to its sprawling reach across the globe. Delve a bit deeper, and you will uncover that approximately 29.89% of these digital nomads originate from the United States. Others, in varying percentages, flock from India, the Philippines, the United Kingdom, and Germany, revealing a vast, intricate mosaic of international engagement.
Such statistics beguile the uninformed observer with the illusion of demographic diversity. Yet we must proceed with caution, for while the platform boasts an impressive 63.91% male patronage, we cannot overlook the notable surge in female participation, which climbed to 36.09% during the pandemic era. More alarming still is the revelation, borne out of a BBC investigation in February 2021, that children as young as seven have trespassed into Omegle's adult sections, which are purportedly guarded by a minimum age limit of thirteen. How, we must ask, has underage presence burgeoned on this platform? A sobering finger points towards the platform's inadvertent marketing on TikTok, where youthful influencers, with abandon, promote their Omegle exploits under the #omegle hashtag.
The Omegle Allure
Omegle's allure is further compounded by its array of chat opportunities. It flaunts an adult section awash with explicit content, a moderated chat section that, despite the platform's own admissions, remains imperfectly patrolled, and an unmoderated section whose entrance is plastered with warnings of an 18+ audience. Beyond these lies the college chat option, a seemingly exclusive territory that only admits individuals armed with a verified '.edu' email address.
The effervescent charm of Omegle's interface, however, belies its underlying treacheries. Herein lies a digital wilderness where online predators and nefarious entities prowl, emboldened by the absence of requisite registration protocols. No email address, no unique identifier—pestilence to any notion of accountability or safeguarding. Within this unchecked reality, the young and unwary stand vulnerable, a hapless game for exploitation.
Threat to Users
Venture even further into Omegle's data fiefdom, and the spectre of compromise looms larger. Users, particularly the youth, risk exposure to unsuitable content, and their naivety might lead to the inadvertent divulgence of personal information. Skulking behind the facade of connection, opportunities abound for coercion, blackmail, and stalking—perils rendered more potent as every video exchange and text can be captured, and recorded by an unseen adversary. The platform acts as a quasi-familiar confidante, all the while harvesting chat logs, cookies, IP addresses, and even sensory data, which, instead of being ephemeral, endure within Omegle's databases, readily handed to law enforcement and partnered entities under the guise of due diligence.
How to Combat the threat
In mitigating these online gorgons, a multi-faceted approach is necessary. To thwart incursion into your digital footprint, adults, seeking the thrills of Omegle's roulette, would do well to cloak their activities with a Virtual Private Network (VPN), diligently pore over the privacy policy, deploy robust cybersecurity tools, and maintain an iron-clad reticence on personal disclosures. For children, the recommendation gravitates towards outright avoidance. There, a constellation of parental control mechanisms await the vigilant guardian, ready to shield their progeny from the internet's darker alcoves.
Conclusion
In the final analysis, Omegle emerges as a microcosm of the greater web—a vast, paradoxical construct proffering solace and sociability, yet riddled with malevolent traps for the uninformed. As digital denizens, our traverse through this interconnected cosmos necessitates a relentless guarding of our private spheres and the sober acknowledgement that amidst the keystrokes and clicks, we must tread with caution lest we unseal the perils of this digital Pandora's box.