#FactCheck - Old Ajman Fire Video Falsely Linked to Iran Drone Attack on Dubai Airport
Executive Summary:
The ongoing conflict between Iran and the US-Israel has entered its 19th day. Meanwhile, a video is being widely shared on social media claiming that Iran is carrying out continuous drone attacks on Dubai International Airport. The clip shows visuals of a massive fire and explosion. However, research by CyberPeace has found the claim to be misleading. Our research revealed that the video has been available on the internet since 2020. In reality, the footage shows a fire at a market in Ajman, UAE, and not explosions at Dubai Airport in 2026. Although there were recent reports of a fire near DXB (Dubai Airport) following a drone attack, this video is not related to that incident.
Claim:
On social media platform X (formerly Twitter), a user shared the viral video on March 17, 2026, writing:
“Dubai International Airport – Iran is dropping drones continuously.”
Post link, archive link, and screenshot are given below:

Fact Check:
To verify the viral claim, we extracted keyframes from the video and ran a reverse image search on them using Google Lens. This led us to the same video on a YouTube channel, uploaded on August 6, 2020, with the caption: “Ajman fruits and vegetables market caught in fire.”
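The keyframe-extraction step described above can be sketched in a few lines. This is a minimal illustration, assuming OpenCV (`cv2`) is installed; the file name `viral_clip.mp4` and the helper names are hypothetical examples, not part of any CyberPeace tooling.

```python
def keyframe_indices(total_frames: int, n_samples: int) -> list[int]:
    """Return up to n_samples frame indices spread evenly across a clip."""
    if total_frames <= 0 or n_samples <= 0:
        return []
    step = max(total_frames // n_samples, 1)
    return list(range(0, total_frames, step))[:n_samples]


def extract_keyframes(path: str, n_samples: int = 8) -> list:
    """Grab evenly spaced frames from a video for reverse image search."""
    import cv2  # imported lazily so the index helper works without OpenCV

    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in keyframe_indices(total, n_samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the sampled frame
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames


# frames = extract_keyframes("viral_clip.mp4")  # hypothetical file name
```

Each saved frame can then be uploaded to Google Lens (or a similar reverse image search service) to find earlier appearances of the footage online.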

Based on this clue, it became clear that the viral video has no connection with the ongoing Iran-US-Israel conflict. In the next step, we searched using relevant keywords and found a report published on August 5, 2020, on the website of Gulf News, which contained visuals similar to the viral video.

According to the Gulf News report, a major fire broke out at a public market in the new industrial area of Ajman at around 6:30 pm. The blaze was later brought under control by Ajman Civil Defence with assistance from teams in Dubai, Sharjah, and Umm Al Quwain.
Conclusion:
Our research found that the viral video has been online since 2020 and shows a fire at a market in Ajman, UAE. It is not related to any recent incident at Dubai Airport.
Related Blogs
Introduction
Misinformation poses a significant challenge to public health policymaking, as it undermines efforts to promote effective health interventions and protect public well-being. The spread of inaccurate information, particularly through online channels such as social media, further complicates decision-making for policymakers by perpetuating public confusion and distrust. Such misinformation can fuel resistance to health initiatives, such as vaccination programmes, and scepticism towards scientifically backed health guidelines.
Before the COVID-19 pandemic, misinformation surrounding healthcare largely concerned the effects of alcohol and tobacco consumption, marijuana use, eating habits, physical exercise, etc. However, there has been a marked shift in the years since. One such example is the outcry against palm oil in 2024: an ingredient prevalent in numerous food and cosmetic products, it came under the scanner after claims that palmitic acid, which is present in palm oil, is detrimental to health. However, scientific research by reputable institutions globally established that there is no cause for concern regarding the health risks posed by palmitic acid. Such trends and commentaries create a parallel, unscientific discourse that can affect not only individual choices but also public opinion and, as a result, market developments and policy conversations.
A prevailing narrative during the worst of the COVID-19 pandemic was that the virus had been engineered to control society and boost hospital profits. The extensive misinformation surrounding COVID-19 and its management and care increased vaccine hesitancy worldwide. Vaccine hesitancy has been a consistent trend historically: the World Health Organisation has flagged it as one of the main threats to global health, and there have been other instances where large sections of the population refused vaccination in anticipation of unverified, long-lasting side effects. For example, research from 2016 observed significant public scepticism regarding the development and approval process of the Zika vaccine in Africa. Further studies emphasised the urgent need to disseminate accurate information about the Zika virus on online platforms to help curb the spread of the outbreak.
In India, despite multiple official advisories, notifications and guidelines issued during the COVID-19 pandemic by the government and the ICMR, many people remained opposed to vaccination, which contributed to inflated mortality rates within the country. Vaccine hesitancy was compounded by anti-vaccination celebrities who claimed that vaccines were dangerous and who contributed in large part to the conspiracy theories doing the rounds. Similar hesitancy was noted when misinformation surrounding the MMR vaccine and its purported role in causing autism circulated. During the crisis, the Indian government also had to tackle disinformation-induced fraud surrounding the supply of oxygen in hospitals: many critically ill patients relied on fake news and unverified sources that falsely portrayed the availability of beds, oxygen cylinders and even home set-ups, only to be cheated out of money.
The above examples highlight the difficulty health officials face in administering adequate healthcare. The COVID-19 pandemic in particular exposed how existing legal frameworks fail to address misinformation and disinformation, which impedes effective policymaking. Correcting health-related misinformation is also inherently difficult: corrective action creates an uncomfortable gap in an individual's mind, and people often ignore the accurate information that could bridge that gap. Misinformation, coupled with the infodemic trend, also leads to false memory syndrome, whereby people fail to differentiate between authentic information and fabricated narratives. Simple attempts to correct misperceptions can backfire and even strengthen initial beliefs, especially on complex issues like healthcare. Policymakers thus struggle to balance making policy with making people receptive to it, against a backdrop of public tendencies to reject or suspect authoritative action. Examples can be observed both domestically and internationally. In the US, for instance, the traditional healthcare system rations access to healthcare through a combination of insurance costs and options versus out-of-pocket essential expenses. While this has long been debated, it had not created a large-scale public healthcare crisis, because the incentives offered to medical professionals and public trust in the delivery of essential services helped balance the conversation. More recently, however, a narrative shift sensationalising the system as one of deliberate “denial of care” has led to concerns about harms to patients.
Policy Recommendations
The hindrances posed by misinformation in policymaking are exacerbated by policymakers' reliance on social media as a gauge of public sentiment, consensus and opinion. If misinformation about an outbreak is not effectively addressed, it could deter individuals from adopting necessary protective measures and potentially worsen the spread of the epidemic. To improve healthcare policymaking amidst these challenges, policymakers must take a multifaceted approach. This includes convening a broad coalition of central, state, local, territorial, tribal, private, nonprofit, and research partners to assess the impact of misinformation and develop effective preventive measures. Intergovernmental collaborations, such as between the Ministry of Health and the Ministry of Electronics and Information Technology, should be encouraged, whereby doctors debunk online medical misinformation, given the increased reliance on online forums for medical advice. Furthermore, increasing investment in research dedicated to understanding misinformation, along with the ongoing modernisation of public health communications, is essential. Enhancing the resources and technical support available to state and local public health agencies will also enable them to better address public queries and concerns and counteract misinformation. Additionally, expanding efforts to build long-term resilience against misinformation through comprehensive educational programmes is crucial for fostering a well-informed public capable of critically evaluating health information.
From an individual perspective, with almost half a billion users, WhatsApp has become a platform where false health claims can spread rapidly, contributing to a rise in fake health news. Viral WhatsApp messages containing fake health warnings can be dangerous, so it is always recommended to treat such messages with vigilance and verify them before acting on or forwarding them. This underscores the growing concern about the potential dangers of misinformation and the need for accurate information on medical matters.
Conclusion
The proliferation of misinformation in healthcare poses significant challenges to effective policymaking and public health management. The COVID-19 pandemic has underscored the role of misinformation in vaccine hesitancy, fraud, and increased mortality rates. There is an urgent need for robust strategies to counteract false information and build public trust in health interventions; this includes policymakers engaging in comprehensive efforts, including intergovernmental collaboration, enhanced research, and public health communication modernization, to combat misinformation. By fostering a well-informed public through education and vigilance, we can mitigate the impact of misinformation and promote healthier communities.
References
- van der Meer, T. G. L. A., & Jin, Y. (2019). “Seeking Formula for Misinformation Treatment in Public Health Crises: The Effects of Corrective Information Type and Source”. Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
- “Health Misinformation”, U.S. Department of Health and Human Services. https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html
- Mechanic, David, “The Managed Care Backlash: Perceptions and Rhetoric in Health Care Policy and the Potential for Health Care Reform”, Rutgers University. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2751184/pdf/milq_195.pdf
- “Bad actors are weaponising health misinformation in India”, Financial Express, April 2024.
- “Role of doctors in eradicating misinformation in the medical sector.”, Times of India, 1 July 2024. https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/national-doctors-day-role-of-doctors-in-eradicating-misinformation-in-the-healthcare-sector/articleshow/111399098.cms

Introduction
Entrusted with the responsibility of leading the Global Education 2030 Agenda through Sustainable Development Goal 4, UNESCO's Institute for Lifelong Learning, in collaboration with the Media and Information Literacy and Digital Competencies Unit, has recently launched a Media and Information Literacy Course for Adult Educators. The course aligns with the Pact for the Future, adopted at the United Nations Summit of the Future in September 2024, which asks member countries to increase their efforts towards media and information literacy. The course is free for adult educators to access and is available until 31 May 2025.
The Course
According to a report by Statista, 67.5% of the global population uses the internet. Regardless of users' age and background, there is a general lack of understanding of how to spot misinformation and targeted hate, and how to navigate online environments securely and efficiently. Since misinformation (largely spread online) is enabled by this lack of awareness, digital literacy becomes increasingly important. The course is designed keeping in mind that many active adult educators have not yet had an opportunity to hone their media and information skills through formal education. Self-paced and totalling 10 hours, the course covers basics such as the concepts of misinformation and disinformation, artificial intelligence, and combating hate speech, and offers a certificate on completion.
CyberPeace Recommendations
As this course is free of cost, can be taken remotely, and covers the basics of digital literacy, all who are eligible are encouraged to take it up to familiarise themselves with these topics. However, awareness of the course's availability, and of who can avail of this opportunity, could be improved so that a larger number of people benefit from it.
CyberPeace Recommendations To Enhance Positive Impact
- Further Collaboration: As this course is open to adult educators, one can consider widening its scope through active engagement with independent organisations and even individual internet users who are willing to learn.
- Engagement with Educational Institutions: After launching a course, an interactive outreach programme connecting with relevant stakeholders can prove beneficial. Since the course requires each adult educator to sign up individually, partnering with universities, institutes, and similar bodies is encouraged. In the Indian context, active involvement with training institutes such as DIET (District Institute of Education and Training), SCERT (State Council of Educational Research and Training), NCERT (National Council of Educational Research and Training), and Open Universities could be initiated, facilitating greater awareness and more participation.
- Engagement through NGOs: NGOs focused on digital literacy that have a tie-up with UNESCO can aid in implementation and in encouraging awareness. Offering the course in localised languages could also be considered to improve inclusion.
Conclusion
Though a long process, tackling misinformation through education addresses the issue at its source. A strong foundation in awareness and media literacy is imperative in an age of fake news, misinformation, and sensitive data being peddled online. UNESCO's course launch garners attention as it comes from an international platform, is free of cost, recognises the gravity of the situation, and calls for action in the field of education, encouraging others to do the same.
References
- https://www.uil.unesco.org/en/articles/media-and-information-literacy-course-adult-educators-launched
- https://www.unesco.org/en/articles/celebrating-global-media-and-information-literacy-week-2024
- https://www.unesco.org/en/node/559#:~:text=UNESCO%20believes%20that%20education%20is,must%20be%20matched%20by%20quality.

Introduction
Artificial intelligence is often hailed as a democratiser of knowledge, opportunity and skill. It is expected to improve diagnostics, personalise learning, and boost productivity and the economy, helping lift millions of people out of poverty. However, this may be an incomplete picture. A 2025 report of the United Nations Development Programme, The Next Great Divergence: Why AI May Widen Inequality Between Countries, tells a more complex tale. It cautions that, unless interventions are made, AI will not alleviate inequality between countries but will instead concentrate benefits in already advantaged economies and heighten risks in more vulnerable ones.
Two Gaps, One Crisis
AI is not going to create a level playing field: it has been injected into a world where there is unprecedented inequality. The report outlines two structural asymmetries that will influence the ways in which its effects manifest: a capability gap and a vulnerability gap.
Those countries that have high connectivity, skills, compute and regulation will be in a position to reap a greater portion of the AI dividend. Others will be exposed to greater risks of job losses, information exclusion, misinformation, and the indirect consequences of increased energy and water demands.
At the centre of this transition is the Asia-Pacific region, home to more than 55 per cent of the world's population. More than half of the world's AI users are now located in the region, but starting positions differ widely. Nations such as Singapore and South Korea are already spending heavily on AI infrastructure, while others are still striving to offer basic broadband services. In certain high-income economies, two out of three individuals already use AI tools; in most low-income countries, utilisation is far lower. Such figures matter because they depict not only a gap in technology but also a structural difference in who controls AI and who is merely subject to it.
When Inequality Becomes a Trust Problem
Any trusted technological system is based on three tenets: transparency, fairness and accountability. AI inequality negatively impacts all three.
When governments deploy imported AI systems in contexts of limited technical capability, there is often little transparency about how those systems operate, how they were built, and what biases they carry. Citizens cannot really trust decision-making systems that are black boxes, especially when domestic institutions lack the know-how to question them.
Data exclusion also interferes with fairness. AI systems trained on datasets that insufficiently represent rural populations, linguistic minorities, and women will systematically generate poorer results for those groups. Since South Asian women are much less likely to own a smartphone, they are under-represented in digital data, and consequently in any AI system trained on such data.
Safety Risks Are Not Evenly Distributed
The lack of trust has a direct safety aspect. For example, those countries that have less robust information ecosystems have a greater exposure to AI-generated misinformation that can bias the discourse of the populace, alter elections, and cause violence. They also have the weakest capability of screening, tagging, or combating such content.
The same can be said of labour markets. The very technologies that can accelerate marginalisation and destabilise governance also increase human insecurity, especially among workers in the informal economy with weak social security. The UNDP report points out that female employment is disproportionately exposed to disruption by AI compared with male employment, adding a gendered dimension to an already unequal situation.
Infrastructure risks are skewed as well. Large AI systems can impose disproportionately high energy and water demands on the countries that host data infrastructure, without an equivalent economic payback. The environmental cost is borne locally while the profits flow elsewhere: the dangers of AI spread downwards, and the advantages go upwards.
The Governance Gap and Regulatory Arbitrage
Governance is perhaps the most important aspect. Only a few states presently have extensive AI regulatory systems. This gives rise to a patchy landscape in which safety standards differ dramatically and companies have an incentive to deploy systems in jurisdictions with weaker regulation.
At the root is a capability gap. As Philip Schellekens, UNDP's chief economist for Asia and the Pacific, puts it, countries that invest in skills, computing power and well-run governance structures will gain; the rest will be left far behind.
This divergence has ramifications beyond national borders. When the same international platforms subject users in different regions to widely different standards of safety and equity, the notion of uniform digital norms is no longer sustainable. Confidence in AI systems erodes not only locally but globally.
Way Forward
The UNDP report makes it clear that divergence is not inevitable. Averting it, however, requires treating AI governance as a development problem rather than a technology problem.
The capacity to govern must be built, not presumed. This means assisting countries in establishing regulatory systems and institutional capacity, and facilitating cross-border collaboration on standards. It may also mean treating some AI capabilities as a public good, with common models and open standards that prevent a few firms or states from becoming too powerful.
The UNDP articulates the problem simply: in the end, the world's people, not machines, must decide which technologies should be given priority and how best to utilise them.
Conclusion
AI inequality is often framed as an economic divergence story. But its implications run deeper. It reshapes who is protected, who is visible in data, and who has the power to challenge harmful outcomes. The risk is not just that some countries fall behind economically. It is that the global digital ecosystem fragments into zones of high trust and low trust, high protection and low protection. The choices made now will determine which path prevails. AI can reinforce existing divides or help bridge them.
But that outcome will not be decided by the technology itself. It will be decided by how societies choose to distribute access, power, and responsibility in the systems they build.
References
- https://www.undp.org/sites/g/files/zskgke326/files/2025-12/undp-rbap-the-next-great-divergence_1.pdf
- https://www.undp.org/asia-pacific/press-releases/ai-risks-sparking-new-era-divergence-development-gaps-between-countries-widen-undp-report-finds
- https://www.undp.org/asia-pacific/blog/next-great-divergence-how-ai-could-split-world-again-if-we-dont-intervene
- https://www.aljazeera.com/news/2025/12/2/ai-threatens-to-widen-inequality-among-states-un
- https://www.undp.org/asia-pacific/next-great-divergence
- https://www.eco-business.com/press-releases/ai-risks-spark-new-era-of-divergence-as-development-gaps-widen-undp-report/