#FactCheck - Misleading Video of Dubai Airport Attack Circulates Online, Found AI-Generated
Executive Summary
Amid rising tensions in the Middle East following attacks on Iran by the United States and Israel, a video is being shared on social media claiming to show a recent attack at Dubai International Airport. Research by CyberPeace found the viral claim to be false: the video is not real but was created using artificial intelligence technology.
Claim:
An Instagram user shared the viral video on March 1, 2026, claiming it shows an attack at Dubai Airport. The link to the post, the archive link, and a screenshot are provided below.

Fact Check:
To verify the viral claim, we searched Google using relevant keywords but found no credible media report confirming it. On closely examining the viral video, we noticed several unusual visuals and technical inconsistencies, raising suspicion that it might be AI-generated. To verify this, we scanned the video using the AI detection tool Sightengine, which indicated an approximately 74 percent likelihood that the video is AI-generated.
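For readers who want to try a similar check, the short Python sketch below shows how a single frame of a video might be submitted to an AI-content detector. It is a minimal sketch, assuming Sightengine’s image-check endpoint and its “genai” model; the endpoint, parameter names, response fields, and the frame file frame.jpg are illustrative assumptions, not a verified recipe.

```python
# Minimal sketch: querying an AI-content detection API for one video frame.
# Assumption: Sightengine's image check endpoint with its "genai" model;
# the endpoint, parameters, and response fields shown here follow the
# service's publicly documented API pattern and should be verified.
import requests

API_USER = "your_api_user"      # placeholder credentials
API_SECRET = "your_api_secret"

def ai_generated_score(frame_path: str) -> float:
    """Return the detector's AI-generated likelihood (0.0-1.0) for one frame."""
    with open(frame_path, "rb") as f:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",  # assumed endpoint
            files={"media": f},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # The "type.ai_generated" response field is assumed from the genai model docs.
    return result.get("type", {}).get("ai_generated", 0.0)

if __name__ == "__main__":
    # "frame.jpg" is a hypothetical frame sampled from the viral video.
    score = ai_generated_score("frame.jpg")
    print(f"AI-generated likelihood: {score:.0%}")
```

Averaging such per-frame scores across several sampled frames gives a rough video-level likelihood comparable to the figure reported above.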

Conclusion:
Our research found that the viral video is not real but has been created using artificial intelligence technology.

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, this increased reliance on data raises a fundamental challenge: how to strike a balance between privacy protection on the one hand and innovation and utility on the other.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. The initiative urges governments to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from crossing moral and security boundaries, particularly in areas like political manipulation, mass surveillance, cyberattacks, and threats to democratic institutions.
One way to address the threat to privacy is through pseudonymisation, which keeps data valuable for research and innovation by replacing personal identifiers with artificial ones. Pseudonymisation thus directly advances the AI Red Lines initiative's mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe by identifying clear red lines for the use of AI tools. What unites these risks is the absence of global safeguards. Some of these red lines can be understood as:
- Cybersecurity breaches, in the form of exposure of financial and personal data through AI-driven hacking and surveillance.
- Privacy invasions caused by pervasive tracking.
- Generative AI producing realistic fake content, undermining trust in public discourse and fueling misinformation.
- Algorithmic amplification of polarising content, threatening civic stability and democratic institutions.
Legal Frameworks and Regulatory Landscape
Regulation of artificial intelligence remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, while the US-China agreement ensures that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI use, but a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process in which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection (DPDP) Act, 2023 adopts a similar stance: it does not explicitly define pseudonymisation, but its broad definition of “personal data” can cover potentially reversible identifiers, and under Section 8(4) of the Act companies must adopt appropriate technical and organisational measures. International instruments such as the OECD Principles on AI and the Council of Europe Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
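To make the mechanism concrete, here is a minimal Python sketch of keyed pseudonymisation in the spirit of Article 4(5): direct identifiers are replaced with HMAC-based pseudonyms, while the secret key, the “additional information” needed to re-link the data, is held securely and separately. The record fields and key handling are illustrative assumptions, not a compliance recipe.

```python
# Minimal sketch of keyed pseudonymisation (in the spirit of GDPR Art. 4(5)):
# identifiers are replaced with HMAC digests; the key is the "additional
# information" that must be stored securely and separately from the dataset.
import hmac
import hashlib
import secrets

# In practice this key would live in a separate, access-controlled key store.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Deterministically map a personal identifier to a stable pseudonym."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative records with hypothetical fields.
records = [
    {"name": "Asha Verma", "email": "asha@example.com", "diagnosis": "J45"},
    {"name": "Ravi Iyer", "email": "ravi@example.com", "diagnosis": "E11"},
]

# Replace direct identifiers with pseudonyms; analytical fields stay usable.
pseudonymised = [
    {"patient_id": pseudonymise(r["email"]), "diagnosis": r["diagnosis"]}
    for r in records
]
print(pseudonymised)
```

Because the same identifier always maps to the same pseudonym under a given key, records remain joinable for analysis; yet without the separately stored key the pseudonyms cannot be attributed to individuals, which is also why the GDPR still treats such data as personal data.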
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is a practical measure with tangible benefits. By pseudonymising data, businesses gain benefits such as:
- Enhanced privacy protection: masking identifiers like names or IDs reduces the impact of data breaches.
- Preserved data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Easier data sharing: organisations can collaborate with partners and researchers while maintaining trust.
These benefits translate into competitive advantage: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
The central dilemma is one of balance. On one side lies the need for data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise, enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions about safety, fairness, and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet while global treaties remain pending, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR, increasingly relevant under India’s DPDP Act, and balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and meeting broader ethical responsibilities in the digital age. Even as the future of AI remains uncertain, the guiding principles need to be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.
References
https://www.techaheadcorp.com/blog/shadow-ai-the-risks-of-unregulated-ai-usage-in-enterprises/
https://planetmainframe.com/2024/11/the-risks-of-unregulated-ai-what-to-know/
https://cepr.org/voxeu/columns/dangers-unregulated-artificial-intelligence
https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/

Introduction
Online dating platforms have become a common way for individuals to connect in today’s digital age. For many in the LGBTQ+ community, especially in environments where offline meeting spaces are limited, these platforms offer a way to find companionship and support. However, alongside these opportunities come serious risks. Users are increasingly being targeted by cybercrimes such as blackmail, sextortion, identity theft, and online harassment. These incidents often go unreported due to stigma and concerns about privacy. The impact of such crimes can be both emotional and financial, highlighting the need for greater awareness and digital safety.
Cybercrime On LGBTQ+ Dating Apps: A Threat Landscape
According to the NCRB 2022 report, cybercrimes rose by 24.4%, but data specific to the queer community is unfortunately not available. Cybercrimes targeting LGBTQ+ users are highly organised and predatory. In several Indian cities, gangs actively monitor dating platforms for potential victims, especially young queer people and those who are discreet about their identity. Once contact is established, perpetrators follow a standard playbook: building false trust, coaxing private exchanges, and then gradually moving to blackmail and financial exploitation. Many queer victims are blackmailed with threats of exposure to families or workplaces, often by people posing as police and demanding bribes. Fear of stigma and insensitive policing discourages reporting, and cybercriminal gangs exploit these gaps on dating apps. Despite some arrests, under-reporting persists, and activists are calling for stronger platform safety.
Types of Cyber Crimes against Queer Community on Dating Apps
- Romance scam or “Lonely hearts scam”: Scammers build trust with false stories (military personnel, doctors, NGO workers) and quickly express strong romantic interest. They later request money, claiming emergencies, and often create multiple accounts to evade profile bans.
- Sugar daddy scam: The fraudster offers money or an allowance in exchange for chatting, sending photos, or other interactions. They usually name a specific amount and insist on uncommon payment channels. After promising to send a large sum, they often make up a story like: “My last sugar baby cheated me, so now you must first send me a small amount to prove you are trustworthy.” This is just a trick to make you send them money first.
- Sextortion / Blackmail scam: Scammers record explicit chats or pretend to be underage, then threaten exposure unless you pay. Some target discreet users. Never send explicit content or pay blackmailers.
- Investment Scams: Scammers posing as traders or bankers convince victims to invest in fake opportunities. Some "flip" small amounts to build trust, then disappear with larger sums. Real investors won’t approach you on dating apps. Don’t share financial info or transfer money.
- Pay-Before-You-Meet scam: Scammer demands upfront payment (gift cards, gas money, membership fees) before meeting, then vanishes. Never pay anyone before meeting in person.
- Security app registration scam: Scammers ask you to register on fake "security apps" to steal your info, claiming it ensures your safety. Research apps before registering. Be wary of quick link requests.
- The Verification code scam: Scammers trick you into giving them SMS verification codes, allowing them to hijack your accounts. Never share verification codes with anyone.
- Third-party app links: Mass spam messages with suspicious links that steal info or infect devices. Don’t click suspicious links or “Google me” messages.
- Support message scam: Messages pretending to be from application support, offering prizes or fake shows to lure you to malicious sites.
Platform Accountability & Challenges
Online dating platforms in India are characterised by weak grievance redressal, poor takedown of abusive profiles, and limited moderation practices. Most platforms appoint grievance officers or offer an in-app complaint portal, but complaints often go unanswered or receive only automated, AI-generated responses. This highlights the gap between policy and enforcement on the ground.
Abusive or fake profiles, often used for scams, hate crimes, and outing LGBTQ+ individuals, remain active long after being reported. In India, organised extortion gangs have exploited such profiles to lure, assault, rob, and blackmail queer men. Moderation teams often struggle with backlogs and lack the resources needed to handle even the most serious complaints.
Although platforms offer privacy settings and restricted profile visibility, moderation practices in India remain weak, leaving large segments of users vulnerable to impersonation, catfishing, and fraud. Pseudonymisation can help protect vulnerable communities, but without robust, privacy-respecting verification systems it is difficult to distinguish authentic users from malicious actors.
Many LGBTQ+ individuals prefer to maintain confidentiality, while others are more open about their identities; in either case, the data an individual shares with an online dating platform must be vigilantly protected. The Digital Personal Data Protection Act, 2023, mandates the protection of personal data. Section 8(4) provides: “A Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the rules made thereunder.” Accordingly, digital platforms collecting such data should adopt the technical and organisational measures necessary to comply with data protection law.
Recommendations
The Supreme Court has been proactive in this regard through decisions like Navtej Singh Johar v. Union of India, which decriminalised same-sex relationships; Justice K.S. Puttaswamy (Retd.) v. Union of India and Ors., which recognised the right to privacy as a fundamental right; and, most recently, the 2025 affirmation of the right to digital access. However, more robust legal frameworks are still required to protect LGBTQ+ people online.
There is a need for a dedicated commission or an empowered LGBTQ+ cell. Like the National Commission for Women (NCW), which works to safeguard the rights of women, such a commission would address community-specific issues, including cybercrime, privacy violations, and discrimination on digital platforms. It could serve as an institutional link between victims, digital platforms, the government, and the police. Dating platforms, for their part, must strengthen their security features and grievance mechanisms to safeguard users.
Best Practices
Scammers use data and well-rehearsed playbooks to target individuals based on what they seek, whether love, sex, money, or companionship. Avoid financial transactions prompted through dating apps, such as signing up for third-party platforms or services. Scammers may also create accounts in other people's names, which can then be used to access dating platforms and harm legitimate users. Be vigilant about sharing sensitive information, such as private images, contact details, or addresses, as scammers can use it to threaten you. Stay smart, stay cyber safe.
References
- https://www.hindustantimes.com/htcity/cinema/16yearold-queer-child-pranshu-dies-by-suicide-due-to-bullying-did-we-fail-as-a-society-mental-health-expert-opines-101701172202794.html
- https://www.ijsr.net/archive/v11i6/SR22617213031.pdf
- https://help.grindr.com/hc/en-us/articles/1500009328241-Scam-awareness-guide
- http://meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://mib.gov.in/sites/default/files/2024-02/IT%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20English.pdf

Executive Summary
Mumbai’s Mira–Bhayandar bridge has recently been in the news due to its unusual design. In this context, a photograph is going viral on social media showing a bus seemingly stuck on the bridge. Some users are sharing the image claiming that it is from the Sonpur subdivision in Bihar. However, research by CyberPeace has found that the viral image is not real. The bridge shown in the image is indeed the Mira–Bhayandar bridge, which is under discussion because its design causes it to narrow suddenly from four lanes to two. That said, the bridge is not yet operational, and the viral image showing a bus stuck on it was created using Artificial Intelligence (AI).
Claim
An Instagram user shared the viral image on January 29, 2026, with the caption: “Are Indian taxpayers happy to see that this is funded by their money?” The link, archive link, and screenshot of the post can be seen below.

Fact Check:
To verify the claim, we first conducted a Google Lens reverse image search. This led us to a post shared by X (formerly Twitter) user Manoj Arora on January 29. While the bridge structure in that image matches the viral photo, no bus is visible in the original post. This raised suspicion that the viral image had been digitally manipulated.
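One programmatic way to make this kind of comparison is perceptual hashing, which summarises an image’s visual content so that near-identical photos yield near-identical hashes, while localised edits (such as an inserted bus) widen the gap between them. Below is a minimal sketch using the Pillow and imagehash libraries; the file names viral.jpg and original.jpg are hypothetical stand-ins for local copies of the two images.

```python
# Minimal sketch: comparing a viral image to a suspected original with a
# perceptual hash. Similar scenes produce similar hashes; localised edits
# increase the Hamming distance between them.
from PIL import Image
import imagehash  # pip install imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash overloads '-' as Hamming distance

if __name__ == "__main__":
    # "viral.jpg" and "original.jpg" are hypothetical local copies of the
    # circulating image and the matching post found via reverse image search.
    distance = hash_distance("viral.jpg", "original.jpg")
    print(f"Perceptual hash distance: {distance}")
    # A small distance (e.g. <= 10 for the default 64-bit pHash) suggests the
    # same base photograph, consistent with a manipulated copy rather than a
    # genuinely different scene.
```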

We then ran the viral image through the AI detection tool Hive Moderation, which flagged it as over 99% likely to be AI-generated.

Conclusion
The CyberPeace research confirms that while the Mira–Bhayandar bridge is real and has been in the news due to its design, the viral image showing a bus stuck on the bridge has been created using AI tools. Therefore, the image circulating on social media is misleading.