#FactCheck - AI-Generated Video Falsely Claims Iran Unveiled B-2-Like Drone During War
Executive Summary:
Amid the ongoing war involving the United States, Israel, and Iran, a video clip circulating on social media claims to show Iran unveiling a drone resembling the US B-2 stealth bomber. In the viral clip, an aircraft-like object can be seen emerging from a cave before taking off. Several users are sharing the video with the claim that Iran has deployed a B-2-style drone in the conflict.
However, research by CyberPeace found that the viral video is not real and was generated using artificial intelligence. While the United States has reportedly used B-2 stealth bombers in strikes against Iran during the conflict, the viral clip does not show an actual Iranian drone.
Claim:
X user “Muslim_Voice_Space” posted the video on March 3, 2026, claiming that Iran had rolled out a drone resembling the B-2 bomber for use in the war.

Fact Check:
To verify the claim, we first closely examined the viral video. In the opening moments of the clip, the wing of the alleged drone appears to strike the side of the cave as it exits. Despite this apparent collision, the aircraft continues flying smoothly, with no visible damage. This physically implausible detail raised doubts about the authenticity of the footage.
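For readers who want to replicate this kind of frame-by-frame review, the sketch below pulls still frames from a clip at regular intervals using OpenCV. The filename is hypothetical, and this is just one simple way to set up a manual inspection.

```python
# A minimal sketch: extract still frames from a clip for manual review.
# "viral_clip.mp4" is a hypothetical filename.
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("viral_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Save roughly two frames per second for side-by-side review.
    if frame_idx % max(int(fps // 2), 1) == 0:
        cv2.imwrite(f"frame_{frame_idx:05d}.png", frame)
    frame_idx += 1
cap.release()
```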
We then analyzed the video using the AI detection tool Hive Moderation, which flagged the clip as likely AI-generated.

Further analysis using the Sightengine AI detection tool also suggested that the video was artificially created. The tool estimated a 75% probability that the footage was generated using AI. It also indicated a 70% likelihood that the clip may have been created using Sora, an AI video-generation tool.
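For those curious how such tools are queried programmatically, below is a rough sketch of submitting one extracted frame to an AI-image-detection endpoint. The URL, model name, and response fields follow Sightengine's published image-check pattern but should be treated as assumptions; consult the provider's current documentation before relying on them.

```python
# Hedged sketch: submit one extracted frame to an AI-generation detector.
# Endpoint, model name, and response shape are assumptions based on
# Sightengine's documented image-check API; verify against current docs.
import requests

with open("frame_00000.png", "rb") as f:
    resp = requests.post(
        "https://api.sightengine.com/1.0/check.json",
        files={"media": f},
        data={
            "models": "genai",          # AI-generated-image model (assumed name)
            "api_user": "YOUR_API_USER",
            "api_secret": "YOUR_API_SECRET",
        },
        timeout=30,
    )
result = resp.json()
# Expected (assumed) response shape: {"type": {"ai_generated": 0.97}, ...}
score = result.get("type", {}).get("ai_generated")
print(f"AI-generated probability: {score}")
```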

Conclusion:
The viral video claiming to show an Iranian drone resembling the US B-2 stealth bomber emerging from a cave is not authentic. Analysis indicates that the clip was created using AI tools and is being misleadingly shared in the context of the ongoing conflict.
Related Blogs

Executive Summary:
A video of Prime Minister Narendra Modi is going viral across multiple social media platforms. In the clip, PM Modi is purportedly heard praising Christianity and stating that only Jesus Christ can lead people to heaven. Several users are sharing and commenting on the video, believing it to be genuine. CyberPeace researched the viral claim and found it to be false: the circulating video was created using artificial intelligence (AI).
Claim:
On January 29, 2026, a Facebook user named ‘Khaju Damor’ posted the viral video of PM Modi. The post gained traction, with many users sharing and commenting on it as if it were authentic. (Links and archived versions provided)

Fact Check:
As part of our research, we first closely examined the viral video. Upon careful observation, we noticed several inconsistencies: the Prime Minister’s facial expressions and hand movements appeared unnatural, and the lip-sync and overall visual presentation suggested digital manipulation. To verify this, we analyzed the video with the AI detection tool Hive Moderation, whose analysis indicated a 99% probability that the video was AI-generated.

To independently confirm the findings, we also ran the clip through another detection platform, Undetectable.ai. Its analysis likewise indicated a very high likelihood that the video was created using artificial intelligence.

Conclusion:
Our research confirms that the viral video of Prime Minister Narendra Modi praising Christianity and making the alleged statement about heaven is fake. The clip has been generated using AI tools and does not depict a real statement made by the Prime Minister.

Executive Summary:
A viral video claiming to show the crash site of Air India Flight AI-171 in Ahmedabad has misled many people online. The footage is not from India or any recent crash; it was filmed at Universal Studios Hollywood, on a permanent movie set built to depict a plane crash.

Claim:
A video purportedly showing the wreckage of Air India Flight AI-171 after its crash in Ahmedabad on June 12, 2025, has circulated among social media users. It shows extensive aircraft wreckage, destroyed homes, and what appears to be an emergency response, making it look genuine.

Fact Check:
In our research, we took screenshots from the viral video and ran a reverse image search, which matched visuals from Universal Studios Hollywood. The video is actually of the well-known “War of the Worlds" set there, which features a Boeing 747 crash scene built permanently for Steven Spielberg's 2005 film. The set is dressed with fake smoke, scattered debris, and damaged structures to depict a large-scale disaster. Multiple YouTube videos from past tours (here, here, and here) show the same 747 crash site.


The Universal Studios Hollywood tour includes a visit to a staged crash site featuring a Boeing 747, which has unfortunately been misused in viral posts to spread false information.

During our research, we found that frames from the viral video and the Universal Studios tour footage match exactly, confirming that the clip has no connection to the Ahmedabad incident. A side-by-side comparison makes this clear.
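One reproducible way to do such a side-by-side comparison is perceptual hashing, which assigns visually similar images nearly identical fingerprints even after re-encoding. A minimal sketch using the imagehash library follows; the filenames are hypothetical.

```python
# Compare a frame from the viral clip against a frame from known
# Universal Studios tour footage using perceptual hashing.
from PIL import Image
import imagehash  # pip install imagehash

viral = imagehash.phash(Image.open("viral_frame.png"))
reference = imagehash.phash(Image.open("tour_footage_frame.png"))

# Hamming distance between the two 64-bit hashes: 0 means identical,
# small values (roughly < 10) indicate the same scene despite re-encoding.
distance = viral - reference
print(f"Hash distance: {distance}")
if distance < 10:
    print("Frames very likely show the same scene.")
```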


Conclusion:
The viral video claiming to show the aftermath of the Air India crash in Ahmedabad is false and misleading. It shows a movie set at Universal Studios Hollywood, not a real disaster scene in India. Misinformation like this can create unnecessary panic and confusion in sensitive situations. We urge viewers to trust only verified news and to double-check claims before sharing any content online.
- Claim: Massive explosion and debris shown in viral video after Air India crash.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In a world where Artificial Intelligence (AI) is already changing how content is created and consumed at a breathtaking pace, distinguishing genuine media from false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to spread misinformation, shape public opinion and commit fraud. In response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and shore up digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking for all deep synthesis content: service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These markers can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards let users trace the source and provenance of a piece of content.
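To make the idea of an imperceptible marker concrete, here is a deliberately naive least-significant-bit (LSB) sketch that hides a short ASCII tag in an image's red channel. SynthID's actual technique is not public, and production schemes are far more robust; this toy only illustrates the concept. It assumes numpy and Pillow are installed, and "photo.png" is a hypothetical file.

```python
# Toy imperceptible watermark: hide an ASCII tag in the least significant
# bit of each red-channel pixel. Illustrative only; real systems such as
# SynthID use far more robust, undisclosed techniques.
import numpy as np
from PIL import Image

def embed_tag(img: Image.Image, tag: str) -> Image.Image:
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    px = np.array(img.convert("RGB"))
    red = px[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("image too small to hold the tag")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    px[..., 0] = red.reshape(px.shape[:2])
    return Image.fromarray(px)

def extract_tag(img: Image.Image, n_chars: int) -> str:
    red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = red[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii", errors="replace")

marked = embed_tag(Image.open("photo.png"), "AI-GEN:model-x")
marked.save("marked.png")                         # lossless format preserves LSBs
print(extract_tag(Image.open("marked.png"), 14))  # -> "AI-GEN:model-x"
```

Note that this toy tag survives only lossless formats: re-saving the image as a JPEG would overwrite the least significant bits, which previews the robustness problems discussed next.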
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: skilled adversaries can use cropping, editing, or AI software to remove visible watermarks or strip metadata. The lack of interoperability between different watermarking systems and platforms further hampers their effectiveness. Scalability is also an issue: embedding and verifying watermarks across billions of pieces of online content requires substantial computational effort and consistent policy enforcement across platforms. Researchers are currently exploring solutions such as blockchain-based content authentication and zero-knowledge watermarking, which preserve authenticity without sacrificing privacy. These emerging techniques show promise for overcoming current technical deficiencies and making watermarking more secure.
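As a sketch of the tamper-evidence idea behind such proposals, the snippet below chains edit records together by hashing each record with its predecessor, so altering any past entry invalidates everything after it. This is a toy illustration of the principle, not the C2PA format and not a real blockchain.

```python
# Toy hash chain for content provenance: each record commits to the
# content's hash and to the previous record, so any tampering with
# history breaks verification. Not C2PA and not a real blockchain.
import hashlib
import json

def add_record(chain: list, action: str, content: bytes) -> None:
    record = {
        "action": action,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev_hash"] != prev or rec["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["hash"]
    return True

history: list = []
add_record(history, "created", b"original pixels")
add_record(history, "resized", b"resized pixels")
print(verify_chain(history))   # True
history[0]["action"] = "forged"
print(verify_chain(history))   # False: the chain detects the edit
```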
Challenges in Enforcement
Though agreement on watermarking is growing, enforcing such policies remains a major challenge. Jurisdictional limits prevent global enforceability: a watermarking policy in one nation may not extend to content created or stored in another, particularly in decentralised or anonymous domains. This creates an urgent need for international coordination and worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, a standardised policy is still needed to ensure consistency and accountability across all platforms; voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users may not notice or understand the label. This mirrors the broader misinformation problem: it is not enough to flag fake content; users must also be taught to think critically about the information they consume. Public education campaigns, digital media literacy programmes and watermark labels embedded in user-friendly UI elements are all necessary for this technology to be effective.
Balancing Privacy and Transparency
While watermarking serves digital transparency, it also raises privacy issues. In some instances, watermarking may require embedding metadata that discloses the source or identity of the content producer, which threatens journalists, whistleblowers, activists, and artists who use AI tools for creative or informative purposes. Governments must ensure that watermarking norms do not violate freedom of expression or enable surveillance. The balance can be struck with privacy-preserving watermarking strategies that verify a content item's origin without revealing personally identifiable data; cryptographic "zero-knowledge proofs" may help build watermarking systems that guarantee authentication without undermining user anonymity.
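A full zero-knowledge construction is beyond the scope of a blog post, but a simple hash commitment conveys the selective-disclosure idea: the creator publishes only a commitment, and can later prove authorship by revealing the secret if, and only if, they choose to. The sketch below is a plain commitment scheme, not a true zero-knowledge proof, and the identities and data are hypothetical.

```python
# Toy hash commitment: a creator binds content to their identity without
# revealing it. Revealing (salt, identity) later proves authorship; until
# then the commitment leaks nothing. A commitment scheme, not a real ZKP.
import hashlib
import os

def make_commitment(identity: bytes, content: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + identity + hashlib.sha256(content).digest()).digest()
    return salt, digest  # publish the digest; keep the salt private

def verify_commitment(salt: bytes, identity: bytes, content: bytes, commitment: bytes) -> bool:
    expected = hashlib.sha256(salt + identity + hashlib.sha256(content).digest()).digest()
    return expected == commitment

salt, c = make_commitment(b"journalist@example.org", b"image bytes")
print(verify_commitment(salt, b"journalist@example.org", b"image bytes", c))  # True
```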
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. During the COVID-19 crisis, for example, AI-amplified misinformation about vaccines, treatments and public health interventions significantly affected public behaviour and policy uptake. Watermarked content would have helped distinguish authentic sources from manipulated media, protecting public health efforts.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The C2PA framework, a collaboration involving Adobe, Microsoft and others, embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed with its Imagen text-to-image model, invisibly watermarks AI-generated images in a way designed to survive common edits. The Partnership on AI (PAI) is also taking a leadership role by developing ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can guide governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to adopt watermarking standards consistent with global practice while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media are an essential step toward a safer and more credible digital world. As artificial media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance and accountability grows. Governments, platforms and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, compliant and privacy-friendly. India in particular is at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. By drawing on global lessons, adopting best-in-class watermarking frameworks and promoting public awareness, the nation can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency