#FactCheck: Viral video claims to show Ahmedabad plane crash, but is actually a Hollywood movie set clip
Executive Summary:
A viral video claiming to show the crash site of Air India Flight AI-171 in Ahmedabad has misled many people online. The video is neither from India nor from a recent crash: it was filmed at Universal Studios Hollywood, on a permanent set built to depict a plane crash for a movie.

Claim:
A video purporting to show the wreckage of Air India Flight AI-171 after it crashed in Ahmedabad on June 12, 2025, has circulated widely on social media. The video shows extensive aircraft wreckage, destroyed homes, and a scene reminiscent of an active emergency, making it look genuine.

Fact check:
In our research, we took screenshots from the viral video and ran them through reverse image search, which matched visuals from Universal Studios Hollywood. It became apparent that the video actually shows the famous “War of the Worlds” set at Universal Studios Hollywood. The set features a Boeing 747 crash scene that was permanently constructed for Steven Spielberg's 2005 film. It is dressed with fake smoke, scattered debris, and additional faceless structures built to suggest a larger crisis. Multiple older YouTube videos, here, here, and here, show the Universal Studios Hollywood tour and the Boeing 747 crash site built for the movie.
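Reverse image search works by comparing compact visual fingerprints of images rather than raw pixels. As a minimal illustration of the underlying idea (not the actual tool used in this fact check), the sketch below computes a simple average hash over small grayscale grids and compares them by Hamming distance; the 4×4 grids are hypothetical stand-ins for heavily downscaled video frames:

```python
def average_hash(pixels):
    """Compute a simple average hash: one bit per pixel, set when the
    pixel is brighter than the mean of the (downscaled) image."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest the same scene."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale grids standing in for downscaled frames.
viral_frame = [[200, 190, 50, 40],
               [210, 180, 60, 30],
               [ 90,  80, 70, 60],
               [100, 110, 20, 10]]
# A near-identical frame from tour footage (slight brightness shift).
tour_frame = [[198, 192, 52, 38],
              [208, 178, 62, 28],
              [ 92,  78, 72, 58],
              [102, 112, 22, 12]]

d = hamming_distance(average_hash(viral_frame), average_hash(tour_frame))
print(d)  # → 0: identical fingerprints despite the brightness shift
```

Because the hash depends only on which pixels sit above the frame's own mean brightness, minor compression artifacts or lighting changes leave the fingerprint unchanged, which is why matches survive re-uploads.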


The Universal Studios Hollywood tour includes a visit to a staged crash site featuring a Boeing 747, which has unfortunately been misused in viral posts to spread false information.

During our research, we found that the viral video and the Universal Studios tour footage match exactly, confirming that the video has no connection to the Ahmedabad incident. A side-by-side comparison makes the truth plain.


Conclusion:
The viral video claiming to show the aftermath of the Air India crash in Ahmedabad is entirely false and misleading. It shows a fictitious movie set at Universal Studios Hollywood, not a real disaster scene in India. Spreading misinformation like this can create unnecessary panic and confusion in sensitive situations. We urge viewers to trust only verified news sources and to double-check claims before sharing any content online.
- Claim: Massive explosion and debris shown in viral video after Air India crash.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

“Cybercriminals are unleashing a surprisingly high volume of new threats in this short period of time to take advantage of inadvertent security gaps as organizations are in a rush to ensure business continuity.”
Cyber security firm Fortinet on Monday announced that over the past several weeks, it has been monitoring a significant spike in COVID-19 related threats.
An unprecedented number of unprotected users and devices are now online, with one or two people in every home connecting remotely to work over the internet. Simultaneously, children are at home engaged in remote learning, and entire families are playing multi-player games, chatting with friends, and streaming music and video. The cybersecurity firm’s FortiGuard Labs is observing this perfect storm of opportunity being exploited by cybercriminals, as its Threat Report on the Pandemic highlights:
A surge in Phishing Attacks: The research shows an average of about 600 new phishing campaigns every day. The content is designed either to prey on the fears and concerns of individuals or to pretend to provide essential information on the current pandemic. The phishing attacks range from scams helping individuals deposit their stimulus cheques, to offers of Covid-19 tests, access to Chloroquine and other medicines or medical devices, and helpdesk support for new teleworkers.
Phishing Scams Are Just the Start: While the attacks begin with phishing, the end goal is to steal personal information or even target businesses through teleworkers. The majority of phishing attacks carry malicious payloads, including ransomware, viruses, remote access trojans (RATs) designed to give criminals remote access to endpoint systems, and even RDP (remote desktop protocol) exploits.
A Sudden Spike in Viruses: The first quarter of 2020 documented a 17% increase in viruses for January, a 52% increase for February, and an alarming 131% increase for March, compared to the same period in 2019. The significant rise is mainly attributed to malicious phishing attachments. Multiple sites illegally streaming movies still in theatres secretly deliver malware to anyone who visits. A free game or a free movie, and the attacker is on your network.
Risks for IoT Devices Magnify: With users all connected to the home network, attackers have multiple avenues of attack targeting computers, tablets, gaming and entertainment systems, and even IoT devices such as digital cameras and smart appliances, with the ultimate goal of finding a way back into a corporate network and its valuable digital resources.
Ransomware-like Attacks to Disrupt Business: If a remote worker's device is compromised, it can become a conduit back into the organization’s core network, enabling malware to spread to other remote workers. The resulting business disruption can be just as effective at taking a business offline as ransomware targeting internal network systems. Since helpdesks are now remote, devices infected with ransomware or a virus can incapacitate workers for days while the devices are mailed in for reimaging.
“Though organizations have completed the initial phase of transitioning their entire workforce to remote telework and employees are becoming increasingly comfortable with their new reality, CISOs continue to face new challenges presented by maintaining a secure teleworker business model. From redefining their security baseline, or supporting technology enablement for remote workers, to developing detailed policies for employees to have access to data, organizations must be nimble and adapt quickly to overcome these new problems that are arising”, said Derek Manky, Chief, Security Insights & Global Threat Alliances at Fortinet – Office of CISO.

Artificial Intelligence (AI) provides a varied range of services and continues to invite intrigue and experimentation. It has altered how we create and consume content: specific prompts can now generate desired images, enhancing storytelling and even education. However, as this content can influence public perception, its potential to cause misinformation must also be noted. The realistic nature of these images can make it hard for the untrained eye to discern that they are artificially generated. Because AI works by analysing the data it was previously trained on, its lack of contextual knowledge and the human biases embedded in prompts also come into play. The stakes are higher with subjects such as history, where there is a fine line between content created for mere entertainment and the spread of misinformation when biases and accuracy are left unchecked. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, AI-generated images and videos of various historical moments, presented from the point of view of people who were there, have been circulating all over the internet. Examples include the streets of London during the Black Death in 1300s England and the eruption of Mount Vesuvius at Pompeii. Hogne and Dan, two creators who run the TikTok accounts POV Lab and Time Traveller POV, state that they create such videos because seeing the past through a first-person perspective is an interesting way to bring history back to life while highlighting the cool parts, helping the audience learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period detail. The artists themselves admit their creations are inaccurate, describing them as artistic interpretations rather than fact-checked documentaries.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to "present-ist" bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models rely primarily on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The idea is to spark interest rather than replace genuine historical fact, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Apart from this, chatbots like Hello History and Character.ai, which simulate interactions with historical figures, have also piqued curiosity.
Although such content offers an interesting perspective, one cannot ignore that our inherent biases shape how we perceive the information presented. Dangerous consequences include feeding conspiracy theories and the erasure of facts when information is geared primarily toward garnering attention and providing entertainment. Exposure of such content to an impressionable audience with shorter attention spans compounds the problem. In such cases, information about the sources used for creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their potential to create misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill mandating the labelling of AI-generated images; failure to comply warrants massive fines (up to $38 million or 7% of a company's turnover). The idea is to ensure that creators label their content, helping audiences distinguish artificially created images from genuine ones.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age critical thinking and media literacy among consumers of content is imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove to be helpful.
- AI Transparency and Labelling – Implementing regulations similar to Spain’s bill on labelling content could serve as a useful guide for people who have yet to learn to tell AI-generated content apart from the rest.
- Ethical AI Development – AI developers must prioritize ethical considerations in training using diverse and historically accurate datasets and sources which would minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures early, we can harness AI's potential while safeguarding integrity and trust in the sources used to generate images.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines
Introduction
Empowering today’s youth with the right skills is more crucial than ever in a rapidly evolving digital world. Every year on July 15th, the United Nations marks World Youth Skills Day to emphasise the critical role of skills development in preparing young people for meaningful work and resilient futures. As AI transforms industries and societies, equipping young minds with digital and AI skills is key to fostering security, adaptability, and growth in the years ahead.
Why AI Upskilling is Crucial in Modern Cyber Defence
Security in the digital age remains a complex challenge, regardless of the presence of Artificial Intelligence (AI). It is one of the great modern ironies: a paradox wrapped in code, where the cure and the curse are written in the same language. The very hand that protects the world from cyber threats can just as easily create those threats. Modern implementations of AI must therefore circumvent the threats posed by AI itself and by other advanced technologies. A solid grasp of AI and machine learning mechanisms is no longer optional; it is fundamental to modern cybersecurity. Traditional cybersecurity training programs employ static content, which often becomes outdated and inadequate against new vulnerabilities. AI-powered solutions, such as intrusion detection systems and next-generation firewalls, use behavioural analysis instead of just matching signatures. Nevertheless, AI models themselves are susceptible: malevolent actors can introduce adversarial inputs or tainted data to trick systems into incorrect classifications. According to Cisco's research, data poisoning is a major threat to AI defences.
As threats outpace the current understanding of cybersecurity professionals, a need arises to upskill them in advanced AI technologies so that they can fortify the security of current systems. Two of the most important skills for professionals are AI/ML model auditing and data science. Skilled data scientists can sift through vast logs, from packet captures to user profiles, to detect anomalies, assess vulnerabilities, and anticipate attacks. A Business Insider report puts it well: ‘It takes a good-guy AI to fight a bad-guy AI.’ Generative AI is still quite new; as a result, it both poses fresh security issues and faces security risks such as data exfiltration and prompt injection.
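The anomaly detection described above often starts with something as simple as a statistical baseline. As a toy illustration, not any particular vendor's system, the sketch below flags hours whose failed-login counts deviate sharply from the series mean using a z-score test; the counts are hypothetical:

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations above the mean of the series (a simple z-score test)."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)  # population standard deviation
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, c in enumerate(counts)
            if (c - mean) / stdev > threshold]

# Hypothetical hourly counts of failed logins; hour 5 spikes sharply.
failed_logins = [4, 6, 5, 3, 7, 95, 5, 4]
print(flag_anomalies(failed_logins, threshold=2.0))  # → [5]
```

Real deployments replace the single global baseline with rolling windows and per-user or per-host baselines, but the core idea, modelling "normal" and flagging deviations, is the same skill the article describes.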
Another effective method is Natural Language Processing (NLP), which helps machines process unstructured data such as emails, logs, and chat messages, enabling automated spam detection, sentiment analysis, and threat-context extraction. Security teams skilled in NLP can deploy systems that flag suspicious email patterns, detect malicious content in code reviews, and monitor internal networks for insider threats, all at speeds and scales humans cannot match.
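At its simplest, flagging suspicious email patterns reduces to scoring text against indicator terms. The sketch below is a deliberately minimal illustration of that idea; the term list and weights are hypothetical, and a production NLP system would learn them from labelled data rather than hard-code them:

```python
import re

# Hypothetical indicator terms with weights; a real system would learn
# these from labelled phishing/ham corpora instead of hard-coding them.
SUSPICIOUS_TERMS = {
    "urgent": 2, "verify": 2, "password": 3,
    "account": 1, "click": 1, "suspended": 3,
}

def phishing_score(email_text, threshold=5):
    """Sum the weights of suspicious terms found in the email and
    flag the message when the score reaches the threshold."""
    words = re.findall(r"[a-z]+", email_text.lower())
    score = sum(SUSPICIOUS_TERMS.get(w, 0) for w in words)
    return score, score >= threshold

msg = "URGENT: your account is suspended. Click here to verify your password."
score, flagged = phishing_score(msg)
print(score, flagged)  # → 12 True
```

Even this crude approach catches the classic urgency-plus-credentials pattern; modern systems extend it with statistical classifiers and embeddings, which is precisely the upskilling gap the article points to.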
As noted above, these AI skills are not a mere courtesy; they have become essential in the current landscape. India is not far behind in this mission; it is committed, alongside its western counterparts, to employing emerging technologies in its larger goal of advancement. With quiet confidence, India takes pride in its remarkable capacity to nurture exceptional talent in science and technology, with Indian minds making significant contributions across global arenas.
AI Upskilling in India
As per a news report from March 2025, Jayant Chaudhary, Minister of State, Ministry of Skill Development & Entrepreneurship, highlighted that various schemes under the Skill India Programme (SIP) ensure greater integration of emerging technologies, such as artificial intelligence (AI), cybersecurity, blockchain, and cloud computing, to meet industry demands. The SIP’s parliamentary brochure states that more than 6.15 million beneficiaries had received training as of December 2024. Other schemes that train professionals for roles such as Data Scientist, Business Intelligence Analyst, and Machine Learning Engineer include:
- Pradhan Mantri Kaushal Vikas Yojana 4.0 (PMKVY 4.0)
- Pradhan Mantri National Apprenticeship Promotion Scheme (PM-NAPS)
- Jan Shikshan Sansthan (JSS)
Another report shows how companies operating in India, such as Ernst & Young (EY), recognise both the potential of the Indian workforce and its gaps in emerging technologies, and are leading the way through internal upskilling. In response to the increasing need for AI expertise, EY has established an AI Academy, a program designed to help businesses equip their employees with essential AI capabilities. Using more than 200 real-world AI use cases, the program offers interactive, structured learning that covers everything from basic concepts to sophisticated generative AI capabilities.
To better understand the need for these initiatives, it is worth referring to a report backed by Google.org and the Asian Development Bank, which suggests India is at a turning point in the global use of AI. According to the research, “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” India urgently needs to provide accessible and efficient AI upskilling despite having the largest workforce in the world. The report estimates that by 2030, AI could boost the Asia-Pacific region’s GDP by up to USD 3 trillion, and India, with its young and fast-growing population, is key to that potential.
Conclusion and CyberPeace Resolution
As the world stands at the crossroads of innovation and insecurity, India finds itself uniquely poised, with its vast young population and growing technologies. But to truly safeguard its digital future and harness the promise of AI, the country must think beyond flagship schemes. Imagine classrooms where students learn not just to code but to question algorithms, workplaces where AI training is as routine as onboarding.
India’s journey towards digital resilience is not just about mastering technology but about cultivating curiosity, responsibility, and trust. CyberPeace is committed to this future and resolute in the collective pursuit of an ethically secure digital world. CyberPeace resolves to be an active catalyst in AI upskilling across India. We commit to launching specialised training modules on AI, cybersecurity, and digital ethics tailored for students and professionals. By working with educational institutions, skilling initiatives, and industry stakeholders, we seek to close the AI literacy gap and develop a workforce that is both morally aware and technologically proficient.
References
- https://www.helpnetsecurity.com/2025/03/07/ai-gamified-simulations-cybersecurity/
- https://www.businessinsider.com/artificial-intelligence-cybersecurity-large-language-model-threats-solutions-2025-5
- https://apacnewsnetwork.com/2025/03/ai-5g-skills-boost-skill-india-targets-industry-demands-over-6-15-million-beneficiaries-trained-till-2024/
- https://indianexpress.com/article/technology/artificial-intelligence/india-must-upskill-fast-to-keep-up-with-ai-jobs-says-new-report-10107821/