#FactCheck: Viral video claims to show Ahmedabad plane crash, but is actually a Hollywood movie set clip
Executive Summary:
A viral video claiming to show the crash site of Air India Flight AI-171 in Ahmedabad has misled many people online. The video is not from India or a recent crash; it was filmed at Universal Studios Hollywood, on a permanent set built to depict a plane crash for a movie.

Claim:
A video purportedly showing the wreckage of Air India Flight AI-171 after it crashed in Ahmedabad on June 12, 2025, has circulated among social media users. The video shows extensive aircraft wreckage, destroyed homes, and an emergency-like scene, making it look genuine.

Fact check:
In our research, we took screenshots from the viral video and ran them through reverse image search, which matched visuals from Universal Studios Hollywood. It became apparent that the video actually shows the well-known “War of the Worlds" set at Universal Studios Hollywood, a permanent Boeing 747 crash scene built in 2005 for Steven Spielberg's film. The set is dressed with fake smoke, scattered debris, and façade structures designed to suggest a larger crisis. Multiple older YouTube videos (here, here, and here) show the Universal Studios Hollywood tour and the Boeing 747 crash site built for the movie.
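Reverse image search engines commonly rely on perceptual hashing to match near-identical frames. As a purely illustrative sketch (using toy pixel grids rather than a real image decoder), a difference hash assigns one bit per horizontal brightness gradient, so near-duplicate frames yield small Hamming distances while unrelated frames do not:

```python
# Minimal difference-hash (dHash) sketch for comparing video frames.
# A frame is modeled here as a 2D list of grayscale values; a real
# pipeline would decode and downscale frames with an image library.

def dhash(pixels):
    """Hash a grayscale grid: 1 bit per horizontal brightness gradient."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; small distances suggest matching frames."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy 4x5 "frames": frame_b is frame_a with slight brightness noise
# (as from re-encoding or compression); frame_c is unrelated.
frame_a = [[10, 20, 30, 40, 50],
           [50, 40, 30, 20, 10],
           [15, 15, 60, 60, 15],
           [80, 10, 80, 10, 80]]
frame_b = [[11, 21, 29, 41, 52],
           [49, 41, 29, 21, 11],
           [16, 14, 61, 59, 16],
           [79, 11, 81, 9, 81]]
frame_c = [[90, 10, 90, 10, 90],
           [10, 90, 10, 90, 10],
           [90, 10, 90, 10, 90],
           [10, 90, 10, 90, 10]]

near = hamming(dhash(frame_a), dhash(frame_b))
far = hamming(dhash(frame_a), dhash(frame_c))
print(near, far)  # prints "0 9": the noisy copy matches, the unrelated frame does not
```

The gradient-based encoding is what makes the comparison robust to minor brightness changes, which is why near-duplicate detection works even on recompressed viral copies of a video.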


The Universal Studios Hollywood tour includes a visit to a staged crash site featuring a Boeing 747, which has unfortunately been misused in viral posts to spread false information.

During our research, we found that the viral video and the Universal Studios tour footage match exactly, confirming that the video has no connection to the Ahmedabad incident. A side-by-side comparison makes the truth plain.


Conclusion:
The viral video claiming to show the aftermath of the Air India crash in Ahmedabad is entirely misleading and false. It shows a fictitious movie set at Universal Studios Hollywood, not a real disaster scene in India. Spreading such misinformation can create unnecessary panic and confusion in sensitive situations. We urge viewers to trust only verified news and to double-check claims before sharing any content online.
- Claim: Massive explosion and debris shown in viral video after Air India crash.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
In today's relentless current of information, where social media is oftentimes both the stage and the playwright, the line between reality and spectacle can become distressingly blurry. In such a virtual Pantheon, the conflation of truth and fiction has recently surfaced in a particularly contentious instance. The central figure is Poonam Pandey, an entertainment personality known for transgressing traditional contours of celebrity boldness. Pandey found herself ensnared in a narrative of her own orchestration—a grim hoax purporting she had succumbed to cervical cancer. This deceptive foray, rather than awakening public consciousness as intended, spiralled into an ominous fable about the malignant spread of misinformation and the profound moral dilemmas it engenders.
The Deception
The tapestry of this event was woven with threads of tragedy and deception, framing Pandey both as the tragic hero and the ill-fated architect of a spectacle that unfolded with a haunting familiarity evocative of ancient Greek dramas. The monumental pillar of social media, on what seemed to be an ordinary day, was shattered by the startling declaration of Pandey's untimely passing. The statement, as bereft of nuance as it was devastating, proclaimed: 'We are deeply grieved to announce the loss of our cherished Poonam to cervical cancer.' The emotional pulse of the Indian Film Industry was jolted; waves of homage inundated the digital space, each tribute a poignant echo of the shock that rippled through her fanbase. Yet the crux of the matter had yet to be unveiled.
As the world grappled with this news, the scenario took an unforeseen detour. Poonam Pandey made a re-entrance onto the world stage, alive, revealing her alleged demise to be nothing more than a macabre masquerade. The public's reaction to this revelation was a stratified symphony of emotions—indignation mingled with disbelief, with an underlying crescendo of betrayal. Pandey's defense postured her act as a last resort to draw attention to the silent yet pervasive threat of cervical cancer. In the ensuing mire of reactions, an inescapable quandary emerged: is it ever permissible to employ deceit for the sake of presumed publicity?
The Chaos
Satyajeet Tambe, an esteemed Maharashtra legislator, emerged amidst the churning chaos as a paragon of principled reason. Advocating that such mendacious stunts, playing the chords of public emotion and adulterating truth, should be met with legal repercussions, Tambe called for judicious action against Pandey. His imploration resonated with the necessity of integrity in the public domain, stating, 'The announcement of an influencer/model succumbing to cervical cancer should not be wielded as a tool for awareness.' His pronouncement sent reverberations through the collective conscience, echoing the need for accountability in the face of such transgressions.
Repercussion
The All Indian Cine Workers Association, a custodian of the film industry's values, also voiced its reproach. They urged for an FIR to be lodged against Poonam Pandey, underlining their sentiments with disappointment and a keen sense of betrayal. Within their condemnation lay a profound recognition of the elevated emotional investment inherent in their industry—an industry where the reverence for life and the abhorrence of deceit intertwine, making the cultivation of such lowly stunts anathema.
This spectacle, while unique in the temerity of its execution, mirrors the broader pathological wave of misinformation that corrodes the foundations of our digital era: the malady of fake news. Defined plainly, fake news is information crafted specifically to deceive, a form of communication that is not merely slanted but entirely devoid of authenticity, manufactured with nefarious intent. A protean adversary, fake news adeptly masquerades as trustworthy news, ensnaring the unsuspecting in its tendrils. Its purveyors span a spectrum—from shadowy figures to ostensibly benign social media accounts—all contributing to a dystopian fabric where truth is persistently imperilled.
The conjurers of these illusions are, in a sense, cunning illusionists ensconced behind curtains of anonymity or masquerading under a cloak of transparency. They craft elaborate illusions devoid of truth, but dripping with sufficient plausibility to ensnare those who yearn for simplicity in an increasingly complex world. Destabilising forces, such as hyper-partisan media outlets, regurgitate a concoction of fabricated 'facts' and distortions, deliberately smudging the once-clear line between empirical truth and partisan fabrication.
The Aftermath
The Poonam Pandey episode stands as a harrowing beacon of the ethical abyss we face. It compels us to confront the irony of utilising falsity to raise awareness for laudable causes and considers the ramifications for public figures influencing the dissemination of information. The tempest around this event demonstrates the potent gravitational pull of information and the overarching need for the conscientious stewardship of its power.
Yet, as we sail through the murky waters of the digital expanse, where the allure of sensationalism and clickbait headlines is ever-present, our vigilance must not wane. The imperative of truth cannot come at the altar of awareness or sensationalism. The sanctity of fact anchors our understanding of reality; devoid of it, we are adrift in an ocean of confusion and misinformation.
In the dust settled after the Poonam Pandey debacle, the contours of a new discourse have emerged, harboring vital interrogations. How do we balance the drive for poignant awareness initiatives against the cardinal principle of truth? What mechanisms can ensure that health campaigns and their noble aspirations are not tainted by the allure of deception? Addressing these queries is not a solitary task for policymakers or influencers but, indeed, a collective societal responsibility that will define our cultural ethics and the legacy we wish to preserve.
Conclusion
As we contemplate the broader implications of this incident, let us not allow its sensational nature to eclipse the very real and pressing issue of cervical cancer—a condition that, beyond the glare of controversy, continues to shadow lives with its lethal silence. Instead, let our focus pivot towards tangible, truth-driven efforts aimed at education and empowerment. Truth, after all, is the beacon that dispels the murky shadows of ignorance and guides us toward enlightenment and healing.
References
- https://www.hindustantimes.com/india-news/poonam-pandey-in-trouble-as-maharashtra-politician-seeks-case-for-faking-her-death-101707005742992.html
- https://www.nagpurtoday.in/state-mlc-tambe-demands-police-action-against-poonam-pandey-for-faking-her-death/02051417

Introduction
Misinformation is, at its most basic, incorrect or misleading information; it may or may not involve malicious intent, and it includes inaccurate, incomplete, misleading, or false information as well as selective facts or half-truths. The main challenge in dealing with misinformation is defining it and distinguishing it from legitimate content. This complexity arises from the rapid evolution and propagation of information on digital platforms. Additionally, balancing the fundamental right to freedom of speech and expression against content regulation by state actors poses a significant challenge, requiring careful consideration to avoid censorship while effectively combating harmful misinformation.
Acknowledging the severe consequences of misinformation and the critical need to combat it, the Bharatiya Nyaya Sanhita (BNS), 2023 introduces key measures to address misinformation in India. These provisions, introduced under India's new criminal laws, penalise the deliberate creation, distribution, or publication of inaccurate information. Previously missing from the IPC, these sections offer an additional legal resource to counter the proliferation of falsehoods, complementing existing laws targeting the same issue.
Section 353 of the BNS on Statements Conducing to Public Mischief criminalises making, publishing, or circulating statements, false information, rumours, or reports, including through electronic means, with the intent or likelihood of causing various harmful outcomes.
This section thus brings misinformation into its ambit, since misinformation has been traditionally used to induce public fear or alarm that may lead to offences against the State or public tranquillity or inciting one class or community to commit offences against another. The section also penalizes the promotion of enmity, hatred, or ill will among different religious, racial, linguistic, or regional groups.
The BNS prescribes imprisonment for up to three years, a fine, or both for offences under Section 353. Notably, a longer term of up to five years, along with a fine, is prescribed for such offences committed in places of worship or during religious ceremonies. The only exception under this section is for unsuspecting individuals who, believing the misinformation to be true, spread it without any ill intent. However, this exception may blunt the provision's effectiveness: the offence is hard to trace at the outset, and the exception leaves room for individuals to claim protection with no mechanism to verify their intent.
The BNS also regulates misinformation through Section 197(1)(d) on imputations and assertions prejudicial to national integration. Under this provision, anyone who makes or publishes false or misleading information, whether by spoken or written words, signs, visible representations, or electronic communication, that jeopardises the sovereignty, unity, integrity, or security of India faces imprisonment for up to three years, a fine, or both; if the offence occurs in a place of worship or during religious ceremonies, the punishment increases to imprisonment for up to five years and may include a fine. Additionally, Section 212(a) and (b) penalises furnishing false information: a person legally obligated to provide information to a public servant who furnishes information they know, or have reason to believe, is false faces imprisonment for up to six months, a fine of up to five thousand rupees, or both. If the false information pertains to the commission or prevention of an offence, or the apprehension of an offender, the punishment increases to imprisonment for up to two years, a fine, or both.
Enforcement Mechanisms: CyberPeace Policy Wing Outlook
To ensure the effective enforcement of these provisions, coordination between the key stakeholders, i.e., the law enforcement agencies, digital platforms, and judicial oversight is essential. Law enforcement agencies must utilize technology such as data analytics and digital forensics for tracking and identifying the origins of false information. This technological capability is crucial for pinpointing the sources and preventing the further spread of misinformation. Simultaneously, digital platforms associated with misinformation content are required to implement robust monitoring and reporting mechanisms to detect and address the generated misleading content proactively. A supporting oversight by judicial bodies plays a critical role in ensuring that enforcement actions are conducted fairly and in line with legal standards. It helps maintain a balance between addressing misinformation and upholding fundamental rights such as freedom of speech. The success of the BNS in addressing these challenges will depend on the effective integration of these mechanisms and ongoing adaptation to the evolving digital landscape.
Resources:
- Bharatiya Nyaya Sanhita, 2023 https://www.mha.gov.in/sites/default/files/250883_english_01042024.pdf
- https://www.foxmandal.in/changes-brought-forth-by-the-bharatiya-nyaya-sanhita-2023/
- https://economictimes.indiatimes.com/news/india/spreading-fake-news-could-land-people-in-jail-for-three-years-under-new-bharatiya-nyaya-sanhita-bill/articleshow/102669105.cms?from=mdr

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot Gemini to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained reliance on AI.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fuelled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
AI algorithms sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses. These failures range from the harmless, such as a wrong answer to a trivia question, to the critical, such as damaging, incorrect legal advice.
A significant source of hallucination in machine learning models is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect these biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate an AI model's output by tweaking the input data in a subtle manner.
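To make the adversarial-attack idea concrete, here is a minimal, purely illustrative sketch: a toy linear classifier whose prediction flips when each input feature is nudged by a small step against the sign of its weight. The model, weights, and inputs are hypothetical examples, not drawn from any real system:

```python
# Sketch of an adversarial perturbation against a toy linear classifier.
# All numbers below are illustrative assumptions for the example.

def predict(weights, features):
    """Linear score: positive -> class 1, non-positive -> class 0."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# Hypothetical learned weights and an input near the decision boundary.
weights = [0.9, -0.5, 0.3]
x = [0.3, 0.5, 0.2]  # score = 0.27 - 0.25 + 0.06 = 0.08 -> class 1

# Attack on a linear model: shift each feature by a small step epsilon
# against the sign of its weight, which maximally lowers the score.
epsilon = 0.1
x_adv = [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(x, weights)]
# x_adv = [0.2, 0.6, 0.1]; score = 0.18 - 0.30 + 0.03 = -0.09 -> class 0

print(predict(weights, x))      # prints 1: original input
print(predict(weights, x_adv))  # prints 0: subtly perturbed input
```

The same principle underlies gradient-based attacks on deep models: the perturbation is small enough to look innocuous to a human, yet it is aimed precisely where the model is most sensitive.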
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based, transparent approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards, reflecting the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI's capabilities and limitations; transparent communication is key.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation, fostering trust and accountability.
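As one purely illustrative way to operationalise the audit and accountability measures above, the following sketch logs every AI exchange and flags low-confidence answers for human review. The `confidence` field and the review threshold are assumptions made for the example, not a standard API of any real AI system:

```python
# Hedged sketch of an audit trail for AI responses: every output is
# logged, and low-confidence answers are flagged for human review.
from datetime import datetime, timezone

AUDIT_LOG = []
REVIEW_THRESHOLD = 0.7  # assumed cutoff below which a human must review

def audit_response(prompt, answer, confidence):
    """Record the exchange and decide whether it needs human review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": answer,
        "confidence": confidence,
        "needs_review": confidence < REVIEW_THRESHOLD,
    }
    AUDIT_LOG.append(entry)
    return entry

# Illustrative exchanges: a confident answer passes, a shaky one is flagged.
high = audit_response("Capital of France?", "Paris", 0.98)
low = audit_response("Dosage for drug X?", "Take 500mg", 0.41)

print(high["needs_review"], low["needs_review"])  # prints "False True"
flagged = [e for e in AUDIT_LOG if e["needs_review"]]
print(len(flagged))  # prints 1
```

A persistent log of this shape is what makes after-the-fact audits and liability claims tractable: it records what was asked, what was answered, and whether the system itself judged the answer reliable.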
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth of AI offers immense opportunities, but development must proceed responsibly. Overregulation can stifle innovation; on the other hand, a lax approach could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important: together they can establish guidelines, promote transparency, and create liability mechanisms. Regular audits and user education can build trust in AI systems. Furthermore, policymakers must prioritise user safety and trust without hindering creativity when crafting regulatory policies.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21