#FactCheck: Misleading Clip of Nepal Crash Shared as Air India’s AI-171 Ahmedabad Accident
Executive Summary:
A viral video circulating on social media platforms claims to show the final moments inside the cabin of the Air India flight that crashed near Ahmedabad on June 12, 2025. The claim is false: upon further research, the footage was found to originate from the Yeti Airlines Flight 691 crash that occurred in Pokhara, Nepal, on January 15, 2023. The full details follow in the report below.

Claim:
Viral videos circulating on social media claim to show the final moments inside Air India flight AI-171 before it crashed near Ahmedabad on June 12, 2025. The footage appears to have been recorded by a passenger during the flight and is being shared as real-time visuals from the recent tragedy. Many users have believed the clip to be genuine and linked it directly to the Air India incident.


Fact Check:
To verify the viral video said to show the final moments of Air India flight AI-171, which crashed near Ahmedabad on 12 June 2025, we carried out a comprehensive reverse image search and keyframe analysis. The results show that the footage dates back to January 2023 and was recorded aboard Yeti Airlines Flight 691, which crashed in Pokhara, Nepal. The cabin and passenger details in the viral clip match the original livestream made by a passenger aboard the Nepal flight, confirming that the video is being reused out of context.
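For readers interested in how such checks are done, the keyframe step can be partly automated. The sketch below is a minimal illustration only, assuming Python with OpenCV installed and a hypothetical local copy of the clip saved as viral_clip.mp4; it samples one frame every couple of seconds so that each keyframe can then be uploaded to a reverse image search engine such as Google Lens, TinEye, or Yandex Images.

```python
# Minimal sketch: sample keyframes from a video for manual reverse image search.
# Assumes OpenCV (pip install opencv-python); the file name below is hypothetical.
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical local copy of the viral clip
FRAME_INTERVAL_SEC = 2          # sample one frame every 2 seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
step = max(1, int(fps * FRAME_INTERVAL_SEC))

frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:                   # end of video or unreadable file
        break
    if frame_idx % step == 0:
        # Each saved keyframe can be uploaded to a reverse image search engine.
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} keyframes for reverse image search")
```

Matching the extracted frames against earlier news footage and the original 2023 livestream remains a manual judgement call; the script only speeds up the collection of comparable stills.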

Moreover, well-respected and reliable news organisations, including the New York Post and NDTV, have published reports confirming that the video originated from the 2023 Nepal plane crash and has no relation to the recent Air India incident. The Press Information Bureau (PIB) also released a clarification dismissing the video as disinformation. Earlier reports, the visual evidence, and reverse-search verification all agree that the viral video is falsely attributed to the AI-171 tragedy.


Conclusion:
The viral footage does not show the AI-171 crash near Ahmedabad on 12 June 2025. It is an unrelated, previously recorded livestream from the January 2023 Yeti Airlines crash in Pokhara, Nepal, falsely repurposed as breaking news. It is essential to rely on verified and credible news agencies and to refer to official investigation reports when discussing such sensitive events.
- Claim: A dramatic clip of passengers inside a crashing plane is being falsely linked to the recent Air India tragedy in Ahmedabad.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
Cyber-attacks are a threat in the digital world that is not confined to any single country and can significantly disrupt global movement, commerce, and international relations. This was experienced first-hand when a cyber-attack hit Heathrow, the busiest airport in Europe, throwing its electronic check-in and baggage systems into chaos. The disruption was not limited to Heathrow: airports across Europe, including Brussels, Berlin, and Dublin, experienced delays and had to conduct manual check-ins for some flights, underscoring just how interconnected today's aviation world is. Though Heathrow assured passengers that the "vast majority of flights" would operate, hundreds of flights were delayed or postponed for hours as passengers stood in queues and flying schedules across Europe were also negatively impacted.
The Anatomy of the Attack
The attack targeted the Muse software from Collins Aerospace, a platform built to allow multiple airlines to share check-in desks and boarding gates. The disruption, initially perceived as a technical glitch, soon turned into a logistical nightmare, with airlines relying on Muse forced into painstaking manual steps: hand-tagging luggage, verifying boarding passes over the phone, and boarding passengers manually. While British Airways managed to revert to a backup system, most other carriers at Heathrow and partner airports elsewhere in Europe had to resort to improvised manual workarounds.
The brunt was borne by passengers. Stories emerged of travellers stranded on the tarmac, elderly passengers left without the assistance they needed, and families missing important connections. It served as a reminder that in aviation, where schedules interlock tightly across borders, even a localised system failure can snowball into a continental-level crisis.
Cybersecurity Meets Aviation Infrastructure
In the last two decades, aviation has become one of the most digitally dependent industries in the world. From booking systems and baggage handling to navigation and air traffic control, digital systems are the invisible scaffold on which flight operations rest. Though this digitalization has increased the scale of operations and enhanced efficiency, it has also opened many avenues for cyber threats. Attackers increasingly realize that targeting aviation is not just about money but about leverage: interfering with the check-in system of a major hub like Heathrow causes more than financial disruption; it causes panic and makes headlines, which is precisely what makes it attractive to criminal gangs and state-sponsored threat actors.
The Heathrow incident echoes the worldwide IT outage of July 2024, when a botched CrowdStrike update grounded flights around the world. Both demonstrate the brittleness of digital dependencies in aviation, where a single point of failure can trigger ripple effects spanning multiple countries. Unlike conventional cyber incidents contained within corporate networks, cyber-attacks on aviation spill into the public sphere in real time, disrupting millions of lives.
Response and Coordination
Heathrow Airport initially added extra employees to assist with manual check-in and told passengers to check flight statuses before travelling. The UK's National Cyber Security Centre (NCSC) worked with Collins Aerospace, the Department for Transport, and law enforcement agencies to investigate the extent and source of the breach. Meanwhile, the European Commission said it was "closely following the development" of the cyber incident while assuring passengers that there was no evidence of a "widespread or serious" breach.
For passengers, the reality was quite different. Massive queues, bewildering announcements, and uncertainty over departure times created an atmosphere of chaos. The dissonance between official reassurances and what passengers actually experienced needs to be resolved. During such incidents, communication flow matters as much as technical restoration in retaining public trust.
Attribution and the Shadow of Ransomware
As with many cyber-attacks, questions of attribution arose almost immediately. Rumours of hackers allegedly working for the Kremlin began circulating within hours, though cybersecurity experts justifiably advise against drawing conclusions hastily. Ransomware gangs seeking extortion remain the most likely culprits, while state actors cannot be ruled out, especially considering recent Russian military activity in and around European airspace. Meanwhile, Collins Aerospace has declined to comment on the attack, its precise nature, or its origin, underscoring the inherent difficulty of cyber attribution.
What is clear is the leverage and money such attacks can yield. In previous ransomware attacks against critical infrastructure, cybercriminal gangs have extorted millions of dollars from their victims. In aviation, the stakes grow exponentially: not only money but national security, diplomatic relations, and human safety.
Broader Implications for Aviation Cybersecurity
This incident raises several core resilience issues within aviation systems. Traditionally, airports and airlines placed a premium on physical security; today, digital resilience is equally important. Systems such as Muse, which bind multiple airlines into shared infrastructure, offer efficiency but also concentrate risk: a cyber disruption in one place can cascade across dozens of carriers and multiple airports, amplifying the scale of the disruption.
The case also makes redundancy and contingency planning an urgent concern. While British Airways was able to fall back on backup systems, most other airlines could not claim that advantage. Digital redundancies, whether parallel systems, isolated backups, or AI-driven incident response frameworks, should be built into aviation as standard practice, and soon.
At the policy level, the incident underscores the need for international collaboration. Aviation is inherently transnational, and cyber incidents in this domain cannot be handled by national agencies alone. Eurocontrol, the European Commission, and cross-border cybersecurity task forces must spearhead efforts to ensure aviation-wide resilience.
Human Stories Amid a Digital Crisis
Beyond the technical jargon and policy responses, the human stories from Heathrow had perhaps the greatest impact. Passengers spoke of hours spent queuing, of missed funerals and connections, and of being hungry and exhausted as they waited for their flights. For many, the cyber-attack was no mere headline; it was a lived reality of disruption.
These stories reflect the fact that cybersecurity is not an abstract concern; it touches people's lives. In critical sectors such as aviation, one hour of disruption means missed connections for passengers, lost revenue for airlines, and immense emotional stress. Crisis management must therefore encompass not only technical recovery but also passenger care, communication, and support on the ground.
Conclusion
The cybersecurity crisis at Heathrow and other European airports underscores the threat that cyber disruption poses to modern aviation. Growing connectivity in airport processes means that any cyber disruption, no matter how small, can affect schedules regionally or across continents and even threaten lives. The episode confirms a few things: resilient solutions must prioritise redundancy, not just efficiency; international networking and collaboration are paramount; and communicating with the travelling public is just as important as, if not more important than, the technical recovery process.
As governments, airlines, and technology providers analyse the disruption, the question is no longer whether aviation will face cyber threats, but how well prepared it will be to defend against them. The Heathrow crisis is a reminder that the stakes of cybersecurity are not just data breaches or stolen money but the very systems that keep global mobility in motion. The aviation industry is now being tested to turn this disruption into an opportunity to fortify its digital defences and prepare for the next, inevitable one.
References
- https://www.bbc.com/news/articles/c3drpgv33pxo
- https://www.theguardian.com/business/2025/sep/21/delays-continue-at-heathrow-brussels-and-berlin-airports-after-alleged-cyber-attack
- https://www.reuters.com/business/aerospace-defense/eu-agency-says-third-party-ransomware-behind-airport-disruptions-2025-09-22/
Introduction
The rise of misinformation, disinformation, and synthetic media content on the internet and social media platforms has raised serious concerns, emphasising the need for responsible use of social media to maintain information accuracy and combat misinformation incidents. With online misinformation rampant all over the world, the World Economic Forum's 2024 Global Risks Report notably ranks India among the countries at highest risk of mis/disinformation.
The widespread online misinformation on social media platforms necessitates a joint effort between tech/social media platforms and the government to counter such incidents. The Indian government is actively seeking to collaborate with tech/social media platforms to foster a safe and trustworthy digital environment and to ensure compliance with intermediary rules and regulations. The Ministry of Information and Broadcasting has used 'extraordinary powers' to block certain YouTube channels and X (Twitter) and Facebook accounts allegedly used to spread harmful misinformation. The government has issued advisories regulating deepfakes and misinformation, and social media platforms have initiated algorithmic and technical improvements to counter misinformation and secure the information landscape.
Efforts by the Government and Social Media Platforms to Combat Misinformation
- Advisory regulating AI, deepfake and misinformation
The Ministry of Electronics and Information Technology (MeitY) issued a revised advisory on 15th March 2024, in supersession of the advisory issued on 1st March 2024. The latest advisory specifies that platforms should inform all users about the consequences of dealing with unlawful information, including disabling access, removing non-compliant information, suspending or terminating the user's account, and punishment under applicable law. The advisory also requires synthetically created content to be identified across formats and instructs platforms to employ labels, unique identifiers, or metadata to ensure transparency.
- Rules related to content regulation
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (updated as on 6.4.2023) have been enacted under the IT Act, 2000. These rules place specific obligations on intermediaries regarding the kinds of information that may be hosted, displayed, uploaded, published, transmitted, stored or shared. The rules also require platforms to establish a grievance redressal mechanism and to remove unlawful content within stipulated time frames.
- Counteracting misinformation during Indian elections 2024
To counter misinformation during the 2024 Indian general elections, the government and social media platforms worked to protect electoral integrity from the threat of mis/disinformation. The Election Commission of India (ECI) launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process. The ECI also collaborated with Google to make critical voting information easy to find on Google Search and YouTube. In this way, Google supported the 2024 Indian general election by providing high-quality information to voters and helping people navigate AI-generated content, connecting voters to helpful information through product features that surface data from trusted institutions. YouTube showed election information panels featuring content from authoritative sources.
- YouTube and X (Twitter) new ‘Notes Feature’
- Notes Feature on YouTube: YouTube is testing an experimental feature that allows users to add notes to provide relevant, timely, and easy-to-understand context for videos. This initiative builds on previous products that display helpful information alongside videos, such as information panels and disclosure requirements when content is altered or synthetic. YouTube has clarified that the pilot will initially be available on mobile devices in the U.S. and in English. During this test phase, viewers, participants, and creators are invited to give feedback on the quality of the notes.
- Community Notes feature on X: Community Notes on X aims to improve the understanding of potentially misleading posts by allowing users to add context to them. Contributors can leave notes on any post, and if enough people rate a note as helpful, it is publicly displayed. The algorithm is open source and publicly available on GitHub, allowing anyone to audit, analyze, or suggest improvements. Community Notes do not represent X's viewpoint and cannot be edited or modified by X's teams. A post with a Community Note is not labelled, removed, or otherwise actioned by X unless it violates the X Rules, Terms of Service, or Privacy Policy. Failure to abide by these rules can result in losing access to Community Notes and/or other remediations. Users can report non-compliant notes by selecting the menu on a note and choosing 'Report', or by using the provided form. A simplified, illustrative sketch of the note-display idea follows below.
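The open-source scoring algorithm that X publishes on GitHub uses a bridging-based matrix factorisation model, which is considerably more involved than a simple vote count. Purely as an illustration of the underlying idea, that a note is shown publicly only once enough raters, including raters who usually disagree, find it helpful, the toy Python sketch below uses hypothetical names and thresholds and is not the actual algorithm.

```python
# Toy illustration only: NOT the real Community Notes algorithm, which is an
# open-source bridging/matrix-factorisation model. Names and thresholds here
# are hypothetical, chosen purely to show the "enough helpful ratings from
# differing perspectives" idea.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_group: str   # hypothetical viewpoint label, e.g. "A" or "B"
    helpful: bool      # did this rater mark the note as helpful?

def note_is_public(ratings: list[Rating],
                   min_ratings: int = 5,
                   min_helpful_share: float = 0.8) -> bool:
    """Return True if the note would be displayed in this simplified model."""
    if len(ratings) < min_ratings:
        return False
    helpful_share = sum(r.helpful for r in ratings) / len(ratings)
    # Require helpful ratings from more than one group, echoing the
    # "bridging" intuition behind the real algorithm.
    helpful_groups = {r.rater_group for r in ratings if r.helpful}
    return helpful_share >= min_helpful_share and len(helpful_groups) >= 2

# Example: helpful ratings from two different viewpoint groups -> note shown.
sample = [Rating("A", True)] * 3 + [Rating("B", True)] * 2
print(note_is_public(sample))  # True in this toy model
```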
CyberPeace Policy Recommendations
Countering widespread online misinformation on social media platforms requires a multipronged approach involving joint efforts from different stakeholders. Platforms should invest in state-of-the-art algorithms and technology to detect and flag suspected misleading information, establish trustworthy fact-checking protocols, and collaborate with expert fact-checking groups. The government should encourage campaigns, seminars, and other educational initiatives to increase public awareness and digital literacy about the risks and impacts of mis/disinformation. Netizens should be empowered with the skills to distinguish factual from misleading information and to navigate the digital information age successfully. Joint efforts by government authorities, tech companies, and expert cybersecurity organisations are vital to promoting a secure and honest online information landscape and countering the spread of mis/disinformation. Platforms must encourage users to maintain appropriate online conduct and to abide by their terms & conditions and community guidelines. Encouraging a culture of truth and integrity on the internet, honouring differing points of view, and confirming facts all help to create a more reliable and information-resilient environment.
References:
- https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.youtube/news-and-events/new-ways-to-offer-viewers-more-context/
- https://help.x.com/en/using-x/community-notes
Introduction
Misinformation poses a significant challenge to public health policymaking since it undermines efforts to promote effective health interventions and protect public well-being. The spread of inaccurate information, particularly through online channels such as social media and internet platforms, further complicates the decision-making process for policymakers since it perpetuates public confusion and distrust. This misinformation can lead to resistance against health initiatives, such as vaccination programs, and fuels scepticism towards scientifically-backed health guidelines.
Before the COVID-19 pandemic, misinformation surrounding healthcare largely concerned the effects of alcohol and tobacco consumption, marijuana use, eating habits, physical exercise and the like. However, there has been a marked shift in the years since. One example is the outcry against palm oil in 2024: an ingredient prevalent in numerous food and cosmetic products, it came under the scanner after claims that palmitic acid, which is present in palm oil, is detrimental to health. However, scientific research by reputable institutions globally established that there is no cause for concern regarding the health risks posed by palmitic acid. Such trends and commentaries tend to create a parallel unscientific discourse that can shape not only individual choices but also public opinion and, as a result, market developments and policy conversations.
A prevailing narrative during the worst of the Covid-19 pandemic was that the virus had been engineered to control society and boost hospital profits. The extensive misinformation surrounding COVID-19 and its management and care increased vaccine hesitancy amongst people worldwide. It is worth noting that vaccine hesitancy has been a consistent trend historically; the World Health Organisation flagged vaccine hesitancy as one of the main threats to global health, and there have been other instances where a majority of the population refused to get vaccinated anticipating unverified, long-lasting side effects. For example, research from 2016 observed a significant level of public skepticism regarding the development and approval process of the Zika vaccine in Africa. Further studies emphasised the urgent need to disseminate accurate information about the Zika virus on online platforms to help curb the spread of the pandemic.
In India during the COVID-19 pandemic, despite multiple official advisories, notifications and guidelines issued by the government and the ICMR, many people remained opposed to vaccination, which contributed to inflated mortality rates within the country. Vaccine hesitancy was compounded by anti-vaccination celebrities who claimed that vaccines were dangerous and contributed in large part to the conspiracy theories doing the rounds. Similar hesitancy was noted around the MMR vaccine, fuelled by misinformation about its purported role in causing autism. During the crisis, the Indian government also had to tackle disinformation-induced fraud surrounding the supply of oxygen in hospitals: many critically ill patients relied on fake news and unverified sources that falsely advertised the availability of beds, oxygen cylinders and even home set-ups, only to be cheated out of money.
The above examples highlight the difficulty health officials face in administering adequate healthcare. The COVID-19 pandemic in particular showed how existing legal frameworks failed to address misinformation and disinformation, which impedes effective policymaking. Corrective measures against health-related misinformation are also difficult because correction creates an uncomfortable gap in an individual's mind, and people tend to ignore accurate information that could bridge that gap. Misinformation, coupled with the infodemic trend, also leads to false memory syndrome, whereby people fail to differentiate between authentic information and fake narratives. Simple efforts to correct misperceptions often backfire and even strengthen initial beliefs, especially in the context of complex issues like healthcare. Policymakers thus struggle to balance making policy with making people receptive to it, against the backdrop of public tendencies to reject or be suspicious of authoritative action. Examples can be observed both domestically and internationally. In the US, for instance, the traditional healthcare system rations access to healthcare through a combination of insurance costs and options versus out-of-pocket essential expenses. While this has long been a subject of debate, it had not created a large-scale public healthcare crisis, because the incentives offered to medical professionals and public trust in the delivery of essential services helped balance the conversation. In recent times, however, a narrative shift has sensationalised the system as one of deliberate "denial of care", which has led to concerns about harms to patients.
Policy Recommendations
The hindrances posed by misinformation are exacerbated by policymakers' reliance on social media to gauge public sentiment, consensus and opinion. If misinformation about an outbreak is not effectively addressed, it can prevent individuals from adopting necessary protective measures and potentially worsen the spread of the epidemic. To improve healthcare policymaking amidst these challenges, policymakers must take a multifaceted approach. This includes convening a broad coalition of central, state, local, territorial, tribal, private, nonprofit, and research partners to assess the impact of misinformation and develop effective preventive measures. Inter-ministerial collaborations, for example between the Ministry of Health and the Ministry of Electronics and Information Technology, should be encouraged, whereby doctors debunk online medical misinformation, given the increased reliance on online forums for medical advice. Furthermore, increasing investment in research dedicated to understanding misinformation, along with the ongoing modernisation of public health communications, is essential. Enhancing the resources and technical support available to state and local public health agencies will also enable them to better address public queries and concerns and counteract misinformation. Additionally, expanding efforts to build long-term resilience against misinformation through comprehensive educational programs is crucial for fostering a well-informed public capable of critically evaluating health information.
From an individual perspective, with almost half a billion people using WhatsApp, the platform has become one where false health claims can spread rapidly. Viral WhatsApp messages containing fake health warnings can be dangerous, and it is always recommended to verify such messages before acting on or forwarding them. This underscores the growing concern about the dangers of misinformation and the need for accurate information on medical matters.
Conclusion
The proliferation of misinformation in healthcare poses significant challenges to effective policymaking and public health management. The COVID-19 pandemic has underscored the role of misinformation in vaccine hesitancy, fraud, and increased mortality rates. There is an urgent need for robust strategies to counteract false information and build public trust in health interventions; this includes policymakers engaging in comprehensive efforts, including intergovernmental collaboration, enhanced research, and public health communication modernization, to combat misinformation. By fostering a well-informed public through education and vigilance, we can mitigate the impact of misinformation and promote healthier communities.
References
- van der Meer, T. G. L. A., & Jin, Y. (2019), “Seeking Formula for Misinformation Treatment in Public Health Crises: The Effects of Corrective Information Type and Source” Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
- “Health Misinformation”, U.S. Department of Health and Human Services. https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html
- Mechanic, David, “The Managed Care Backlash: Perceptions and Rhetoric in Health Care Policy and the Potential for Health Care Reform”, Rutgers University. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2751184/pdf/milq_195.pdf
- “Bad actors are weaponising health misinformation in India”, Financial Express, April 2024.
- “Role of doctors in eradicating misinformation in the medical sector.”, Times of India, 1 July 2024. https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/national-doctors-day-role-of-doctors-in-eradicating-misinformation-in-the-healthcare-sector/articleshow/111399098.cms