#FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading

A video circulating on social media claims that British Prime Minister Keir Starmer was forcibly thrown out of a pub by its owner. The clip has been widely shared by users, many of whom are drawing political comparisons and questioning democratic norms. However, research conducted by Cyber Peace Foundation has found that the viral claim is misleading. Our research reveals that the video dates back to 2021, a time when Keir Starmer was not the Prime Minister of the United Kingdom, but the leader of the opposition Labour Party.
Claim
On January 12, 2026, a video was shared on social media platform X (formerly Twitter) with the claim that British Prime Minister Sir Keir Starmer was asked to leave a pub by its owner. The post suggests that the pub owner was unhappy with Starmer’s performance and contrasts the incident with how political dissent is allegedly handled in India. The viral video, approximately 32 seconds long, shows a man angrily confronting Keir Starmer in English, stating that he had supported the Labour Party all his life but was disappointed with Starmer’s leadership. The man is then heard asking Starmer to leave the pub.
Links to the viral post and its archived version were reviewed as part of the research.

Fact Check
To verify the claim, we extracted key frames from the viral video and conducted a Google reverse image search. During this process, we found the same video posted on an X account on April 19, 2021. The visuals in the 2021 post matched the viral video exactly, clearly indicating that the footage is not recent. The original post described the incident as involving Labour Party leader Keir Starmer during his visit to the Raven pub in Bath, and included a warning about strong language used by the pub owner, Rod Humphries. Here is the link to the original video, along with a screenshot:

Further keyword searches led us to a report published by NBC News on April 19, 2021. According to the report, Keir Starmer, then the leader of the UK’s opposition Labour Party, was confronted and asked to leave a pub in the city of Bath. The pub owner reportedly accused Starmer of failing to oppose COVID-19 lockdown measures strongly enough at a time when strict restrictions were in place across the UK.
- https://www.nbcnews.com/video/anti-lockdown-pub-landlord-screams-at-u-k-labour-party-leader-to-get-out-of-his-pub-110466117702
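Verification workflows like the one described above often match key frames from a viral clip against earlier uploads using perceptual hashing. Below is a minimal, illustrative pure-Python sketch of an average hash (aHash) comparison; the frame data is hypothetical, and real tools operate on full images downscaled to something like 8x8 grayscale.

```python
def average_hash(pixels):
    """Compute a simple average hash: each bit is 1 if the pixel
    is brighter than the frame's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count the positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale frames (flattened): a "viral" frame,
# a slightly re-encoded copy of the original footage, and an
# unrelated frame.
viral_frame    = [10, 200, 30, 220, 15, 210, 25, 230,
                  12, 205, 28, 225, 18, 215, 22, 235]
original_frame = [12, 198, 32, 218, 17, 208, 27, 228,
                  14, 203, 30, 223, 20, 213, 24, 233]
unrelated      = [200, 10, 220, 30, 210, 15, 230, 25,
                  205, 12, 225, 28, 215, 18, 235, 22]

d_match = hamming_distance(average_hash(viral_frame),
                           average_hash(original_frame))
d_other = hamming_distance(average_hash(viral_frame),
                           average_hash(unrelated))
print(d_match, d_other)  # → 0 16
```

A Hamming distance near zero indicates the same footage even after re-encoding or compression, which is how a 2021 upload can be matched to a 2025 repost. In practice, frames would be extracted with a tool such as ffmpeg or OpenCV rather than hard-coded.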

We also verified who held the office of British Prime Minister in 2021. Official UK government records confirm that Boris Johnson was the Prime Minister at that time, while Keir Starmer served as the Leader of the Opposition.

Conclusion
Our research confirms that the viral video is old and misleadingly presented. The footage is from 2021, when Keir Starmer was not the Prime Minister of the United Kingdom, but the opposition Labour Party leader. Sharing the video with the claim that it shows a current British Prime Minister being thrown out of a pub is factually incorrect.
Introduction:
The Federal Bureau of Investigation (FBI) is an intelligence-driven, threat-focused agency with both law enforcement and intelligence responsibilities. The FBI has the authority and duty to investigate specific offences entrusted to it and to provide other law enforcement agencies with cooperative services, including fingerprint identification, laboratory examinations, and training. The FBI also collects, analyzes, and disseminates intelligence, both to support its own investigations and those of its partners and to better understand and combat the security threats facing the United States.
Functions of the FBI’s Internet Crime Complaint Center (IC3) in combating cybercrime:
- Collection: Internet crime victims can report incidents and notify the relevant authorities of potential illicit Internet behavior using the IC3. Law enforcement frequently advises and directs victims to use www.ic3.gov to submit a complaint.
- Analysis: The IC3 reviews and analyzes the data that users submit through its website in order to identify emerging threats and trends.
- Public Awareness: The website posts public service announcements, industry alerts, and other publications describing specific frauds, helping to raise awareness of internet crimes and how to stay protected against them.
- Referrals: The IC3 compiles relevant complaints to create referrals, which are sent to national, international, local, and state law enforcement agencies for possible investigation. If law enforcement conducts an investigation and finds evidence of a crime, the offender may face legal repercussions.
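The collection, analysis, and referral steps above form a simple data pipeline, which can be sketched as a toy program. This is purely illustrative: the field names and routing logic are assumptions for demonstration, not IC3's actual schema or process.

```python
from collections import defaultdict

# Hypothetical complaint records submitted by victims (collection step).
complaints = [
    {"id": 1, "category": "extortion", "state": "PA"},
    {"id": 2, "category": "phishing",  "state": "NY"},
    {"id": 3, "category": "extortion", "state": "PA"},
]

# Analysis step: aggregate complaints by category to surface trends.
by_category = defaultdict(list)
for c in complaints:
    by_category[c["category"]].append(c["id"])

# Referral step: bundle categories with multiple related complaints
# into a referral for the relevant law enforcement agencies.
referrals = {cat: ids for cat, ids in by_category.items() if len(ids) > 1}
print(referrals)  # → {'extortion': [1, 3]}
```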
Alarming increase in cyber crime cases:
The recently released 2022 Internet Crime Report by the FBI's Internet Crime Complaint Center (IC3) paints a concerning picture of cybercrime in the United States. The IC3 received 39,416 extortion complaints in 2022, up from 39,360 in 2021.
FBI officials emphasize the growing scope and sophistication of cyber-enabled crimes, which come from around the world. They highlight the importance of reporting incidents to IC3 and stress the role of law enforcement and private-sector partnerships.
About the Internet Crime Complaint Center (IC3):
IC3 was established in May 2000 by the FBI to receive complaints related to internet crimes.
It has received over 7.3 million complaints since its inception, averaging around 651,800 complaints per year over the last five years. IC3's mission is to provide the public with a reliable reporting mechanism for suspected cyber-enabled criminal activity and to collaborate with law enforcement and industry partners.
The FBI encourages the public to regularly review the consumer and industry alerts published by IC3. Victims of internet crime are urged to submit a complaint to IC3; a complaint can also be filed on behalf of another person. These statistics underscore the ever-evolving and expanding threat of cybercrime and the importance of vigilance and reporting in combating this growing challenge.
What is sextortion?
Sextortion is the use or threatened use of a sexual image or video of another person without that person’s consent, obtained through online encounters or from social media websites or applications. It is carried out primarily to extort money or sexual favours from the victim, under threat of distributing the image or video to the victim’s friends, acquaintances, spouse, partner, or co-workers, or releasing it into the public domain.
In a typical case of this online crime, a bad actor coerces a young person into creating or sharing a sexual image or video of themselves and then uses it to extract something from that person, such as additional sexual images, money, or sexual favours. Reports highlight that more and more children are being blackmailed in this way, though sextortion can also happen to adults. Sextortion can likewise occur when perpetrators take pictures from a victim’s social media accounts and convert them into sexually explicit content, by morphing the images or by misusing deepfake technologies.
Sextortion in the age of AI and advanced technologies:
AI and deepfake technology make sextortion even more dangerous and pernicious. A perpetrator can now produce a high-quality deepfake that convincingly shows a victim engaged in explicit acts, even if the person has done no such thing.
Legal Measures available in cases of sextortion:
In India, cybersecurity is governed primarily by the Indian Penal Code (IPC) and the Information Technology Act, 2000 (IT Act), which address cyber crimes such as hacking, identity theft, the publication of obscene material online, and sextortion. The IT Act covers various aspects of electronic governance and e-commerce and contains provisions defining such offences and prescribing punishments for them.
The Digital Personal Data Protection Act, 2023 was recently enacted by the Indian Government to protect individuals’ digital personal data. Together, these laws establish the legal framework for cybersecurity and cybercrime prevention in India. Victims are urged to report the crime to local law enforcement and its cybercrime divisions; law enforcement will investigate reported sextortion cases and take appropriate legal action.
How to stay protected from evolving cases of sextortion: Best Practices:
- Report the crime to a law enforcement agency and to the social media platform or internet service provider involved.
- Enable two-step verification as an extra layer of protection.
- Keep your laptop webcam covered when not in use.
- Stay protected against malware and phishing attacks.
- Protect the personal information on your social media accounts, monitor the accounts for suspicious activity, and set and regularly review their privacy settings.
Conclusion:
Sextortion cases have increased in recent times. Knowing the risks, being aware of the applicable rules and regulations, and following best practices can help prevent such crimes and reduce the chance of being victimized. It is important to spread awareness about these growing cyber crimes, to empower people to report them, and to support victims. Let us unite to fight such cyber crimes and to make the internet and digital space a safer place.
References:
- https://www.ic3.gov/Media/PDF/AnnualReport/2022_IC3ElderFraudReport.pdf
- https://octillolaw.com/insights/fbi-ic3-releases-2022-internet-crime-report/
- https://www.iafci.org/app_themes/docs/Federal%20Agency/2022_IC3Report.pdf

The World Economic Forum reported (September 2023) that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents. Artificial intelligence is automating the creation of fake news faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface 'CoPilot' were inaccurate one-third of the time when queried about election data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
False or misleading information created, amplified, or spread using artificial intelligence technologies is AI-driven misinformation. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Some examples are deep fakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it is yet to achieve true acceptance and actually fulfill its potential in a positive manner because there is widespread cynicism about the technology - and rightly so. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK Financial Conduct Authority's sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firm financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
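One concrete way a sandbox trial could evaluate a detection tool's effectiveness is by scoring its flags against human fact-checker labels. The sketch below uses entirely hypothetical data to compute precision and recall, two standard metrics for content moderation tools; it illustrates the evaluation idea, not any specific regulator's methodology.

```python
# Hypothetical sandbox trial data: the detector's flags versus
# ground-truth labels from human fact-checkers.
predictions = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = flagged as misinformation
labels      = [1, 0, 0, 1, 0, 1, 1, 0]  # 1 = confirmed misinformation

tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))

precision = tp / (tp + fp)  # of flagged items, how many were truly false
recall    = tp / (tp + fn)  # of truly false items, how many were caught
print(precision, recall)  # → 0.75 0.75
```

Regulators could set minimum thresholds on such metrics before a tool graduates from the sandbox, balancing the risk of over-flagging legitimate content (low precision) against missed misinformation (low recall).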
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot solutions for regulating the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can propose to offer incentives to companies that participate in sandboxes. This would encourage innovation in developing anti-misinformation tools, which could include tax breaks or grants.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation, and explaining the role of regulatory sandboxes can help manage public expectations.
- Sandbox frameworks should be reviewed and updated periodically to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions