#FactCheck - Philadelphia Plane Crash Video Falsely Shared as INS Vikrant Attack on Karachi Port
Executive Summary:
A video currently circulating on social media falsely claims to show the aftermath of an Indian Navy attack on Karachi Port, allegedly involving the INS Vikrant. Upon verification, it has been confirmed that the video is unrelated to any naval activity and in fact depicts a plane crash that occurred in Philadelphia, USA. This misrepresentation underscores the importance of verifying information through credible sources before drawing conclusions or sharing content.
Claim:
Social media accounts shared a video claiming that the Indian Navy’s aircraft carrier, INS Vikrant, attacked Karachi Port amid rising India-Pakistan tensions. Captions such as “INDIAN NAVY HAS DESTROYED KARACHI PORT” accompanied the footage, which shows a crash site with debris and small fires.

Fact Check:
A reverse image search traced the viral video to earlier uploads on Facebook and X (formerly Twitter) dated February 2, 2025. The footage is from a plane crash in Philadelphia, USA, involving a Mexican-registered Learjet 55 (tail number XA-UCI) that crashed near Roosevelt Mall.

Major American news outlets, including ABC7, reported the incident on February 1, 2025. According to NBC10 Philadelphia, the crash resulted in the deaths of seven individuals, including one child.

Conclusion:
The viral video claiming to show an Indian Navy strike on Karachi Port involving INS Vikrant is entirely misleading. The footage is from a civilian plane crash that occurred in Philadelphia, USA, and has no connection to any military activity or recent developments involving the Indian Navy. Verified news reports confirm the incident involved a Mexican-registered Learjet and resulted in civilian casualties. This case highlights the ongoing issue of misinformation on social media and emphasizes the need to rely on credible sources and verified facts before accepting or sharing sensitive content, especially on matters of national security or international relations.
- Claim: INS Vikrant attacked Karachi Port amid rising India-Pakistan tensions
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
As technology grows and cyber-crimes rise with it, a new kind of cyber-attack is spreading, and it is not arriving in your inbox or on your computer: it targets your phone, especially your smartphone. Cybercriminals are expanding their reach in India with a new text-messaging fraud aimed at individuals, and the Indian Computer Emergency Response Team (CERT-In) has warned against "smishing," or SMS phishing.
Understanding Smishing
Smishing is a combination of the terms "SMS" and "phishing." It entails sending false text messages that appear to be from reputable sources such as banks, government organizations, or well-known companies. These communications frequently generate a feeling of urgency in their readers, prompting them to click on harmful links, expose personal information, or conduct financial transactions.
When hackers "phish," they send out phony emails in the hopes of tricking the receiver into clicking on a dangerous link. Smishing is just the use of text messaging rather than email. In essence, these hackers are out to steal your personal information to commit fraud or other cybercrimes. This generally entails stealing money – usually your own, but occasionally also the money of your firm.
Cybercriminals typically use the following tactics to lure victims and steal their information.
- Malware: The crooks send a smishing URL that might trick you into downloading malicious software onto your phone. This SMS malware may masquerade as legitimate software, deceiving you into entering sensitive information that it then transmits to the criminals.
- Malicious website: The URL in the smishing message may direct you to a bogus website that asks for sensitive personal information. Cybercriminals employ custom-made rogue sites designed to look like legitimate ones, making it simpler to steal your information.
Smishing text messages often appear to come from your bank, asking you to share sensitive personal information such as ATM PINs or account details. Mobile-device cybercrime is increasing along with mobile-device usage. Beyond the fact that texting is the most prevalent use of smartphones, a few additional aspects make this an especially pernicious security issue. Let's go over how smishing attacks operate.
Modus Operandi
Cyber crooks commit this fraud via SMS. Because attackers assume the identity of someone trusted, smishing campaigns can use social engineering techniques to sway a victim's decision-making. Three factors drive this deception:
- Trust: By posing as a legitimate individual or organisation, cyber crooks naturally lower a person's defences against threats.
- Context: Using a circumstance that is relevant to the target helps an attacker build an effective disguise. The message feels personalised, which helps it overcome any suspicion that it is spam.
- Emotion: The tone of the SMS is critical; it makes the victim think the matter is urgent and requires rapid action. Using these tactics, attackers craft communications that compel the receiver to act.
Typically, attackers want the victim to click a URL within the text message, which leads to a phishing tool that asks for sensitive information. This phishing tool frequently takes the form of a website or app that likewise assumes a phony identity.
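The three ingredients above (trust, context, emotion) plus an embedded link are exactly what simple spam heuristics look for. The sketch below is illustrative only, not a production filter; the keyword list and scoring weights are assumptions chosen for the example.

```python
import re

# Illustrative urgency cues and link pattern (assumptions, not an authoritative corpus)
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now", "expires"}
URL_PATTERN = re.compile(r"https?://\S+|\bbit\.ly/\S+|\btinyurl\.com/\S+", re.IGNORECASE)

def smishing_score(message: str) -> int:
    """Crude risk score: +1 per urgency cue found, +2 if the message contains a link."""
    text = message.lower()
    score = sum(1 for word in URGENCY_WORDS if word in text)
    if URL_PATTERN.search(message):
        score += 2
    return score

msg = "URGENT: Your bank account is suspended. Verify immediately at http://bit.ly/x1"
print(smishing_score(msg))  # 4 urgency cues + 2 for the link = 6
```

Real filters use trained classifiers and sender-reputation data rather than fixed keyword lists, but the intuition is the same: urgency language combined with a link is a strong smishing signal.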
How does Smishing Spread?
As noted earlier, smishing attacks are delivered through ordinary text messages, and they usually appear to come from known sources. People are less careful while on their phones: many believe that their cell phones are more secure than their desktops. However, smartphone security has limits and cannot always guard against smishing directly.
Phones themselves are the target. While Android smartphones dominate the market and are a prime target for malware text messages, iOS devices are just as vulnerable. Although Apple's iOS has a strong reputation for security, no mobile operating system can protect you from phishing-style attacks on its own. A false sense of security, regardless of platform, can leave users especially exposed.
Kinds of Smishing Attacks
Some common types of smishing attacks include:
- COVID-19 smishing: In April 2020, the Better Business Bureau observed an increase in reports of US-government impersonators sending text messages asking consumers to take a mandatory COVID-19 test via a linked website. Because feeding on pandemic fears is an effective way of victimising the public, variations on these smishing attacks can readily develop.
- Gift smishing: Giveaways, shopping rewards, or other free offers; this kind of smishing promises free services or products, supposedly from a reputable company. Attackers frame the offer as limited-time or exclusive, and make it lucrative enough that the target gets excited and falls into the trap.
CERT Guidelines
CERT-In shared some steps to avoid falling victim to smishing.
- Never click on any suspicious link in SMS, social media chats, or posts.
- Use online resources to validate shortened URLs.
- Always check the link before clicking.
- Use updated antivirus and antimalware tools.
- If you receive any suspicious message pretending to be from a bank or institution, immediately contact the bank or institution.
- Use a separate email account for personal online transactions.
- Enforce multi-factor authentication (MFA) for emails and bank accounts.
- Keep your operating system and software updated with the latest patches.
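CERT-In's advice to validate shortened URLs and check links before clicking can be partly mechanised. The sketch below is a minimal illustration, assuming hypothetical domain lists (the allowlist and shortener set are examples, not authoritative data): it flags shortened URLs, whose real destination is hidden, and treats any host outside the trusted list with caution.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the user actually banks with (assumption)
TRUSTED_DOMAINS = {"onlinesbi.sbi", "icicibank.com", "hdfcbank.com"}
# A few well-known URL shorteners whose targets cannot be seen at a glance
KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}

def classify_link(url: str) -> str:
    """Label a link as 'shortened', 'trusted', or 'unknown' based on its hostname."""
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_SHORTENERS:
        return "shortened"  # destination hidden; expand it before trusting
    if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        return "trusted"
    return "unknown"        # treat with caution

print(classify_link("http://bit.ly/3xYzAb"))          # shortened
print(classify_link("https://www.icicibank.com/x"))   # trusted
```

Note that the suffix check matters: a lookalike host such as `icicibank-verify.in` merely embeds the bank's name and is classified as unknown, which is exactly the kind of link CERT-In warns about.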
Conclusion
Smishing uses fraudulent mobile text messages to trick people into downloading malware, sharing sensitive data, or paying money to cybercriminals. With the latest technological developments, staying vigilant in the digital era means protecting not only your computers but also the devices that fit in the palm of your hand, and CERT-In's warning plays a vital role in this. Awareness and best practices remain pivotal in safeguarding yourself from evolving threats.
Reference
- https://www.ndtv.com/india-news/government-warns-of-smishing-attacks-heres-how-to-stay-safe-4709458
- https://zeenews.india.com/technology/govt-warns-citizens-about-smishing-scam-how-to-protect-against-this-online-threat-2654285.html
- https://www.the420.in/protect-against-smishing-scams-cert-in-advice-online-safety/

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. The aspect of promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategy and is working towards a policy framework for AI. NITI Aayog has presented principles for responsible AI, which include equality, inclusivity, safety, privacy, transparency, accountability, dependability, and the protection of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india
Introduction
Over-the-Top (OTT) streaming platforms have become a significant part of Indian entertainment consumption, offering users the ability to watch films, web series, and short-format videos directly online. These platforms operate on a subscription-based model, allowing for creative freedom, but they also lack clear accountability. On certain platforms, some content has been criticised for focusing on sensational or sexually explicit themes, particularly targeting young viewers seeking risqué entertainment. Such applications lack strong age verification mechanisms and offer ‘user access’ with minimal restrictions, which raises serious concerns about exposure to obscene content. This has triggered serious concerns among regulators, civil society organisations, advocacy and parental groups about the accessibility of such material and its potential influence, especially on minors.
Blocking order issued by the Ministry of Information and Broadcasting (MIB)
On 23rd July 2025, the Government of India, invoking powers under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, issued a ‘blocking order’ against 25 OTT platforms. A total of 26 websites and 14 mobile applications belonging to these platforms were on the list, including several prominent services, for the alleged distribution of obscene, vulgar and, in some cases, pornographic content. This regulatory action follows earlier statutory advisories and repeated warnings to the platforms in question, some of which continued to operate through new domains in defiance of Indian laws and regulations.
This action was taken by the Ministry of Information and Broadcasting (MIB) in consultation with the Ministry of Home Affairs, the Ministry of Women and Child Development, the Ministry of Electronics and Information Technology, the Department of Legal Affairs, industry bodies, and experts in the fields of women's rights and child rights.
The list of OTT Platforms covered under the said ‘Blocking Order’
The list includes - Big Shots App, Desiflix, Boomex, NeonX VIP, Navarasa Lite, Gulab App, Kangan App, Bull App, ShowHit, Jalva App, Wow Entertainment, Look Entertainment, Hitprime, Fugi, Feneo, ShowX, Sol Talkies, Adda TV, ALTT, HotX VIP, Hulchul App, MoodX, Triflicks, Ullu, and Mojflix.
The government has explicitly directed Internet Service Providers (ISPs) to disable or remove public access to these websites within India.
Recent Judicial and Centre’s Interventions
- To refresh the memory, last year in March 2024, the Ministry of I&B blocked 18 OTT Platforms for Obscene and Vulgar Content.
- In April 2025, the Apex Court of India heard a petition on the prohibition of streaming of sexually explicit content on over-the-top (OTT) and social media platforms. In response, the apex court stated, ‘It is not our domain, the Centre has to take action’, and highlighted the need for executive action in the matter. The court also issued notice to the Centre, OTT platforms, and social media platforms in response to a petition seeking a ban on sexually explicit content. (Uday Mahurkar & Ors. v. Union of India & Ors. [WP(C) 313/2025])
- The recent blocking order dated 23rd July 2025 by the Ministry of I&B is a welcome and commendable step that reflects the government's firm stance against illicit content on OTT platforms. Kangana Ranaut, actress and politician, speaking to a news agency, appreciated the government's move to ban OTT platforms such as Ullu, ALTT, and Desiflix for showing soft-porn content.
Conclusion
The Centre's intervention sends a clear message that OTT platforms cannot remain exempt from accountability. The move responds to growing concern over harms caused by unregulated digital content and non-compliance by the platforms, particularly in relation to illicit material and broader violations of decency laws in India. However, enforcement must now go beyond issuing orders and establish a robust, measurable compliance framework for OTT platforms.
In today’s fast-paced era, when subscription-based content platforms place vast libraries at users' fingertips, the government's action is necessary and proportionate, marking a decisive step toward safer digital and healthy regulated environments.
References
- https://www.newsonair.gov.in/govt-bans-25-ott-websites-apps-over-vulgar-and-pornographic-content/
- https://timesofindia.indiatimes.com/technology/tech-news/big-shots-ullu-altt-desiflix-mojflix-and-20-other-ott-apps-banned-what-governments-ban-order-says/articleshow/122918803.cms
- https://www.ndtv.com/india-news/centre-bans-ott-platforms-ullu-altt-desiflix-for-obscene-content-8947100
- https://foxmandal.in/News/sc-takes-note-of-obscenity-plea-issues-notice-to-ott-platforms/
- https://www.morungexpress.com/kangana-ranaut-calls-banning-ott-platforms-for-soft-porn-content-a-much-appreciated-move
- https://www.livemint.com/news/india/do-something-supreme-court-to-centre-ott-platforms-on-obscene-content-pil-netflix-amazon-prime-ullu-altt-x-facebook-11745823594972.html