#FactCheck - Misleading Viral Video Targets Rubika Liyaquat, Original Footage Tells Different Story
Executive Summary
A video circulating on social media claims that a Pakistani man misbehaved with TV anchor Rubika Liyaquat during a live television debate. Users sharing the clip alleged that the Pakistani participant silenced the anchor on live TV.
However, research by CyberPeace found the viral claim to be false: the video being shared on social media has been edited. In the original video, published on YouTube on November 26, 2025, the alleged Pakistani man was not present in the TV debate.
Claim
On February 13, 2026, a user shared the viral clip on X (formerly Twitter), claiming that the anchor was insulted during the debate and was left speechless. Another user on February 11, 2026, asked News18 India to verify the video and questioned who allowed such behaviour towards the journalist on air.

Fact Check:
To verify the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the full version of the debate uploaded on the official YouTube channel of News18 India on November 26, 2025. The nearly 40-minute original broadcast featured anchor Rubika Liyaquat along with panelists Zafar Islam, Varun Purohit, Prateek Kumar, Arvind Kumar Vajpayee, Tausif Ahmed Khan, and Aziz Khan. However, the person seen misbehaving with the anchor in the viral clip was not present in the original video.
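Reverse image search engines match frames by comparing compact perceptual fingerprints rather than raw pixels. As a minimal illustration of the idea (not the actual algorithm Google Lens uses; all function names here are hypothetical helpers), the sketch below implements an average hash (aHash) in pure Python: downscale the image, threshold each cell against the mean brightness, and compare fingerprints by Hamming distance.

```python
# Illustrative average-hash (aHash) sketch: perceptual fingerprints like
# this let a search engine match near-duplicate frames of the same video.

def average_hash(pixels, hash_size=8):
    """Compute a perceptual hash of a grayscale image (2D list of 0-255 ints)."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    # Downscale by averaging each block of pixels.
    small = [
        sum(pixels[y][x]
            for y in range(r * bh, (r + 1) * bh)
            for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
        for r in range(hash_size) for c in range(hash_size)
    ]
    mean = sum(small) / len(small)
    # Each bit records whether a cell is brighter than the image mean.
    return [1 if v > mean else 0 for v in small]

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

if __name__ == "__main__":
    # A synthetic 16x16 frame: dark left half, bright right half.
    img = [[0] * 8 + [255] * 8 for _ in range(16)]
    # A lightly re-encoded copy (small uniform brightness shift).
    copy = [[min(255, p + 10) for p in row] for row in img]
    print(hamming(average_hash(img), average_hash(copy)))  # prints 0
```

Because the hash depends only on relative brightness, mild re-encoding or brightness shifts leave the fingerprint unchanged, which is why edited re-uploads of a clip can still be traced back to the original broadcast.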

Upon carefully reviewing the footage, we located the actual segment around the 25-minute 40-second mark. In this portion, the anchor can be heard asking panelist Tausif Ahmed Khan to leave the show, using the same words heard in the viral clip. However, the original broadcast does not feature any Pakistani participant or any individual named “Nadeem Shahzad.”

Conclusion
Our research found that the viral claim is false. The circulating video has been edited, and the alleged Pakistani participant does not appear in the original debate uploaded on November 26, 2025.
Related Blogs

A photograph showing a massive crowd on a road is being widely shared on social media. The image is being circulated with the claim that people in the United States are staging large-scale protests against President Donald Trump.
However, CyberPeace Foundation’s research has found this claim to be misleading. Our fact-check reveals that the viral photograph is nearly eight years old and has been falsely linked to recent political developments.
Claim:
Social media users are sharing a photograph and claiming that it shows people protesting against US President Donald Trump. An X (formerly Twitter) user, Salman Khan Gauri (@khansalman88177), shared the image with the caption: “Today, a massive protest is taking place in America against Donald Trump.”
The post can be viewed here, and its archived version is available here.

FactCheck:
To verify the claim, we conducted a reverse image search of the viral photograph using Google. This led us to a report published by The Mercury News on April 6, 2018.
The report features the same image and states that the photograph was taken on March 24, 2018, during the ‘March for Our Lives’ rally in Washington, DC. The rally was organized to demand stricter gun control laws in the United States. The image shows a large crowd gathered on Pennsylvania Avenue in support of gun reform.
The report further notes that the Associated Press, on March 30, 2018, debunked false claims circulating online which alleged that liberal billionaire George Soros and his organizations had paid protesters $300 each to participate in the rally.

Further research led us to a report published by The Hindu on March 25, 2018, which also carries the same photograph. According to the report, thousands of Americans across the country participated in ‘March for Our Lives’ rallies following a mass shooting at a school in Florida. The protests were led by survivors and victims, demanding stronger gun laws.
The objective of these demonstrations was to break the legislative deadlock that has long hindered efforts to tighten firearm regulations in a country frequently rocked by mass shootings in schools and colleges.

Conclusion
The viral photograph is nearly eight years old and is unrelated to any recent protests against President Donald Trump. The image actually depicts a gun control protest held in 2018 and is being falsely shared with a misleading political claim. By circulating this outdated image with an incorrect context, social media users are spreading misinformation.

Introduction
In September 2025, social media feeds were flooded with striking vintage-style saree portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading their ordinary selfies and watching them transform into cinematic, Bollywood-style 1990s posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding risks of privacy infringement, unauthorised data sharing, and threats related to deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the prompt. Users then share the transformed portraits on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participation in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put the user at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, which are punishable under Sections 66C and 66D of the IT Act 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe consequences for individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to construct synthetic identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that can be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to mimic well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may expose users to several layers of privacy threats that go far beyond the instant gratification of generating pleasing images. Harvesting of biometric data is the most critical issue, since facial recognition information posted on these platforms becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept for longer periods if used for feedback or feature development.
Unauthorised data sharing happens when AI platforms provide user-uploaded content to third parties without consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps had either non-transparent data policies or obscured users' ability to opt out of data gathering. This opens opportunities for personal photographs to be shared with unknown entities for commercial use. Exploitation of training data involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google lets users turn off data sharing in its privacy settings, most users are unaware of these options. Cross-platform data integration compounds privacy threats when AI applications draw on data from linked social media profiles, producing detailed user profiles that can be exploited for targeted manipulation or fraud. Lack of informed consent remains a major problem, with users joining trends unaware of the full extent of the information they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures to the extent that this satisfies the program's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app so that a person can limit access to their data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly concerning the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product for their own use, to ensure its privacy and security levels meet the company's requirements. Training should educate employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, allow real-time detection with accuracy higher than 95%. Blockchain-based concepts can be used to verify content and create tamper-proof records of original digital assets, making it very difficult to pass off deepfake content as original.
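The tamper-proof record idea can be sketched very simply: store the SHA-256 digest of each media item together with a running digest chained to the previous record, so that altering any earlier item invalidates every later record. The sketch below is a minimal illustration of that principle under stated assumptions; the function names and in-memory ledger are hypothetical, not any real blockchain platform's API.

```python
# Minimal sketch of tamper-evident content records: each record stores the
# SHA-256 digest of the media bytes plus a chain value derived from the
# previous record, so editing any earlier entry breaks every later one.
import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(content).hexdigest()

def append_record(ledger, content: bytes):
    """Append a chained record for one piece of content."""
    prev = ledger[-1]["chain"] if ledger else "0" * 64
    digest = fingerprint(content)
    chain = hashlib.sha256((prev + digest).encode()).hexdigest()
    ledger.append({"digest": digest, "chain": chain})
    return ledger

def verify(ledger, originals):
    """Re-derive the chain from the claimed originals; any edit breaks it."""
    prev = "0" * 64
    for rec, content in zip(ledger, originals):
        prev = hashlib.sha256((prev + fingerprint(content)).encode()).hexdigest()
        if rec["chain"] != prev:
            return False
    return True

if __name__ == "__main__":
    ledger = []
    clips = [b"original broadcast frame", b"press photo"]
    for c in clips:
        append_record(ledger, c)
    print(verify(ledger, clips))                       # prints True
    print(verify(ledger, [b"edited clip", clips[1]]))  # prints False
```

A production system would anchor the chain head on a distributed ledger so no single party can rewrite it, but the verification logic is the same: an edited clip no longer matches its recorded fingerprint.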
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory frameworks should require transparency in AI applications' data policies and give users rights and choices over their biometric and other data. Indigenous AI development that addresses India-centric privacy concerns should be promoted, ensuring that AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers should have considered long ago. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of this AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/

Introduction
GPS spoofing, though formerly confined to conflict zones, has lately become a growing hazard for pilots and aircraft operators across the world, and several countries have been facing such incidents. This definition stems from the US Radio Technical Commission for Aeronautics, which provides specialized advice to government regulatory authorities. The Global Positioning System (GPS) has become an essential part of aviation infrastructure, superseding the traditional radio beams once used to guide planes towards landing. GPS spoofing occurs when a counterfeit radio signal overrides a legitimate GPS satellite signal, feeding the receiver false location information. Although GPS signal interference of this character has existed for over a decade, this is the first time civilian passenger flights have faced such a significant danger. According to Agence France-Presse (AFP), false GPS signals mislead onboard aircraft systems and complicate the job of airline pilots flying near conflict areas. GPS spoofing may also be the by-product of military electronic warfare systems deployed in zones of regional tension. It can further cause significant disruption to commercial aviation, affecting passenger arrivals and departures as well as safety.
Spoofing might likewise involve one country's military sending false GPS signals to an enemy plane or drone to impede its ability to operate, with collateral impact on airliners operating nearby. Collateral damage to commercial aircraft can occur as confrontations escalate and militaries send faulty GPS signals to thwart drones and other aircraft. It could therefore precipitate a global crisis through the loss of a civilian aircraft in a high-risk area close to an active battle zone. Furthermore, GPS jamming is different from GPS spoofing: jamming merely blocks or obstructs GPS signals, while spoofing replaces them with false ones, which is far more threatening.
Global Reporting
An International Civil Aviation Organization (ICAO) assessment released in 2019 indicated that there were 65 spoofing incidents across the Middle East in the preceding two years, according to the C4ADS report. At the beginning of 2018, Eurocontrol received more than 800 reports of Global Navigation Satellite System (GNSS) interference in Europe. GPS spoofing in Eastern Europe and the Middle East has resulted in divergences of up to 80 NM from the flight route, and affected aircraft have had to depend on radar vectors from Air Traffic Control (ATC). According to Forbes, the flight data intelligence website OPSGROUP, comprising 8,000 members including pilots and controllers, has been reporting spoofing incidents since September 2023. Similarly, over 20 airliners and corporate jets flying over Iran diverted from their planned paths after being led off course by misleading GPS signals transmitted from the ground, which overrode the aircraft's navigation systems.
In this context, malicious hackers, still at large, have lately worked out how to override an airplane's critical Inertial Reference System (IRS), an essential piece of technology described by manufacturers as the "brains" of an aircraft. Current IRS equipment is not prepared to counter this kind of attack. The IRS uses accelerometers, gyroscopes, and electronics to deliver accurate attitude, speed, and navigation data so that a plane knows how it is moving through the airspace. GPS spoofing incidents can render the IRS ineffective, and in numerous cases all navigation capability is lost.
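One basic defence that follows from this architecture is a cross-check: the inertially derived position cannot be spoofed over radio, so a GPS fix that diverges sharply from the IRS track is suspect. The sketch below is a deliberately simplified illustration of that principle, not real avionics logic: the "IRS" track is given directly rather than integrated from sensor data, the flat-earth distance formula ignores longitude scaling, and the 2 NM alert threshold is an assumption chosen for the example.

```python
# Simplified cross-check of GPS fixes against inertially derived positions.
# Real avionics integrate accelerometer/gyro data continuously; here the
# IRS track is supplied directly and the threshold is illustrative.

def divergence_nm(gps_fix, irs_fix):
    """Crude flat-earth distance in nautical miles between two
    (lat, lon) pairs in degrees; adequate over short distances."""
    dlat = (gps_fix[0] - irs_fix[0]) * 60.0   # 1 degree of latitude ~ 60 NM
    dlon = (gps_fix[1] - irs_fix[1]) * 60.0   # ignores cos(lat) scaling
    return (dlat ** 2 + dlon ** 2) ** 0.5

def flag_spoofing(gps_track, irs_track, threshold_nm=2.0):
    """Return indices of fixes where GPS and IRS disagree suspiciously."""
    return [i for i, (g, r) in enumerate(zip(gps_track, irs_track))
            if divergence_nm(g, r) > threshold_nm]

if __name__ == "__main__":
    irs = [(30.00, 50.00), (30.05, 50.05), (30.10, 50.10)]
    # Third GPS fix jumps about 60 NM north of the inertial position.
    gps = [(30.00, 50.00), (30.05, 50.05), (31.10, 50.10)]
    print(flag_spoofing(gps, irs))  # prints [2]
```

In practice the alert would trigger a reversion to inertial navigation and ATC radar vectors, which is exactly how affected crews have coped in the incidents described above.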
Red Flag from Agencies
The European Union Aviation Safety Agency (EASA) and the International Air Transport Association (IATA) jointly hosted a workshop on incidents of spoofing and jamming of satellite navigation systems and concluded that these pose a considerable safety challenge. IATA and EASA have also taken measures to share information about GPS tampering so that crews and pilots can determine when it is occurring. EASA had earlier cautioned about an upsurge in reports of GPS spoofing and jamming in the Baltic Sea area, around the Black Sea, and in regions near Russia and Finland in 2022 and 2023. According to industry officials, equipping civil aircraft with the latest technologies can take several years, and with GPS spoofing incidents increasing, there is no time to dawdle. Experts have noted critical navigation failures on airplanes, with several recent reports of alarming cyber attacks that altered planes' in-flight GPS. As per experts, GPS spoofing could affect commercial airlines and cause further disarray, with the risk that pilots divert from the flight route and stray into a no-fly zone or other unauthorized airspace, putting them at risk.
OpsGroup, a global group of pilots and technicians, first raised awareness of the issue, after which the Federal Aviation Administration (FAA) issued a warning on the safety-of-flight risk to civil aviation operations posed by the spate of attacks. In addition, India's civil aviation regulator, the Directorate General of Civil Aviation (DGCA), issued an advisory circular on spoofing threats to planes' GPS signals when flying over parts of the Middle East. The DGCA advisory further notes that the aviation industry is grappling with uncertainty over the contemporary dangers of GNSS jamming and spoofing.
Conclusion
As the aviation industry continues to grapple with GPS spoofing, it remains largely unprepared to combat it, and it should consider pursuing attainable technologies for prevention. As international conflicts grow more convoluted, technological solutions can be pricey, intricate, and not always efficacious, depending on the sort of spoofing used.
As GPS interference attacks become more complex, specialized countermeasures must be continually updated. Improving education and training (to increase awareness among pilots, air traffic controllers, and other aviation experts), receiver technology (creating and deploying more state-of-the-art GPS receivers), monitoring and reporting (installing robust monitoring systems), cooperation (collaboration among stakeholders such as government bodies and aviation organisations), data and information sharing, and regulatory measures (regulations and guidelines from regulatory and government bodies) can all help avert GPS spoofing.
References
- https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/false-gps-signal-surge-makes-life-hard-for-pilots/articleshow/108363076.cms?from=mdr
- https://nypost.com/2023/11/20/lifestyle/hackers-are-taking-over-planes-gps-experts-are-lost-on-how-to-fix-it/
- https://www.timesnownews.com/india/planes-losing-gps-signal-over-middle-east-dgca-flags-spoofing-threat-article-105475388
- https://www.firstpost.com/world/gps-spoofing-deceptive-gps-lead-over-20-planes-astray-in-iran-13190902.html
- https://www.forbes.com/sites/erictegler/2024/01/31/gps-spoofing-is-now-affecting-airplanes-in-parts-of-europe/?sh=48fbe725c550
- https://www.insurancejournal.com/news/international/2024/01/30/758635.htm
- https://airwaysmag.com/gps-spoofing-commercial-aviation/
- https://www.wsj.com/articles/aviation-industry-to-tackle-gps-security-concerns-c11a917f
- https://www.deccanherald.com/world/explained-what-is-gps-spoofing-that-has-misguided-around-20-planes-near-iran-iraq-border-and-how-dangerous-is-this-2708342