#FactCheck - Viral Video Falsely Linked to India; Actually from Bangladesh
A video circulating widely on social media shows a child throwing stones at a moving train, while a few other children can also be seen climbing onto the engine. The video is being shared with a communal narrative, with claims that the incident took place in India.
CyberPeace Foundation’s research found the viral claim to be misleading: the video is not from India but from Bangladesh, and it is being falsely linked to India on social media.
Claim:
On January 15, 2026, a Facebook user shared the viral video claiming it depicted an incident from India. The post carried a provocative caption stating, “We are not afraid of Pakistan outside our borders. We are afraid of the thousands of mini-Pakistans within India.” The post has been widely circulated, amplifying communal sentiments.

Fact Check:
To verify the authenticity of the video, we conducted a reverse image search using Google Lens by extracting keyframes from the viral clip. During this process, we found the same video uploaded on a Bangladeshi Facebook account named AL Amin Babukhali on December 28, 2025. The caption of the original post mentions Kamalapur, which is a well-known railway station in Bangladesh. This strongly indicates that the incident did not occur in India.
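Reverse image search services match near-duplicate frames by comparing compact visual fingerprints. As a purely illustrative sketch (Google Lens’s actual matching is proprietary and far more sophisticated), a toy “difference hash” over a grayscale pixel grid captures the idea:

```python
# Toy difference hash (dHash): a compact fingerprint that stays stable
# under small changes such as recompression or brightening, so
# near-duplicate frames of the same video hash to the same value.

def dhash(pixels):
    """pixels: rows of grayscale values (0-255). Compare each pixel
    to its right-hand neighbour and pack the comparisons into bits."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    value = 0
    for bit in bits:               # pack the bit list into one integer
        value = (value << 1) | bit
    return value

def hamming(a, b):
    """Number of differing bits: a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# A tiny 4x5 "frame" and a uniformly brightened copy of it.
frame = [[10, 60, 20, 80, 30],
         [90, 40, 70, 10, 50],
         [30, 30, 90, 20, 60],
         [70, 10, 40, 80, 20]]
brighter = [[p + 5 for p in row] for row in frame]

# Brightening preserves the left/right ordering, so the hashes match.
assert hamming(dhash(frame), dhash(brighter)) == 0
```

Real detection pipelines extract keyframes first (e.g. with a video tool) and compare their hashes against an index of known images.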

Further analysis of the video shows that the train engine carries the marking “BR”, along with text written in the Bengali language. “BR” stands for Bangladesh Railways, confirming the origin of the train. To corroborate this further, we searched for images related to Bangladesh Railways using Google’s open tools. We found multiple images on Getty Images showing train engines with the same design and markings as seen in the viral video. The visual match clearly establishes that the train belongs to Bangladesh Railways.

Conclusion
Our research confirms that the viral video is from Bangladesh, not India. It is being shared on social media with a false and misleading claim to give it a communal angle and link it to India.

Executive Summary:
Apple has responded quickly to two severe zero-day vulnerabilities, CVE-2024-44308 and CVE-2024-44309, affecting iOS, macOS, visionOS, and Safari. These flaws, actively exploited in targeted attacks presumably mounted by state actors, allow arbitrary code execution and cross-site scripting (XSS). Reported by Google’s Threat Analysis Group, they illustrate how sophisticated modern attacks have become. Apple’s mitigation strengthens memory handling and state management to harden device security. Users are encouraged to update their devices as soon as possible, turn on automatic updates, and exercise caution online to avoid these new threats.
Introduction
Apple has demonstrated its commitment to security by releasing updates that fix two zero-day bugs actively exploited by attackers. The bugs, tracked as CVE-2024-44308 and CVE-2024-44309, are serious: they can lead to code execution and cross-site scripting attacks. Their active exploitation in the wild underscores the importance of releasing patches quickly to keep users safe.
Vulnerabilities in Detail
The discovery of vulnerabilities (CVE-2024-44308, CVE-2024-44309) is credited to Clément Lecigne and Benoît Sevens of Google's Threat Analysis Group (TAG). These vulnerabilities were found in JavaScriptCore and WebKit, integral components of Apple’s web rendering framework. The details of these vulnerabilities are mentioned below:
CVE-2024-44308
- Severity: High (CVSS score: 8.8)
- Description: A flaw in the JavaScriptCore component of WebKit. Maliciously crafted web content could trigger arbitrary code execution on the target system, potentially giving the attacker full control of the device.
- Technical Finding: The vulnerability stems from improper memory handling during JavaScript execution, allowing attackers to run injected payloads remotely.
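For context, the “High” label follows the standard CVSS v3.1 qualitative severity bands. A minimal sketch of that mapping (taken from the CVSS specification, not from Apple’s advisory):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity band
    as defined by the CVSS v3.1 specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

assert cvss_severity(8.8) == "High"    # CVE-2024-44308
assert cvss_severity(6.1) == "Medium"  # CVE-2024-44309
```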
CVE-2024-44309
- Severity: Medium (CVSS score: 6.1)
- Description: A cookie-management flaw in WebKit that can result in cross-site scripting (XSS). It enables attackers to inject unauthorized scripts into legitimate websites, endangering users’ privacy and identities.
- Technical Finding: The issue arises from improper state management of cookies while processing maliciously crafted web content, opening an unauthorized route to session data.
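Apple’s fix operates inside the browser engine, but on the web-application side, XSS of the kind described above is classically mitigated by escaping untrusted content before rendering it. A minimal, illustrative Python sketch (unrelated to Apple’s actual WebKit patch):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape untrusted input so injected markup renders as inert text
    rather than as an executable <script> element."""
    return "<p>" + html.escape(user_input) + "</p>"

payload = '<script>alert("stolen cookie")</script>'
safe = render_comment(payload)

# The angle brackets are now harmless HTML entities:
assert "<script>" not in safe
assert "&lt;script&gt;" in safe
```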
Affected Systems
These vulnerabilities impact a wide range of Apple devices and software versions:
- iOS 18.1.1 and iPadOS 18.1.1: For devices including iPhone XS and later, iPad Pro (13-inch), and iPad mini 5th generation onwards.
- iOS 17.7.2 and iPadOS 17.7.2: Supports earlier models such as iPad Pro (10.5-inch) and iPad Air 3rd generation.
- macOS Sequoia 15.1.1: For systems running macOS Sequoia.
- visionOS 2.1.1: Exclusively for Apple Vision Pro.
- Safari 18.1.1: For Macs running macOS Ventura and Sonoma.
Apple's Mitigation Approach
Apple has implemented the following fixes:
- CVE-2024-44308: Enhanced input validation and robust memory checks to prevent arbitrary code execution.
- CVE-2024-44309: Improved state management to eliminate cookie mismanagement vulnerabilities.
These measures ensure stronger protection against exploitation and bolster the underlying security architecture of affected components.
Broader Implications
The exploitation of these zero-days highlights the evolving nature of threat landscapes:
- Increasing Sophistication: Attackers are refining techniques to target niche vulnerabilities, bypassing traditional defenses.
- Spyware Concerns: These flaws align with the modus operandi of spyware tools, potentially impacting privacy and national security.
- Call for Timely Updates: Users who delay updates inadvertently increase their risk exposure.
Technical Recommendations for Users
To mitigate potential risks:
- Update Devices Promptly: Install the latest patches for iOS, macOS, visionOS, and Safari.
- Enable Automatic Updates: Ensures timely application of future patches.
- Restrict WebKit Access: Avoid visiting untrusted websites until updates are installed.
- Monitor System Behavior: Look for anomalies that could indicate exploitation.
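The first recommendation can be automated by comparing the installed version against the patched minimums listed above. A minimal sketch (helper names are illustrative, and only the latest patched branch per platform is modeled, ignoring the 17.7.2 branch for older iPads):

```python
# Patched minimum versions, per the advisory summarized above.
PATCHED = {
    "iOS": "18.1.1",
    "iPadOS": "18.1.1",
    "macOS": "15.1.1",
    "visionOS": "2.1.1",
    "Safari": "18.1.1",
}

def parse(version: str) -> tuple:
    """Turn '18.1.1' into (18, 1, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_patched(platform: str, installed: str) -> bool:
    """True if the installed version is at or above the patched minimum."""
    return parse(installed) >= parse(PATCHED[platform])

assert is_patched("iOS", "18.1.1")
assert not is_patched("macOS", "15.1")  # 15.1 is below 15.1.1
```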
Conclusion
The exploitation of CVE-2024-44308 and CVE-2024-44309 against Apple devices highlights the importance of timely software updates in protecting users. Apple acted swiftly, shipping improved input checks, better state management, and security patches. Users are therefore encouraged to install the updates as soon as possible to guard against these zero-day flaws.
References:
- https://support.apple.com/en-us/121752
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-44308
- https://securityonline.info/cve-2024-44308-and-cve-2024-44309-apple-addresses-zero-day-vulnerabilities/

What are Deepfakes?
A deepfake is essentially a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings: cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term “deepfake” was first coined in 2017 by an anonymous Reddit user who went by the same name.
Deepfakes work on a combination of AI and machine learning, which makes them hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created with deepfake tools. In recent times, we have seen a wave of AI-driven tools affect industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used to evade facial recognition, while deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine’s President Zelensky surfaced on the internet and caused mass confusion and propaganda-driven misinformation among Ukrainians.
- Instigation against the Union of India: Deepfakes pose a massive threat to the integrity of the Union of India, as they offer anti-national elements one of the easiest ways to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of such attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, and such content is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created from existing photos and videos, which bad actors often harvest from social media accounts. This directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: Most nations currently lack a concrete policy to address deepfakes. It is therefore of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as netizens who do not understand the technology are less likely to recognise or report such crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we move deeper into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, so that we stay protected and address this issue before it is too late. The following aspects can help in differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound is in congruence with the actions and gestures in the video.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most cases it is created using virtual effects, so deepfakes tend to show irregularities and an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, make sure to fact-check any information or post before sharing it with your contacts.
- AI Tools: When in doubt, check it out, and never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by Indian law, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy; the maximum penalty is a fine of ₹2 lakh or three years in prison. With the applicability of the DPDP Act, 2023, the creation of deepfakes will directly affect an individual's right to digital privacy and will also violate the IT Rules (Intermediary Guidelines), as platforms are required to exercise caution against the dissemination and publication of misinformation through deepfakes. Beyond this, the only remedies available are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, cheating and dishonestly inducing delivery of property, and forgery with intent to defame. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an outgrowth of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they stay safe in the future. At the same time, developing and developed nations alike need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

Executive Summary:
Amid the ongoing tensions and conflict involving the United States, Israel, and Iran, an image of a heavily damaged industrial facility is circulating widely on social media. Several users are sharing the picture claiming that it shows an Iranian water treatment or desalination plant destroyed in a US–Israel attack. Some media reports have also used the same image while reporting on the alleged attack on a freshwater desalination plant in Iran.
However, research by CyberPeace found the claim to be misleading. The viral image is not from Iran; it actually shows the aftermath of a drone attack on a warehouse belonging to a US company in Basra, Iraq, and is being falsely linked to the situation in Iran on social media.
Claim
X user “Shashank Shekhar Jha” shared the image on March 8, 2026, claiming that a freshwater desalination plant in Qeshm, Iran, had been destroyed.
Fact check
To verify the claim, we conducted a reverse image search using Google Lens. During the search, we found a report published on March 7, 2026, on the website of Asian News International (ANI). The report stated that Iran’s Foreign Minister Seyed Abbas Araghchi condemned a US attack on a freshwater desalination plant on Qeshm Island, calling it a “blatant and desperate crime.”
The report used the same viral image; however, the caption clearly mentioned that it was a representational image credited to Reuters.
https://www.aninews.in/news/world/middle-east/blatant-and-desperate-crime-irans-fm-condemns-us-attack-on-qeshms-freshwater-desalination-plant-warns-of-grave-consequences20260307212645/

To further confirm the claim, we checked the official X account of Seyed Abbas Araghchi. In a post on March 7, he condemned the alleged attack on the desalination plant in Qeshm and stated that the strike had disrupted water supply to around 30 villages. However, the post did not include any image of the incident.

Conclusion
The viral image being shared as evidence of a US–Israel attack on Iran’s water treatment plant is misleading. The photo actually shows the aftermath of a drone strike on a warehouse belonging to a US company in Basra, Iraq, and has been wrongly linked to the situation in Iran.