#FactCheck - Bangladesh Video Falsely Shared as Security Forces Action During West Bengal Elections 2026
Executive Summary
As West Bengal heads for vote counting on May 4, 2026, following the second phase of Assembly polling held on April 29, a video is being widely shared on social media. The clip shows security personnel baton-charging civilians, with users claiming it depicts force being used during the West Bengal Assembly Elections 2026. Research by CyberPeace Research Wing found that the viral claim is misleading. The video is actually from Bangladesh and is being falsely linked to the West Bengal elections to spread confusion.
Claim
A Facebook user named “Adv Mohd Salman” shared the clip on April 29, 2026, using Bengal-related hashtags and claiming that voters standing in line were beaten to influence the election outcome. The post alleged that free and fair voting rights were being suppressed.

Fact Check
To verify the claim, we closely examined the viral video. A vehicle visible in the footage had a registration number written in a non-Hindi script. Using Google Lens reverse image search, we found a matching image uploaded on Alamy on December 30, 2018. The image showed a military vehicle with the same script and registration style seen in the viral clip.
According to the description on the platform, the image was taken in Dhaka during Bangladesh’s national elections and showed Bangladeshi army personnel moving through a street near a polling station. This confirms that the viral footage is not related to the 2026 West Bengal Assembly elections.

Conclusion
Our research confirms that the video showing security personnel baton-charging civilians is from Bangladesh, not West Bengal. It is being falsely shared as footage from the 2026 West Bengal Assembly elections to mislead users.
Executive Summary:
A photo circulating on social media claims to show Rowan Atkinson, the actor famous for playing Mr. Bean, lying sick in bed. This claim is false. The image is a digitally altered picture of Mr. Barry Balderstone from Bollington, England, who died in October 2019 from advanced Parkinson’s disease. Reverse image searches and news reports confirm that the original photo is of Barry, not Rowan Atkinson. Furthermore, there are no reports of Atkinson being ill; he was recently seen attending the 2024 British Grand Prix. Thus, the viral claim is baseless and misleading.

Claims:
A viral photo shows Rowan Atkinson, aka Mr. Bean, lying in bed in a sick condition.



Fact Check:
When we received the posts, we first ran a keyword search based on the claim but found no reports supporting it. Instead, we found an interview video showing Rowan Atkinson attending an F1 race on July 7, 2024.

We then reverse-searched the viral image and found a news report containing a photo closely resembling the viral one; the T-shirt appears identical in both images.

The man in the photo is Barry Balderstone, a civil engineer from Bollington, England, who died in October 2019 of advanced Parkinson’s disease. According to the news report, Barry suffered from multiple illnesses, and his application for extensive healthcare reimbursement was rejected by the East Cheshire Clinical Commissioning Group.
Taking a cue from this, we analyzed the image with an AI image detection tool named TrueMedia, which found the image to be AI-manipulated: the original photo was altered by replacing Barry’s face with that of Rowan Atkinson, aka Mr. Bean.



Hence, it is clear that the viral image claiming to show a bedridden Rowan Atkinson is fake and misleading. Netizens should verify content before sharing it on the internet.
Conclusion:
Therefore, it can be summarized that the photo claiming to show Rowan Atkinson in a sick state is fake; it was manipulated from another man’s image. The original photo features Barry Balderstone, who was diagnosed with stage 4 Parkinson’s disease and died in 2019. In fact, Rowan Atkinson appeared perfectly healthy recently at the 2024 British Grand Prix. People should check the authenticity of such content before sharing it, so as to avoid spreading misinformation.
- Claim: A viral photo of Rowan Atkinson, aka Mr. Bean, lying in bed in a sick condition.
- Claimed on: X, Facebook
- Fact Check: Fake & Misleading

Introduction
WhatsApp has become a new platform for scams, and the number of reported WhatsApp scams is increasing daily. In the latest scam, many WhatsApp users in India have reported receiving missed calls from unknown international numbers. Worse, one does not even have to answer the call to be scammed; a missed call is sufficient.
Millions of people have switched from regular SMS to WhatsApp. Users were long accustomed to fake and marketing messages, but the scams have now evolved. Many people receive calls from different countries and wonder how the scammers obtained their numbers. Because WhatsApp calls run over VoIP networks, international calls carry no extra charges for the caller. Reportedly, around 500 million WhatsApp users have received these scam calls, mainly job scams promising part-time employment and opportunities; such calls began being reported in 2023.
People have reported missed calls from numbers with country codes such as Ethiopia (+251), Malaysia (+60), Indonesia (+62), and Vietnam (+84).
The agenda behind these calls is still unclear. In some cases, however, the scammers ask WhatsApp users for confidential information such as bank details, so users must not reveal personal information. It is also important to note that a call from a particular country code does not necessarily originate from that country; various agencies sell international numbers for WhatsApp calls.
Why has WhatsApp become a hub for scams?
People have moved on from old-fashioned SMS to WhatsApp. From schools and colleges to offices, people use WhatsApp for official work because it is easy and user-friendly, and as a result they often neglect safety measures. Users need to understand the consequences of the technology and use it with safeguards and awareness. Many people lose money and fall victim to scams on WhatsApp because they share their confidential information.
Before this international-call scam, users received calls from scammers claiming to be from KBC and announcing that the user had won a prize. The scammers then sought confidential information on the pretext of transferring the winnings, and users were defrauded. These scams have risen rapidly of late.
Safeguards users can use against these scam calls
WhatsApp’s standard response to complaints about such international calls is to “block and report.”
If you have already received such calls, the best thing you can do is report and block them right away. As a result, the same number cannot reach your phone again, and numerous identical reports may persuade WhatsApp to remove the number entirely.
WhatsApp is also working on an update allowing users to block calls from unknown numbers on the service.

Users should modify their phone’s and the app’s basic privacy settings to protect themselves from data breaches. The calls target active users of the app, but by adjusting how the account appears to others, a user can reduce the likelihood of being added to the scammers’ target lists.
Limit Privacy
Begin by adjusting WhatsApp’s ‘who can see’ settings. If your profile photo, last seen, and online status are visible to everyone, restrict them to people on your contact list only. Adjust the About and Groups options as well.
Turn on two-factor authentication
Enabling two-factor authentication on WhatsApp adds an extra layer of security to your data. In addition, the app supports biometric protection in case of theft or loss.
Active Reporting
Users should file a report as soon as they notice anything odd or any suspicious activity.
A typical question that users have is, ‘Where do the scammers acquire my phone number from?’
The answer is more complicated than one might think. Your data is retained in company databases from the moment you sign up on a website or share your phone number at a store to avail of promotional offers. Due to the lack of technological infrastructure and legislation protecting personal data, a scammer can easily obtain your information.
According to Palo Alto Networks research, India is the second most vulnerable country in the APAC region to cyberattacks and data breaches. A data protection law is essential in the face of rising scam calls and data breaches.
The Digital Personal Data Protection Bill is set to be introduced in Parliament’s monsoon session; it has the potential to protect personal data and thereby help curb such scams.
Conclusion
Several people have posted on Twitter about repeatedly receiving fake WhatsApp calls from international numbers. WhatsApp encrypts calls and messages, making callers difficult to trace, and scammers appear to be taking advantage of this to defraud users. If you receive a WhatsApp call from any of the above ISD codes, we strongly advise you not to answer it and to block the number so the bad actors cannot call you again. “Report and block immediately” is the response WhatsApp has been giving complainants.

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain credibility, it has become important to distinguish genuine material from the manipulated or fabricated content widely shared on social media platforms. AI-generated voice clones and videos are proliferating on the Internet: sophisticated AI algorithms can manipulate or generate synthetic multimedia such as audio, video, and images, making it increasingly difficult to tell genuine, altered, and fake content apart. McAfee Corp., a global leader in online protection, announced an AI-powered deepfake audio detection technology, Project Mockingbird, at the Consumer Electronics Show 2024. It is intended to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation.
What is voice cloning?
Voice cloning uses deepfake technology to generate audio that closely resembles a person’s real voice but is, in actuality, entirely synthetic.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for everything from smart devices to robotics and gaming, but cybercriminals are misusing it for nefarious ends, including voice cloning for cyber fraud. AI can manipulate a person’s lip movements so they appear to say something they never said, enable identity fraud by impersonating someone during remote verification with a bank, and make traditional hacking more convenient. This misuse of advanced technologies has increased both the speed and the volume of cyberattacks in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere. The approach combines the following elements:
- Context-Based Recognition: The model assesses audio components within the overall context of the clip. Evaluating this surrounding information improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Examination: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human speech behaviour. Analysing speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise audio components according to established traits of human communication. The technology distinguishes real voices from AI-synthesised ones by comparing them against an extensive library of genuine human speech features.
- Accuracy Outcomes: McAfee Labs’ deepfake voice recognition solution boasts an impressive 90 per cent success rate, achieved by combining the behavioural, contextual, and classification models described above. By examining audio in the larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system identifies discrepancies that may signal AI-generated audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technology protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
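The classification idea in the list above can be sketched in miniature. The following is a toy, purely hypothetical example, not McAfee's actual (proprietary) model: it uses spectral flatness as a stand-in acoustic feature and a simple hand-set threshold as the classifier, assuming for illustration that strongly tonal audio is flagged as "synthetic-like".

```python
# Purely illustrative sketch -- NOT McAfee's Project Mockingbird model.
# It demonstrates the general "classification model" idea: extract an
# acoustic feature and compare it against a rule derived from traits
# of natural speech.
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Close to 1.0 for noise-like audio, close to 0.0 for pure tones."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def classify(signal: np.ndarray, threshold: float = 0.3) -> str:
    # Hypothetical decision rule: strongly tonal audio is flagged.
    if spectral_flatness(signal) < threshold:
        return "synthetic-like"
    return "natural-like"

# Two toy signals standing in for audio clips (8 kHz, 1 second).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
pure_tone = np.sin(2 * np.pi * 440 * t)   # highly tonal signal
noise = rng.normal(size=8000)             # noise-like signal

print(classify(pure_tone))   # flagged: very low spectral flatness
print(classify(noise))       # passes: high spectral flatness
```

Real detectors, of course, combine many such features with trained classifiers and contextual signals rather than a single hand-set threshold; this sketch only illustrates the feature-then-classify structure.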
Conclusion
It is important to foster the ethical and responsible use of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about AI’s appropriate role and boundaries. McAfee’s Project Mockingbird employs AI-driven deepfake audio detection to guard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724