Navigating the Path to CyberPeace: Insights and Strategies
Featured #factCheck Blogs

Executive Summary
Amid the ongoing tensions involving the United States, Israel, and Iran, a video of a cargo ship engulfed in flames is being widely shared across social media platforms. The clip shows a vessel burning intensely at sea, with users claiming that Iran targeted the ship with a drone for attempting to cross the Strait of Hormuz without permission. Some users have also claimed that the destroyed vessel was a Pakistani-flagged oil tanker hit by Iranian missiles. However, research by CyberPeace found the claim to be false. Our verification also reveals that the viral video is being misrepresented.
Claim
Social media users, including an X (formerly Twitter) account named “IranDefenceForce,” shared the video claiming that Iran targeted an oil tanker in the Strait of Hormuz for allegedly violating restrictions.

Fact Check
A keyword-based news search led us to multiple credible reports mentioning a statement by Iran’s Foreign Minister Abbas Araghchi. According to reports, Iran had allowed ships from “friendly countries” including India, China, Russia, Iraq, and Pakistan to pass through the Strait of Hormuz.

A March 26, 2026 report by The Hindu stated that Araghchi also emphasized Iran’s assertion of sovereignty over the strategic waterway connecting the Persian Gulf and the Gulf of Oman. The same statement was also shared via the official X handle of the Iranian Consulate in Mumbai. During a frame-by-frame analysis of the viral video, we noticed the word “SAFEEN” written on a part of the ship. Using this clue, we conducted a targeted news search and found a report by Reuters dated March 4, 2026.

According to the report, a Malta-flagged container ship named Safeen Prestige was damaged in an attack while heading toward the Strait of Hormuz. Shipping sources cited in the report stated that the vessel was struck around 1109 GMT while sailing eastward, approximately two nautical miles north of Oman. The ship had reportedly departed from Sharjah Port in the United Arab Emirates but was damaged before reaching its destination. Its last known location was in the Persian Gulf. Additionally, earlier this month, another cargo vessel named Mayuri Naree was also attacked near Iran’s Qeshm Island. As per Reuters, an explosion caused a fire in the engine room, after which 20 crew members were rescued by the Omani navy, while three remained missing.
Conclusion
The viral video does not show Iran targeting a Pakistani oil tanker for violating restrictions in the Strait of Hormuz. In reality, the clip features the Malta-flagged container ship Safeen Prestige, which was damaged in an unidentified attack in the Persian Gulf. The claim being circulated on social media is misleading.

Executive Summary
A video is being widely shared on social media showing a police officer driving an e-rickshaw, while two other policemen are seen in the back seat. Users sharing the clip claim that, due to a shortage of petrol, this is a new initiative by the Uttar Pradesh Police. However, research by CyberPeace found the viral claim to be false. Our research also confirms that the video is not real but AI-generated.
Claim
An Instagram user shared the viral video claiming that due to fuel shortages, Uttar Pradesh Police has started patrolling using e-rickshaws.
- Post link: https://www.instagram.com/reel/DWepKWXAeiE/
- Archive: https://archive.ph/QBNXs

Fact Check
To verify the claim, we first conducted a keyword search on Google but found no credible media reports supporting this claim.

Next, we extracted keyframes from the viral video and performed a reverse image search using Google Lens. During this process, we found the same video uploaded on an Instagram channel on March 28, 2026. The uploader clearly mentioned that the video was created purely for entertainment purposes.
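Keyframe extraction of this kind can be automated. Below is a minimal, dependency-free sketch of the core idea, dropping near-duplicate frames so only visually distinct keyframes are submitted to reverse image search. The tiny nested-list "frames", hash size, and threshold are illustrative assumptions; a real pipeline would read frames with ffmpeg or OpenCV.

```python
def average_hash(frame):
    """Simple average hash: one bit per pixel, set when the pixel is
    brighter than the frame's mean. Near-duplicate frames yield
    near-identical hashes."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def keyframes(frames, threshold=2):
    """Keep a frame's index only if its hash differs enough from the
    last kept frame, i.e. the scene has visibly changed."""
    kept, last = [], None
    for i, f in enumerate(frames):
        h = average_hash(f)
        if last is None or hamming(h, last) > threshold:
            kept.append(i)
            last = h
    return kept

# Three synthetic 2x2 grayscale frames: the second is a near-copy of the first.
f1 = [[10, 200], [10, 200]]
f2 = [[12, 198], [11, 199]]   # visually identical to f1, should be skipped
f3 = [[200, 10], [200, 10]]   # scene change, should be kept
print(keyframes([f1, f2, f3]))  # → [0, 2]
```

The distinct keyframes selected this way can then be uploaded to a reverse image search service such as Google Lens.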

We further analyzed the video using AI detection tools. When scanned with Hive Moderation, the results indicated that the video is approximately 94% AI-generated.

In the next step, we also tested the clip using DeepAI. According to its analysis, the video is about 97% AI-generated.

Conclusion
Our research clearly shows that the viral video is not authentic. It is an AI-generated clip created for entertainment purposes, and the claim that Uttar Pradesh Police has started e-rickshaw patrolling due to petrol shortage is false.

Executive Summary:
A misleading video of a child covered in ash is circulating as alleged evidence of attacks against Hindu minorities in Bangladesh. However, the investigation revealed that the video is actually from Gaza, Palestine, and was filmed following an Israeli airstrike in July 2024. The claim linking the video to Bangladesh is false and misleading.

Claims:
A viral video claims to show a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video, which led us to an X post by Quds News Network. The post identified the video as footage from Gaza, Palestine, specifically capturing the aftermath of an Israeli airstrike on the Nuseirat refugee camp in July 2024.
The caption of the post reads, “Journalist Hani Mahmoud reports on the deadly Israeli attack yesterday which targeted a UN school in Nuseirat, killing at least 17 people who were sheltering inside and injuring many more.”

To verify further, we examined the footage and noticed the watermark of the Al Jazeera news media. We found the same clip posted on its Instagram account on 14 July 2024, where it was confirmed that the child in the video had survived a massacre caused by the Israeli airstrike on a school shelter in Gaza.

Additionally, we found the same video uploaded to CBS News' YouTube channel, where it was clearly captioned as "Video captures aftermath of Israeli airstrike in Gaza", further confirming its true origin.

We found no credible reports or evidence linking this video to any incident in Bangladesh, which clearly shows that the viral video was falsely attributed to Bangladesh.
Conclusion:
The video circulating on social media showing a child covered in ash as evidence of attacks against Hindu minorities is false and misleading. The investigation shows that the video originated in Gaza, Palestine, and documents the aftermath of an Israeli airstrike in July 2024.
- Claims: A video shows a child in Bangladesh covered in ash as evidence of attacks on Hindu minorities.
- Claimed by: Facebook
- Fact Check: False & Misleading

Executive Summary:
A viral picture showing UK police officers bowing to a group of Muslims has sparked debate and discussion on social media. The investigation by the CyberPeace Research team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
A reverse image search on the viral image did not lead to any credible news source or original post confirming its authenticity. In our image analysis, we found a number of anomalies typical of AI-generated images, particularly in the officers' uniforms and facial expressions. The shadows and reflections on the uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.

We then analysed the image using an AI detection tool named True Media. The tool indicated that the image was highly likely to have been generated by AI.



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading
Executive Summary:
Cyber incidents evolve with time; they are designed to attract and lure people through social networking sites and messaging services. Recently, a spate of messages has been circulating alleging that TRAI is offering "3 months free recharge with free voice calls and internet for 4g/5g with 200 GB free data". These messages display the TRAI logo alongside attractive offers to trick users into revealing their personal details. This blog discusses how this free mobile recharge scheme functions, its methods, and guidelines on how to avoid such fake schemes. It emphasizes the importance of vigilance and verification when receiving any links, the need to report suspicious activity, and the value of educating others to prevent identity theft and protect personal information.
Claim:
The message circulates an enticing offer: free mobile recharge for 3 months, providing unlimited free voice calls and 200GB of 4G/5G data, displayed alongside the TRAI logo. The key characteristics of the false claim are:
- Official Branding: The TRAI logo serves as a deceptive facade of credibility.
- Unrealistic Offers: A free recharge for an extended, indefinite period is classic fraudsters' bait.
- Urgency and Exclusivity: The offer is valid only for a limited time, creating urgency that pushes recipients to accept it without verification.
The Deceptive Scheme:
Organized systematically, the fraudulent campaign usually proceeds in several steps, all of which aim at extracting the victim’s personal data. Here’s a breakdown of the scheme:
1. Initial Contact: Messages or calls reach users' inboxes or phone numbers through social media applications such as WhatsApp, or through text messages. These messages claim that the user has been chosen for a special offer from TRAI, which piques the user's interest.
2. Information Request: To claim the purported offer, users are directed to a website or asked to reply with personal details, including:
- Phone number
- State of residence
- SIM provider details
These details are valuable to the scammers, who harvest the information to conduct identity theft or sell it to others on the Dark Web.
3. Fake Confirmation: After the user provides the information, a congratulatory message appears stating that their phone number is eligible for the offer. The user is then pressed to forward the message to many phone numbers via WhatsApp in order to claim it.
4. Pressure Tactics: The message often creates a sense of time pressure or fear to psychologically push the user into handing over information. For example, users are warned that if they do not "act now", they will lose their mobile service.
Analyzing the Fraudulent Campaign
The TRAI fraudulent recharge scheme case depicts that social engineering is used in cyber crimes. Here are some key aspects that characterize this campaign:
- Sophisticated Social Engineering
Scammers exploit users' confidence in official bodies such as TRAI. By using the official TRAI logo and official-sounding language, they can deceive even cautious people.
- Viral Spread
The user is pressed to share the message with friends and groups, which is an effective strategy for spreading the scam: it not only propagates the fraudulent message but also harvests the details of additional victims.
- Technical Analysis

- Domain Name: SGOFF[.]CYOU
- Registry Domain ID: D472308342-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-07-24T18:50:48.0Z
- Creation Date: 2024-07-19T18:48:44.0Z
- Registry Expiry Date: 2025-07-19T23:59:59.0Z
- Registrar: West263 International Limited
- Registrar IANA ID: 1915
- Registrant State/Province: Anhui
- Registrant Country: CN
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
- DNSSEC: unsigned
The scammers route the site through Cloudflare Inc. to obscure its hosting. TRAI's genuine website uses a long-established domain, whereas this URL was registered only days before the messages began circulating, which strongly indicates that the link is a scam.
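Domain age is a quick, scriptable heuristic for spotting such scams. A minimal sketch that parses WHOIS-style timestamps (as shown in the record above) and flags recently registered domains; the 180-day threshold and the choice of the record's "Updated Date" as an observation point are illustrative assumptions:

```python
from datetime import datetime, timezone

def parse_whois_ts(ts: str) -> datetime:
    """Parse a WHOIS timestamp such as '2024-07-19T18:48:44.0Z'."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
        tzinfo=timezone.utc
    )

def is_recently_registered(creation: str, observed: str,
                           max_age_days: int = 180) -> bool:
    """Flag domains created shortly before the scam was observed.
    Legitimate institutional domains are typically years old, while
    throwaway scam domains are days or weeks old."""
    age = parse_whois_ts(observed) - parse_whois_ts(creation)
    return age.days < max_age_days

# WHOIS data from the scam domain listed above:
created = "2024-07-19T18:48:44.0Z"
seen = "2024-07-24T18:50:48.0Z"  # record's Updated Date as observation point
print(is_recently_registered(created, seen))  # → True (only 5 days old)
```

In practice the creation date would come from a live WHOIS lookup; here it is taken directly from the record quoted above.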

The graph indicates that some of the communicated files and websites are malicious.
CyberPeace Advisory and Best Practice:
In light of the growing threat posed by such scams, the Research Wing of CyberPeace recommends the following best practices to help users protect themselves:
1. Verify Communications: Always visit the organization's official website or call its official contact numbers to speak to customer care and confirm any offer.
2. Do Not Share Personal Information: No genuine organization will call people asking for personal details. Be cautious when dealing with such offers and do not provide information that could enable identity theft.
3. Report Fraudulent Activity: If you receive suspicious calls or messages, report them to the National Cyber Crime Reporting Portal at www.cybercrime.gov.in or call 1930. Reporting such scams helps the authorities track and combat them.
4. Educate Others: Raise awareness among friends and family by sharing information about such scams; education helps prevent others from falling prey to fraudulent schemes.
5. Use Reliable Resources: Always refer to official sources or websites for any offers or promotions.
Conclusion:
The three-month free recharge scheme bearing the TRAI logo is a fraudulent scam. There is no mention of any such scheme by TRAI or on its official website. Though the scheme looks attractive, it is deceptive: the scammers use it to collect individuals' personal details. Before clicking any link, verify the authenticity of the information, and report such incidents to help spread awareness. Stay safe and stay vigilant.

Executive Summary:
A video circulating online alleges that people chanted "India India" as Ohio Senator J.D. Vance met attendees at the Republican National Convention (RNC). This claim is incorrect. The CyberPeace Research team's investigation showed that the video was digitally altered to add the chanting. The unaltered video, shared by The Wall Street Journal and confirmed via the YouTube channel of Forbes Breaking News, features only background music as J.D. Vance and his wife Usha Vance greeted the gathering. The claim that participants chanted "India India" is therefore false.

Claims:
A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).


Fact Check:
Upon receiving the posts, we did a keyword search related to the viral video. We found a video uploaded by The Wall Street Journal on July 16, titled "Watch: J.D. Vance Is Nominated as Vice Presidential Nominee at the RNC". At timestamp 0:49 no "India-India" chant can be heard, whereas in the viral video it is clearly audible.
We also found the video on the YouTube channel of Forbes Breaking News. At timestamp 3:00:58 the same clip as the viral video appears, but again no "India-India" chant can be heard.

Hence, the claim made in the viral video is false and misleading.
Conclusion:
The viral video claiming to show "India-India" chants during Ohio Senator J.D. Vance's greeting at the Republican National Convention is altered. The original video, confirmed by sources including “The Wall Street Journal” and “Forbes Breaking News” features different music without any such chants. Therefore, the claim is false and misleading.
- Claim: A video spreading on social media shows attendees chanting "India-India" as Ohio Senator J.D. Vance and his wife, Usha Vance greet them at the Republican National Convention (RNC).
- Claimed on: X
- Fact Check: Fake & Misleading

About Customs Scam:
The Customs Scam is a type of fraud in which scammers pretend to be from a well-known courier company (DTDC, etc.), the customs department, or another government entity. They try to deceive their targets into transferring money to resolve fake customs-related issues. The Research Wing at CyberPeace, along with the Research Wing of Autobot Infosec Private Ltd., investigated this case through Open Source Intelligence methods and undercover interactions with the scammers, and uncovered credible information.
Case Study:
The victim receives a phone call from someone posing as an employee of a well-known courier company (DTDC, etc.), or in some cases as a customs officer, claiming that a parcel in the victim's name has been seized because of inappropriate content. The scammer provides an employee ID and an FIR number to make the case appear authentic, and shows empathy towards the victim. The scammer then pretends to help by connecting the victim with a "police officer" for further action. This so-called officer feigns transparency: he asks the victim to join a Skype video call, even allowing time to install the app, and instructs the victim to connect to a Skype ID behind which the scammers have staged a fake police station environment. He claims to have contacted headquarters and found the victim's phone number linked to numerous illegal activities, creating panic. The scammers then ask for personal details such as home and office addresses, Aadhaar and PAN card numbers, and screenshots of bank accounts showing the available balance, all for the sake of the so-called investigation. Sometimes they also demand a large payment to resolve the issue, creating fake urgency to pressure the victim into paying. The victim is sternly warned not to contact any other police officials or professionals, as doing so would supposedly only lead to more trouble.
Analysis & Findings:
After receiving complaints of this kind from multiple sources, we analysed the phone numbers from which the calls originated, checking each for alias names, location, telecom operator, and more. We then verified whether each number was linked to any social media account on platforms such as Google, Facebook, WhatsApp, Twitter, Instagram, and LinkedIn, or on classified platforms such as Locanto.
- Phone Number Analysis: Each phone number looks authentic, cleverly concealing the fraud. Scammers sometimes use virtual or temporary phone numbers for such scams. In this case the victim was from Delhi, so the scammers posed as officers of a Delhi police station, while the phone numbers belonged to a different location.
- Undercover Interactions: Our interactions with the suspects revealed their chilling modus operandi. These scammers are masters of psychological manipulation: they threaten victims while acting as if they were genuine law enforcement officers.
- Exploitation Tactics: They target unsuspecting individuals and create fear and fake urgency among the targets to extract sensitive information such as Aadhaar, PAN card and bank account details.
- Fraud Execution: The scammers demand payment to resolve the fabricated issue and make use of the stolen personally identifiable information. Once the victim transfers the money, the fraudsters cut off all communication.
- Outcome for Victims: The scammers act so convincingly and frame the incident so realistically that victims do not realise they are trapped in a scam. They suffer severe financial loss and psychological trauma.
Recommendations:
- Verify Identities: It is important to verify the identity of any individual, especially if they demand personal information or payment. Contact the official agency directly using verified contact details to confirm the authenticity of the communication.
- Education on Personal Information: Educate people to protect personal identity numbers such as Aadhaar and PAN card numbers, and emphasise the dangers of sharing such data over the phone.
- Report Suspicious Activity: Promptly reporting suspicious phone calls or messages to the relevant authorities and consumer protection agencies helps track down scammers and prevents others from falling victim. Report to https://cybercrime.gov.in or reach out to helpline@cyberpeace.net for further assistance.
- Enhanced Cybersecurity Measures: Implement robust cybersecurity measures to detect and mitigate phishing attempts and fraudulent activities. This includes monitoring and blocking suspicious phone numbers and IP addresses associated with scams.
Conclusion:
In the Customs Scam, fraudsters pretend to be customs or other government officials and sometimes threaten their targets to extract details such as Aadhaar and PAN card numbers and screenshots of bank accounts showing the available balance. The phone numbers used in these scams were analysed for suspicious activity, and all of them appeared authentic, concealing the fraudulent operation behind them. Our interactions with the scammers revealed that they create fear and urgency in their targets, act as if they are genuine officers, and demand money to resolve the fabricated issue. It is important to stay vigilant and never share personal or financial information. When facing such scams, report them and spread awareness among others.

Executive Summary:
A newly observed trend in today's threat landscape is the shrinking window between the public release of Proof-of-Concept (PoC) exploits and their use in attacks: where threat actors once took an average of one hour and seven minutes to leverage a PoC after it went public, the time is now at a record low of 22 minutes. This incredibly fast exploitation leaves organizations' IT departments very little time to close the gaps before they are exploited. Cloudflare's Application Security report shows that attacks often outpace the rate at which defenders can develop countermeasures such as WAF rules and software patches. In one case, Cloudflare observed an attacker using a PoC-based exploit a mere 22 minutes after its release, leaving almost no remediation window.
Despite the constant growth in the number of vulnerabilities across applications and systems, the share of exploited vulnerabilities accompanied by some level of public exploit or PoC code has remained relatively stable over the past several years, fluctuating around 50%. Of the vulnerabilities with publicly known exploit code, 41% were first attacked as zero-days, while of those with no known exploit code, 84% were first attacked as zero-days.
Modus Operandi:
The modus operandi of the attack involving the rapid weaponization of proof-of-concept (PoC) exploits is characterized by the following steps:
- Vulnerability Identification: Threat actors identify a vulnerability in a system's software or hardware; this may be a coding error, a design flaw, or a misconfiguration. Identification is typically done with vulnerability scanners and manual testing procedures.
- Vulnerability Analysis: Once the vulnerability is identified, the attackers study how it operates to determine when and how it can be triggered and what the consequences will be. This involves analyzing the PoC code or the system itself to work out the sequence of steps that leads to exploitation.
- Exploit Code Development: Understanding the weakness, the attackers develop a small program or script, the PoC, that targets the identified vulnerability and triggers it in a controlled manner. This code demonstrates a concrete impact, such as unauthorized access or data tampering.
- Public Disclosure and Weaponization: The PoC exploit is released, frequently shortly after the vulnerability is announced to the public, giving attackers a window to exploit it before the software vendor releases a patch. To illustrate, Cloudflare spotted an attacker using a PoC-based exploit only 22 minutes after publication.
- Attack Execution: The attackers then use the weaponized PoC exploit against systems known to be vulnerable to it, attempting actions such as remote code execution and unauthorized access. This often happens far faster than defenders can put proper security mechanisms in place, such as WAF rules or software fixes.
- Targeted Operations: Sometimes exploitation is part of a planned operation in which the attackers are selective about the systems or organizations they attack. For example, CVE-2022-47966 in ManageEngine software was exploited in an espionage campaign, where the attackers used the vulnerability to install espionage-related tools and malware.
Precautions: Mitigation
Following are the mitigating measures against the PoC Exploits:
1. Fast Patching and New Vulnerability Handling
- Introduce proper patching procedures to quickly apply released security updates and address disclosed vulnerabilities.
- Prioritize patching vulnerabilities that have publicly available PoC exploits, since these are often exploited almost immediately.
- Frequently check for new vulnerability disclosures and PoC releases, and keep an incident response plan prepared for this purpose.
2. Leverage AI-Powered Security Tools
- Employ AI-powered security tools that can rapidly generate protection rules and signatures as attackers accelerate the weaponization of PoC exploits.
- Expand the use of AI-driven endpoint detection and response (EDR) applications to quickly detect and mitigate exploitation attempts.
- Integrate AI-based SIEM tools to detect and analyze indicators of compromise, enabling faster reaction.
3. Network Segmentation and Hardening
- Use strong network segmentation to limit an attacker's lateral movement and contain the impact of successful attacks.
- Harden any systems accessible from the internet, and services or protocols such as RDP, CIFS, or Active Directory.
- Limit the use of native scripting tools as much as possible, since attackers may exploit them.
4. Vulnerability Disclosure and PoC Management
- Report bugs and PoC exploits to vendors under a mutually understood disclosure timeline to ensure fast response and mitigation.
- Use mechanisms such as digital signing and encryption when managing and distributing PoC exploits, to prevent access by unauthorized persons.
- Keep PoC exploits simple and self-contained, with clear and meaningful variable and function names, to reduce time spent on triage and remediation.
5. Risk Assessment and Response to Incidents
- Continuously monitor the environment for signs of compromise and exploitation attempts.
- Regularly detect, analyze, and respond to threats that use PoC exploits against the system and its components.
- Communicate regularly with security researchers and vendors to stay informed about emerging threats and how to prevent them.
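The PoC-first patching priority described in point 1 can be sketched as a small triage function. This is an illustrative sketch, not part of the Cloudflare report: the fields, ordering rule, and CVE identifiers below are assumptions chosen to show the idea that a public PoC outweighs raw severity, because such vulnerabilities are weaponized within minutes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Vuln:
    cve_id: str
    cvss: float        # base severity score, 0-10
    poc_public: bool   # True if a public PoC exploit exists
    disclosed: date    # public disclosure date

def patch_order(vulns):
    """Order vulnerabilities for patching: public-PoC vulnerabilities
    first (they risk near-immediate exploitation), then by descending
    severity, then oldest disclosure first."""
    return sorted(
        vulns,
        key=lambda v: (not v.poc_public, -v.cvss, v.disclosed),
    )

# Hypothetical backlog: CVE-A is most severe but has no public PoC.
backlog = [
    Vuln("CVE-A", 9.8, False, date(2024, 7, 1)),
    Vuln("CVE-B", 7.5, True,  date(2024, 7, 10)),
    Vuln("CVE-C", 9.1, True,  date(2024, 7, 12)),
]
print([v.cve_id for v in patch_order(backlog)])
# → ['CVE-C', 'CVE-B', 'CVE-A']: PoC-backed CVEs jump ahead of the
#   higher-severity CVE-A that has no public exploit code yet.
```

A real triage system would pull these fields from a vulnerability feed and combine them with asset exposure, but the ordering principle stays the same.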
Conclusion:
The rapid weaponization of Proof-of-Concept (PoC) exploits is one of the fastest-growing threats in cybersecurity today. Security teams must patch quickly, incorporate AI into their security tooling, segment their networks effectively, and closely follow vulnerability announcements. A strong incident response plan further helps in handling such threats. By applying the measures above, organizations can reduce both the speed advantage of weaponized PoC exploits and the probability of successful cyber attacks.
Reference:
https://www.mayrhofer.eu.org/post/vulnerability-disclosure-is-positive/
https://www.uptycs.com/blog/new-poc-exploit-backdoor-malware
https://www.balbix.com/insights/attack-vectors-and-breach-methods/
https://blog.cloudflare.com/application-security-report-2024-update

Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been identified as digitally manipulated. The pictures making the rounds on social media were produced with AI-manipulation tools; the original image, found on several websites, shows no smiling agents. The incident occurred when Thomas Matthew Crooks fired at Trump at an event in Butler, PA on July 13, 2024, killing one attendee and critically injuring two others. The Secret Service stopped the shooter, and the circulating photos with faked smiles have stirred up suspicion. The CyberPeace Research Team verified and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for any credible source supporting the claim. We found several articles and images covering the incident, but the images in them were different.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, contentatscale AI image detection, which also found the image to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources and the detection of AI manipulation, we concluded that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling when protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image found on CNN Media shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Thread
- Fact Check: Fake & Misleading

Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. On thorough investigation, these claims were debunked and found to be false. No credible record of such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and spectators' shoes appearing fused together. AI content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated clips. These viral videos are therefore conclusively identified as AI-generated, not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow or the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and found certain anomalies that are typical of AI-generated content.



Taking a cue from this, we ran all the videos through the AI detection tool True Media, which found the audio of the first video to be AI-generated. We then divided the video into keyframes, and the tool found the depicted imagery to be AI-generated as well.
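The keyframe step above amounts to sampling frames at regular intervals so each one can be fed to a detector or a reverse image search. A minimal sketch of that sampling logic follows; the frame count, fps, and interval are hypothetical, and actual frame decoding would be done with a tool such as OpenCV or ffmpeg.

```python
# Sketch of keyframe sampling: choose evenly spaced frame indices so each
# sampled frame can be run through an AI detector or reverse image search.

def keyframe_indices(total_frames, fps, every_seconds=1.0):
    """Return frame indices sampled roughly every `every_seconds` seconds."""
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))

# Hypothetical 10-second clip at 30 fps, sampled every 2 seconds:
indices = keyframe_indices(total_frames=300, fps=30, every_seconds=2.0)
print(indices)  # prints: [0, 60, 120, 180, 240]
```

Sampling by fixed interval keeps the number of frames to review small while still covering the whole clip; a denser interval can be used around segments the detector flags as suspicious.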


We investigated the second video in the same way, dividing it into keyframes and analyzing them with the AI detection tool True Media.

The tool flagged the video as suspicious, so we analyzed its frames.

The detection tool found the frames to be AI-generated, confirming that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool also flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, confirming that it, too, is AI-generated. Hence, the claims made in all three videos are fake and misleading.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, a tiger, or a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading

Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading, as it was created using Artificial Intelligence (AI), as indicated by its graphical elements, a watermark ("IN.VISUALART"), and basic anomalies. Although several news outlets have reported on a real incident, the viral image was not taken during the actual event. This emphasizes the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analyzed the image and found certain discrepancies commonly seen in AI-generated images. The watermark "IN.VISUALART" is clearly visible, and the elderly woman's hand looks anatomically odd.

We then ran the image through two AI image detection tools, True Media and the contentatscale AI detector. Both found potential AI manipulation in the image.



We then ran a keyword search for news related to the viral photo. Although we found relevant reports, none of them provided a credible source for the image.

The photograph shared across the internet has no credible source. Hence, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news outlets, the incident did occur, but the photo circulated to depict it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading
Executive Summary:
Footage claiming to show the Afghanistan cricket team singing 'Vande Mataram' after India's triumph in the ICC T20 World Cup 2024 surfaced online. The CyberPeace Research team carried out thorough research to uncover the truth about the viral video. The original clip was posted on the X platform by Afghan cricketer Mohammad Nabi on October 23, 2023, showing the Afghan players chanting 'Allah-hu Akbar' after their ODI World Cup victory against Pakistan. This debunks the assertion in the viral video that the players were chanting 'Vande Mataram'.

Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.

Fact Check:
Upon receiving the posts, we analyzed the video and found inconsistencies, such as mismatched lip sync.
We checked the video with the AI audio detection tool True Media, which rated the audio as 95% likely AI-generated, deepening our suspicion about the video's authenticity.


For further verification, we divided the video into keyframes and reverse-searched one of the frames to find credible sources. This led us to the X account of Afghan cricketer Mohammad Nabi, where he had uploaded the same video on 23 October 2023 with the caption, “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people”.

In the original video, the audio differs from the viral clip: the Afghan players can be heard chanting “Allah-hu Akbar” after their victory against Pakistan. They were not chanting “Vande Mataram” after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and misrepresents the actual context. We have previously debunked similar AI voice-alteration videos. Netizens must be careful before believing such misleading information.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The video was created from the original by manipulating the audio. The original video, of the Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar", was posted on the official X account of Afghan cricketer Mohammad Nabi. Thus, the information is fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song is being circulated on social media. After analyzing its origin, the CyberPeace Research Team discovered that the video was altered and its audio edited. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022, showing Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the viral video, the song in the original footage is not in Bhojpuri. The viral video was cropped from a part of Aguero’s upload, and its audio was replaced with the Bhojpuri song. Therefore, the claim that the Argentine team danced to a Bhojpuri song is misleading.

Claims:
A video shows the Argentina football team dancing to a Bhojpuri song after their victory.


Fact Check:
On receiving these posts, we split the video into frames, performed a reverse image search on one of them, and found a video uploaded to the Sky Sports website on 19 December 2022.

We found that this is the same clip as the viral video, but the accompanying audio differs. Upon further analysis, we also found a live video uploaded by Argentine footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video was clipped from this live video, and the music playing in it is not a Bhojpuri song.

Thus, the claim circulating on social media that the Argentina football team danced to a Bhojpuri song is false and misleading. People should always verify authenticity before sharing such content.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip celebrating their 2022 FIFA World Cup victory, with the song altered to include a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap appears in several other photos on Acharya's Instagram account, where Acharya expressed gratitude for the support of the Muslim community. Thus, the image claimed to show a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.

Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.


Fact Check:
Upon receiving the posts, we reverse-searched the image to find credible sources and found a photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.

The viral photo is a digitally altered version of this original, posted on social media to mislead. We also found several other photos featuring the same man in the skullcap.

We also checked the viral image for AI fabrication using the “content@scale” AI image detection tool, which found the image to be 95% AI-manipulated.

For further validation, we checked with another detection tool, the “isitai” image detector, which found the image to contain 38.50% AI content, further indicating manipulation. These results confirm that the image does not support the claim made. Hence, the viral image is fake and misleading.

Conclusion:
The lack of a credible source and the detection of AI manipulation show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. The image has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading