#FactCheck
Executive Summary:
A worrying trend in today's threat landscape is the speed at which attackers weaponize Proof-of-Concept (PoC) exploits: where threat actors previously took an average of one hour and seven minutes to leverage a PoC exploit after it went public, that window is now at a record low of 22 minutes. Such rapid exploitation leaves organizations' IT departments very little time to remediate issues before they are attacked. Cloudflare's Application Security report shows that exploitation activity frequently outpaces the speed at which defenders create countermeasures such as WAF rules and software patches. In one case, Cloudflare observed an attacker using a PoC-based exploit a mere 22 minutes after its release, leaving almost no remediation window.
Despite the steady growth in the number of vulnerabilities across applications and systems, the share of exploited vulnerabilities accompanied by some form of public exploit or PoC code has remained relatively stable over the past several years, fluctuating around 50%. Of the vulnerabilities with publicly known exploit code, 41% were first attacked as zero-days, while of those with no known exploit code, 84% were first attacked as zero-days.
Modus Operandi:
The modus operandi of the attack involving the rapid weaponization of proof-of-concept (PoC) exploits is characterized by the following steps:
- Vulnerability Identification: Threat actors first identify a weakness in a system's software or hardware; this may be a coding error, a design flaw, or a misconfiguration. Identification is typically done with vulnerability scanners and manual testing procedures.
- Vulnerability Analysis: Once the vulnerability is identified, attackers study how it works to determine when and how it can be triggered and what the consequences will be. This involves analyzing the PoC code or the affected system to understand the exact sequence of conditions that leads to exploitation.
- Exploit Code Development: With this understanding, attackers develop a small program or script, the PoC, that targets the identified vulnerability exclusively and triggers it in a controlled manner. This code is meant to demonstrate a specific impact, such as unauthorized access or alteration of data.
- Public Disclosure and Weaponization: The PoC exploit is then released, frequently shortly after the vulnerability itself has been announced to the public. This gives attackers a head start to exploit the flaw while the software developer is still preparing a patch; Cloudflare, for example, spotted an attacker using a PoC-based exploit only 22 minutes after its publication.
- Attack Execution: Attackers then use the weaponized PoC exploit against systems known to be vulnerable, attempting actions such as remote code execution and unauthorized access. This often happens faster than defenders can put proper security mechanisms, such as WAF rules or software fixes, in place.
- Targeted Operations: In some cases the exploitation is part of a planned operation in which attackers are selective about which systems or organizations to target. For example, exploitation of CVE-2022-47966 in ManageEngine software was used in an espionage campaign, with attackers leveraging the vulnerability to install espionage-related tools and malware.
Precautions and Mitigation:
The following measures help mitigate the risk from rapidly weaponized PoC exploits:
1. Rapid Patching and Vulnerability Handling
- Establish patching procedures that quickly apply security updates for newly disclosed vulnerabilities.
- Prioritize patching vulnerabilities for which PoC exploits are publicly available, since these risk being exploited almost immediately.
- Continuously monitor new vulnerability disclosures and PoC releases, and keep an incident response plan ready for them (a minimal monitoring sketch follows this list).
2. Leverage AI-Powered Security Tools
- Employ security tools that can automatically generate protection rules and signatures as attackers accelerate the weaponization of PoC exploits.
- Expand the use of artificial intelligence (AI)-driven endpoint detection and response (EDR) tools to quickly detect and contain exploitation attempts.
- Integrate AI-based SIEM tools to detect and analyze indicators of compromise and enable faster reaction.
3. Network Segmentation and Hardening
- Use strong network segmentation to restrict lateral movement and limit the impact of successful attacks.
- Harden all internet-facing assets and restrict exposure of services and protocols such as RDP, CIFS, and Active Directory.
- Limit the use of native scripting tools as much as possible, since attackers frequently abuse them.
4. Vulnerability Disclosure and PoC Management
- Report bugs and PoC exploits to vendors and agree on coordinated disclosure timelines so that response and mitigation can happen quickly.
- Use mechanisms such as digital signing and encryption when managing and distributing PoC exploits so that they cannot be accessed by unauthorized parties.
- PoC exploits should be simple and self-contained, with clear and meaningful variable and function names, to reduce time spent on triage and remediation.
5. Risk Assessment and Incident Response
- Continuously monitor the environment for signs of compromise and exploitation attempts.
- Regularly detect, analyze, and respond to threats that use PoC exploits against the system and its components.
- Communicate regularly with security researchers and vendors to stay informed about current threats and how to prevent them.
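As a minimal illustration of the monitoring step in point 1 above, the sketch below polls the CISA Known Exploited Vulnerabilities (KEV) catalog and flags entries added recently for a set of watched vendors. The feed URL shown is the one CISA publishes for the KEV catalog (verify the current location before use), while the watched vendor list and the seven-day lookback are assumptions made for the example; a real deployment would feed the results into ticketing or SIEM workflows rather than print them.

```python
# Sketch: flag newly catalogued exploited vulnerabilities for watched vendors.
# Assumption: WATCHED_VENDORS and LOOKBACK_DAYS are illustrative; adapt them
# to your own asset inventory and patch cadence.
import json
import urllib.request
from datetime import date, timedelta

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
WATCHED_VENDORS = {"Zoho", "Microsoft", "Fortinet"}  # example vendor names
LOOKBACK_DAYS = 7

def fetch_kev() -> dict:
    """Download and parse the KEV catalog."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        return json.load(resp)

def recent_entries(catalog: dict, days: int = LOOKBACK_DAYS) -> list[tuple]:
    """Return (CVE, vendor, product) for watched-vendor entries added in the last `days` days."""
    cutoff = date.today() - timedelta(days=days)
    hits = []
    for vuln in catalog.get("vulnerabilities", []):
        added = date.fromisoformat(vuln["dateAdded"])
        if added >= cutoff and vuln.get("vendorProject") in WATCHED_VENDORS:
            hits.append((vuln["cveID"], vuln["vendorProject"], vuln["product"]))
    return hits

if __name__ == "__main__":
    for cve, vendor, product in recent_entries(fetch_kev()):
        print(f"Patch priority: {cve} ({vendor} {product})")
```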
Conclusion:
The rapid weaponization of Proof-of-Concept (PoC) exploits is one of the fastest-evolving and constantly expanding threats to cybersecurity today. Security teams must patch quickly, incorporate AI into their security tooling, segment their networks effectively, and pay close attention to vulnerability announcements. A strong incident response plan further aids in handling these kinds of threats. By applying the measures described above, organizations can counter the accelerating weaponization of PoC exploits and reduce the probability of successful cyber attacks.
References:
https://www.mayrhofer.eu.org/post/vulnerability-disclosure-is-positive/
https://www.uptycs.com/blog/new-poc-exploit-backdoor-malware
https://www.balbix.com/insights/attack-vectors-and-breach-methods/
https://blog.cloudflare.com/application-security-report-2024-update
Executive Summary:
Viral pictures appearing to show US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The pictures making the rounds on social media were produced with AI-based manipulation tools; the original image, available on several websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, Pennsylvania, leaving one attendee dead and two critically injured before the Secret Service stopped the shooter. The circulating photos with fabricated smiles have stirred up suspicion, and the CyberPeace Research Team has debunked the manipulated image.
Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.
One such image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.
We then checked with another AI image detection tool, contentatscale AI image detection, which also found the image to be AI-manipulated.
Comparison of both photos:
Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Executive Summary:
Several videos claiming to show bizarre mutated animals with features such as a seal's body and a cow's head have gone viral on social media. Upon thorough investigation, these claims were found to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and spectators' shoes appearing fused together. AI content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated videos. These viral videos are therefore conclusively AI-generated and not real depictions of mutated animals.
Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.
Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and found anomalies that are commonly seen in AI-manipulated content.
Taking a cue from this, we checked all the videos with the AI video detection tool TrueMedia. The tool found the audio of the first video to be AI-generated. We then divided the video into keyframes (a sketch of how such frames can be extracted follows below), and the tool found the depicted imagery to be AI-generated as well.
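For readers who want to reproduce the keyframe step, the sketch below samples frames from a video at a fixed interval using OpenCV so that individual frames can be fed to a reverse image search or an AI detection tool. The file name and the one-second sampling interval are placeholders for illustration; tools such as inVid perform a similar extraction automatically.

```python
# Sketch: sample frames from a video for reverse image search / AI detection.
# Assumptions: "viral_video.mp4" and the one-second interval are placeholders.
# Requires the opencv-python package.
import cv2

def extract_keyframes(path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame every `every_n_seconds` and return the saved file names."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unavailable
    step = max(1, int(fps * every_n_seconds))      # frames to skip between saves
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_video.mp4"))
```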
In the same way, we investigated the second video: we divided it into keyframes and analyzed them with the AI detection tool TrueMedia.
The video was flagged as suspicious, so we analyzed its frames.
The detection tool found the frames to be AI-generated, confirming that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool also flagged as suspicious.
Its frames were likewise found to be AI-manipulated, confirming that this video, too, is fabricated. Hence, the claim made in all three videos is fake and misleading.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical elements, a watermark ("IN.VISUALART"), and other basic anomalies. Although several news channels have reported on a real incident of this kind, the viral image was not taken during the actual event. This emphasizes the need to carefully verify photos and information shared on social media.
Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.
Fact Check:
Upon receiving the posts, we closely analyzed the image and found discrepancies that are commonly seen in AI-generated images: the watermark "IN.VISUALART" is clearly visible, and the elderly woman's hand looks anatomically odd.
We then checked the image with the AI image detection tools TrueMedia and the contentatscale AI detector. Both found potential AI manipulation in the image.
We then ran keyword searches for news related to the viral photo. Although we found coverage of the underlying incident, we did not find any credible source for the image itself.
The photograph shared around the internet therefore has no credible source; hence, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news channels, an incident of this kind did occur, but the viral photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading
Executive Summary:
Footage purportedly showing the Afghanistan cricket team singing ‘Vande Mataram’ after India’s triumph in the ICC T20 World Cup 2024 has been circulating online. The CyberPeace Research Team carried out thorough research to uncover the truth about the viral video. The original clip was posted on the X platform by Afghan cricketer Mohammad Nabi on October 23, 2023, and shows the Afghan players chanting ‘Allah-hu Akbar’ after their ODI World Cup victory against Pakistan. This debunks the assertion that the players are chanting ‘Vande Mataram’ in the viral video.
Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.
Fact Check:
Upon receiving the posts, we analyzed the video and found inconsistencies, such as the players' lip movements not matching the audio.
We checked the video with the AI audio detection tool TrueMedia, which found the audio to be 95% likely AI-generated, making us more suspicious of the video's authenticity.
For further verification, we divided the video into keyframes and reverse-searched one of the frames to find credible sources. We found the X account of Afghan cricketer Mohammad Nabi, where he had uploaded the same video on 23 October 2023 with the caption, “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people”.
The audio in the original video differs from that of the viral one: the Afghan players can be heard chanting “Allah hu Akbar” after their victory against Pakistan. They were not chanting “Vande Mataram” after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and misrepresents the actual context. We have previously debunked similar AI voice-alteration videos; netizens must be careful before believing such misleading content.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The video was created by manipulating the audio of an original clip. That original video, showing the Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar", was posted on the official X account of Afghan cricketer Mohammad Nabi. The information is therefore fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Executive Summary:
A video of the Argentina football team dancing in the dressing room to a Bhojpuri song has gone viral on social media. After analyzing its origin, the CyberPeace Research Team found that the video was altered and the music replaced. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022 and shows Lionel Messi and his teammates celebrating their 2022 FIFA World Cup win. Contrary to the viral video, the song in the original is not in Bhojpuri. The viral clip was cropped from part of Aguero’s upload, and its audio was replaced with a Bhojpuri song. The claim that the Argentinian team danced to a Bhojpuri song is therefore misleading.
Claims:
A video of the Argentina football team dancing to a Bhojpuri song after victory.
Fact Check:
On receiving these posts, we split the video into frames, performed a reverse image search on one of them, and found a video uploaded to the SKY SPORTS website on 19 December 2022.
We found that this footage shows the same scene as the viral video, but the celebration audio differs. Upon further analysis, we also found a live video uploaded by Argentine footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video is a clip from his live video, and the music playing in it is not a Bhojpuri song.
This proves that the claim circulating on social media about the Argentina football team dancing to a Bhojpuri song is false and misleading. People should always check the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip of the team celebrating their 2022 FIFA World Cup victory, with the audio replaced by a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading
Executive Summary:
The internet has become a hub for fraudsters, and a new fraudulent scheme is circulating that claims the Honourable Prime Minister Narendra Modi is giving a free 84-day recharge worth ₹719 to celebrate the formation of the BJP Government in 2024. This is yet another scam that lures users through fake questionnaires, false promises, and the use of the Prime Minister’s image to create a false impression of legitimacy. The following post analyzes the scam and offers recommendations on how to recognize and avoid similar frauds.
False Claim:
A viral link trending on various social media platforms claims that Narendra Modi, the Honourable Prime Minister of India, is giving an 84-day free recharge worth ₹719 to all users in India as an ‘Election Bonus’ celebrating the BJP government formation in 2024. The post instructs users to click on the link (https://offerraj.in/Congress2024-Recharge/id=9jMiaeN1) and complete a questionnaire to claim the offer.
The Deceptive Scheme:
- Mobile-Only Access: The malicious link (https://offerraj.in/Congress2024-Recharge/id=9jMiaeN1) is designed to open only on mobile devices, which increases the number of people likely to be affected.
- Multiple Redirects: After clicking the link, users are led through a sequence of other links that conceal the actual source of the deception and make the malicious activity harder to track.
- Fake Comments & Images: The landing page contains a banner with a photo of India’s Honourable Prime Minister Narendra Modi, giving visitors the impression of an official source. Fake comments claiming that the commenter has already received a free recharge reinforce the so-called initiative.
- Fake Prize Notifications: After answering the questionnaire, users are shown messages such as ‘Congratulations, you have won a free recharge’, further creating the impression of a genuine offer.
- Social Sharing Requirement: To collect the so-called ‘prize’, users are asked to share the link on WhatsApp or other social networks, which further spreads the scam.
Analyzing the Fraudulent Campaign:
- No Official Announcement: The internet and other social platforms are the only places where such an offer has been mentioned, and there is no official announcement from the Government or any other authorized body.
- Multiple Redirects: After clicking the link, users are taken through multiple redirects that obfuscate the source of the deception and make the malicious activity difficult to trace (a sketch of tracing such redirect chains appears after this list).
- Suspicious Domain and Hosting: The campaign is hosted on a third-party domain (offerraj.in) instead of any official government website, raising suspicion about its authenticity.
- Personal Data Collection: The questionnaire prompts users to provide personal information, which legitimate Government initiatives would not typically request through unofficial channels.
- Insecure HTTP Link: The link provided is an insecure HTTP link, whereas legitimate government websites employ secure HTTPS encryption.
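To illustrate the redirect analysis mentioned above, the sketch below follows a link's redirect chain and prints each hop. It assumes the third-party requests package, and such a check should only be run against suspicious URLs from an isolated analysis environment, never from a personal or corporate device.

```python
# Sketch: trace the redirect chain of a suspicious link.
# Assumptions: uses the third-party requests package; run only inside an
# isolated analysis environment.
import requests

def trace_redirects(url: str, timeout: int = 10) -> None:
    """Print every hop in the redirect chain and the final landing page."""
    resp = requests.get(url, timeout=timeout, allow_redirects=True)
    for hop in resp.history:                      # intermediate redirects
        print(f"{hop.status_code} -> {hop.url}")
    print(f"Final: {resp.status_code} {resp.url}")

if __name__ == "__main__":
    trace_redirects("https://offerraj.in/Congress2024-Recharge/id=9jMiaeN1")
```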
Domain Analysis:
The URL is hosted on a third-party domain rather than the official website of the BJP or any Government website, a common way of deceiving users into falling for a phishing scam. WHOIS information reveals that the domain was registered relatively recently, on 28-03-2023, through godaddy.com, with the registrant’s state listed as Rajasthan, India. The cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website. A minimal sketch of such a WHOIS lookup is included after the record below.
- Domain Name: offerraj.in
- Registry Domain ID: D9483D0EB38264263958C9609D2DCEA70-IN
- Registrar WHOIS Server:
- Registrar URL: www.godaddy.com
- Updated Date: 2024-05-03T07:30:03Z
- Creation Date: 2023-03-28T04:33:12Z
- Registry Expiry Date: 2026-03-28T04:33:12Z
- Registrar: GoDaddy.com, LLC
- Registrar IANA ID: 146
- Registrant State/Province: Rajasthan
- Registrant Country: IN
- Name Server: johnathan.ns.cloudflare.com
- Name Server: braelyn.ns.cloudflare.com
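For readers who want to repeat this kind of check, the sketch below queries WHOIS data for a domain and flags domains that were registered very recently. It assumes the third-party python-whois package, and the 180-day threshold is an illustrative choice; registrars and registries vary in the fields they return, so results should be cross-checked manually.

```python
# Sketch: basic WHOIS check to spot recently registered (often suspicious) domains.
# Assumption: uses the third-party python-whois package (pip install python-whois);
# the 180-day threshold is illustrative only.
from datetime import datetime

import whois  # provided by the python-whois package

def domain_report(domain: str, max_age_days: int = 180) -> None:
    """Print registrar, creation date, and whether the domain is suspiciously new."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):   # some registries return several dates
        created = min(created)
    age_days = (datetime.now() - created).days if created else None
    print(f"Domain:       {domain}")
    print(f"Registrar:    {record.registrar}")
    print(f"Created:      {created}")
    print(f"Name servers: {record.name_servers}")
    if age_days is not None and age_days < max_age_days:
        print(f"WARNING: domain is only {age_days} days old - treat with caution.")

if __name__ == "__main__":
    domain_report("offerraj.in")
```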
Similar offers circulating through different links: Several similar offers are circulating on social media through various links, such as https://offerintro.com/BJP2024-Recharge/id=QYntPBDU, https://mahaloot2.xyz, https://mahaloot3.xyz, and https://pmoffer4.online. All these links were analyzed and found to be malicious or phishing links.
CyberPeace Advisory and Best Practices:
- Stay Informed: Be aware of potential scams and rely on official government channels for verified information.
- Verify Website Security: Avoid clicking links that begin with plain ‘http’ and prefer sites that use encryption (‘https’); a small sketch of such a check follows this list.
- Protect Personal Information: Be cautious about any request for personal information, especially when it comes through unofficial channels.
- Report Suspicious Activity: If you notice that you have been scammed or spot fraudulent activity, report the incident to the relevant authorities and platforms to prevent others from being scammed.
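As a minimal illustration of the verification advice above, the sketch below flags links that do not use HTTPS, links whose host is a raw IP address, and links whose domain is not on a small allow-list. The sample URLs and the allow-list entries are assumptions made for the illustration; such a check complements, rather than replaces, careful judgment and official verification.

```python
# Sketch: quick hygiene check on a URL before clicking or sharing it.
# Assumptions: TRUSTED_DOMAINS and the sample URLs are illustrative; a real
# check should also consider domain age and threat-intelligence feeds.
import ipaddress
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"india.gov.in", "mygov.in"}  # example official domains

def url_warnings(url: str) -> list[str]:
    """Return a list of red flags found in the given URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    warnings = []
    if parsed.scheme != "https":
        warnings.append("not using HTTPS")
    try:
        ipaddress.ip_address(host)
        warnings.append("host is a raw IP address")
    except ValueError:
        pass  # host is a normal domain name
    if host and not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        warnings.append(f"domain '{host}' is not on the trusted list")
    return warnings

if __name__ == "__main__":
    for link in ("http://offerraj.in/Congress2024-Recharge/id=9jMiaeN1",
                 "https://www.mygov.in"):
        print(link, "->", url_warnings(link) or "no obvious red flags")
```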
Conclusion:
The claim of an 84-day free recharge worth ₹719 for all users in India as an “Election Bonus” is false, and many similar links continue to circulate on the internet. The deceptive practices behind these links, including insecure pages, multiple redirects, and false promises, highlight the need for heightened awareness and caution among internet users. In this digital world it is important to stay informed, verify the authenticity of sources, and protect personal information. By doing so, individuals can safeguard themselves against such fraudulent schemes and contribute to a safer online environment.
Executive Summary:
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research Team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap is featured in several other photos on Acharya's Instagram account, where he expressed gratitude for the support from the Muslim community. Thus, the claimed image of a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.
Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
Fact Check:
Upon receiving the posts, we reverse-searched the image to find credible sources. We found a photo posted on the Facebook page of Balmukund Acharya Hathoj Dham on 6 December 2023.
The viral image is a digitally altered version of this photo, posted on social media to mislead. We also found several other photos featuring the same man in the skullcap.
We also checked the viral image for AI fabrication using the “content@scale” AI image detection tool, which found it to be 95% AI-manipulated.
For further validation, we checked with another detection tool, the “isitai” image detector, which found 38.50% likelihood of AI content. Together with the evidence above, this indicates that the image has been manipulated and does not support the claim made. Hence, the viral image is fake and misleading.
Conclusion:
The lack of a credible source and the detection of AI manipulation show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
- Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Executive Summary:
A misleading video has been widely shared online, falsely portraying Pandit Jawaharlal Nehru as stating that he was not involved in the Indian independence struggle and even opposed it. The video is a manipulated excerpt from Pandit Nehru’s final major interview, given in 1964 to the American TV host Arnold Michaelis. The original footage, available on the YouTube channel of India’s state broadcaster Prasar Bharati, shows Pandit Nehru discussing Muhammad Ali Jinnah and stating that it was Jinnah who did not participate in the independence movement and opposed it. The viral video edits Pandit Nehru’s comments to create a false narrative, which is debunked upon reviewing the full, unedited interview.
Claims:
In the viral video, Pandit Jawaharlal Nehru states that he was not involved in the fight for Indian independence and even opposed it.
Fact check:
Upon receiving the posts, we thoroughly checked the video and divided it into keyframes using the inVid tool. We reverse-searched one of the frames and found a video uploaded by the official Prasar Bharati Archives YouTube channel on 14 May 2019.
The description of the video identifies it as the full recording of what was perhaps Pandit Jawaharlal Nehru's last significant interview, given to the American TV host in May 1964, and notes that a book by Chandrika Prasad gives 18 May 1964 as the date the interview was aired in New York, barely a few days before Pandit Nehru's death on 27 May 1964.
On reviewing the full video, we found that the viral clip of Pandit Nehru runs from 14:50 to 15:45. In this portion, Pandit Nehru is speaking about Muhammad Ali Jinnah, a key leader of the Muslim League.
At the timestamp 14:34, the American TV interviewer Arnold Michaelis says, “You and Mr. Gandhi and Mr. Jinnah, you were all involved at that point of Independence and then partition in the fight for Independence of India from the British domination.” Pandit Nehru replies, “Mr. Jinnah was not involved in the fight for independence at all. In fact, he opposed it. Muslim League was started in about 1911 I think. It was started really by the British encouraged by them so as to create factions, they did succeed to some extent. And ultimately there came the partition.”
Upon thorough analysis, we found that the viral video is an edited version of the real interview, cut to misrepresent its actual context.
We also found the same interview uploaded on a Facebook page named Nehru Centre for Social Research on 1 December 2021.
Hence, the viral video is fake and misleading, and netizens must be careful before believing such edited videos.
Conclusion:
In conclusion, the viral video claiming that Pandit Jawaharlal Nehru stated he was not involved in the Indian independence struggle is falsely edited. The original footage reveals that Pandit Nehru was speaking about Muhammad Ali Jinnah's lack of involvement in the struggle, not his own. This debunks the false story conveyed by the manipulated video.
- Claim: Pandit Jawaharlal Nehru stated that he was not involved in the struggle for Indian independence and that he even opposed it.
- Claimed on: YouTube, LinkedIn, Facebook, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Executive Summary:
A video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated. The original video comes from a 2011 incident involving actor and singer Harry Belafonte, who appeared to fall asleep during a live satellite interview with KBAK – KBFX Eyewitness News. Thorough analysis of keyframes from the viral video reveals that US President Joe Biden’s face was edited into Harry Belafonte’s video. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.
Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the inVid tool, and reverse-searched one of the frames.
We found another video, uploaded on Oct 18, 2011 by the official channel of KBAK - KBFX - Eyewitness News, whose title reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”
The video looks similar to the viral one, and the TV anchor can be heard saying the same things as in the viral clip. Taking a cue from this, we also did keyword searches for credible sources and found a news article by Yahoo Entertainment about the same video uploaded by KBAK - KBFX - Eyewitness News.
The reverse image search and keyword search reveal that the viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent the context. The original video dates back to 2011, and the person in the interview is the American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. The video is originally from a 2011 incident involving American singer and actor Harry Belafonte. It has been altered to falsely show US President Joe Biden. It is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Executive Summary:
A viral claim circulated on social media that Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe. Thorough analysis revealed abnormalities in image quality, particularly between the face, neck, and hands and the claimed gold clothing, pointing to possible AI manipulation. A keyword search found no credible news reports or authentic images supporting this claim. Further analysis using the AI detection tools TrueMedia and Hive Moderation confirmed substantial evidence of AI fabrication, with a high probability of the image being AI-generated or a deepfake. Additionally, a photo from a previous event at Jio World Plaza matches the pose in the manipulated image, further disproving the claim and indicating that the image of Anant Ambani and Radhika Merchant wearing golden outfits during their pre-wedding cruise was digitally altered.
Claims:
Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
Fact Check:
When we received the posts, we found anomalies typically seen in edited or AI-manipulated images, particularly around the face, neck, and hands.
Such inconsistencies are unusual in a genuine photograph, so we checked the image with the Hive Moderation AI detection tool, which found it to be 95.9% AI-manipulated.
We also checked with another widely used AI detection tool, TrueMedia, which found it to be 100% likely made using AI.
This implies that the image is AI-generated. To find the original image that was edited, we did a keyword search and found an image with the same pose as the manipulated one, titled “Radhika Merchant, Anant Ambani pose with Mukesh Ambani at Jio World Plaza opening”. Comparing the two images confirms that the viral picture is a digitally altered version of the original.
Hence, it is confirmed that the viral image is digitally altered and has no connection with the second pre-wedding cruise party in Europe. The viral image is therefore fake and misleading.
Conclusion:
The claim that Anant Ambani and Radhika Merchant wore clothes made of pure gold at their pre-wedding cruise party in Europe is false. The analysis of the image showed signs of manipulation, and a lack of credible news reports or authentic photos supports that it was likely digitally altered. AI detection tools confirmed a high probability that the image was fake, and a comparison with a genuine photo from another event revealed that the image had been edited. Therefore, the claim is false and misleading.
- Claim: Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
- Claimed on: YouTube, LinkedIn, Instagram
- Fact Check: Fake & Misleading
Executive Summary:
A viral image on social media depicts supposed injuries on the face of the MP (Member of Parliament, Lok Sabha) Kangana Ranaut, who was alleged to have been assaulted by a CISF officer at Chandigarh airport. A reverse search traced the viral image back to 2006: it was part of an anti-mosquito commercial and does not feature the MP, Kangana Ranaut. The findings contradict the claim that the photos are evidence of injuries resulting from the incident involving her. It is always important to verify the truthfulness of visual content before sharing it, to prevent misinformation.
Claims:
Images circulating on social media platforms claim that the injuries on the MP Kangana Ranaut’s face were caused by an assault by a female CISF officer at Chandigarh airport. The claim presents the photos as evidence of the physical altercation and the resulting injuries suffered by the MP.
Fact Check:
When we received the posts, we reverse-searched the image and found another photo that looked similar to the viral one. The earrings in the viral image allowed us to match it with the newly found image.
The reverse image search revealed that the photo was originally uploaded in 2006 and is unrelated to the MP, Kangana Ranaut. It depicts a model in an advertisement for an anti-mosquito spray campaign.
A side-by-side comparison of the earrings in the two photos validates this.
Hence, we can confirm that the viral image of injury marks on the MP Kangana Ranaut is fake and misleading; it was cropped from the original photo to misrepresent the context.
Conclusion:
Therefore, the viral photos on social media claiming to show injuries on the MP Kangana Ranaut’s face after she was allegedly assaulted by a CISF officer at Chandigarh airport are fake. Detailed analysis established that the pictures have no connection with Ranaut: the image comes from a 2006 anti-mosquito spray advertisement. The claims presenting these images as evidence of Ranaut’s injuries are therefore fake and misleading.
- Claim: Photos circulating on social media claim to show injuries on the MP Kangana Ranaut's face following an assault by a female CISF officer at Chandigarh airport.
- Claimed on: X (Formerly known as Twitter), Threads, Facebook
- Fact Check: Fake & Misleading