#FactCheck - AI-Cloned Audio in Viral Anup Soni Video Promoting Betting Channel Revealed as Fake
Executive Summary:
A morphed video of actor Anup Soni circulating on social media, in which he appears to promote an IPL betting Telegram channel, has been found to be fake. The audio in the morphed video was produced through AI voice cloning and the manipulation was identified using AI detection and deepfake analysis tools. In the original footage, Mr Soni narrates a crime case as part of the popular show Crime Patrol, which is unrelated to betting. It can therefore be concluded that Anup Soni is in no way associated with the betting channel.

Claims:
A Facebook post claims that an IPL betting Telegram channel belonging to Rohit Khattar is being promoted by actor Anup Soni.

Fact Check:
Upon receiving the post, the CyberPeace Research Team closely analysed the video and found major discrepancies of the kind typically seen in AI-manipulated videos: the lip movements do not match the audio. Taking a cue from this, we analysed the clip with the deepfake detection tool by True Media, which found the voice in the video to be 100% AI-generated.



We then extracted the audio and ran it through an audio deepfake detection tool named Hive Moderation, which found the audio to be 99.9% AI-generated.
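For readers who want to replicate this step, the sketch below shows one way to pull the audio track out of a downloaded clip using ffmpeg called from Python; the file names are placeholders, and the resulting WAV file can then be submitted to a detection service such as Hive Moderation. This is a minimal illustration, not the exact workflow used in this check.

```python
# Minimal sketch: extract the audio track from a downloaded video clip so it
# can be submitted to an audio deepfake detection service. Requires ffmpeg to
# be installed and on PATH; the file names below are placeholders.
import subprocess

def extract_audio(video_path: str, audio_path: str) -> None:
    """Extract the audio stream as a 16 kHz mono WAV file."""
    subprocess.run(
        [
            "ffmpeg",
            "-y",              # overwrite the output if it already exists
            "-i", video_path,  # input video
            "-vn",             # drop the video stream
            "-ac", "1",        # mono
            "-ar", "16000",    # 16 kHz sample rate
            audio_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    extract_audio("viral_clip.mp4", "viral_clip_audio.wav")
```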

We then divided the video into keyframes, reverse-searched one of them, and found the original video uploaded by the YouTube channel LIV Crime.
On closer analysis, we found that the video was edited at the 3:18 mark and overlaid with an AI-generated voice.
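The keyframe step can be reproduced with a short script. The sketch below samples frames at a fixed interval with OpenCV so that individual frames can be run through a reverse image search; the one-frame-per-second interval and file names are assumptions for illustration.

```python
# Minimal sketch: sample frames from a video at a fixed interval so that
# individual frames can be reverse-image-searched. Requires opencv-python;
# the input file name and one-frame-per-second interval are assumptions.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"frame_{index:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"), "frames saved")
```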

Hence, the viral video is AI-manipulated and not real. We have previously debunked similar AI voice manipulations that used celebrities and politicians to misrepresent the actual context. Netizens must be careful before believing such AI-manipulated videos.
Conclusion:
In conclusion, the viral video claiming that actor Anup Soni promoted an IPL betting Telegram channel is false. The video has been manipulated using AI voice cloning, as confirmed by both the Hive Moderation AI detector and the True Media deepfake detection tool. Therefore, the claim is baseless and misleading.
- Claim: An IPL betting Telegram channel belonging to Rohit Khattar is promoted by actor Anup Soni.
- Claimed on: Facebook
- Fact Check: Fake & Misleading
Related Blogs

Executive Summary:
The rise in cybercrime targeting vulnerable individuals, particularly students and their families, has reached alarming levels. Impersonation scams, where fraudsters pose as Law Enforcement Officers, have become increasingly sophisticated, exploiting fear, urgency, and social stigma. This report delves into recent incidents of ransom scams involving fake CBI officers, highlighting the execution methods, psychological impact on victims, and preventive measures. The goal is to raise public awareness and equip individuals with the knowledge needed to protect themselves from such fraudulent activities.
Introduction:
Cybercriminals are evolving their tactics, with impersonation and social engineering at the forefront. Scams involving fake law enforcement officers have become rampant, preying on the fear of legal repercussions and the desire to protect loved ones. This report examines incidents where scammers impersonated CBI officers to extort money from families of students, emphasizing the urgent need for awareness, verification, and preventive measures.
Case Study:
This case study explains how scammers impersonate law enforcement officers to extort money from students' families.
Targets receive calls from scammers posing as CBI officers. The fraudsters mostly target the families of students, using sophisticated impersonation and emotional manipulation tactics. In the cases we studied, the targets received calls from unknown international numbers falsely claiming that the students, along with their friends, were involved in a fabricated rape case. The parents received the calls during school or college hours, a time when it is particularly difficult for them to reach their children, which added to the panic and sense of urgency. The scammers then told the parents that, because of the students' clean records, they would not be officially arrested but would face severe legal consequences unless a sum of money was paid immediately.
Although in these specific cases the parents did not pay, many parents across the country fall victim to such scams, paying large sums out of fear and desperation to protect their children's futures. Exploiting the fear of legal repercussions, social stigma, and potential damage to the students' reputations, the scammers used high-pressure tactics to force compliance.
These incidents can result in significant financial losses, emotional trauma, and a profound loss of trust in communication channels and authorities. This underscores the urgent need for awareness, verification of authority, and prompt reporting of such scams to prevent further victimisation.
Modus Operandi:
- Caller ID Spoofing: The scammers used unknown international numbers and spoofing techniques to mimic a legitimate law enforcement authority.
- Fear Induction: The fraudster played on the family's fear of social stigma, manipulating them into compliance through emotional blackmail.
Analysis:
Our research found that the unknown international numbers used in these scams are not genuine but are virtual or spoofed numbers often used for prank calls and fraudulent activities. These incidents also raise concerns about data breaches, as the scammers accurately recited the students' details, including their names and their parents' information, which added a layer of credibility and increased the pressure on the victims.
Impact on Victims:
- Financial and Psychological Losses: The family may face substantial financial losses, coupled with emotional and psychological distress.
- Loss of Trust in Authorities: Such scams undermine trust in official communication and law enforcement channels.
- Exploitation of Fear and Urgency: Scammers prey on emotions such as fear, urgency, and social stigma to manipulate victims.
- Sophisticated Impersonation Techniques: The use of caller ID spoofing, virtual or temporary numbers, and impersonation of law enforcement officers adds credibility to the scam.
- Lack of Verification: Victims often do not verify the caller's identity, leading to successful scams.
- Significant Psychological Impact: Beyond financial losses, these scams cause lasting emotional trauma and distrust in institutions.
Recommendations:
- Cross-Verification: Always cross-verify such claims with official sources before acting on them. Contact the official numbers listed on trusted government websites to verify any claim made by callers posing as law enforcement.
- Promote Awareness: Educational institutions should conduct regular awareness programs to help students and families recognize and respond to scams.
- Encourage Prompt Reporting: Encourage victims to report incidents promptly to local authorities and cybercrime units; timely reporting helps track scammers and prevent future cases.
- Enhance Public Awareness: Continuous public awareness campaigns are essential to educate people about the risks and signs of impersonation scams.
- Educational Outreach: Schools and colleges should include Cybersecurity awareness as part of their curriculum, focusing on identifying and responding to scams.
- Parental Guidance and Support: Parents should be encouraged to discuss online safety and scam tactics with their children regularly, fostering a vigilant mindset.
Conclusion:
The rise of impersonation scams targeting students and their families is a growing concern that demands immediate attention. By raising awareness, encouraging verification of claims, and promoting proactive reporting, we can protect vulnerable individuals from falling victim to these manipulative and harmful tactics. It is high time for the authorities, educational institutions, and the public to collaborate in combating these scams and safeguarding our communities. Strengthening data protection measures and enhancing public education on the importance of verifying claims can significantly reduce the impact of these fraudulent schemes and prevent further victimisation.
Executive Summary:
A video circulating on social media claims that people in Balochistan, Pakistan, hoisted the Indian national flag and declared independence from Pakistan. The claim has gone viral, sparking strong reactions and spreading misinformation about the geopolitical scenario in South Asia. Our research reveals that the video is misrepresented and actually shows a celebration in Surat, Gujarat, India.

Claim:
A viral video shows people hoisting the Indian flag and allegedly declaring independence from Pakistan in Balochistan. The claim implies that Baloch nationals are revolting against Pakistan and aligning with India.

Fact Check:
After researching the viral video, it became clear that the claim was misleading. We took key screenshots from the video and performed a reverse image search to trace its origin. This search led us to an earlier social media post, which clearly shows the event taking place in Surat, Gujarat, not Balochistan.
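One simple way to confirm that a frame from a viral clip matches an image from an earlier post is to compare perceptual hashes of the two pictures. The sketch below uses the Pillow and imagehash libraries; the file names and the distance threshold are illustrative assumptions, not the exact method used in this check.

```python
# Minimal sketch: compare a frame from the viral video against an image from
# an earlier post using perceptual hashing. Near-identical images produce
# hashes with a small Hamming distance. Requires Pillow and imagehash; the
# file names and threshold below are illustrative assumptions.
from PIL import Image
import imagehash

def frames_match(frame_path: str, candidate_path: str, max_distance: int = 8) -> bool:
    hash_a = imagehash.phash(Image.open(frame_path))
    hash_b = imagehash.phash(Image.open(candidate_path))
    distance = hash_a - hash_b  # Hamming distance between the two hashes
    return distance <= max_distance

if __name__ == "__main__":
    print(frames_match("viral_frame.jpg", "surat_event_photo.jpg"))
```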

In the original clip, a music band is performing in the middle of a crowd, with people holding Indian flags and enjoying the event. The setting, the language on signboards, and the festive atmosphere all confirm that this is a patriotic celebration in India. Another photo, taken from a different angle, further supports this finding.

However, some individuals intent on spreading false information shared this video out of context, claiming it showed people in Balochistan raising the Indian flag and declaring independence from Pakistan. A local celebration was thus turned into a political narrative through a fake framing, a classic example of misinformation designed to mislead and stir public emotions.
To add further clarity, The Indian Express published a report on May 15 titled ‘Slogans hailing Indian Army ring out in Surat as Tiranga Yatra held’. According to the article, “A highlight of the event was music bands of Saifee Scout Surat, which belongs to the Dawoodi Bohra community, seen leading the yatra from Bhagal crossroads.” This confirms that the video was from an event in Surat, completely unrelated to Balochistan, and was falsely portrayed by some to spread misleading claims online.

Conclusion:
The claim that people in Balochistan hoisted the Indian national flag and declared independence from Pakistan is false and misleading. The video used to support this narrative is actually from Surat, Gujarat, India, during “The Tiranga Yatra”. Social media users are urged to verify the authenticity and source of content before sharing, to avoid spreading misinformation that may escalate geopolitical tensions.
- Claim: Mass uprising in Balochistan as citizens reject Pakistan and honor India.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain credibility, it has become important to differentiate between genuine and manipulated or curated online content that is widely shared on social media platforms. AI-generated voice clones and videos are proliferating on the Internet, created with sophisticated AI algorithms that manipulate or generate synthetic multimedia content such as audio, video and images. As a result, it has become increasingly difficult to tell genuine, altered and fake multimedia content apart. McAfee Corp., a global leader in online protection, announced its AI-powered deepfake audio detection technology, Project “Mockingbird”, at the Consumer Electronics Show 2024. It is intended to safeguard consumers against the surging threat of fabricated, AI-generated audio and voice clones used to dupe people out of money or to obtain their personal information without authorisation.
What is voice cloning?
Voice cloning uses deepfake technology to create a synthetic copy of a person's voice: audio that closely resembles the real voice but is, in actuality, artificially generated.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for everything from smart devices to robotics and gaming, but cybercriminals are misusing it for nefarious purposes, including voice cloning to commit cyber fraud. Artificial intelligence can be used to manipulate a person's lip movements so that they appear to be saying something different; it can be used for identity fraud, making it possible to impersonate someone during remote verification with a bank; and it makes traditional hacking more convenient. This misuse of advanced technologies such as AI has increased the speed and volume of cyber attacks in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that detects artificially generated audio, whether used in videos or as standalone clips.
- Context-Based Recognition: The model assesses audio components within the overall context of the clip, improving its ability to recognise discrepancies suggestive of AI-generated audio by evaluating the surrounding information.
- Behavioural Analysis: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human behaviour. Analysing speech patterns, tempo and pronunciation enables the model to identify synthetically produced material.
- Classification Models: Classification algorithms categorise audio components according to established traits of human speech, differentiating real voices from AI-synthesised ones by comparing them against an extensive library of genuine human speech features (see the sketch after this list).
- Accuracy Outcomes: McAfee Labs' deepfake voice detection solution, which reports a success rate of around ninety per cent, combines the contextual, behavioural and classification-based approaches described above. By examining audio in the wider video context and analysing speech characteristics such as intonation, rhythm and pronunciation, the system can identify discrepancies indicative of AI-generated audio, offering a strong defence against the growing threat of deepfakes.
- Use Cases: The technology protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
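McAfee has not published the internals of these models, but the general idea of a classification-based detector can be illustrated with a small sketch: extract spectral features (here, MFCCs via librosa) from labelled real and synthetic clips and train an off-the-shelf classifier. This is an illustrative assumption about the approach, not Project Mockingbird's actual implementation; the file paths are placeholders.

```python
# Illustrative sketch of a classification-based audio deepfake detector:
# extract MFCC features from labelled clips and train a simple classifier.
# This is NOT McAfee's implementation; it only demonstrates the general idea.
# Requires librosa, numpy and scikit-learn; file paths are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path: str) -> np.ndarray:
    """Load a clip and summarise it as the mean of its MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Placeholder training data: paths to clips known to be real or AI-generated.
real_clips = ["real_speech_01.wav", "real_speech_02.wav"]
fake_clips = ["cloned_speech_01.wav", "cloned_speech_02.wav"]

X = np.array([mfcc_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = AI-generated

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score an unseen clip: probability that it is AI-generated.
suspect = mfcc_features("suspect_clip.wav").reshape(1, -1)
print("P(AI-generated) =", clf.predict_proba(suspect)[0][1])
```

A production system would of course need far more training data and richer features, but the sketch shows how known human speech characteristics can anchor a real-versus-synthetic classification.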
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries of AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to safeguard consumers against cybercriminals who use fabricated AI-generated audio for scams and to manipulate the public image of notable figures, protecting people from financial loss and the misuse of personal information.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724