#FactCheck - Fake Video Uses AI Voice to Falsely Attribute Remarks on Prasidh Krishna to Virat Kohli
A video circulating widely on social media claims that Indian cricketer Virat Kohli made a sarcastic remark about fast bowler Prasidh Krishna ahead of the New Zealand series. In the clip, Kohli is allegedly heard saying that he expected to be the top scorer of the series, but lost all hope after seeing Prasidh Krishna’s name in the squad.
Users sharing the video claim that Kohli publicly commented on Prasidh Krishna in this manner.
Research by the CyberPeace Foundation has found the claim to be false. Our probe revealed that the clip has been digitally manipulated: it is taken from a 2024 advertisement featuring Virat Kohli, in which his voice has been altered using deepfake (AI-generated) technology and presented with a misleading narrative.
Claim
The video was shared on Instagram on January 6, 2025, with users claiming that Kohli made the remark after the New Zealand squad was announced. The post included the altered audio suggesting Kohli’s disappointment over Prasidh Krishna’s selection. Link, archive link

Fact Check:
To verify the claim, we extracted key frames from the viral video and conducted a Google Reverse Image Search. This led us to the original video posted by Virat Kohli himself on X (formerly Twitter) on April 15, 2024. The original clip was part of a brand advertisement, and no such statement about the New Zealand series or Prasidh Krishna was made in it. Link and Screenshot
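Reverse image search of this kind works by comparing compact fingerprints of frames rather than raw pixels. As a rough illustration of the idea, not the method any particular search engine uses, a simple average-hash over a grayscale pixel grid lets near-identical frames be matched by Hamming distance; all names and values below are illustrative:

```python
def average_hash(pixels):
    """Compute a toy average-hash: 1 where a pixel is brighter than the mean.

    `pixels` is a grayscale image as a list of rows of 0-255 values.
    Real reverse-image search uses far more robust features; this only
    illustrates the fingerprint-and-compare idea.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical frames produce close hashes; a different frame does not.
frame_a = [[10, 10, 200, 200],
           [10, 10, 200, 200]]
frame_b = [[12, 11, 198, 205],   # same frame with slight compression noise
           [ 9, 13, 201, 199]]
frame_c = [[200, 10, 10, 200],   # different composition
           [10, 200, 200, 10]]

ha, hb, hc = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # 0: the noisy copy still matches
print(hamming_distance(ha, hc))  # 4: the unrelated frame does not
```

In practice, a fact-checker extracts several keyframes, fingerprints each, and searches an index of known footage for small-distance matches.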

A close review of the viral clip raised suspicions due to the unnatural tone and inconsistencies in Kohli’s voice. To confirm this, we analysed the video using the AI detection tool Aurigin AI. The tool’s results showed that the audio in the viral clip is 100 percent AI-generated, confirming that Kohli’s voice was artificially manipulated.

Conclusion
The CyberPeace Foundation’s research confirms that the viral video claiming Virat Kohli mocked Prasidh Krishna is fake and misleading. The clip is taken from an old advertisement and has been doctored using deepfake technology to alter Kohli’s voice. The video is being circulated on social media with a false claim, and Virat Kohli has made no such statement regarding the New Zealand series or Prasidh Krishna.
Related Blogs
Introduction
In July 2025, the Digital Defense Report prepared by Microsoft raised the alarm that India is among the top target countries for AI-powered nation-state cyberattacks, with malicious actors automating phishing, creating convincing deepfakes, and shaping public opinion with the help of generative AI (Microsoft Digital Defense Report, 2025). Global attention has remained focused on the United States and Europe, but the Asia-Pacific region, and India in particular, has become a major target of AI-driven cyber activity. This blog discusses the role of AI in espionage, how it is redefining India's threat environment, the government's response, and what India can learn from the world's cyber powers.
Understanding AI-Powered Cyber Espionage
Conventional cyber espionage aims to breach systems, steal information, or bring down networks. With the emergence of generative AI, these strategies have changed completely. It is now possible to automate reconnaissance, fabricate voices and videos of officials, and craft highly sophisticated phishing campaigns that can pass as genuine even to a trained expert. According to Microsoft's report, state-sponsored groups are using AI to scale their operations and target victims more precisely (Microsoft Digital Defense Report, 2025). SQ Magazine reports that nearly 42 percent of state-backed cyber campaigns in 2025 involved AI components such as adaptive malware or intelligent vulnerability scanners (SQ Magazine, 2025).
AI is altering the power dynamics of cyberspace. Tools that once required significant technical expertise or substantial investment have become ubiquitous, enabling smaller countries and non-state actors alike to conduct sophisticated cyber operations. The outcome is an accelerating arms race in which AI serves as both the weapon and the armour.
India’s Exposure and Response
India's exposure stems from its rapidly growing online infrastructure and its geopolitical position. The integration of platforms such as DigiLocker and CoWIN has expanded the attack surface to hundreds of millions of citizens. Financial institutions, government portals, and defence networks are increasingly the targets of ever more sophisticated cyberattacks. Deepfaked videos of prominent figures, phishing messages built on official templates, and social media manipulation are all now part of disinformation campaigns (Microsoft Digital Defense Report, 2025).
According to the Data Security Council of India (DSCI), the India Cyber Threat Report 2025 found that AI-enabled attacks are growing exponentially, particularly in the form of malware and social engineering (DSCI, 2025). India's nodal cyber-response agency, CERT-In, has issued several warnings about AI-related scams and AI-generated fake content aimed at stealing personal information or deceiving the public. Enforcement and red-teaming efforts have intensified, but coordination between central agencies, state police, and private platforms remains uneven. India also faces an acute shortage of cybersecurity talent, with fewer than 20 percent of cyber defence roles filled by qualified specialists (DSCI, 2025).
Government and Policy Evolution
The government's response to AI-enabled threats is taking three forms: regulation, institutional strengthening, and capacity building. The Digital Personal Data Protection Act, 2023 marked a major step in defining digital responsibility (Government of India, 2023). Nonetheless, AI-specific threats such as data poisoning, model manipulation, and automated disinformation remain grey areas. The forthcoming National Cybersecurity Strategy aims to address these gaps by establishing AI-governance guidelines and accountability standards for major sectors.
At the institutional level, organisations such as the National Critical Information Infrastructure Protection Centre (NCIIPC) and the Defence Cyber Agency are incorporating AI-based monitoring into their processes. Public-private initiatives are also emerging: for example, the CyberPeace Foundation and national universities have signed a memorandum of understanding to facilitate specialised training in AI-driven threat analysis and digital forensics (Times of India, August 2025). Despite these positive signs, India still lacks a cohesive system for reporting AI incidents. A September 2025 publication on arXiv underlines that countries need legal frameworks for AI-failure reporting so that AI-initiated failures in fields such as national security are handled with accountability (arXiv, 2025).
Global Implications and Lessons for India
Major economies worldwide are moving rapidly to integrate AI innovation with cybersecurity preparedness. The United States and the United Kingdom are investing heavily in AI-enhanced military systems, deploying machine learning in security operations centres, and organising AI-based “red team” exercises (Microsoft Digital Defense Report, 2025). Japan is testing cross-ministry threat-sharing platforms that use AI analytics for real-time decision-making (Microsoft Digital Defense Report, 2025).
Four lessons stand out for India.
- First, cyber defence should shift from reactive investigation to proactive intelligence. AI makes it possible not merely to detect adversary behaviour after an attack but to simulate it in advance.
- Second, collaboration is essential. Cybersecurity cannot be left to government enforcement alone. The private sector, which maintains the majority of India's digital infrastructure, must be actively involved in sharing information and expertise.
- Third, AI sovereignty matters. Building or hosting defensive AI tools within India would reduce dependence on foreign vendors and minimise potential supply-chain vulnerabilities.
- Finally, digital literacy is the first line of defence. Citizens should be trained to recognise deepfakes, phishing, and other manipulated content. Human awareness matters no less than technical defences (SQ Magazine, 2025).
Conclusion
AI has altered the logic of cyber warfare. Attacks are faster, harder to trace, and more scalable than ever before. For India, the challenge is no longer building better firewalls but developing the anticipatory intelligence needed to counter AI-powered threats. This requires a national approach that integrates technology, policy, and education.
With sustained investment, ethical AI governance, and sound cooperation between government and industry, India can turn vulnerability into strength. The next stage of cybersecurity is not about who has more firewalls, but about who can learn and adapt faster in a world where machines are already on the battlefield (Microsoft Digital Defense Report, 2025).
References:
- Microsoft Digital Defense Report 2025
- India Cyber Threat Report 2025, DSCI
- “Lucknow-based organisations to help strengthen cybercrime research, training, policy ecosystem,” Times of India
- “AI Cyber Attacks Statistics 2025: How Attacks, Deepfakes & Ransomware Have Escalated,” SQ Magazine
- “Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India,” arXiv
- The Digital Personal Data Protection Act, 2023
Executive Summary
A video showing a monkey allegedly saving the life of a sleeping child is rapidly going viral on social media. In the clip, a monkey can be seen picking up a child sleeping on a mat under a tree and moving the child away moments before a heavy tree branch falls at the same spot. Social media users are sharing the video as a “miracle of nature” and praising the emotional sensitivity and instincts of animals. However, research conducted by CyberPeace Research Wing found that the viral video is not real and was created using artificial intelligence tools.
Claim
The caption accompanying the viral post states: “In a shocking incident, a monkey was seen stepping in to save an innocent child sleeping under a tree from imminent danger. People nearby were stunned by the scene. It is being claimed that the monkey sensed the danger around the child and tried to protect him. The unusual incident has now gone viral on social media, with many saying that emotions and compassion are not limited to humans, animals can also understand feelings.”
The video has been widely shared across social media platforms:
- https://www.instagram.com/reels/DYMvhRPTcCA/
- https://archive.ph/https://www.instagram.com/reels/DYMvhRPTcCA/

Fact Check
To verify the authenticity of the video, we extracted keyframes from the clip and conducted a reverse image search. During the research, we found the same video uploaded on May 8, 2026, by an Instagram account named “mojilo_vandro.” The caption of the original upload did not provide any factual context and presented the video in a dramatic, miracle-like manner.

We further examined the Instagram account and found that it regularly posts AI-generated videos featuring monkeys performing heroic or emotional acts. Importantly, the account owner self-identifies as an “AI video creator” in the bio section.

To further analyze the clip, we tested it using the AI detection tool Hive Moderation. The tool’s analysis classified the viral video as 85.6% likely to be AI-generated. We also checked the clip using another AI detection platform, Deepfake-o-meter. Its AVSRDD (2025) detection model flagged the video as potentially AI-generated with a 100% confidence score.
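Fact-checkers often run a clip through several detectors, as done here with Hive Moderation and Deepfake-o-meter, and then combine the scores into a verdict. A minimal sketch of such a triage rule (the function name and thresholds are our own illustrations, not any tool's API or calibration):

```python
def likely_ai_generated(scores, any_tool=0.9, mean_cutoff=0.7):
    """Flag content when any single detector is highly confident,
    or when the average score across detectors crosses a cutoff.

    `scores` are per-tool AI-likelihood values in [0, 1]; the
    thresholds here are illustrative, not calibrated.
    """
    if not scores:
        raise ValueError("need at least one detector score")
    return max(scores) >= any_tool or sum(scores) / len(scores) >= mean_cutoff

# Scores reported for the viral clip: 85.6% (Hive Moderation)
# and 100% (Deepfake-o-meter's AVSRDD model).
print(likely_ai_generated([0.856, 1.0]))   # True
print(likely_ai_generated([0.10, 0.20]))   # False
```

Requiring either one highly confident tool or a high average helps guard against a single detector's false positive while still catching clips that every tool finds mildly suspicious.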

Conclusion
The evidence gathered during our research clearly shows that the viral video claiming to show a monkey saving a sleeping child from a falling tree branch is not authentic. The clip was created using AI-generated visual techniques and does not depict a real incident.

Executive Summary:
In the digital world, people are increasingly becoming targets of online scams that rely on deception. One such scheme exploiting social media in the run-up to the elections is the "BJP - Election Bonus" offer, which promises a cash prize of Rs. 5,000 or more for completing a simple questionnaire. This article details the swindle, exposes its deceptive tricks, and offers recommendations on protecting yourself from such online fraud, especially during the upcoming elections.
False Claim:
The "BJP - Election Bonus" campaign claims that users can win a cash prize in just a few clicks. The scheme has no real association with the Bharatiya Janata Party (BJP), its Government, or Prime Minister Shri Narendra Modi; it merely uses their images and branding to appear legitimate. The imposters exploit the public's trust in the Government and the widespread desire for easy money to ensnare unwary victims, particularly ahead of the upcoming Lok Sabha elections.

The Deceptive Scheme:
- Tempting Social Media Offer: The fraud begins with an attractive link on social media platforms. The scammers claim the offer comes from the Bharatiya Janata Party (BJP), with the caption “The official party has prepared many gifts for their supporters” accompanied by an image of Prime Minister Shri Narendra Modi.
- Luring with Money: The offer promises Rs. 5,000 or more, designed to draw people in during election campaigns by appealing to their desire for financial gain.
- Tricking with Questions: Clicking the link takes the visitor to a page of simple questions, intended to make people feel safe and believe they have been selected for a genuine government programme.
- The Open-the-Box Trap: Once the questions are answered, the final instruction is to open a box to claim the prize. This is simply a tactic to build curiosity about the reward.
- Fake Reward and Spreading the Scam: Upon opening the box, the recipient is shown a message claiming they have won Rs. 5,000. The prize is fake; it exists only to persuade victims to share the link on WhatsApp, helping the scammers reach more people.
The fraudsters use the party's name and the Prime Minister's name to make the scheme plausible, although there is no real connection. They exploit people's desire for financial help, and the timing of the elections, to make victims susceptible to their tricks.
Analytical Breakdown:
- The campaign is a cleverly crafted scheme that lures people by misusing their trust in the Government. By using the BJP's branding and the Prime Minister's photo, the fraudsters make their misleading offer look credible. Fake reviews and the promised cash reward are the two main components designed to draw users in and set them down a path of deception.
- By sharing the link over WhatsApp, users become unwitting accomplices, helping the scammers reach an even bigger audience, especially with the elections around the corner.
- The timing of the fraud is particularly troubling, with the election just around the corner. As they have done before, the scammers exploit the political climate and the spread of unconfirmed rumours and speculation about the upcoming elections, linking their scam to the party and its leadership to trade on political affiliations.
- We have also cross-checked the claim: as of now, no credible source or official notification confirms any such offer from the party.
- Domain Analysis: The campaign is hosted on a third-party domain different from the official website, which raises doubts about its authenticity. WHOIS information reveals that the domain was registered only recently, on 29 March 2024, a few days before this analysis.

- Domain Name: PSURVEY[.]CYOU
- Registry Domain ID: D443702580-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-03-29T16:18:00.0Z
- Creation Date: 2024-03-29T15:59:17.0Z (Recently Created)
- Registry Expiry Date: 2025-03-29T23:59:59.0Z
- Registrant State/Province: Anhui
- Registrant Country: CN (China)
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
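Domain age is one of the quickest programmatic red flags to check. A sketch of pulling the creation date out of raw WHOIS text and flagging a freshly registered domain (the field names follow the record above; the helper names and the 90-day threshold are our own illustrative choices):

```python
from datetime import datetime, timezone

def parse_creation_date(whois_text):
    """Extract the 'Creation Date:' field from raw WHOIS output."""
    for line in whois_text.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip().lower() == "creation date":
            # e.g. "2024-03-29T15:59:17.0Z" -> drop fraction and zone marker
            raw = value.strip().rstrip("Z").split(".")[0]
            return datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S").replace(
                tzinfo=timezone.utc)
    return None

def is_recently_registered(created, now=None, threshold_days=90):
    """Flag domains younger than `threshold_days` -- a common scam signal."""
    now = now or datetime.now(timezone.utc)
    return (now - created).days < threshold_days

# Trimmed-down version of the WHOIS record shown above.
record = """\
Domain Name: PSURVEY.CYOU
Creation Date: 2024-03-29T15:59:17.0Z
Registry Expiry Date: 2025-03-29T23:59:59.0Z
"""
created = parse_creation_date(record)
print(created.date())  # 2024-03-29
print(is_recently_registered(
    created, now=datetime(2024, 4, 3, tzinfo=timezone.utc)))  # True
```

A days-old registration does not prove fraud on its own, but combined with a mismatched official domain and Cloudflare IP masking, it strengthens the case considerably.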
CyberPeace Advisory and Best Practices:
- Be careful and watchful with online offers that seem too good to be true, particularly during election periods. Exercise a high level of caution when you come across such offers, as they usually conceal dishonest schemes.
- Carefully cross-check the authenticity of any campaign or offer before interacting with it. Do not click on suspicious links, and do not share private data that could be used to further the scam.
- If you come across any such suspicious activity, or if you believe you have been scammed, report it to the relevant authorities, such as the local police or the cybercrime cell. Reporting is one of the most effective tools for stopping the spread of these misleading schemes and can support investigations.
- Educate yourself and your family about scammers' usual tricks, including their election-related strategies. Encourage people to think critically and maintain healthy skepticism toward online offers and promotions that promise easy money or rewards.
- Stay alert as you navigate the digital world, especially during elections. Always verify the authenticity of the information you encounter before acting on it or passing it on to someone else.
- If you have any doubt or concern about a particular offer or campaign, don't hesitate to seek help from reliable sources such as cybersecurity experts or government agencies. Consulting credible sources will help you make informed decisions and guard against being drawn into these schemes.
Conclusion:
The "BJP - Election Bonus" campaign is a telling case study of how internet fraud proliferates, particularly before elections. By understanding the tactics these scammers employ, and how they abuse the community's trust in the Government and political figures, we can equip ourselves and our communities to avoid falling victim to such fraudulent schemes. Together, we can work toward a digital environment free of threats and security breaches, even amid the heightened political tension that accompanies elections.