#FactCheck - The video of Virat Kohli promoting an online casino mobile app is a deepfake.
Executive Summary:
A viral clip in which Indian batsman Virat Kohli appears to endorse an online casino and guarantee a Rs 50,000 jackpot within three days has been proven fake. Accompanied by manipulated captions, the clip claims that Kohli admitted to launching an online casino during an interview with Graham Bensinger, but this is not true. An investigation showed that the original interview, published on Bensinger's YouTube channel in the last quarter of 2023, contains no such remarks by Kohli. In addition, the AI deepfake analysis tool Deepware flagged the viral video as a deepfake.

Claims:
The viral video claims that cricket star Virat Kohli is promoting an online casino and assuring users of the site that they can make a profit of Rs 50,000 within three days. However, the CyberPeace Research Team has found that the video is a deepfake, not an original recording, and that there is no credible evidence of Kohli's participation in any such endorsement. Many users are nevertheless sharing the video with misleading titles across social media platforms.


Fact Check:
As soon as we were informed of the claim, we ran a keyword search for any credible news report about Virat Kohli promoting a casino app and found nothing. We then performed a reverse image search on the frame of Kohli wearing a black T-shirt, as seen in the video, and landed on a YouTube video by Graham Bensinger, an American journalist. The viral clip was taken from this original video.

In that interview, Kohli discussed his childhood, his diet, his cricket training, his marriage, and similar topics, but said nothing about launching a casino app.
On close scrutiny of the viral video, we noticed inconsistencies in the lip-sync and voice. We then ran the clip through the Deepware deepfake-detection tool, which flagged it as a deepfake.


Finally, we confirm that the viral video is a deepfake and the claim it makes is false.
Conclusion:
The viral video claims that cricketer Virat Kohli endorses an online casino and guarantees users winnings of Rs 50,000 within three days. The story is entirely fake. This incident demonstrates the necessity of checking facts and sources before believing any information, and of remaining sceptical about deepfakes and other AI-generated media, which are increasingly used to spread misinformation.
Related Blogs

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and services ranging from small financial companies to giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of the technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious had the perpetrators decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews of victims of deepfake porn last year, the study said 63 percent of participants talked about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms hosting deepfake porn of film stars such as Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers such as Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations while hiding behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is not lost on anyone, especially when the same platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and create porn of whoever a user wants, taking requests even for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and it does not target only celebrities: ordinary people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register/partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees/stakeholders, which potentially violates the platform's terms of use that ban the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders is extremely difficult for law enforcement agencies, because this is a borderless crime that often involves several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users, and the Union Government recently issued an advisory asking social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnav is expected to be better placed to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/

Overview of the Advisory
On 18 November 2025, the Ministry of Information and Broadcasting (I&B) published an advisory addressed to all private satellite television channels in India. The advisory is a critical institutional intervention into the broadcast of sensitive content relating to the blast at the Red Fort on 10 November 2025. It was issued after the Ministry noticed that some news channels had been broadcasting content about persons allegedly involved in the Red Fort blast, content justifying their acts of violence, and information or video on explosive material. Broadcasting of this kind at such a critical juncture may inadvertently encourage or incite violence, disrupt public order, and pose risks to national security.
Key Instructions under the Advisory
The advisory directs TV channels to ensure strict compliance with the Programme and Advertising Code under the Cable Television Networks (Regulation) Act, 1995. Channels are advised to exercise the highest possible discretion and sensitivity when reporting on alleged perpetrators of violence, especially where the material justifies acts of violence or provides instructional detail on making explosive materials. In particular, broadcasters should not carry programming that:
- Contains anything obscene, defamatory, deliberately false, or suggestive innuendos and half-truths.
- Is likely to encourage or incite violence, contains anything against the maintenance of law and order, or promotes an anti-national attitude.
- Contains anything that affects the integrity of the Nation.
- Could aid, abet, or promote unlawful activities.
Responsible Reporting Framework
The advisory does not constitute outright censorship; rather, it establishes a self-regulatory system that depends on the discretion and sensitivity of TV channels in distinguishing legitimate news from content that crosses the threshold from information dissemination into incitement.
Why This Advisory is Important in a Digital Age
In modern media systems, the line between traditional broadcast journalism and digital virality has eroded. Television content is no longer confined to scheduled programmes or cable distribution channels. A single news piece, especially one of a dramatic or contentious nature, can be clipped, edited, and repackaged on social media networks within minutes of airing, often stripped of context, editorial discretion, or timing indicators.
This gives sensitive content a multiplier effect. A short news item showing a suspect justifying violence, or featuring explosive material, can be viewed by millions on YouTube, WhatsApp, Twitter/X, and Facebook, spreading organically and being amplified algorithmically. Studies have shown that misinformation and sensational reporting circulate much faster than factual corrections, a pattern observed during recent conflicts and crises in India and elsewhere.
Vulnerabilities of Information Ecosystems
The advisory was issued in an information environment characterised by:
- Rapid viral mechanisms: Content spreads faster than it can be verified.
- Algorithmic amplification: Platform mechanisms boost emotionally charged content.
- Coordinated amplification networks: Organised groups make posts and videos go viral in order to set a narrative for the general public.
- Deepfake and synthetic media risks: Original broadcasts can be manipulated and reposted with false attribution.
Interconnection with Cybersecurity and National Security
Unverified or sensationalised reporting of security incidents creates specific vulnerabilities:
- Trust Erosion: Trust is broken when the public watches broadcasters air unverified claims or emotional accounts as facts, and the damage extends to security agencies, law enforcement, and government institutions themselves. Distrust of official information creates information gaps, which are filled by rumours, conspiracy theories, and hostile narratives.
- Cognitive Fragmentation: Misinformation produces multiple versions of the truth. The narratives citizens receive vary with the media sources they follow. This fragmentation complicates organising society's collective response to an actual security threat, because populations can be mobilised around misguided stories rather than accurate information.
- Radicalisation Pipeline: People searching for ideological justifications for violent action may encounter media-derived material that has been carefully distorted to present terrorism as a legitimate political or religious stand.
How Social Instability Is Exploited in Cyber Operations and Influence Campaigns
Misinformation creates exploitable vulnerability in three phases:
- First, conflicting unverified accounts fragment the information environment: populations are presented with competing versions of events by different media sources.
- Second, institutional trust in media and security agencies is shaken by exposure to subsequently corrected false information, resulting in an information vacuum.
- Third, in such a distrusting and confused setting, the population becomes susceptible to organised manipulation by malicious actors.
Sensationalised broadcasting hands adversaries content assets, narrative frameworks, and information gaps that they can use to drive destabilisation campaigns. Responsible broadcasting directly counters these mechanisms of exploitation.
Media Literacy and Audience Responsibility
Structural Information Vulnerabilities
A major part of the Indian population is structurally disadvantaged in information access:
- Language barriers: Fact-checking infrastructure remains heavily centralised in English and Hindi, while vernacular-language misinformation goes viral in Tamil, Telugu, Marathi, Punjabi, and other languages.
- Digital literacy gaps: An estimated 40 million people in India have received digital literacy training, yet more than 900 million Indians access digital content, with widely varying ability to evaluate it critically.
- Rural-urban divide: Rural and less affluent citizens have more difficulty accessing verification tools and media literacy resources.
- Algorithmic capture: Social media optimises for engagement over accuracy, actively promoting emotionally inflammatory or divisive content to users based on their engagement history.
Conclusion
The Ministry of Information and Broadcasting's advisory is an acknowledgment that media accountability is part of state security in the information era. It states the principles of responsible reporting without interfering with editorial autonomy, a balance that all stakeholders should uphold. Implementing the advisory requires concerted effort from broadcasters, platforms, civil society, government, and educational institutions: information integrity cannot be handled by a single player. Without media literacy resources, citizens cannot responsibly evaluate information; without open and fast communication with media stakeholders, government agencies cannot combat misinformation.
The recommendations point towards collaborative governance, that is, institutional arrangements in which media self-regulation, technological protection, user empowerment, and policy frameworks work together rather than compete. How well these measures are deployed will decide whether India can keep its media open and free while preserving the information integrity needed for national security, democratic governance, and social stability in an era of high-speed information flows, algorithmic amplification, and information warfare.
References
- https://mib.gov.in/sites/default/files/2025-11/advisory-18.11.2025.pdf

What are Wi-Fi attacks?
Wi-Fi is an important area of cybersecurity: it lets devices and systems join a network without a physical cable, anywhere within the signal radius of the access point. But everything has pros and cons, and it is well established that Wi-Fi networks are highly vulnerable to security breaches and comparatively easy for attackers to compromise. Almost every modern device can use Wi-Fi: smartphones, tablets, computers, and laptops. There are certain signs that someone has been tampering with your personal Wi-Fi. The first and most telling is that your internet speed slows down, because someone else is consuming your bandwidth.
Why would anyone hack someone’s Wi-Fi network?
Usually, hackers break into a network because they want access to someone's confidential data: once inside, they can observe all the online activity and data sent through the network. An unauthorized intruder will be able to see pretty much everything you do online, including information you enter on websites. Any financial information saved in your browser can be accessed, and an attacker can even alter the content you see online. Everything that traverses the Wi-Fi network can be used by hackers for their own benefit: they can sell it, impersonate you, or even drain money from your bank account.
Avoiding vulnerable Wi-Fi networks
The first and foremost rule of protection is to avoid public networks: if a network is freely open to you, it is also open to others, including anyone who wishes to capture your confidential and sensitive information. If you really need to access a public network in an urgent situation, limit your activity while connected and avoid online banking or any page that requires login credentials. It is also a good measure to delete your cookies after using public Wi-Fi.
How To Secure Your Home Wi-Fi Network
Your home’s wireless internet connection is your Wi-Fi network. Typically, a wireless router is used, which broadcasts a signal into the atmosphere. You can connect to the internet using that signal. However, if your network is not password-protected, any nearby device can grab the signal off the air and connect to your internet. The benefit of Wi-Fi? Wireless access to the internet is possible. The negative? Your internet activity, including your personal information, may be visible to neighboring users who connect to your unprotected network. Furthermore, if someone uses your network to conduct a crime or send out unauthorized spam, you might be held accountable.
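Password-protecting the network is only useful if the passphrase itself is sound. As a minimal illustrative sketch (not a substitute for a real password auditor), the function below checks a candidate WPA2-PSK passphrase against the 8-63 printable-ASCII-character limit defined by the standard, plus two purely heuristic checks whose thresholds and default list are our own assumptions:

```python
import string

# Characters that are "printable" in Python but not allowed here:
# tab, newline, carriage return, vertical tab, form feed.
_DISALLOWED_WHITESPACE = "\t\n\r\x0b\x0c"

def check_wpa2_passphrase(passphrase: str) -> list:
    """Return a list of problems; an empty list means it passes these checks."""
    problems = []
    # WPA2-PSK passphrases must be 8-63 printable ASCII characters.
    if not 8 <= len(passphrase) <= 63:
        problems.append("length must be 8-63 characters")
    if any(c not in string.printable or c in _DISALLOWED_WHITESPACE
           for c in passphrase):
        problems.append("only printable ASCII characters are allowed")
    # Heuristic strength checks (illustrative thresholds, not part of any standard).
    if len(passphrase) < 12:
        problems.append("consider 12+ characters to resist offline cracking")
    if passphrase.lower() in {"password", "12345678", "admin123"}:
        problems.append("passphrase is a well-known default")
    return problems
```

For example, `check_wpa2_passphrase("password")` flags the passphrase as both too short for comfort and a well-known default, while a long random phrase passes cleanly.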
Wi-Fi or Li-Fi? –
The common consensus is that Li-Fi technology is more secure than Wi-Fi. Li-Fi systems can be hardened by integrating a variety of security features, and although many of these will only appear once Li-Fi is widely deployed, the technology is already considered safer: because the connection is carried by light, its physical characteristics make it simpler to lock down connections, limit access, and track users even in the absence of encryption and other security features. Li-Fi systems will also be able to support new security protocols, enabling not only high-speed networking but also innovative techniques to strengthen connections.
Conclusion
A hacker does not have to be in the building where a network is located to sniff its packets. Because wireless networks communicate over radio waves, an attacker can sniff the network from a nearby location; most attackers use network sniffing to find the SSID before compromising a wireless network.
Any wireless network can, in principle, be attacked in a number of ways. Use of the default SSID or password, WPS PIN authentication, insufficient access control, and leaving the access point in an openly accessible location are all potential vulnerabilities that could allow the theft of sensitive data. Kismet's architecture in WIDS mode may guard against DoS, MITM, and MAC-spoofing attacks, while routine software updates and the use of firewalls may help defend the network against outside intrusion. Ethical hacking is the practice of finding infrastructure issues that could allow harmful code to be injected into a service, system, or organisation; ethical hackers lawfully break into networks and probe for weak spots in order to prevent real intrusions.
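The default-SSID and default-password weaknesses mentioned above can be caught with a simple self-audit. The sketch below is a toy illustration with a tiny hand-picked sample of defaults (real audit tools ship lists with thousands of entries); the specific prefixes and credential pairs are assumptions for demonstration only:

```python
# Tiny illustrative sample of factory-default admin credentials.
COMMON_DEFAULT_LOGINS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "1234"),
    ("user", "user"),
}

# SSID prefixes that suggest the owner never renamed the network.
DEFAULT_SSID_PREFIXES = ("linksys", "netgear", "dlink", "tp-link", "default")

def audit_router(ssid: str, username: str, password: str) -> list:
    """Flag router settings that look like unchanged factory defaults."""
    findings = []
    if ssid.lower().startswith(DEFAULT_SSID_PREFIXES):
        findings.append("SSID looks like a factory default; rename it")
    if (username.lower(), password.lower()) in COMMON_DEFAULT_LOGINS:
        findings.append("admin login matches a well-known default; change it")
    return findings
```

Running `audit_router("NETGEAR42", "admin", "password")` reports both problems, whereas a renamed network with unique credentials returns no findings.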