# FactCheck: AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A viral video circulating on social media appears to show comedian Samay Raina casually cracking a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, leaving him visibly unsettled, during the shoot of a Kaun Banega Crorepati (KBC) Influencer Special episode. The joke alluded to long-standing gossip and rumours of unspoken tension between the two Bollywood legends. Our research confirms that the video is artificially manipulated and does not reflect genuine content: the specific joke in the video does not appear in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, underscoring the need for greater public vigilance and care in verifying online information.

Claim:
The video claims that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you has a Rekha (line)," playing on the Hindi word "rekha", which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is available on the Sony SET India YouTube channel. Our analysis found no part of the episode in which comedian Samay Raina cracks a joke about actress Rekha. Technical analysis using the Hive moderation tool further confirmed that the viral clip is AI-generated.

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such manipulation poses a serious threat online, making it all the more important to fact-check any news against credible sources before sharing it. Promoting media literacy will be key to combating misinformation at a time when AI-generated content is increasingly misused.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (Formerly known as Twitter)
- Fact Check: False and Misleading
Related Blogs

The Promotion and Regulation of Online Gaming Act, 2025, which came into force in August, has been one of the most widely anticipated regulations in the digital entertainment industry. Alongside provisions promoting esports and licensing online gaming, the legislation notably introduces a blanket ban on real-money gaming (RMG). The rationale is to reduce gaming's addictive effects, protect minors, and limit the circulation of black money. In practice, however, the Act has raised apprehensions about the legislative process, regulatory redundancy, and unintended consequences that could shift users and revenue to offshore operators.
From Debate to Prohibition: How the Act was Passed
The Promotion and Regulation of Online Gaming Act was passed as a central law, replacing the earlier fragmented state laws on online betting and gambling with an overarching framework. Proponents argue that a unified national framework was needed to address the scale of online betting and its detrimental impact on young users. The Act marks a direct shift to criminalisation, departing from the self-regulation and partial restrictions that characterised the previous decade of incremental regulatory experiments. Industry stakeholders believe that such sudden, blanket action creates uncertainty and erodes long-run confidence in the system. Critics have further pointed out that the Bill was passed without adequate Parliamentary deliberation, raising questions about whether procedural safeguards were upheld.
Prohibition of Online RMG
In the Indian context, a distinction has long been drawn between games of skill and games of chance: the latter, such as lotteries and casino games, are strictly prohibited under state laws, whereas the former, such as rummy and fantasy sports, have generally been allowed after courts recognised them as skill-based. The Online Gaming Act, 2025 abolishes this distinction online, banning all RMG activity involving cash transactions regardless of skill or chance. The Act also criminalises advertising, facilitating, and hosting such sites, penalising offshore operators that target Indian customers and bringing their payment gateways, app stores, and advertisers within its jurisdiction.
The Problem of Overlap
One potential issue with the Act is its overlap with existing laws. The IT Rules, 2023 require gaming intermediaries to appoint compliance officers, submit monthly reports, and undergo due diligence. The new Act introduces a three-level classification of games, while advisories of the Central Consumer Protection Authority (CCPA) under the Consumer Protection Act treat online betting as an unfair trade practice.
This multiplicity of regulations builds a maze where different Ministries and state governments have overlapping jurisdiction. Policy experts caution that such an overlap can create enforcement challenges, punish players who act within the law, and leave offshore malefactors undetected.
Unintended Consequences: Driving Users Offshore
Outright prohibition rarely removes demand; it only displaces it. Offshore sites have taken advantage of the situation as Indian operators like Dream11 shut down their money games after the ban. Aggressive advertising by foreign betting companies not registered in India has already been reported, and most of these firms run backend infrastructure that the Act cannot regulate (Storyboard18).
This diversion of users to unregulated markets carries two main risks. First, Indian players lose the consumer protection offered by local regulation, and their data can end up with dubious foreign entities. Second, the government loses oversight of money flows, which can move through informal channels, cryptocurrencies, or other opaque systems. Industry analysts warn that such developments may worsen the black-money problem rather than solve it (IGamingBusiness).
Advertising, Age Gating, and Digital Rights
The Act also strengthens advertising regulations, aligning with advisories issued by the Advertising Standards Council of India that prohibit targeting minors. Critics note, however, that enforcement remains inadequate and that children can access unregulated overseas applications with comparative ease. In the absence of complementary digital literacy programmes and strong parental controls, these restrictions risk being superficial rather than real.
Privacy advocates also warn that frequent prompts, vague messages, or invasive surveillance can weaken users' digital rights rather than strengthen them. In global contexts, overregulation has been found to create "banner blindness", where users dismiss warnings without ever understanding them.
Enforcement Challenges
The Act places substantial responsibilities on many stakeholders, including the Ministry of Information and Broadcasting (MIB) and the Reserve Bank of India (RBI). Platforms like Google Play and the Apple App Store are expected to verify government-approved lists of compliant gaming apps and remove non-compliant or banned ones, as directed by the MIB and the RBI. While this pressure may motivate intermediaries to cooperate, it also carries a risk of overreach if applied unevenly or politically.
According to experts, the solution should be underpinned by technology itself. Artificial intelligence can be used to identify illegal advertisements, detect underage participation in gaming, and trace payment streams. At the same time, regulators should issue definitive lists of compliant and non-compliant applications to guide consumers and intermediaries alike. Without such practical provisions, enforcement risks remaining patchy.
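The screening workflow the Act envisions for intermediaries can be sketched in a few lines: check an app against regulator-issued compliant and banned lists, and hold anything unclassified for review. This is a minimal illustrative sketch; the list contents, app IDs, and function name are hypothetical assumptions, not real MIB/RBI data.

```python
# Hypothetical sketch of how an app-store intermediary might screen gaming
# apps against regulator-issued lists. All app IDs below are illustrative.

COMPLIANT = {"in.example.esports.arena", "com.example.quizplay"}   # allowlist
BANNED = {"bet.offshore.app", "com.example.fastcash.rmg"}          # blocklist

def screening_decision(app_id: str) -> str:
    """Return the action an intermediary would take for a given app ID."""
    if app_id in BANNED:
        return "remove"   # delist immediately, as directed
    if app_id in COMPLIANT:
        return "allow"    # verified against the official compliant list
    return "review"       # not yet classified: hold for manual review

for app in ("com.example.quizplay", "bet.offshore.app", "com.newgame.unknown"):
    print(app, "->", screening_decision(app))
```

In practice the lists would be fetched from an official registry rather than hard-coded, but the decision logic stays the same.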
Online Gaming Rules
On 1 October 2025, the government issued draft Online Gaming Rules under the Promotion and Regulation of Online Gaming Act. The rules focus on creating compliance frameworks, defining the classification of permitted gaming activities, and prescribing grievance-redressal mechanisms aimed at player protection and procedural transparency. However, the draft does not revisit or soften the blanket prohibition on real-money gaming (RMG), so questions about enforcement effectiveness and regulatory clarity remain open (Times of India, 2025).
Protecting Consumers Without Stifling Innovation
The ban highlights a larger conflict: protecting vulnerable users without stifling an industry that has contributed to innovation, jobs, and tax revenue. Online gaming has added significantly to GST collections, and the sudden shake-up raises fiscal concerns (Reuters).
Several legal challenges to the Act have already been filed, questioning its constitutionality and, in particular, whether its restrictions are proportionate to the right to trade. The outcome of these cases will shape the future trajectory of India's digital economy (Reuters).
Way Forward
Instead of outright prohibition, experts suggest a more balanced approach combining regulation and consumer protection. Key measures could include:
- A clear distinction between games of skill and games of chance, with proportionate regulation.
- Age verification and digital literacy campaigns to protect underage users.
- Stronger advertising and payments compliance requirements, with enforceable penalties for non-compliance.
- Coordinated oversight among ministries to prevent duplication and regulatory conflict.
- Leveraging AI and fintech to track illegal financial flows (black money) while fostering innovation.
Conclusion
The Online Gaming Act, 2025 addresses social issues, such as addiction, monetary risk, and child safety, that require governance intervention. However, its chosen path of total prohibition is more likely to spawn a new set of problems than to solve the existing ones: it will push consumers to offshore sites, undermine consumer rights, and slow innovation.
For India, the real challenge is not whether to prohibit online money gaming but how to create a balanced, transparent, and enforceable framework that protects users while fostering a responsible gaming ecosystem. With better coordination, sensible use of technology, and balanced protections, India can reduce the adverse consequences of online betting without pushing the industry into the shadows.
References:
- India's Dream11, top gaming apps halt money-based games after ban
- India online gambling ban could drive punters to black market
- Offshore betting firms with backend ops in India not covered by online gaming law
- The Great Gamble: India’s Online Gaming Ban, The GST Battle, And What Lies Ahead.
- Game Over for Online Money Games? An Analysis of the Online Gaming Act 2025
- Government gambles heavily on prohibiting online money gaming
- Online gaming regulation: New rules to take effect from October 1; government stresses consultative approach with industry
The spread of misinformation has become a cause for concern for all stakeholders, be it governments, policymakers, business organisations, or citizens. The current push to combat misinformation is rooted in a growing awareness that misinformation exploits sentiment and can result in economic instability, personal risk, and heightened political, regional, and religious tensions. Its circulation poses significant challenges for organisations, brands, and administrators of all types. Misinformation online poses a risk not only to the everyday content consumer and the sharer, but also to the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have been scrutinised for the content they allow to be published and what they don't. It is important to understand not only how this misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment that social media platforms operate in, where the stakes are high from the perspective of both business and societal impact. They are:
- Legal Consequences: Platforms can be fined by regulators if they fail to comply with content moderation or misinformation-related laws; a prime example is the EU's Digital Services Act, which regulates digital services that act as intermediaries between consumers and goods, services, and content. Platforms can also face lawsuits from individuals, organisations, or governments for damages caused by misinformation; defamation suits are standard practice when dealing with vectors of misinformation. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms rely on a trust model in which users trust the platform and its content. If users lose trust in a platform because of misinformation, engagement can fall. This may also attract negative coverage that damages public opinion of the brand and its long-term value and viability.
- Financial Consequences: Businesses that engage with a platform may end that engagement if the platform is accused of spreading misinformation, leading to a drop in revenue. This can have major consequences for the platform's long-term financial health, such as a decline in its stock price.
- Operational Consequences: To counter scrutiny from regulators, platforms may need to adopt stricter content moderation policies or other resource-intensive measures, increasing their operational costs.
- Market Position Loss: If a platform's reliability is in question, users can migrate to other platforms, ceding market share to competitors that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: Platforms must balance freedom of expression against the prevention of misinformation. Stricter content moderation can invite accusations of censorship if users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Platform accountability extends to moral accountability, as the content platforms host affects spheres of users' lives such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement of violence, which means platforms bear a social responsibility as well.
Misinformation has become a global issue, so digital platforms must remain vigilant as they navigate varying legal, cultural, and social expectations across jurisdictions. The diversity of approaches has complicated efforts to create standardised practices and policies, leading platforms to adopt flexible strategies for managing misinformation that align with both global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- Implementing more robust content moderation systems that combine AI and human oversight to identify and remove misinformation effectively.
- Enhancing transparency in platform policies for content moderation and decision-making, to build user trust and reduce the backlash associated with perceived censorship.
- Partnering with fact-checkers to help verify the accuracy of content and reduce the spread of misinformation.
- Engaging proactively with regulators to stay ahead of legal and regulatory requirements and avoid punitive action.
- Investing in media literacy initiatives that help users critically evaluate the content available to them.
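The first measure, combining AI with human oversight, is commonly implemented as a tiered routing pipeline. The sketch below is a minimal illustration, assuming a classifier that returns a misinformation-likelihood score between 0 and 1; the thresholds and names are hypothetical, not any platform's actual system.

```python
# Minimal sketch of a tiered AI-plus-human moderation pipeline. Assumes an
# upstream model scoring content in [0, 1]; thresholds here are illustrative.

AUTO_ACTION_THRESHOLD = 0.95   # high-confidence: act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain band: escalate to a human moderator

def route_content(score: float) -> str:
    """Decide what happens to a post given its model score."""
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_label_or_remove"  # near-certain misinformation
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"          # ambiguous: a person decides
    return "publish"                   # low risk: no intervention

for s in (0.98, 0.72, 0.10):
    print(f"score={s:.2f} -> {route_content(s)}")
```

The design keeps automation for the extremes while reserving human judgment for the ambiguous middle band, which is where wrongful-censorship complaints typically arise.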
Final Takeaways
The proliferation of misinformation on digital platforms presents significant legal, reputational, financial, and operational challenges for all stakeholders. This creates a critical need to balance the interlinked, but seemingly conflicting, priorities of preventing misinformation and upholding freedom of expression. Platforms must invest in robust content moderation systems with built-in transparency, collaboration with fact-checkers, and media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.

Introduction
AI has transformed the way we look at advanced technologies. As the use of AI evolves, it also raises concerns about AI-based deepfake scams, in which scammers use AI to create deepfake videos, images, and audio to deceive people and commit crimes. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of a known friend using AI-based deepfake technology. There is a need for awareness and vigilance to safeguard ourselves from such incidents.
Unveiling the Kerala Deepfake Video Call Scam
The man in Kerala received a WhatsApp video call from a person claiming to be his former colleague in Andhra Pradesh; in actuality, the caller was a scammer. He asked the Kerala man for 40,000 rupees via Google Pay. To gain his trust, the scammer even mentioned common friends he shared with the victim, and claimed that he was at Dubai airport and urgently needed the money for his sister's medical emergency.
AI can analyse and process data such as facial images, videos, and audio to create a realistic deepfake that closely resembles the real thing. In the Kerala deepfake video call scam, the scammer's video call featured a facial appearance and voice convincingly similar to those of the victim's colleague. Believing he was genuinely communicating with his colleague, the Kerala man transferred the money without hesitation. When he later called his former colleague on the number saved in his contact list, the colleague said he had made no such call. The victim then realised he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish between authentic and fabricated audio, video, and images. Deepfake technology can create entirely fictional photos and videos from scratch; even audio can be deepfaked to produce "voice clones" of anyone.
However, there are some red flags that can help assess the authenticity of content:
- Video quality: Deepfake videos often have compromised or poor video quality and unusual blurring, which can call their genuineness into question.
- Looping videos: Deepfake videos often loop or freeze unusually, with footage repeating itself, indicating that the content may be fabricated.
- Verify separately: Whenever you receive a request such as one for financial help, verify the situation by contacting the person directly through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, giving the victim no time to think and pushing them into a quick decision. Be cautious when a sudden emergency demands urgent financial support from you.
- Report suspicious activity: If you encounter such activity on your social media accounts or through such calls, report it to the platform or the relevant authority.
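The "looping video" red flag above can even be checked mechanically: hash each frame and look for a block of frames that repeats an earlier block. The sketch below is purely illustrative; real frames would come from a video decoder (e.g. OpenCV), while here they are stand-in byte strings, and the function names are our own.

```python
# Illustrative sketch of detecting the "looping video" red flag by hashing
# frames and searching for a repeated block. Frames are synthetic bytes here;
# a real pipeline would decode them from the video file.
import hashlib

def frame_hashes(frames):
    """Fingerprint each frame so identical frames compare cheaply."""
    return [hashlib.sha256(f).hexdigest() for f in frames]

def has_loop(frames, min_repeat=3):
    """Return True if a block of min_repeat consecutive frames recurs later."""
    h = frame_hashes(frames)
    seen = {}  # first position of each block of hashes
    for i in range(len(h) - min_repeat + 1):
        block = tuple(h[i:i + min_repeat])
        if block in seen and i - seen[block] >= min_repeat:
            return True  # the same block appears again further on
        seen.setdefault(block, i)
    return False

# a-b-c repeats: flagged as a possible loop
print(has_loop([b"a", b"b", b"c", b"a", b"b", b"c"]))
# all frames distinct: not flagged
print(has_loop([b"a", b"b", b"c", b"d", b"e", b"f"]))
```

Exact-hash matching only catches byte-identical repeats; production systems would use perceptual hashes to tolerate compression noise, but the idea is the same.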
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combating AI-based cybercrime. The case of the Kerala man who fell victim to an AI-based deepfake video call and lost Rs 40,000 is an alarming reminder of the need to remain extra vigilant and cautious in the digital age. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from falling victim to AI-based cybercrimes and from malicious scammers who exploit these technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.