# FactCheck: AI-Generated Video Falsely Shows Samay Raina Making a Joke on Rekha
Executive Summary:
A video circulating on social media appears to show comedian Samay Raina making a lighthearted joke about actress Rekha in the presence of host Amitabh Bachchan, who looks visibly unsettled, during the shooting of a Kaun Banega Crorepati (KBC) Influencer Special episode. The joke alludes to gossip and rumours of unspoken tensions between the two Bollywood legends. Our research establishes that the video is artificially manipulated and does not reflect genuine content: the joke in the clip does not appear anywhere in the original KBC episode. This incident highlights the growing misuse of AI technology to create and spread misinformation, underscoring the need for greater public vigilance and awareness in verifying online information.

Claim:
The claim in the video suggests that during a recent "Influencer Special" episode of KBC, Samay Raina humorously asked Amitabh Bachchan, "What do you and a circle have in common?" and then delivered the punchline, "Neither of you and circle have Rekha (line)," playing on the Hindi word "rekha," which means 'line'.

Fact Check:
To check the genuineness of the claim, we carefully reviewed the entire Influencer Special episode of Kaun Banega Crorepati (KBC), which is available on the Sony SET India YouTube channel. Our analysis confirmed that at no point in the episode does comedian Samay Raina crack a joke about actress Rekha. Further technical analysis using the Hive moderation tool found that the viral clip is AI-generated.
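One part of such a verification, checking whether a viral frame actually appears in the original footage, can be sketched with a simple perceptual hash. This is a minimal illustration of the idea only, not the tooling used in the fact check; services like Hive apply far more sophisticated AI-generation classifiers.

```python
# Minimal perceptual-hash (average-hash) sketch for comparing video frames.
# Frames are modelled as 8x8 grayscale matrices (lists of lists of ints).
# Illustrative only -- not the fact-checkers' actual pipeline.

def average_hash(frame):
    """Return a 64-bit hash: each bit is 1 if the pixel exceeds the frame mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (0 means near-identical frames)."""
    return bin(h1 ^ h2).count("1")

# A frame compared with itself hashes identically; a heavily altered frame
# (here, an inverted copy) lands far away in Hamming distance.
original = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
tampered = [[255 - p for p in row] for row in original]

print(hamming_distance(average_hash(original), average_hash(original)))  # 0
```

In practice, a low Hamming distance between a suspect frame and any frame of the original broadcast suggests reuse of genuine footage, while no close match anywhere (as with this clip's fabricated segment) is a signal to investigate further.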

Conclusion:
The viral video showing Samay Raina making a joke about Rekha during KBC is entirely AI-generated and false. Such content poses a serious threat of online manipulation, which makes it all the more important to fact-check news against credible sources before sharing it. Given the danger of misuse of AI-generated content, promoting media literacy will be key to combating misinformation.
- Claim: Fake AI Video: Samay Raina’s Rekha Joke Goes Viral
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading

Introduction
Misinformation, at its most basic, is incorrect or misleading information; it may or may not involve specific malicious intent and includes inaccurate, incomplete, or false information as well as selective half-truths. The main challenge in dealing with misinformation is defining it and distinguishing it from legitimate content. This complexity arises from the speed at which information evolves and propagates on digital platforms. Additionally, balancing the fundamental right to freedom of speech and expression against content regulation by state actors poses a significant challenge: careful consideration is required to avoid censorship while effectively combating harmful misinformation.
Acknowledging the severe consequences of misinformation and the critical need to combat it, the Bharatiya Nyaya Sanhita (BNS), 2023 introduces key measures to address misinformation in India. These new provisions, introduced under India's new criminal laws, penalise the deliberate creation, distribution, or publication of inaccurate information. Previously absent from the IPC, these sections offer an additional legal resource against the proliferation of falsehoods, complementing existing laws targeting the same issue.
Section 353 of the BNS on Statements Conducing to Public Mischief criminalises making, publishing, or circulating statements, false information, rumours, or reports, including through electronic means, with the intent or likelihood of causing various harmful outcomes.
This section thus brings misinformation into its ambit, since misinformation has traditionally been used to induce public fear or alarm that may lead to offences against the State or public tranquillity, or to incite one class or community to commit offences against another. The section also penalises the promotion of enmity, hatred, or ill will among different religious, racial, linguistic, or regional groups.
The BNS prescribes punishment of imprisonment for up to three years, a fine, or both for offences under Section 353. Notably, a longer term of up to five years, along with a fine, is prescribed for such offences committed in places of worship or during religious ceremonies. The only exception under this section is granted to unsuspecting individuals who, believing the misinformation to be true, spread it without any ill intent. However, this exception may blunt the provision's effectiveness: the offence is hard to trace at the outset, and individuals have multiple avenues to claim protection with no mechanism to verify their intent.
The BNS also regulates misinformation through Section 197(1)(d) on imputations and assertions prejudicial to national integration. Under this provision, anyone who makes or publishes false or misleading information, whether through spoken or written words, signs, visible representations, or electronic communication, that jeopardises the sovereignty, unity, integrity, or security of India faces imprisonment for up to three years, a fine, or both; if the offence occurs in a place of worship or during religious ceremonies, the punishment increases to imprisonment for up to five years and may include a fine. Additionally, Section 212(a) and (b) penalises furnishing false information. A person legally obligated to provide information to a public servant who furnishes information they know or reasonably believe to be false now faces six months' imprisonment, a fine of up to five thousand rupees, or both. If the false information pertains to the commission or prevention of an offence, or the apprehension of an offender, the punishment increases to imprisonment for up to two years, a fine, or both.
Enforcement Mechanisms: CyberPeace Policy Wing Outlook
To ensure the effective enforcement of these provisions, coordination between the key stakeholders, i.e., law enforcement agencies, digital platforms, and judicial oversight bodies, is essential. Law enforcement agencies must utilise technologies such as data analytics and digital forensics to track and identify the origins of false information. This technological capability is crucial for pinpointing sources and preventing the further spread of misinformation. Simultaneously, digital platforms must implement robust monitoring and reporting mechanisms to detect and address misleading content proactively. Supporting oversight by judicial bodies plays a critical role in ensuring that enforcement actions are conducted fairly and in line with legal standards; it helps maintain a balance between addressing misinformation and upholding fundamental rights such as freedom of speech. The success of the BNS in addressing these challenges will depend on the effective integration of these mechanisms and ongoing adaptation to the evolving digital landscape.
Resources:
- Bharatiya Nyaya Sanhita, 2023 https://www.mha.gov.in/sites/default/files/250883_english_01042024.pdf
- https://www.foxmandal.in/changes-brought-forth-by-the-bharatiya-nyaya-sanhita-2023/
- https://economictimes.indiatimes.com/news/india/spreading-fake-news-could-land-people-in-jail-for-three-years-under-new-bharatiya-nyaya-sanhita-bill/articleshow/102669105.cms?from=mdr

Introduction
Today, on the International Day of UN Peacekeepers, we honour the brave individuals who risk their lives to uphold peace in the world’s most fragile and conflict-ridden regions. These peacekeepers are symbols of hope, diplomacy, and resilience. But as the world changes, so do the arenas of conflict. In today’s interconnected age, peace and safety are no longer confined to physical spaces—they extend to the digital realm. As we commemorate their service, we must also reflect on the new frontlines of peacekeeping: the internet, where misinformation, cyberattacks, and digital hate threaten stability every day.
The Legacy of UN Peacekeepers
Since 1948, UN Peacekeepers have served in over 70 missions, protecting civilians, facilitating political processes, and rebuilding societies. From conflict zones in Africa to the Balkans, they’ve worked in the toughest terrains to keep the peace. Their role is built on neutrality, integrity, and international cooperation. But as hybrid warfare becomes more prominent and digital threats increasingly influence real-world violence, the peacekeeping mandate must evolve. Traditional missions are now accompanied by the need to understand and respond to digital disruptions that can escalate local tensions or undermine democratic institutions.
The Digital Battlefield
In recent years, we’ve seen how misinformation, deepfakes, online radicalisation, and coordinated cyberattacks can destabilise peace processes. Disinformation campaigns can polarise communities, hinder humanitarian efforts, and provoke violence. Peacekeepers now face the added challenge of navigating conflict zones where digital tools are weaponised. The line between physical and virtual conflict is blurring. Cybersecurity has gone beyond being just a technical issue and is now a peace and security issue as well. From securing communication systems to monitoring digital hate speech that could incite violence, peacekeeping must now include digital vigilance and strategic digital diplomacy.
Building a Culture of Peace Online
Safeguarding peace today also means protecting people from harm in the digital space. Governments, tech companies, civil society, and international organisations must come together to build digital resilience. This includes investing in digital literacy, combating online misinformation, and protecting human rights in cyberspace. Peacekeepers may not wear blue helmets online, but their spirit lives on in every effort to make the internet a safer, kinder, and more truthful place. The role of youth, educators, and responsible digital citizens has never been more crucial. A culture of peace must be cultivated both offline and online.
Conclusion: A Renewed Pledge for Peace
On this UN Peacekeepers’ Day, let us not only honour those who have served and sacrificed but also renew our commitment to peace in all its dimensions. The world’s conflicts are evolving, and so must our response. As we support peacekeepers on the ground, let’s also become peacebuilders in the digital world, amplifying truth, rejecting hate, and building safer, inclusive communities online. Peace today is not just about silencing guns but also silencing disinformation. The call for peace is louder than ever. Let’s answer it, both offline and online.

AI systems have grown in both popularity and the complexity at which they operate. They are enhancing accessibility for all, including people with disabilities, by revolutionising sectors including healthcare, education, and public services. We have reached the stage where AI-powered solutions are being created to help people with mental, physical, visual, or hearing impairments perform both everyday and complex tasks.
Generative AI is now being used to amplify human capability. Speech-to-text and image-recognition tools are facilitating communication and interaction for visually or hearing-impaired individuals, and smart prosthetics are providing tailored support. Unfortunately, even with these developments, persons with disabilities (PWDs) continue to face challenges. It is therefore important to balance innovation with ethical considerations and ensure that these technologies are designed with qualities like privacy, equity, and inclusivity in mind.
Access to Tech: the Barriers Faced by PWDs
PWDs face several barriers when accessing technology, and identifying these challenges is important: computer access, in both hardware and software, has become a basic part of modern life, yet it remains out of reach for many. Website functions that work only with a mouse click, self-service kiosks without accessibility features, touch screens without screen-reader software or tactile keyboards, and out-of-order equipment such as lifts, captioning mirrors, and description headsets are just some of the difficulties PWDs face in their day-to-day lives.
While they are helpful, much of the current technology doesn’t fully address all disabilities. For example, many assistive devices focus on visual or mobility impairments, but they fall short of addressing cognitive or sensory conditions. In addition to this, these solutions often lack personalisation, making them less effective for individuals with diverse needs. AI has significant potential to bridge this gap. With adaptive systems like voice assistants, real-time translation, and personalised features, AI can create more inclusive solutions, improving access to both digital and physical spaces for everyone.
The Importance of Inclusive AI Design
Creating an inclusive AI design is important: it ensures that PWDs are not excluded from technological advancement because of their impairments. The concept of 'inclusive' or 'universal' design promotes creating products and services that are usable by the widest possible range of people. Tech developers have an ethical responsibility to create AI that serves everyone, and accessibility features should be built into the core design as standard practice rather than as an afterthought. However, bias in AI development, often stemming from non-representative data or flawed assumptions, can lead to systems that overlook or poorly serve PWDs. If AI algorithms are trained on limited or biased data, they risk excluding marginalised groups, making ethical, inclusive design a necessity for equity and accessibility.
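The data-bias point above can be made concrete with a small, purely illustrative audit: before training, check whether each group in a labelled dataset meets a minimum representation share. Real fairness audits are far more involved; the group labels and the 10% threshold here are assumptions made for this sketch.

```python
# Illustrative pre-training representation audit (hypothetical labels and
# threshold). Flags groups whose share of the dataset falls below a floor,
# so under-represented disability groups surface before a model is trained.
from collections import Counter

def underrepresented_groups(samples, group_key, threshold=0.10):
    """Return the set of groups whose share of `samples` is below `threshold`."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {g for g, n in counts.items() if n / total < threshold}

# Hypothetical dataset: 100 samples, heavily skewed toward one group.
dataset = (
    [{"group": "no_disability"}] * 90
    + [{"group": "visual_impairment"}] * 7
    + [{"group": "hearing_impairment"}] * 3
)
print(sorted(underrepresented_groups(dataset, "group")))
```

Flagged groups would then prompt targeted data collection or re-weighting before training, which is one concrete way the "inclusive by design" principle can enter an ordinary ML workflow.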
Regulatory Efforts to Ensure Accessible AI
In India, the Rights of Persons with Disabilities Act, 2016 mandates equal access to technology for PWDs. Subsequently, the DPDP Act, 2023 addresses data privacy for persons with disabilities under Section 9, which governs the processing of their personal data.
At the international level, the EU's recently enacted AI Act mandates transparent, safe, and fair access to AI systems, including measures related to accessibility.
In the US, the Americans with Disabilities Act of 1990 and Section 508 of the 1998 amendment to the Rehabilitation Act of 1973 are the primary legislations that work on promoting digital accessibility in public services.
Challenges in implementing Regulations for AI Accessibility for PWDs
Defining the term 'inclusive AI' is itself a challenge: if the core concept is left undefined, the regulations and compliance requirements built on it become difficult to design and enforce. Moreover, the rapid pace of AI development has often outpaced legal frameworks, creating enforcement gaps. Countries like Canada and tech industry giants like Microsoft and Google are leading efforts to create accessible AI innovations, with frameworks that focus on AI ethics, inclusivity, and collaboration with disability rights groups.
India's efforts towards inclusive AI include the redesign of the Sugamya Bharat app, which was created to assist PWDs and the elderly and will now incorporate AI features specifically for its intended users.
Though AI development has opportunities for inclusivity, unregulated development can be risky. Regulation plays a critical role in ensuring that AI-driven solutions prioritise inclusivity, fairness, and accessibility, harnessing AI’s potential to empower PWDs and contribute to a more inclusive society.
Conclusion
AI development can offer PWDs unprecedented independence and accessibility in leading their lives, but development that prioritises inclusivity and fairness must come first. AI free from bias, combined with robust regulatory frameworks, is essential to ensuring that AI serves everyone equitably. Collaboration between tech developers, policymakers, and disability advocates should be supported and promoted to build AI systems that bridge accessibility gaps for PWDs. As AI continues to evolve, a steadfast commitment to inclusivity will be crucial in preventing marginalisation and advancing true technological progress for all.
References
- https://www.business-standard.com/india-news/over-1-4k-accessibility-related-complaints-filed-on-govt-app-75-solved-124090800118_1.html
- https://www.forbes.com/councils/forbesbusinesscouncil/2023/06/16/empowering-individuals-with-disabilities-through-ai-technology/
- https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities
- https://blogs.microsoft.com/on-the-issues/2018/05/07/using-ai-to-empower-people-with-disabilities/