#FactCheck - Viral Video of Shah Rukh Khan Is AI-Generated; Users Sharing Misleading Claims
A video of Bollywood actor and Kolkata Knight Riders (KKR) owner Shah Rukh Khan is going viral on social media. The video claims that Shah Rukh Khan is reacting to opposition against Bangladeshi bowler Mustafizur Rahman playing for KKR and is allegedly calling industrialist Gautam Adani a “traitor,” while appealing to stop Hindu–Muslim politics.
Research by the CyberPeace Foundation found that the voice heard in the video is not Shah Rukh Khan’s but is AI-generated. Shah Rukh Khan has not made any official statement regarding Mustafizur Rahman’s removal from KKR. The claim made in the video concerning industrialist Gautam Adani is also completely misleading and baseless.
Claim
In the viral video, Shah Rukh Khan is allegedly heard saying: “People barking about Mustafizur Rahman playing for KKR should stop it. Adani is earning money by betraying the country by supplying electricity from India to Bangladesh. Leave Hindu–Muslim politics and raise your voice against traitors like Adani for the welfare of the country. Mustafizur Rahman will continue to play for the team.”
The post link and archive link can be seen below:
- Archive link: https://archive.is/XsQXp
- Facebook reel link: https://www.facebook.com/reel/1220246633365097

Research
We examined the key frames of Shah Rukh Khan’s viral video using Google Lens. During this process, we found the original video on the official YouTube channel Talks at Google, which was uploaded on 2 October 2014.
In this video, Shah Rukh Khan is seen wearing the same outfit as in the viral clip. He is seen responding to questions from Google CEO Sundar Pichai. The YouTube video description mentions that Shah Rukh Khan participated in a fireside chat held at the Googleplex, where he answered Pichai’s questions and also promoted his upcoming film “Happy New Year.”
The video can be viewed here: https://www.youtube.com/watch?v=H_8UBv5bZo0
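For readers who want to reproduce the first step of this check, the sketch below shows one way to extract evenly spaced key frames from a clip so they can be run through a reverse image search such as Google Lens. The file name, frame count, and output paths are illustrative assumptions, not the exact workflow used here.

```python
# Sketch: pull evenly spaced key frames from a video for reverse image search.
# Requires opencv-python (pip install opencv-python).
import cv2

def extract_key_frames(video_path: str, num_frames: int = 8) -> list[str]:
    """Save num_frames evenly spaced frames and return their file paths."""
    capture = cv2.VideoCapture(video_path)
    total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(num_frames):
        # Seek to an evenly spaced position in the clip.
        capture.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = capture.read()
        if not ok:
            continue
        out_path = f"frame_{i:02d}.jpg"
        cv2.imwrite(out_path, frame)
        saved.append(out_path)
    capture.release()
    return saved

if __name__ == "__main__":
    # "viral_clip.mp4" is a placeholder for the downloaded video.
    print(extract_key_frames("viral_clip.mp4"))
```

Each saved frame can then be uploaded to Google Lens or a similar service to locate earlier appearances of the footage.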

Upon closely analyzing the viral video of Shah Rukh Khan, we noticed a clear mismatch between his voice and lip movements (lip sync). Such inconsistencies usually appear when the original video or its audio has been tampered with.
We then examined the audio in the video using the AI detection tool Aurigin. According to the tool's results, the audio in the viral video is approximately 99 percent likely to be AI-generated.
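The audio check depends on the detection service used, so only the common preparatory step is sketched here: separating the audio track with ffmpeg for submission to a voice-cloning detector. The detector function below is a hypothetical placeholder; Aurigin's actual interface is not shown.

```python
# Sketch: extract the audio track from a clip so it can be submitted to an
# AI-voice detector. Requires the ffmpeg binary to be installed.
import subprocess

def extract_audio(video_path: str, audio_path: str = "audio.wav") -> str:
    """Use ffmpeg to pull a mono 16 kHz WAV track out of the video."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn",           # drop the video stream
         "-ac", "1",      # downmix to mono
         "-ar", "16000",  # 16 kHz sampling, common for speech models
         audio_path],
        check=True,
    )
    return audio_path

def detect_ai_voice(audio_path: str) -> float:
    """Hypothetical placeholder for a detector such as Aurigin.

    A real detector returns a probability that the speech is synthetic;
    the viral clip scored roughly 0.99 on the tool used in this check.
    """
    raise NotImplementedError("Wire up the detection service you use.")

if __name__ == "__main__":
    print("Audio saved to", extract_audio("viral_clip.mp4"))
```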
Conclusion
Our research confirmed that the voice heard in the video is not Shah Rukh Khan’s but is AI-generated. Shah Rukh Khan has not made any official comment regarding Mustafizur Rahman’s removal from KKR. Additionally, the claims made in the video about industrialist Gautam Adani are completely misleading and baseless.

Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The images circulating on social media were produced with AI manipulation tools, while the original image, available on several credible websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks opened fire on Trump at a rally in Butler, Pennsylvania; one attendee was killed and two others were critically injured before the Secret Service stopped the shooter. The circulating photos with faked smiles stirred up suspicion, and the CyberPeace Research Team verified and debunked the manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different from the viral ones.

One such image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then ran the image through another AI image detection tool, contentatscale, which also found it to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible supporting sources and the detection of AI manipulation, we concluded that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
The rapid rise of AI tools has reshaped how health content spreads on platforms like Instagram Reels and YouTube Shorts. These sub-minute videos promise quick fixes for weight loss, glowing skin, or reduced anxiety, often delivered through polished visuals and confident AI-generated voiceovers. The result feels highly personalised, as if the advice is tailored to each viewer, even though it is usually generic and widely recycled.
Short-form videos tend to compress complex health topics into “one tip” solutions, such as drinking a specific detox drink daily or following a single workout for rapid fat loss. While appealing, this oversimplification removes essential context, including individual health conditions, long-term risks, and scientific nuance. For example, viral diet trends or fitness hacks may work for some but can be ineffective or even harmful for others.
Algorithms play a major role in amplifying such content. Videos that promise dramatic transformations or instant results are more likely to gain engagement, which pushes them to wider audiences. Repeated exposure then builds familiarity, making the advice seem more credible over time. Audiences often trust this content due to its clean presentation, authoritative tone, and frequent repetition. However, the risks include misinformation, unrealistic expectations, and potential harm from unverified practices. To approach such content critically, viewers should cross-check claims with credible medical sources, avoid relying on single-tip solutions, and remember that real health advice is rarely one-size-fits-all.
The Illusion of Personalisation
AI-generated health content often mimics personalisation through:
- Synthetic voiceovers tuned to sound relatable to specific age groups, particularly viewers aged 20 and under
- Scripts generated from data on trending search terms
- Before-and-after visuals that imply personal transformation
This "personalisation", however, is built on generalised data rather than individual health profiles. The videos cannot substitute for a medical assessment because they do not account for:
- Existing medical conditions
- Hereditary differences
- Personal habits and the impact of surrounding conditions
As a result, users may assume that generic medical advice fits their personal health needs and act on it inappropriately.
Short-Form Content and Oversimplification
Short-form videos are constrained by time limits, which compress complex medical information into simplistic narratives. Typical patterns include:
- “One-tip solutions” (e.g., “Drink this before bed to burn fat”)
- Binary framing (“good vs bad foods”)
- Omission of disclaimers and side-effect information
Common examples include:
- Viral detox drinks that claim to "flush toxins" from the body
- Extreme calorie-cutting diet hacks
- Fitness shortcuts that guarantee users will see results within days
Such content routinely ignores basic physiology, including individual metabolic differences and the body's responses over extended periods of time.
Algorithmic Amplification and Virality
The recommendation algorithms used by Instagram and YouTube prioritise content based on three main factors:
- Engagement (likes, shares, watch time)
- Retention rates
- Emotional or aspirational triggers
Health content performs especially well when it:
- Promises immediate body changes
- Demands minimal effort from viewers
- Showcases extreme physical transformations
Content of this kind attracts disproportionate engagement, producing a continuous cycle in which:
- Misleading content gains traction
- Algorithms amplify it further
- More creators replicate similar formats using AI tools
The net effect is a system that rewards shareability over genuine credibility.
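To make this feedback loop concrete, here is a minimal sketch of a purely engagement-weighted ranking. The weights and fields are illustrative assumptions, not any platform's real formula; the point is that credibility never enters the score.

```python
# Toy ranking: engagement alone decides visibility, so sensational health
# claims outrank measured, evidence-based content.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    avg_watch_fraction: float  # fraction of the video watched, 0.0-1.0

def engagement_score(post: Post) -> float:
    # Shares weigh heaviest because they recruit new viewers directly.
    return post.likes + 3 * post.shares + 100 * post.avg_watch_fraction

feed = [
    Post("Drink this before bed to burn fat", likes=900, shares=400, avg_watch_fraction=0.9),
    Post("What the evidence says about weight loss", likes=300, shares=40, avg_watch_fraction=0.5),
]

# The sensational post ranks first, gets shown more, and so collects even
# more engagement on the next pass -- the cycle described above.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```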
Why Do Users Trust AI-Generated Health Content?
Several psychological and technological factors contribute to trust:
- Professional Aesthetics - AI tools produce polished visuals, natural-sounding voiceovers, and well-structured scripts that mimic professional communication.
- Repetition and Familiarity - Encountering the same recommendation repeatedly strengthens belief in it, a phenomenon known as the illusory truth effect.
- Authority Signals, such as:
- Confident use of medical terminology
- Stock footage of professionals in lab coats
- An assertive, authoritative narration style
- Perceived Personal Relevance - Algorithmic targeting makes users feel the content is "meant for them."
Real-World Examples of Viral Trends
Typical categories of health misinformation amplified by AI-generated content include:
- Diet Trends: Keto shortcuts, extreme intermittent fasting variants
- Fitness Hacks: Spot reduction exercises (scientifically unsupported)
- Supplement Advice: Unverified claims about vitamins or herbal products
- Mental Health Tips: Oversimplified coping strategies that lack clinical evidence
For instance, the claim that drinking warm lemon water will "detox your liver" remains popular despite the liver's natural ability to detoxify itself.
Risks and Public Health Implications
Widespread consumption of such content creates multiple dangers:
1. Physical Health Risks
- Nutritional deficiencies from extreme diets
- Injury from improper exercise techniques
- Delayed medical consultation
2. Psychological Impact
- Unrealistic body image expectations
- Anxiety due to conflicting advice
3. Misinformation Ecosystem
- The public loses confidence in evidence-based medicine
- Unverified or pseudoscientific practices spread throughout society
Regulatory and Ethical Concerns
The rise of AI-generated health content raises broader questions, including:
- Who is responsible for the content
- What responsibility the hosting platforms bear
- How transparently AI systems disclose their involvement to users
Most platforms today lack robust systems to:
- Verify medical claims
- Display which health advice comes from artificial intelligence
- Penalise repeat spreaders of false information
The absence of regulations allows misleading information to spread without consequences.
A CyberPeace Perspective: Building Digital Health Resilience
Addressing the problem requires coordinated effort from several stakeholders to create solutions that protect both online safety and information integrity.
For Users
- Users should verify claims against trustworthy medical sources, such as the WHO and peer-reviewed studies.
- Users should avoid "quick fixes" unless guided by certified experts.
- Users should treat content that omits disclaimers or side-effect warnings with caution.
For Platforms
- Platforms should implement labels that let users identify AI-generated content.
- Platforms should reduce the visibility of false health claims.
- Platforms should promote verified, credible health content creators.
For Policymakers
- Policymakers should create standards governing AI-produced medical content.
- Policymakers should strengthen digital health literacy initiatives so people can better evaluate online health information.
For Content Creators
- Content creators should disclose their use of AI.
- They should avoid exaggerated claims and presenting advice as absolute truth.
Conclusion
AI-generated health tips on short-form video platforms sit at the intersection of technology, psychology, and public health. These tools democratise access to information, yet without responsible use they raise the risk that people will believe and act on false information.
The challenge is to keep users safe through accurate information and transparent digital health communication. As users grow more dependent on algorithmically served content, building critical thinking and digital literacy becomes essential to minimising the harms of AI-driven misinformation.
References
- https://pmc.ncbi.nlm.nih.gov/articles/PMC12924558/
- https://academic.oup.com/heapro/article/40/2/daaf023/8100645
- https://pmc.ncbi.nlm.nih.gov/articles/PMC12673052/
- https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1713794/full
- https://www.who.int/teams/digital-health-and-innovation/digital-channels/combatting-misinformation-online
- https://link.springer.com/article/10.1186/s12982-025-00777-2
- https://www.washingtonpost.com/health/2026/04/21/chatbot-medical-advice-accurate/
Introduction
The link between social media and misinformation is undeniable. Misinformation, particularly the kind that evokes emotion, spreads like wildfire on social media and has serious consequences, like undermining democratic processes, discrediting science, and promulgating hateful discourses which may incite physical violence. If left unchecked, misinformation propagated through social media has the potential to incite social disorder, as seen in countless ethnic clashes worldwide. This is why social media platforms have been under growing pressure to combat misinformation and have been developing models such as fact-checking services and community notes to check its spread. This article explores the pros and cons of the models and evaluates their broader implications for online information integrity.
How the Models Work
- Third-Party Fact-Checking Model (formerly used by Meta): Meta initiated this program in 2016 after claims of foreign election interference through dis/misinformation on its platforms. It entered partnerships with third-party organizations like AFP and specialist sites like Lead Stories and PolitiFact, which are certified by the International Fact-Checking Network (IFCN) for meeting neutrality, independence, and editorial quality standards. These fact-checkers identify misleading claims that go viral on the platforms and publish verified articles on their websites providing correct information. They also submit these to Meta through an interface, which may link the fact-checked article to the social media post containing the factually incorrect claims. The post then gets flagged for false or misleading content, and a link to the article appears under the post for users to refer to. Such content is demoted in the platform algorithm, though not removed entirely unless it violates Community Standards. However, in January 2025, Meta announced it was scrapping this program and beginning to test X’s Community Notes Model in the USA before rolling it out in the rest of the world. It alleges that the independent fact-checking model is riddled with personal biases, lacks transparency in decision-making, and has evolved into a censoring tool.
- Community Notes Model (used by X and being tested by Meta): This model relies on crowdsourced contributors who can sign up for the program, write contextual notes on posts, and rate notes written by other users on X. The platform uses a bridging algorithm to publicly display only those notes that receive cross-ideological consensus from voters across the political spectrum. It does this by boosting notes that are rated helpful by voters regardless of their political leaning, which it estimates from their engagement with previous notes (a toy sketch of this bridging idea follows below). The benefit of this system is that biases are less likely to creep into the flagging mechanism. Further, the process is relatively more transparent than an independent fact-checking mechanism: all Community Notes contributions are publicly available for inspection, and the ranking algorithm is open to inspection as well, allowing anyone to evaluate the system externally.
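To illustrate the bridging intuition, here is a minimal sketch in which a note scores well only if raters from both sides of an estimated ideological spectrum find it helpful. The leaning values and the min-of-both-sides rule are simplifying assumptions; X's published algorithm instead fits a matrix-factorisation model over the full rating history rather than splitting raters into two camps.

```python
# Toy bridging score: a note is boosted only when raters from *both* sides
# of an estimated ideological spectrum rate it helpful.

def bridging_score(ratings: list[tuple[float, bool]]) -> float:
    """ratings: (rater_leaning in [-1, 1], found_helpful) pairs.

    Returns the smaller of the two sides' helpful rates, so a note only
    scores well with cross-ideological support.
    """
    left = [helpful for leaning, helpful in ratings if leaning < 0]
    right = [helpful for leaning, helpful in ratings if leaning >= 0]
    if not left or not right:
        return 0.0  # no cross-ideological consensus possible
    return min(sum(left) / len(left), sum(right) / len(right))

# A note liked only by one side scores 0; a note liked across the spectrum
# scores higher, even with fewer total "helpful" votes.
partisan_note = [(-0.8, True), (-0.5, True), (0.7, False), (0.9, False)]
bridging_note = [(-0.8, True), (-0.5, True), (0.7, True), (0.9, False)]
print(bridging_score(partisan_note))  # 0.0
print(bridging_score(bridging_note))  # 0.5
```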
CyberPeace Insights
Meta’s uptake of a crowdsourced model signals social media’s shift toward decentralized content moderation, giving users more influence over what gets flagged and why. However, the model’s reliance on diverse agreement can make it time-consuming. A study (by Wirtschafter & Majumder, 2023) shows that only about 12.5 per cent of all submitted notes are ever shown to the public, meaning most misleading content goes unchecked. Further, many notes on divisive issues like politics and elections may never see the light of day, since reaching consensus on such topics is hard. This means that many misleading posts may not be publicly flagged at all, hindering risk-mitigation efforts. It also casts doubt on the model’s ability to check the virality of posts that can have adverse societal impacts, especially on vulnerable communities. On the other hand, the fact-checking model suffers from a lack of transparency, which has damaged user trust and led to allegations of bias.
Since both models have their advantages and disadvantages, the future of misinformation control will likely require a hybrid approach. Misinformation and polarization on social media are problems too large for any single tool or model to handle effectively. Thus, platforms can combine expert validation with crowdsourced input to achieve accuracy, transparency, and scalability, as the sketch below suggests.
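Here is one toy sketch of what such a hybrid decision rule could look like: an expert fact-check verdict is combined with a crowdsourced bridging score before acting on a post. The thresholds and actions are illustrative assumptions only, not any platform's policy.

```python
# Toy hybrid moderation: expert review drives flagging and demotion, while
# crowdsourced consensus alone only attaches context to the post.
from enum import Enum

class Action(Enum):
    NO_ACTION = "no action"
    SHOW_NOTE = "attach community note"
    FLAG_AND_DEMOTE = "flag with fact-check article and demote"

def moderate(expert_says_false: bool | None, bridging_score: float) -> Action:
    """expert_says_false is None when no fact-checker has reviewed the post."""
    if expert_says_false:
        # Expert review carries the strongest signal: flag and demote.
        return Action.FLAG_AND_DEMOTE
    if bridging_score >= 0.5:
        # Cross-ideological consensus adds context without penalties.
        return Action.SHOW_NOTE
    return Action.NO_ACTION

print(moderate(True, 0.2))   # Action.FLAG_AND_DEMOTE
print(moderate(None, 0.7))   # Action.SHOW_NOTE
```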
Conclusion
Meta’s shift to a crowdsourced model of fact-checking is likely to have significant implications for public discourse, since social media platforms hold immense power in how their policies affect politics, the economy, and societal relations at large. This change comes against the background of sweeping cost-cutting in the tech industry, political changes in the USA and abroad, and increasing attempts to make Big Tech platforms more accountable in jurisdictions like the EU and Australia, which are known for their welfare-oriented policies. These concurrent developments are likely to shape the direction that misinformation-countering tactics take. Until then, the crowdsourced model is still in development, and its efficacy remains to be seen, especially on polarizing topics.
References
- https://www.cyberpeace.org/resources/blogs/new-youtube-notes-feature-to-help-users-add-context-to-videos
- https://en-gb.facebook.com/business/help/315131736305613?id=673052479947730
- http://techxplore.com/news/2025-01-meta-fact.html
- https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
- https://communitynotes.x.com/guide/en/about/introduction
- https://blogs.lse.ac.uk/impactofsocialsciences/2025/01/14/do-community-notes-work/
- https://www.techpolicy.press/community-notes-and-its-narrow-understanding-of-disinformation/
- https://www.rstreet.org/commentary/metas-shift-to-community-notes-model-proves-that-we-can-fix-big-problems-without-big-government/
- https://tsjournal.org/index.php/jots/article/view/139/57