#FactCheck - An edited video of Bollywood actor Ranveer Singh criticizing PM Modi goes viral
Executive Summary:
A video is making the rounds on the internet that appears to show Ranveer Singh criticizing Prime Minister Narendra Modi and his Government. Close examination reveals that the video has been tampered with to change the audio. The original videos posted by various media outlets actually show Ranveer Singh praising Varanasi, professing his love for Lord Shiva, and acknowledging Modiji’s role in enhancing the cultural charm and infrastructural development of the city. Discrepancies in lip synchronization, and the absence of any criticism of PM Modi in the original footage, indicate that the video was manipulated in order to spread misinformation.

Claims:
A viral video shows Bollywood actor Ranveer Singh criticizing Prime Minister Narendra Modi.

Fact Check:
Upon receiving the video, we divided it into keyframes and reverse-searched one of the images. This led us to another video of Ranveer Singh in an identical setting, posted by an Instagram account named “The Indian Opinion News”. In that video, Ranveer Singh talks about his experience of visiting the Kashi Vishwanath Temple with Bollywood actress Kriti Sanon. Watching the full video, we found no indication of any criticism of PM Modi.
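For readers who want to replicate the first step, the sketch below shows one way to pull frames from a video for reverse image searching. It is a minimal illustration, not the exact tooling used in this fact-check: it assumes the OpenCV library (installable via `pip install opencv-python`), and the sampling interval and file names are hypothetical.

```python
# Minimal keyframe-extraction sketch using OpenCV (an assumed tool, not
# necessarily what was used here). Saves one frame per `every_n_frames`
# frames; the resulting JPEGs can then be uploaded to a reverse image
# search engine such as Google Lens or Yandex Images.
import cv2

def extract_keyframes(video_path: str, every_n_frames: int = 30) -> list[str]:
    capture = cv2.VideoCapture(video_path)
    saved = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video or read error
            break
        if index % every_n_frames == 0:
            filename = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(filename, frame)  # write the frame to disk
            saved.append(filename)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    frames = extract_keyframes("viral_video.mp4")  # hypothetical file name
    print(f"Extracted {len(frames)} frames for reverse image search")
```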

Taking a cue from this, we ran keyword searches to find the full video of the interview. We found many videos uploaded by media outlets, but none of them shows him criticizing PM Modi as claimed in the viral video.

In the interview, Ranveer Singh shares his feelings about Lord Shiva, his impressions of the city, and the efforts undertaken by Prime Minister Modi to keep the history and heritage of Varanasi alive, as well as the city's ongoing development projects. The discrepancy in the viral clip is evident on close viewing: the lips are not in sync with the audio we hear, whereas in the original video the lip movements match the spoken words perfectly. The lack of evidence for the claim and the discrepancies in the video prove that it was edited to misrepresent the original interview of Bollywood actor Ranveer Singh. Hence, the claim made is misleading and false.
Conclusion:
The video that claims to show Ranveer Singh criticizing PM Narendra Modi is not genuine. Further investigation shows that its audio has been replaced. The original footage actually shows Singh speaking positively about Varanasi and Modi's work. The lip-sync discrepancies and the absence of any supporting evidence highlight how easily misinformation can be created through simple editing. Ultimately, the claim made is false and misleading.
- Claim: A viral video featuring Ranveer Singh criticizing Prime Minister Narendra Modi and his Government.
- Claimed on: X (formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information intentionally (disinformation) and unintentionally (misinformation) during the election cycle can not only create grounds for voter confusion with ramifications on election results but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that affects public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or lies. In a recent example from the 2024 US elections, fake videos bearing the Federal Bureau of Investigation’s (FBI) insignia were circulated, alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media’s coverage of the candidates’ campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, affect the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented. In contrast, content moderation and fact-checking are reactive and are more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus, spreads faster (virality).
- Geopolitical influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration, and security machinery. In 2018, a federal grand jury indicted 11 Russian military officials for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby and Attorney General Merrick Garland.
- Lack of a Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is addressed only indirectly, through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws, such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California, target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but the Constitution mandates “breathing space” that shields some false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, and encryption of voter data can be adopted. In 2022, the US Congress passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), introducing reforms that allow only a state’s governor or designated executive official to submit official election results, prevent state legislatures from altering elector appointment rules after Election Day, and make it more difficult for federal legislators to overturn election results. Further investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractice online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/
Introduction
In July 2025, the Digital Defence Report prepared by Microsoft raised the alarm that India is among the top target countries for AI-powered nation-state cyberattacks, with malicious actors automating phishing, creating convincing deepfakes, and shaping opinion with the help of generative AI (Microsoft Digital Defence Report, 2025). While most of the world's attention has remained on the United States and Europe, the Asia-Pacific region, and India in particular, has become a major target of AI-based cyber activity. This blog discusses how AI-driven espionage is redefining India's threat environment, the government's response, and what India can learn from leading cyber powers worldwide.
Understanding AI-Powered Cyber Espionage
Conventional cyber espionage aims to hack systems, steal information, or bring down networks. With the emergence of generative AI, these tactics have changed completely. It is now possible to automate reconnaissance, fake the voices and videos of authority figures, and craft highly sophisticated phishing campaigns that can pass as genuine even to a trained expert. According to Microsoft's report, state-sponsored groups are using AI to expand their activities and target victims with greater precision (Microsoft Digital Defence Report, 2025). According to SQ Magazine, almost 42 percent of state-backed cyber campaigns in 2025 employed AI tools such as adaptive malware or intelligent vulnerability scanners (SQ Magazine, 2025).
AI is altering the power dynamics of cyberspace. Tools that previously required significant technical expertise or substantial investment have become ubiquitous, enabling smaller countries and non-state actors alike to conduct sophisticated cyber operations. The outcome is an accelerating arms race in which AI serves as both the weapon and the armour.
India’s Exposure and Response
India's exposure stems from its rapidly growing digital infrastructure and its geopolitical position. With the integration of platforms like DigiLocker and CoWIN, the attack surface now spans hundreds of millions of citizens. Financial institutions, government portals, and defence networks are increasingly targeted by ever more sophisticated cyberattacks. Deepfaked videos of prominent figures, phishing emails built on official templates, and social media manipulation are now all part of disinformation campaigns (Microsoft Digital Defence Report, 2025).
The India Cyber Threat Report 2025, published by the Data Security Council of India (DSCI), found that AI-enabled attacks are growing exponentially, particularly in the form of malicious automation and social engineering (DSCI, 2025). India's nodal cyber-response agency, CERT-In, has issued several warnings about AI-related scams and AI-generated fake content aimed at stealing personal information or deceiving the public. Enforcement and red-teaming activities have intensified, but coordination among central agencies, state police, and private platforms remains uneven. India also faces an acute shortage of cybersecurity talent, with fewer than 20 percent of cyber defence jobs filled by qualified specialists (DSCI, 2025).
Government and Policy Evolution
The government's response to AI-enabled threats is taking three forms: regulation, institutional strengthening, and capacity building. The Digital Personal Data Protection Act, 2023 marked a major step in defining digital accountability (Government of India, 2023). Nonetheless, AI-specific threats such as data poisoning, model manipulation, and automated disinformation remain grey areas. The forthcoming National Cybersecurity Strategy is expected to address them by establishing AI governance guidelines and accountability standards for major sectors.
At the institutional level, organisations such as the National Critical Information Infrastructure Protection Centre (NCIIPC) and the Defence Cyber Agency are incorporating AI-based monitoring into their processes. Public-private initiatives are also emerging: for example, the CyberPeace Foundation and national universities have signed a memorandum of understanding that now facilitates specialised training in AI-driven threat analysis and digital forensics (Times of India, August 2025). Despite these positive signs, India still lacks a cohesive system for reporting AI incidents. A September 2025 publication on arXiv underlines the need for countries to develop legal frameworks for AI-failure reporting, so that AI-driven failures in fields such as national security are handled with accountability (arXiv, 2025).
Global Implications and Lessons for India
Major economies worldwide are moving rapidly to integrate AI innovation with cybersecurity preparedness. The United States and the United Kingdom are investing heavily in AI-enhanced military systems, deploying machine learning in security operations centres, and organising AI-based “red team” exercises (Microsoft Digital Defence Report, 2025). Japan is testing cross-ministry threat-sharing platforms that use AI analytics for real-time decision-making (Microsoft Digital Defence Report, 2025).
Four lessons stand out for India.
- First, cyber defence should shift from reactive investigation to proactive intelligence. AI makes it possible not merely to detect adversary behaviour after an attack, but to simulate it in advance.
- Second, collaboration is essential. Cybersecurity cannot be left to government enforcement alone. The private sector, which maintains the majority of India's digital infrastructure, must be actively involved in sharing information and expertise.
- Third, AI sovereignty matters. Building or hosting defensive AI tools within India will reduce dependence on foreign vendors and minimise potential supply-chain vulnerabilities.
- Lastly, digital literacy is the first line of defence. Citizens should be trained to detect deepfakes, phishing, and other manipulated information. Human awareness matters no less than technical defences (SQ Magazine, 2025).
Conclusion
AI has altered the logic of cyber warfare. Attacks are faster, harder to trace, and more scalable than ever before. For India, the challenge is no longer just building better firewalls but developing the anticipatory intelligence needed to counter AI-powered threats. This requires a national approach that integrates technology, policy, and education.
With sustained investment, ethical AI governance, and healthy cooperation between government and industry, India can turn this vulnerability into strength. The next phase of cybersecurity will be decided not by who has more firewalls, but by who can learn and adapt faster in a world where machines are already part of the battlefield (Microsoft Digital Defence Report, 2025).
References:
- Microsoft Digital Defense Report 2025
- India Cyber Threat Report 2025, DSCI
- Lucknow-based organisations to help strengthen cybercrime research training policy ecosystem, Times of India
- AI Cyber Attacks Statistics 2025: How Attacks, Deepfakes & Ransomware Have Escalated, SQ Magazine
- Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India, arXiv
- The Digital Personal Data Protection Act, 2023

Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which confirms that all personal and group chats remain protected by end-to-end (E2E) encryption, accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @Meta AI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI's access. The viral claim is therefore misleading and factually incorrect, and serves only to create unnecessary fear among users.
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on a keyframe of the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features designed to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @Meta AI. All personal and group chats are secured with end-to-end encryption, so only the sender and recipient can read them. The “Advanced Chat Privacy” setting helps stop chats from being shared outside WhatsApp, for example by blocking exports and auto-downloads, but it has no bearing on Meta AI, which is already barred from reading chats. This shows the viral claim is false and intended to confuse people.
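As an aside for readers who verify clips themselves, a keyframe from a viral video can also be checked directly against a frame from suspected source footage using perceptual hashing. The sketch below is purely illustrative and not part of this fact-check's workflow; it assumes the third-party ImageHash and Pillow packages (`pip install ImageHash Pillow`), and the file names are hypothetical.

```python
# Illustrative perceptual-hash comparison of two video frames.
# Perceptual hashes change little under re-encoding or resizing, so a
# small Hamming distance suggests the frames come from the same footage.
from PIL import Image
import imagehash

def frames_match(frame_a: str, frame_b: str, threshold: int = 10) -> bool:
    hash_a = imagehash.phash(Image.open(frame_a))
    hash_b = imagehash.phash(Image.open(frame_b))
    distance = hash_a - hash_b  # Hamming distance between the hashes
    return distance <= threshold

if __name__ == "__main__":
    # Hypothetical file names for a viral keyframe and a candidate original.
    print(frames_match("viral_keyframe.jpg", "original_keyframe.jpg"))
```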


Conclusion:
The claim that Meta AI reads WhatsApp group chats, and that enabling the "Advanced Chat Privacy" setting can prevent this, is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and that all chats remain protected by end-to-end encryption, ensuring privacy. The "Advanced Chat Privacy" setting is unrelated to Meta AI access, which is restricted by default.
- Claim: A viral social media video claims that Meta AI reads WhatsApp group chats under current default settings, and that enabling the "Advanced Chat Privacy" setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading