#FactCheck-Protest Video from Nagrota Falsely Linked to Opposition Against Indian Army
Executive Summary
A video is being widely circulated on social media by Pakistani propaganda-linked users, showing a group of people protesting on a road. It is being claimed that protesters in Jammu & Kashmir stopped Indian Army personnel from entering Nagrota, indicating growing public opposition against the forces. Research by CyberPeace Research Wing found that the claim is misleading. The viral video is unrelated to any protest against the Indian Army.
Claim
A user posted the video on X, claiming: “The days of Indian military occupation are numbered; people of Jammu & Kashmir have risen against India. Protesters stopped the Indian Army from entering Nagrota.”
- https://x.com/Stealthfalconer/status/2050301106623045758?s=20

Fact Check
During the research, the CyberPeace Research Wing team found no evidence of any such incident where civilians blocked or opposed the Indian Army in Nagrota. Further probe led to a post by an X user “Defence News Of INDIA,” which contained the full version of the viral video. The accompanying information clarified that the protest took place in Dansal’s Badsu Panchayat area of Nagrota and was led by BJP MLA Devayani Rana.

The protest was organized against the Public Health Engineering (PHE) Department over severe water shortage issues in the region. Locals, along with the MLA, staged a sit-in to highlight the lack of water supply.
We also found multiple media reports, including from KBC News – Kashmir and Jammu Links News, confirming that Devayani Rana led a road blockade protest in her constituency over water scarcity and accused the Jal Shakti Department of negligence and administrative failure. Additionally, videos of the same protest were available on social media platforms, including live streams shared from Devayani Rana’s official pages.

Conclusion
Our research confirms that the viral claim is false and misleading. The video does not show any protest against the Indian Army. It is actually from a demonstration led by Devayani Rana and local residents over water shortage issues in Nagrota.
Introduction
Citizens across India are using technology to their advantage, and the resulting upskilling is driving innovation. But as we move deeper into cyberspace, we must maintain our cyber security efficiently and effectively: when bad actors turn technology against users, the result is often data loss or financial loss for the victim. In this blog, we shine a light on two new forms of cyber attack causing havoc among the innocent: the “Daam” malware and a new malicious app.
Daam Botnet
Since 2021, the DAAM Android botnet has been used to gain unauthorised access to targeted devices, and cybercriminals use it to carry out a range of destructive actions. Using DAAM’s APK binding service, threat actors can combine malicious code with a legitimate application. The botnet offers many functions, including keylogging, ransomware, recording of VOIP and incoming calls, runtime code execution, browser history collection, PII data theft, phishing URL opening, photo capture, clipboard data theft, and toggling of WiFi and mobile data. DAAM tracks user activity using the Accessibility Service and stores recorded keystrokes, together with the package name of the application they were typed into, in a database. It also contains a ransomware module that encrypts and decrypts data on the infected device using the AES algorithm.
Additionally, the botnet uses the Accessibility Service to monitor the VOIP calling features of social media apps such as WhatsApp, Skype, and Telegram; when a user interacts with these features, the malware begins recording audio.
The Malware
CERT-In, the central nodal agency that responds to computer security incidents, reports that Daam spreads by binding itself to various Android APK files and is distributed through third-party websites. Once it gains access to a phone, it encrypts the files on the device using the AES encryption algorithm.
It is claimed that the malware can steal call recordings and contacts, gain access to the camera, change passwords, take screenshots, steal SMS messages, download and upload files, and perform a variety of other actions.
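To illustrate the AES mechanism CERT-In describes, here is a minimal sketch of symmetric encryption and decryption using Python’s third-party `cryptography` library. This is not Daam’s actual code, which is not public; the key handling, CBC mode, and PKCS7 padding choices are illustrative assumptions.

```python
# Illustrative sketch of AES encryption/decryption, the symmetric scheme
# CERT-In says Daam's ransomware module uses. NOT the malware's code;
# mode, padding, and key handling here are assumptions for demonstration.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_encrypt(plaintext: bytes, key: bytes) -> bytes:
    iv = os.urandom(16)                      # fresh IV for every message
    padder = padding.PKCS7(128).padder()     # pad to the AES block size
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()  # prepend IV to blob

def aes_decrypt(blob: bytes, key: bytes) -> bytes:
    iv, ciphertext = blob[:16], blob[16:]    # recover the prepended IV
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(32)                         # a random AES-256 key
secret = b"files on the device"
assert aes_decrypt(aes_encrypt(secret, key), key) == secret
```

Because AES is symmetric, whoever holds the key can both lock and unlock the data, which is why ransomware operators can offer decryption after payment.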

Safeguards and Guidelines by CERT-In
CERT-In has released guidelines for combating this malware, issued in the public interest. Its recommendations are as follows-
Only download from official app stores to limit the risk of potentially harmful apps.
Before downloading an app, always read its details and user reviews, and grant only those permissions that are relevant to the app’s purpose.
Install Android updates solely from Android device vendors as they become available.
Avoid visiting untrustworthy websites or clicking on untrustworthy links.
Install and keep anti-virus and anti-spyware software up to date.
Be cautious of messages from numbers that do not look like genuine, regular mobile numbers.
Conduct sufficient research before clicking on a link received in a message.
Only click on URLs that clearly display the website domain; avoid shortened URLs, particularly those using services such as bit.ly and tinyurl.
Use secure browsing technologies and filtering tools in antivirus, firewall, and filtering services.
Before providing sensitive information, look for authentic encryption certificates by checking for the green lock in your browser’s URL bar.
Any ‘strange’ activity in a user’s bank account must be reported immediately to the appropriate bank.
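The guideline on shortened URLs can be sketched as a simple check: parse the link’s hostname and flag it if it belongs to a known URL-shortening service, so the real destination can be verified before clicking. This is a minimal illustration, and the shortener list is a small sample, not exhaustive.

```python
# Sketch of the "avoid shortened URLs" guideline: flag links whose
# hostname belongs to a known URL-shortening service. The set below is
# an illustrative sample, not a complete list of shorteners.
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

def is_shortened(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # strip a leading "www." so "www.bit.ly" still matches
    return host.removeprefix("www.") in KNOWN_SHORTENERS

print(is_shortened("https://bit.ly/3xYzAbC"))       # True
print(is_shortened("https://www.cert-in.org.in/"))  # False
```

A shortened link is not necessarily malicious, but because it hides the destination domain, the safer habit is to expand or preview it first rather than click it directly.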
New Malicious App
In remote parts of Jharkhand, a new type of malicious application is being circulated on the pretext of bank account closure. Bad actors have long used messaging platforms like WhatsApp and Telegram to circulate malicious links among unsuspecting people and dupe them of their hard-earned money.
They send an ordinary-looking message on WhatsApp or Telegram claiming that the user holds an account at ICICI Bank and that, due to an irregularity with the credentials, the account is being deactivated. They then ask the user to reactivate the account by uploading a copy of their PAN card to an application. This app is in fact malicious: it harvests the user’s personal credentials and forwards them, along with incoming text messages, to the bad actors, allowing them to bypass the bank’s two-factor authentication and drain money from the account. The Jharkhand Police cyber cells have registered numerous FIRs pertaining to this type of cybercrime and are conducting full-scale investigations to apprehend the criminals.
Conclusion
Malware and phishing attacks have gained momentum in recent years and have become major contributors to the country’s tally of cybercrimes. The Daam malware is one example brought to light by CERT-In’s timely action, but many similar strains are still being deployed by bad actors, and as netizens we need to follow best practices to keep such criminals at bay. Phishing crimes are often carried out by exploiting vulnerabilities and social engineering, so raising awareness is the need of the hour to safeguard the population at large.

Executive Summary
A video circulating on social media, shared by a Pakistani account, claims to show Indian Army Chief General Upendra Dwivedi making a controversial statement. In the clip, he is allegedly heard saying that he requested Prime Minister Narendra Modi to connect him with film director Ranjan Agnihotri so he could provide inputs and a script for a movie on “Operation Sindoor.”
However, research by CyberPeace has found that the viral video is an AI-generated deepfake. General Upendra Dwivedi has made no such statement.
Claim
A Pakistani user shared the viral video on X (formerly Twitter) on April 10, 2026, making the above claim.
Post links:
- https://x.com/DanishNawaz2773/status/2042312967811973225?s=20
- https://archive.ph/kAwoR

Fact Check
To verify the claim, we conducted keyword searches on Google but found no credible media reports supporting it. Further research led us to the original video posted on the X account of ANI. In this authentic clip, General Upendra Dwivedi is seen speaking at the ‘Ran Samwad’ seminar held in Bengaluru.
In the original video, he discusses the operational aspects of “Operation Sindoor,” including ground intelligence, cyber and electronic warfare inputs, Pakistan’s behaviour, and the challenges of a two-front scenario. There is no mention whatsoever of Pakistan mediation, Prime Minister Modi, Ranjan Agnihotri, any movie script, or a film based on Operation Sindoor.

This clearly indicates that the viral clip has been manipulated and taken out of context. The video was further analyzed using the AI detection tool DetectVideo AI, which indicated a 72% probability that the content is AI-generated. This strongly supports the conclusion that the video is a deepfake.

Conclusion
The viral claim is false. The video featuring General Upendra Dwivedi has been digitally altered using AI techniques to insert fabricated statements. The original footage is from the ‘Ran Samwad’ seminar in Bengaluru, where he spoke about military strategy and multi-domain operations, not about any film or director. There is no evidence to suggest that he made any statement regarding contacting a filmmaker or contributing to a movie script. The inclusion of such references in the viral clip is entirely fabricated. This case highlights how AI-generated deepfakes are increasingly being used to spread misinformation, especially in sensitive contexts involving the military and international relations. Viewers are advised to rely on verified sources and exercise caution before sharing such content.
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone, using AI tools that can synthesise audio in any person's voice and generate realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). This technology works on an advanced algorithm that creates hyper-realistic videos by using a person’s face, voice or likeness utilising techniques such as machine learning. The utilisation and progression of deepfake technology holds vast potential, both benign and malicious.
One benign example comes from 2019, when the NGO Malaria No More used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita, 2023, whose Section 356 governs defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism is a violation of the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed ‘Digital India Act’ aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concern about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Mandatory disclosure requirements for synthetic media would inform viewers that content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfake, Global Overview
There has been growing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the harmful distribution of deepfake images in electoral contexts, and Virginia has enacted a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring explicit labelling of doctored content. The European Union has tightened its Code of Practice on Disinformation, requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures reflect a global recognition of the risks deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. Legislators face the challenge of enshrining measures against deepfake misuse in specific laws, penalising offenders, mandating transparency, and enabling public awareness. National and international efforts have highlighted the urgent need for a comprehensive framework that curbs misuse while also promoting responsible innovation. Cooperation in these trying times will be important to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation