#FactCheck: Fake Viral Video Claims Vice Admiral AN Pramod Said India Would Complain to the US and President Trump if Pakistan Attacks Again
Executive Summary:
A viral video (archived link) circulating on social media claims that Vice Admiral AN Pramod stated India would seek assistance from the United States and President Trump if Pakistan launched an attack, portraying India as dependent rather than self-reliant. Research traced the extended footage to the Press Information Bureau’s official YouTube channel, published on 11 May 2025. In the authentic video, the Vice Admiral makes no such remark and instead concludes his statement with, “That’s all.” Further analysis using an AI detection tool (Hive Moderation) confirmed that the viral clip was digitally manipulated with AI-generated audio, misrepresenting his actual words.
Claim:
An X user posted the viral video with the caption:
“India sells itself as a regional superpower, but its Navy Chief’s own words betray that image. If Pakistan attacks, their plan is to involve Trump, not fight back. This isn’t strategic partnership; it’s dependency in uniform.”
In the video, the Vice Admiral can be heard saying:
“We have worked out among three services, this time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. We will complain against Pakistan to the United States of America and President Trump, like we did earlier in Operation Sindoor.”

Fact Check:
Upon conducting a reverse image search on key frames from the video, we located the full version of the video on the official YouTube channel of the Press Information Bureau (PIB), published on 11 May 2025. In this video, at the 59:57 mark, the Vice Admiral can be heard saying:
“This time if Pakistan dares take any action, and Pakistan knows it, what we are going to do. That’s all.”
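The key-frame step above can be sketched in code. This is a minimal illustration only (the helper below is hypothetical, not the tool actually used): it picks evenly spaced timestamps across a clip; each frame would then be extracted with a tool such as ffmpeg and uploaded to a reverse image search engine.

```python
def keyframe_timestamps(duration_s: float, n_frames: int) -> list[float]:
    """Pick n evenly spaced timestamps (in seconds) across a clip.

    Each timestamp would then be passed to a frame-extraction tool
    (e.g. ffmpeg's -ss seek option) and the saved frame uploaded to a
    reverse image search engine.
    """
    step = duration_s / (n_frames + 1)
    return [round(step * (i + 1), 2) for i in range(n_frames)]

# A ~30-second viral clip, sampled at 5 points:
print(keyframe_timestamps(30.0, 5))  # [5.0, 10.0, 15.0, 20.0, 25.0]
```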

Further analysis was conducted using the Hive Moderation tool to examine the authenticity of the circulating clip. The results indicated that the video had been artificially generated, with clear signs of AI manipulation. This suggests that the content was not genuine but rather created with the intent to mislead viewers and spread misinformation.

Conclusion:
The viral video attributing remarks to Vice Admiral AN Pramod about India seeking U.S. and President Trump’s intervention against Pakistan is misleading. The extended speech, available on the Press Information Bureau’s official YouTube channel, contained no such statement. Instead of the alleged claim, the Vice Admiral concluded his comments by saying, “That’s all.” AI analysis using Hive Moderation further indicated that the viral clip had been artificially manipulated, with fabricated audio inserted to misrepresent his words. These findings confirm that the video is altered and does not reflect the Vice Admiral’s actual remarks.
Claim: Viral video claims Vice Admiral AN Pramod said that if Pakistan attacks again, India will complain to the US and President Trump.
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs

What are Deepfakes?
A deepfake is a video (or image or audio clip) of a person in which their face, body, or voice has been digitally altered so that they appear to be someone else or to say something they never said, typically for malicious purposes or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto existing video, images, or audio. The term “deepfake” was first coined in 2017 by an anonymous Reddit user who went by the handle “deepfakes”.
Deepfakes work on a combination of AI and ML, which makes the technology hard to detect with conventional Web 2.0 applications, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake techniques. In recent times, we have seen a wave of AI-driven tools that have impacted industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine’s President Zelensky surfaced on the internet, causing mass confusion and serving as propaganda among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms, where children are most active.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern for Indian netizens, as it keeps them from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress further into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can be kept in mind when differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake often has irregularities, as it is imposed on an existing video, so check whether the sound is in congruence with the actions and gestures in the video.
- Pay attention to the background: One of the easiest ways to spot a deepfake is to pay attention to the background. In many deepfakes you can spot irregularities in the background because, in most cases, it is created using virtual effects, leaving an element of artificiality.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, one should always fact-check every piece of information they come across on social media. As a preventive measure, always fact-check any information or post before sharing it with others.
- AI Tools: When in doubt, check it out, and never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos and combat technology with technology.
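For readers curious how footage can be compared automatically, below is a minimal sketch (not any particular product's algorithm) of a difference hash (dHash), a common perceptual-hashing technique: frames from a suspect clip and the authentic source are reduced to small brightness grids, hashed, and compared by Hamming distance; a large distance means the frames differ visually. A real pipeline would first decode and downscale the frames with a library such as Pillow or OpenCV.

```python
def dhash_bits(gray):
    """Difference hash: 1 wherever a pixel is darker than its right neighbour.

    `gray` is a 2D list of brightness values, e.g. 8 rows x 9 columns,
    yielding an 8x8 = 64-bit hash.
    """
    bits = []
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

# Toy frames: an 8x9 brightness gradient, and a copy with one row altered.
authentic = [list(range(9)) for _ in range(8)]
suspect = [row[:] for row in authentic]
suspect[0] = list(reversed(suspect[0]))  # simulate a manipulated region

print(hamming(dhash_bits(authentic), dhash_bits(authentic)))  # 0: identical
print(hamming(dhash_bits(authentic), dhash_bits(suspect)))    # 8: one row changed
```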
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a fine of ₹2 lakh or three years in prison. With the DPDP Act, 2023 in force, the creation of deepfakes directly affects an individual's right to digital privacy and also violates the IT Rules (Intermediary Guidelines), as platforms are required to exercise caution when misinformation is disseminated and published through deepfakes. The only other remedies available are the indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame. Deepfakes must be recognised legally due to the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an outgrowth of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.

The evolution of technology has presented both profound benefits and considerable challenges. It has given us global interconnectivity, workforce optimisation, and faster, solution-oriented approaches, but it has also increased the risks of cybercrime and the misuse of technology through online theft, fraud, and abuse. As reliance on technology increases, users become more vulnerable to cyberattacks.
One way to address this menace is to set global standards and initiate measures for cooperation by integrating the efforts of international institutions such as UN bodies. The United Nations Interregional Crime and Justice Research Institute (UNICRI), which combats cybercrime and promotes the responsible use of technology, plays a leading role in these efforts.
Understanding the Scope of the Problem
CrowdStrike estimated the cybersecurity market at $207.77 billion in 2024 and expected it to reach $376.55 billion by 2029, growing at a CAGR of 12.63% over the forecast period. In October 2023, Forbes predicted that the annual cost of cyberattacks to the global economy would exceed $10.5 trillion.
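The quoted market figures are internally consistent; a quick sanity check of the implied compound annual growth rate:

```python
# Market size: $207.77bn (2024) growing to $376.55bn (2029), i.e. over 5 years.
start, end, years = 207.77, 376.55, 5

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # ~12.63%, matching the cited figure
```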
Developments in technology have provided cybercriminals with more sophisticated means to commit cybercrimes. These include increasingly common attacks such as data breaches, phishing, ransomware, social engineering, and IoT attacks. Their impact is evident across various domains, including the economic and social spheres. Victims of cybercrime often suffer from stress, anxiety, fear of being victimised again, a loss of trust, and social polarisation or stigmatisation.
UNICRI’s Strategic Approach
UNICRI actively combats cybercrime and technology misuse, focusing on cybersecurity, organised crime in cyberspace, and terrorists' use of the internet. Since 2020, it has monitored social media misuse, analysed tools to debunk misinformation, and worked to balance security with human rights.
The key focus areas of UNICRI’s strategic approach include cybersecurity in robotics, critical infrastructure, and SCADA systems; digital forensics; child online protection; and addressing online profiling and discrimination. It further supports law enforcement agencies and judicial actors (judges, prosecutors, and investigators) by providing them with specialised training. Its strategies to counter cybercrime and tech misuse include capacity-building exercises for law enforcement, developing international legal frameworks, and fostering public-private collaborations.
Key Initiatives under UNICRI Strategic Programme Framework of 2023-2026
The framework sets out the strategic priority areas that will guide UNICRI’s work. These include:
- Prevent and Counter Violent Extremism: By addressing the drivers of radicalisation, gender-based discrimination, and leveraging sports for prevention.
- Combat Organised Crime: Via tackling illicit financial flows, counterfeiting, and supply chain crimes while promoting asset recovery.
- Promotion of Emerging Technology Governance: Encouraging responsible AI use, mitigating cybercrime risks, and fostering digital inclusivity.
- Rule of Law and Justice Access: Enhancing justice systems for women and vulnerable populations while advancing criminal law education.
- CBRN Risk Mitigation: Leveraging expert networks and whole-of-society strategies to address chemical, biological, radiological, and nuclear risks.
The Challenges and Opportunities: CyberPeace Takeaways
The challenges that affect the regulation of cybercrime most often stem from jurisdictional barriers, a lack of resources, and the rapid pace of technological change. This is due to the cross-border nature of cybercrime and the fact that many nations lack the expertise or infrastructure to address sophisticated cyber threats. Regulatory and legislative frameworks often get outpaced by technological developments, including quantum computing, deepfakes, and blockchain misuse. As a result, these crimes often go unpunished.
At the same time, opportunities for innovation in cybercrime prevention are developing: AI and machine learning tools to detect cybercrime, and enhanced international cooperation, such as multi-stakeholder approaches, that can strengthen collective defence mechanisms. Capacity-building initiatives for continuous training and education help law enforcement agencies and judicial systems adapt to emerging threats; this is a continuous effort that requires participation from all sectors, public and private.
Conclusion
Given cybercrime and the threats it poses to individuals, communities, and global security, UNICRI's proactive approach of combining international cooperation, capacity-building, and innovative strategies is pivotal in combating these challenges. By addressing organised crime in cyberspace, child online protection, and emerging technology governance, UNICRI exemplifies the power of strategic engagement. While jurisdictional barriers and resource limitations persist, the opportunities in AI, global collaboration, and education offer a path forward. As technology evolves, our defences must also be dynamic and ever-evolving, and UNICRI’s efforts are essential to building a safer, more inclusive digital future for all.
References
- https://unicri.it/special_topics/securing_cyberspace
- https://www.forbes.com/sites/bernardmarr/2023/10/11/the-10-biggest-cyber-security-trends-in-2024-everyone-must-be-ready-for-now/

The Digital Personal Data Protection (DPDP) Act, 2023, operationalises data privacy largely through a consent management framework. It aims to give data principals, i.e., individuals, control over their personal data by giving them the power to track, change, and withdraw their consent to its processing. However, in practice, consent management is often not straightforward. For example, people may be frequently bombarded with requests, which can lead to fatigue and the eventual overlooking of consent requests. This article discusses the way consent management is handled by the DPDP Act and looks at how India can design the system to genuinely empower users while holding organisations accountable.
Consent Management in the DPDP Act
According to the DPDP Act, consent must be unambiguous, free, specific, and informed. It must also be easy for people to revoke their consent (DPO India, 2023). To this end, the Act creates Consent Managers, registered intermediaries who serve as a link between users and data fiduciaries.
The purpose of consent managers is to streamline and centralise the consent procedure. Users can view, grant, update, or revoke consent across various platforms using the dashboards they offer. They hope to improve transparency and lessen the strain on people to keep track of permissions across different services by standardising the way consent is presented (IAPP, 2024).
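The dashboard idea can be made concrete with a toy model. The sketch below is purely illustrative (the class and method names are hypothetical, not drawn from any Consent Manager specification): each consent record carries the fiduciary, the specific purpose, and timestamps, and withdrawal is a single call, mirroring the Act's requirement that revoking consent be as easy as giving it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    fiduciary: str                          # service requesting the data
    purpose: str                            # specific, informed purpose (per the Act)
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None # None while consent is in force

class ConsentDashboard:
    """Hypothetical model of a consent-manager dashboard."""

    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, fiduciary: str, purpose: str) -> ConsentRecord:
        rec = ConsentRecord(fiduciary, purpose, datetime.now(timezone.utc))
        self._records.append(rec)
        return rec

    def withdraw(self, fiduciary: str, purpose: str) -> None:
        # Withdrawal is one call: the record is timestamped, not deleted,
        # preserving an audit trail of what was consented to and when.
        for rec in self._records:
            if (rec.fiduciary == fiduciary and rec.purpose == purpose
                    and rec.withdrawn_at is None):
                rec.withdrawn_at = datetime.now(timezone.utc)

    def active(self) -> list[tuple[str, str]]:
        # Summary view: all permissions currently in force, across services.
        return [(r.fiduciary, r.purpose)
                for r in self._records if r.withdrawn_at is None]
```

A user-facing dashboard would render `active()` as the cross-service summary of permissions currently in force, with one-click withdrawal wired to `withdraw()`.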
The Act draws inspiration from international frameworks such as the GDPR (General Data Protection Regulation), mandating that Indian users be provided with a single platform to manage permissions rather than having to deal with dispersed consent prompts from every service.
The Challenges
Despite the mandate for an interoperable platform for consent management, several key challenges emerge. There is a lack of clarity on how consent management will be operationalised, which creates challenges of accountability and implementation. Thus:
- If the interface is poorly designed, users could be bombarded with consent requests from apps, platforms, or services that are not fully compliant with the platform.
- If consent notices are vague, frequent, lengthy, or complex, users may continue to grant permissions without meaningful engagement.
- It leaves scope for data fiduciaries to use dark patterns to coerce customers into granting consent through poor UI/UX design.
- The lack of clear, standardised interoperability protocols across sectors could lead to a fragmented system, undermining the goal of a single, easy-to-use platform.
- Consent fatigue could easily appear in India's digital ecosystem, where apps, e-commerce websites, and government services all ask for permissions from over 950 million internet subscribers. Experiences from GDPR countries show that users who are repeatedly prompted eventually become banner blind, which causes them to ignore notices entirely.
- Low levels of literacy (including digital literacy) and unequal access to digital devices among women and marginalised communities create complexities in the substantive coverage of privacy rights.
- Placing the burden of verification of legal guardianship for children and persons with disabilities (PwDs) on data fiduciaries might be ineffective, as SMEs may lack the resources to undertake this activity. This could create new forms of vulnerability for the two groups.
Legal experts claim that this results in what they refer to as a legal fiction, wherein consent is treated as valid by the law despite the fact that it does not represent true understanding or choice (Lawvs, 2023). Additionally, research indicates that users hardly ever read privacy policies in their entirety. People are very likely to tick boxes without fully understanding what they are agreeing to. By drastically limiting user control, this has a bearing on the privacy rights of Indian citizens and residents. (IJLLR, 2023).
Impacts of Weak Consent Management:
According to the Indian Journal of Law and Technology, in an era of asymmetry and information overload, privacy cannot be sufficiently protected by relying only on consent (IJLT, 2023). Almost every individual will be impacted by inadequate consent management.
- For Users: True autonomy is replaced by the appearance of control. Individuals may unintentionally disclose private information, which undermines confidence in digital services.
- For Businesses: Compliance could become a mere formality. Further, if acquired consent is found to be manipulated or invalid, it creates space for legal risks and reputational damage.
- For Regulators: It becomes difficult to oversee a system where consent is frequently disregarded or misinterpreted. When consent is merely formal, the law's promise to protect personal information is undermined.
Way Forward
- Layered and Simplified Notices: Simple language and layers of visual cues should be used in consent requests. Important details like the type of data being gathered, its intended use, and its duration should be made clear up front. Additional explanations are available for users who would like more information. This method enhances comprehension and lessens cognitive overload (Lawvs, 2023).
- Effective Dashboards: Dashboards from consent managers should be user-friendly, cross-platform, and multilingual. Management is made simple by features like alerts, one-click withdrawal or modification, and summaries of active permissions. The system is more predictable and dependable when all services use the same format, which also reduces confusion (IAPP, 2024).
- Dynamic and Contextual Consent: Instead of appearing as generic pop-ups, consent requests should show up when they are pertinent to a user's actions. Users can make well-informed decisions without feeling overburdened by subtle cues, such as emphasising risks when sensitive data is requested (IJLLR, 2023).
- Accountability of Consent Managers: Organisations that offer consent management services must be accountable and independent, through clear certification, auditing, and specific legal accountability frameworks. Even when formal consent is given, strong trustee accountability guarantees that data is not misused (IJLT, 2023).
- Complementary Protections Beyond Consent: Consent continues to be crucial, but some high-risk data processing might call for extra protections. These may consist of increased responsibilities for fiduciaries or proportionality checks. These steps improve people's general protection and lessen the need for frequent consent requests (IJLLR, 2023).
Conclusion
The core of the DPDP Act is to empower users to control their data through measures such as consent management. But requesting consent is insufficient; the system must make it simple for people to manage, monitor, and change it. Effectively designed, managed, and executed consent management has the potential to transform user experience and trust in India's digital ecosystem. To make consent management genuinely meaningful, it is imperative to standardise procedures, hold fiduciaries accountable, simplify interfaces, and investigate supplementary protections.
References
- Building Trust with Technology: Consent Management Under India’s DPDP Act, 2023
- Consent Fatigue and Data Protection Laws: Is ‘Informed Consent’ a Legal Fiction
- Beyond Consent: Enhancing India's Digital Personal Data Protection Framework
- Top 10 operational impacts of India’s DPDPA – Consent management