#FactCheck - Viral Video of General Manoj Pande Is Misleading; Audio Found to Be AI-Generated
Executive Summary:
A video of former Army Chief General Manoj Pande is going viral on social media with the claim that he attacked the Modi government, saying that supporting Israel is causing significant harm to the Indian Army. The research by CyberPeace revealed that the audio present in the viral video is AI-generated. No such statement was made in the original video.
Claim:
On social media platform X, while sharing the viral video, users wrote, “Delhi: Former Army Chief General Manoj Pande (Retd.) said, ‘Do you know what the biggest loss of supporting Israel is? Our Indian Army was always trained as a moral force, but the current situation is turning it into an ethnic force. Remember my words, this situation is moving towards a complete rebellion. We have all seen what is happening in Assam.’ ‘The Israeli army stands against humanity, and brutality has become its identity. Our army is becoming like them due to its association. The Modi government and the Sangh Parivar are responsible for this. For both, Israel is an ideal country, and they are running an agenda to turn India into Israel.’”

Fact Check:
To investigate the viral video claiming that former Army Chief General Manoj Pande attacked the Modi government, we conducted a reverse image search using keyframes. During this process, we found a video uploaded on March 14 on the X account of the news agency Press Trust of India (PTI).
The visuals present in the video matched those in the viral video.
In this video, former Army Chief General Manoj Pande was seen delivering a speech in Marathi and English. He spoke about building new kinds of capabilities in view of the current situation and made no mention of Israel, contrary to the claim in the viral video. In the approximately 1-minute-15-second video, he made no statement resembling the one heard in the viral video.

Continuing the research, we found a report published on March 15, 2026, on the website of ThePrint. The report covered the speech delivered by former Army Chief General Manoj Pande, but no report mentioned the statement heard in the viral video.

Conclusion:
Our research found that the audio in the viral video is AI-generated. General Pande made no such statement in the original video.

Introduction
WhatsApp has become a new platform for scams, and the number of WhatsApp scam cases is rising daily. In the latest scam, many WhatsApp users in India have reported receiving missed calls from unknown international numbers. Worse, one does not even have to answer the call to be targeted: a missed call alone is enough to set the scam in motion.
Millions of users have switched from ordinary SMS to WhatsApp. Fake messages and marketing messages are nothing new, but the scamming tactics are now evolving. Many people receive calls from different countries and wonder how the scammers obtained their numbers. WhatsApp calls travel over VoIP networks, so there are no extra charges from any country. Reportedly, around 500 million WhatsApp users have received these scam calls, mainly job scams promising part-time employment and opportunities. Reports of such job-scam calls began in 2023.
People have reported missed calls from countries such as Ethiopia (+251), Malaysia (+60), Indonesia (+62), and Vietnam (+84).
The purpose of these calls is still unclear. In some cases, however, the scammers ask WhatsApp users for confidential information such as bank details, so users must not reveal personal information. It is also important to note that a call bearing a particular country code does not necessarily originate from that country: various agencies sell international numbers for WhatsApp calls.
Why has WhatsApp become a hub for scams?
Users have largely abandoned the old SMS for WhatsApp. From schools to colleges and offices, people use WhatsApp for official work because it is easy and user-friendly, and this familiarity leads them to neglect safety measures. Users need to understand the consequences of the technology and use it with safeguards and awareness. Many people lose money and fall victim to scams on WhatsApp by sharing confidential information, and, as noted above, a missed call alone can set a scam in motion.
Before this international-call scam, users received calls from scammers claiming to be from KBC and announcing that the user had won a prize. The scammers then sought confidential information on the pretext of transferring the winnings, and users were defrauded as a result. These scams have risen rapidly of late.
Safeguards users can use against these scam calls
WhatsApp’s response to complaints about such international calls is to “block and report.”
If you have already received such calls, the best thing you can do is report and block them right away. The same number then cannot reach your phone again, and numerous identical reports may persuade WhatsApp to remove the number entirely.
WhatsApp is also working on an update allowing users to block calls from unknown numbers on the service.

Users should modify their phone’s and the app’s basic privacy settings to protect themselves from data breaches. The calls target active users of the app, but by changing how the account appears, a user can reduce the likelihood of being added to the scammers’ target lists.
Limit Privacy
Begin by modifying WhatsApp’s ‘who can see’ settings. If your profile photo, last seen, and online status are visible to anybody, restrict them to people in your contact list only. Change the About and Groups options as well.
Turn on two-factor authentication
Enabling two-factor authentication on WhatsApp adds an extra layer of security to your data. The app also supports biometric protection in case of theft or loss.
Active Reporting
Users should file a report as soon as they see anything odd or any suspicious activity.
A typical question that users have is, ‘Where do the scammers acquire my phone number from?’
The answer is more complicated than most people think. Your data is retained in a company’s database from the moment you sign up on a website or reveal your phone number at a store to take advantage of promotional offers. Owing to a lack of technological infrastructure and of legislation to protect personal data, a scammer can easily obtain your information.
According to Palo Alto research, India is the second most vulnerable country in the APAC region in terms of cyberattacks and data breaches. A data protection law is essential in the face of increasing calls and data breaches.
The Digital Personal Data Protection Bill is set to be introduced in Parliament’s monsoon session. The bill has the potential to protect personal data, which will help to curb such scams.
Conclusion
Several people have posted on Twitter about receiving fake WhatsApp calls from international numbers more than once. WhatsApp encrypts calls and messages, making callers difficult to trace, and scammers appear to be exploiting this to swindle users. If you receive a WhatsApp call from any of the ISD codes listed above, we strongly advise you not to answer it and to block the number so the bad actors cannot call you again. “Report and block immediately” is the advice WhatsApp has been giving complainants.
The Equitable Growth Approach of AI and Digital Twins
Digital twins are virtual replicas of physical assets or systems, powered by real-time data and advanced simulations. Combined with AI, they enable real-time monitoring, predictive maintenance, optimised operations, and improved design processes. The greatest value of AI is its ability to make data actionable: paired with digital twins, data can be collated and analysed, inefficiencies removed, and better decisions taken to improve efficiency and quality.
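To make the idea concrete, here is a minimal sketch of a digital twin that mirrors a physical asset’s sensor feed and flags it for maintenance. The asset name, temperature threshold, and maintenance heuristic are illustrative assumptions for this blog, not any real industrial API:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Toy digital twin mirroring a physical pump's temperature feed.

    'PumpTwin', the 80 C limit, and the rolling-average rule are all
    hypothetical, chosen only to illustrate predictive maintenance."""
    asset_id: str
    temp_limit_c: float = 80.0
    readings: list = field(default_factory=list)

    def ingest(self, temp_c: float) -> None:
        # In a real deployment this would be fed by an IoT telemetry stream.
        self.readings.append(temp_c)

    def needs_maintenance(self) -> bool:
        # Toy heuristic: flag the asset when the average of the last
        # three readings exceeds the configured temperature limit.
        recent = self.readings[-3:]
        return bool(recent) and sum(recent) / len(recent) > self.temp_limit_c

twin = PumpTwin(asset_id="pump-07")
for t in (75.0, 82.0, 88.0, 91.0):
    twin.ingest(t)
# The last three readings average 87.0, above the 80.0 limit,
# so the twin flags the physical pump for maintenance.
```

A production system would replace the threshold rule with an AI model trained on historical failure data; the structure (live data in, prediction out) stays the same.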
This intersection of AI and digital twins holds immense potential for addressing key challenges, particularly in countries like India, which is rapidly embracing digital adoption to achieve its economic ambitions and sustainability goals. According to Salesforce’s most recent survey on generative AI use among the general population in the U.S., UK, Australia, and India, 75% of generative AI users are looking to automate repetitive tasks and use generative AI for work communications. This blog discusses the intersection of equitable growth, sustainability, and AI-driven policies in India.
Sustainability and the Path Ahead: Digital Twin and AI-Driven Solutions
India faces sustainability challenges mainly associated with urban congestion, rising energy demand, climate change, and environmental degradation. AI and digital twins offer solutions through real-time simulation and predictive analysis. Examples include sustainable urban planning, such as smart-city projects like the Indore Smart City Initiative and traffic optimisation; energy efficiency through AI-driven renewable-energy projects and power-grid optimisation; and water-resource management through leak detection, equitable distribution, and conservation.
The need is to balance innovation with regulation, underscoring the importance of ethical and sustainable deployment of AI and digital twins while addressing data privacy and AI ethics. Recent developments include India’s evolving AI policy landscape, notably the National Strategy for Artificial Intelligence with its focus on ‘AI for All’, and regulatory frameworks such as the DPDP Act, which address AI ethics, data privacy, and digital governance.
Targeted policies are needed to promote research and development in AI and digital-twin technologies, skill development, and partnerships with the private sector, think tanks, nonprofits, and others. Collaboration at the global level would include aligning domestic policies with global AI and sustainability initiatives and leveraging international frameworks for climate tech and smart infrastructure.
Cyberpeace Outlook
As part of specific actions, policymakers need to engage in proactive governance to ensure the responsible use and development of AI. This includes enacting incentive schemes for sustainable AI projects and strengthening the enforcement of data privacy laws. Industry leaders must support equitable access to AI and digital twin technologies and develop tailored AI tools for resource-constrained settings, particularly in India. Finally, researchers need to drive innovation in alignment with sustainability goals, such as those related to agriculture and groundwater management.
References
- https://economictimes.indiatimes.com/tech/artificial-intelligence/technologies-like-ai-and-digital-twins-can-tackle-challenges-like-equitable-growth-to-sustainability-wef/articleshow/117121897.cms
- https://www.salesforce.com/news/stories/generative-ai-statistics/
- https://www.mdpi.com/2673-2688/4/3/38
- https://www.ibm.com/think/topics/generative-ai-for-digital-twin-energy-utilities

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, a fundamental challenge brought about by this increased reliance on data is how to strike a balance between privacy protection and innovation and utility.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. This initiative urges governments to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from crossing moral and security boundaries, particularly in areas such as political manipulation, mass surveillance, cyberattacks, and threats to democratic institutions.
One way to address the threat to privacy is pseudonymisation, which preserves data’s value for research and innovation by replacing personal identifiers with artificial ones. Pseudonymisation thus directly advances the AI Red Lines initiative’s mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe, with the objective of identifying red lines for the use of AI tools. What unites these risks is the absence of global safeguards. Some of these red lines can be understood as:
- Cybersecurity breaches, with financial and personal data exposed through AI-driven hacking and surveillance.
- Privacy invasions through relentless tracking.
- Generative AI creating realistic fake content that undermines trust in public discourse and spreads misinformation.
- Algorithmic amplification of polarising content that threatens civic stability and disrupts democratic processes.
Legal Frameworks and Regulatory Landscape
The regulation of artificial intelligence remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, while a US-China agreement ensures that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI use, but a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process by which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection Act (DPDP) 2023 takes a similar stance: it does not explicitly define pseudonymisation, but its broad definition of “personal data” covers potentially reversible identifiers. Under Section 8(4) of the Act, companies must adopt appropriate technical and organisational measures. International instruments such as the OECD Principles on AI and the Council of Europe’s Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is a practical solution with measurable benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers such as names or IDs, reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
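A minimal sketch of how pseudonymisation can work in practice is a keyed hash over the direct identifier. The field names, key, and record below are illustrative assumptions; the key plays the role of the “additional information” that, under GDPR Article 4(5), must be stored securely and separately from the pseudonymised data:

```python
import hashlib
import hmac

# Illustrative secret key: in practice this would live in a vault or
# key-management service, separate from the pseudonymised dataset.
SECRET_KEY = b"store-me-separately-from-the-data"

def pseudonymise(identifier: str) -> str:
    # HMAC-SHA256 is deterministic, so the same person always maps to
    # the same pseudonym. This preserves analytical utility (joins,
    # counts, longitudinal studies) without exposing the raw identifier.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical patient record for illustration.
record = {"patient_id": "P-10432", "diagnosis": "hypertension"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
```

Note that the result is still “personal data” in the GDPR’s sense: whoever holds the key can re-link pseudonyms to individuals, which is precisely why the key must be stored separately and access-controlled.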
These benefits translate into competitive advantages: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
Balancing the two is the central dilemma. On one side lies data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise, enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions of safety, fairness, and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet, alongside any eventual global treaty, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: legally recognised under the GDPR and increasingly relevant under India’s DPDP Act, it balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and aligning with broader ethical responsibilities in the digital age. Even as AI’s future remains uncertain, the guiding principles must be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.
References
- https://www.techaheadcorp.com/blog/shadow-ai-the-risks-of-unregulated-ai-usage-in-enterprises/
- https://planetmainframe.com/2024/11/the-risks-of-unregulated-ai-what-to-know/
- https://cepr.org/voxeu/columns/dangers-unregulated-artificial-intelligence
- https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/