#FactCheck: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
Executive Summary:
A viral video (archive link) claims General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Air Force jets and 250 soldiers during clashes with Pakistan. Verification revealed the footage is from an IIT Madras speech, with no such statement made. AI detection confirmed parts of the audio were artificially generated.
Claim:
The claim in question is that General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during recent clashes with Pakistan.

Fact Check:
Upon conducting a reverse image search on key frames from the video, it was found that the original footage is from IIT Madras, where the Chief of Army Staff (COAS) was delivering a speech. The video is available on the official YouTube channel of ADGPI – Indian Army, published on 9 August 2025, with the description:
“Watch COAS address the faculty and students on ‘Operation Sindoor – A New Chapter in India’s Fight Against Terrorism,’ highlighting it as a calibrated, intelligence-led operation reflecting a doctrinal shift. On the occasion, he also focused on the major strides made in technology absorption and capability development by the Indian Army, while urging young minds to strive for excellence in their future endeavours.”
A review of the full speech revealed no reference to the destruction of six jets or the loss of 250 Army personnel. This indicates that the circulating claim is not supported by the original source and may contribute to the spread of misinformation.
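The keyframe-based verification described above can be sketched in code. This is a minimal illustration of the frame-sampling step only; the two-second sampling interval is an assumption, and a real pipeline would decode the selected frames (for example with OpenCV) and submit them to a reverse image search engine.

```python
# Sketch of the keyframe-sampling step in a reverse-image-search
# verification workflow. Indices are computed from the frame rate;
# the every_seconds interval is an illustrative assumption.

def keyframe_indices(total_frames: int, fps: float,
                     every_seconds: float = 2.0) -> list[int]:
    """Return evenly spaced frame indices, one roughly every `every_seconds`."""
    step = max(1, round(fps * every_seconds))
    return list(range(0, total_frames, step))

# Example: a 10-second clip at 30 fps, sampled every 2 seconds
print(keyframe_indices(300, 30.0))  # [0, 60, 120, 180, 240]
```

Sampling a handful of frames rather than every frame keeps the number of reverse-search queries manageable while still covering the whole clip.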

Further analysis using AI detection tools such as Hive Moderation found that portions of the audio are AI-generated.

Conclusion:
The claim is baseless. The video is a manipulated creation that combines genuine footage of General Dwivedi’s IIT Madras address with AI-generated audio to fabricate a false narrative. No credible source corroborates the alleged military losses.
- Claim: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
Introduction
The digital communication landscape in India is set to change significantly as the Department of Telecommunications prepares to implement new rules for messaging apps that operate using SIM cards. This step is part of the government’s effort to tackle cybercrime at its roots by enforcing stricter verification and reducing the scope for communication platforms to be misused. One clear change users will notice is that WhatsApp Web sessions will now be automatically logged out every six hours, disrupting previously uninterrupted use across multiple devices. Although this may appear to be a simple inconvenience, the measure is part of a broader plan to address the growing problem of cyber fraud. Cybercriminals exploit messaging apps like WhatsApp without keeping the registered SIM in the device, making fraud difficult to trace; these rules are intended to address that challenge at its root.
The Incident: What Has Changed?
The new regulations will make it mandatory for messaging platforms to create a direct link between user accounts and verified SIM identities, so that every account on the network can be associated with a valid, traceable mobile number. Because of this requirement, WhatsApp is expected to tighten the management of device sessions. The six-hour logout cycle for WhatsApp Web is designed to prevent long-lived, unmonitored sessions that are sometimes exploited in account takeovers, device-based breaches, and remote-access scams. This change significantly affects the user experience. WhatsApp Web, often used for communication, customer support, and coordination, will now require more frequent authentication through mobile devices. Though mobile access remains uninterrupted, desktop and browser-linked sessions will be subject to tighter security controls.
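The six-hour expiry cycle can be illustrated with a small server-side check. This is a hedged sketch, not WhatsApp's actual implementation: the `WebSession` shape and the TTL constant are assumptions for illustration.

```python
# Illustrative sketch of a server-side six-hour session expiry check,
# similar in spirit to the WhatsApp Web logout cycle described above.
# WebSession and SESSION_TTL are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(hours=6)

@dataclass
class WebSession:
    user_id: str
    authenticated_at: datetime

def is_expired(session: WebSession, now: datetime) -> bool:
    """A session older than the TTL must re-authenticate via the phone."""
    return now - session.authenticated_at >= SESSION_TTL

start = datetime(2025, 1, 1, 9, 0, tzinfo=timezone.utc)
s = WebSession("user-1", start)
print(is_expired(s, start + timedelta(hours=5)))  # False: still valid
print(is_expired(s, start + timedelta(hours=6)))  # True: forced logout
```

The design choice here is a hard TTL measured from the last authentication, which forces a periodic handshake with the registered device rather than allowing an indefinitely linked browser session.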
Why Identity-Linked Messaging Matters
India is facing a rapidly evolving cybercrime ecosystem in which messaging applications play a central role. Scammers often rely on fake, unverified, or illegally obtained SIM cards to create temporary accounts that can be used for various illegal activities, such as sending phishing messages, impersonating government officials, and deceiving victims through call centres set up for scams.
The new rules take into consideration the following main issues:
- Anonymity of accounts makes large-scale fraud possible: Criminals operate bulk scams using hundreds of SIM-linked accounts.
- Freedom to drop identities: Illegal SIMs are discarded after fraud, making it difficult for the police to trace the criminals.
- Long-lived multi-device sessions: Unauthorised access to persistent WhatsApp Web sessions is seen as a key enabler of OTP theft, account hijacking, and on-device social engineering.
The government wants to disrupt these foundations by enforcing stricter traceability.
A Sector Under Strain: Misuse of Messaging Platforms
Messaging apps have become central to India's digital life, from personal communication to enterprise use. This very ubiquity has made them an easy target for cybercriminals.
Frequently observed scams include:
- WhatsApp groups used for job and loan scams
- False communication from banks, government departments, and payment applications
- Sextortion and blackmail through unverified accounts
- Remote-access fraud with attackers who are watching WhatsApp Web sessions
- Coordinated spread of false information and distribution of deepfake videos
The use of AI-generated personas and "SIM farms" has made securing these systems even harder. Without strict linking of users to authenticated SIM credentials, the platforms risk becoming uncontrollable channels for cybercrime.
Government and Regulatory Response
The Department of Telecommunications is initiating a process of stricter compliance measures and cooperating with the Ministry of Home Affairs, along with the Indian Cyber Crime Coordination Centre. The main points of the directions include the following:
- Identity verification linked to a SIM is mandatory for the creation of messaging accounts
- Frequent device re-authentication on platforms, beginning with WhatsApp Web
- Coordination with telecom operators to detect suspicious login patterns
- Protocols for the sharing of data with law enforcement in the course of cybercrime investigations
- Compliance checks of digital platforms to verify adherence to national safety guidelines
This coordinated effort reflects the understanding that the security of communication platforms is the responsibility of both the regulators and the service providers.
The Bigger Picture: Strengthening India’s Digital Trust
The new regulations align with a worldwide trend of governments demanding greater accountability from messaging platforms; similar discussions are underway in the EU, the UK, and parts of Southeast Asia.
For India, it is imperative to enhance identity management because:
- The nation has the world's largest base of messaging users
- Cybercrime is increasing at a rate quicker than that of traditional crime
- Digital government services rely on communications that are secure
- Identity integrity is the basis for trust in online transactions and digital payments
The six-hour logout policy for WhatsApp Web is a small step in itself, but it signals a larger shift towards proactive regulation rather than reactive policing.
What Needs to Happen Next?
For SIM-linked regulations to be effective, their implementation must be accompanied by several further measures:
- Strengthening Digital Literacy: It is necessary to educate users about the benefits of frequent logouts and security improvements.
- Ensuring Privacy Protections: As identity-linked messaging is implemented, the DPDP Act should serve as a strong safeguard against the misuse of personal data.
- Collaboration with Platforms: Messaging services should work with regulators to strengthen authentication without compromising user safety.
- Monitoring SIM fraud at the source: Enforcement against illicit SIM provisioning targets the root supply that criminals depend on, rather than merely forcing them to change methods.
- Continuous Review and Feedback: Policymaking must keep pace with real-world challenges and new technological developments.
Conclusion
India's move to regulate messaging apps through SIM linkage is a major step towards preventing cybercrime at its source. Although the immediate effects, like the six-hour logout requirement for WhatsApp Web, may inconvenience users, they serve a bigger goal: developing a more secure and trustworthy digital communication environment.
Securing the communication that links millions of people is vital as India becomes more and more digital. Through a combination of regulatory measures, technological protection, and user education, the country is headed toward a time when criminals in the cyber world will find it very difficult to operate and where consumers will be able to interact online with much more confidence and safety.
References
- https://thehackernews.com/2025/12/india-orders-messaging-apps-to-work.html
- https://indianexpress.com/article/explained/explained-sci-tech/whatsapp-web-automatic-log-out-six-hourse-reason-10394142/
- https://www.ndtv.com/india-news/explained-how-will-new-sim-binding-rule-affect-whatsapp-signal-telegram-9728710
- https://www.hindustantimes.com/india-news/no-whatsapp-without-active-sim-centre-issues-new-rules-dot-sim-binding-prevent-cyber-crimes-101764495810135.html
Executive Summary:
Footage of the Afghanistan cricket team purportedly singing ‘Vande Mataram’ after India’s triumph in the ICC T20 World Cup 2024 surfaced online. The CyberPeace Research team carried out thorough research to uncover the truth about the viral video. The original clip was posted on the X platform by Afghan cricketer Mohammad Nabi on October 23, 2023, showing the Afghan players chanting ‘Allah-hu Akbar’ after their ODI World Cup win against Pakistan. This debunks the viral video's assertion that the players were chanting Vande Mataram.

Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.

Fact Check:
Upon receiving the posts, we analyzed the video and found inconsistencies, such as mismatched lip sync.
We checked the video with an AI audio detection tool named “True Media”, which found the audio to be 95% AI-generated, deepening our suspicion about the video's authenticity.


For further verification, we then divided the video into keyframes. We reverse-searched one of the frames of the video to find any credible sources. We then found the X account of Afghan cricketer Mohammad Nabi, where he uploaded the same video in his account with a caption, “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people” on 23 Oct, 2023.

We found that the audio differs from the viral video: in the original, the Afghan players can be heard chanting “Allah hu Akbar” after their victory against Pakistan. The Afghan players were not chanting Vande Mataram after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and misrepresents the actual context. We have previously debunked such AI voice alteration videos. Netizens must be careful before believing misleading information.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The video was altered from the original through audio manipulation. The original video, showing the Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar", was posted on the official X account of Mohammad Nabi, an Afghan cricketer. Thus the information is fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading

The World Economic Forum reported (September 2023) that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, cited by 53% of respondents. Artificial intelligence is automating the creation of fake news far faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of Generative AI is the ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 elections of the European Union, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘CoPilot’ were inaccurate one-third of the time when queried about election data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
False or misleading information created, amplified, or spread using artificial intelligence technologies is AI-driven misinformation. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Some examples are deep fakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge is in the detection and management of AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it is yet to achieve true acceptance and actually fulfill its potential in a positive manner because there is widespread cynicism about the technology - and rightly so. The general public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, such as the UK’s Financial Conduct Authority sandbox. These models have been shown to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that regulatory sandboxes facilitate firm financing and market entry and increase speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in fintech, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies with the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI and the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements and fostering transparency between AI developers and regulators. This would lead to more informed policymaking and building public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to pilot solutions for regulating the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and the role of regulatory sandboxes, helping to manage public expectations.
- Periodic reviews and updates to sandbox frameworks should be conducted to keep pace with advances in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions