#FactCheck: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
Executive Summary:
A viral video (archive link) claims General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Air Force jets and 250 soldiers during clashes with Pakistan. Verification revealed the footage is from an IIT Madras speech, with no such statement made. AI detection confirmed parts of the audio were artificially generated.
Claim:
The claim in question is that General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during recent clashes with Pakistan.

Fact Check:
Upon conducting a reverse image search on key frames from the video, it was found that the original footage is from IIT Madras, where the Chief of Army Staff (COAS) was delivering a speech. The video is available on the official YouTube channel of ADGPI – Indian Army, published on 9 August 2025, with the description:
“Watch COAS address the faculty and students on ‘Operation Sindoor – A New Chapter in India’s Fight Against Terrorism,’ highlighting it as a calibrated, intelligence-led operation reflecting a doctrinal shift. On the occasion, he also focused on the major strides made in technology absorption and capability development by the Indian Army, while urging young minds to strive for excellence in their future endeavours.”
A review of the full speech revealed no reference to the destruction of six jets or the loss of 250 Army personnel. This indicates that the circulating claim is not supported by the original source and may contribute to the spread of misinformation.
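Key-frame sampling of this kind can be scripted. The sketch below, assuming ffmpeg is installed and using a hypothetical filename, builds the command that extracts one frame every few seconds so the stills can be fed to a reverse image search:

```python
# Minimal sketch: build an ffmpeg command that samples one frame every
# `every_sec` seconds from a suspect clip. The input filename is hypothetical;
# assumes ffmpeg is available on the system.
def frame_sampling_command(video_path, every_sec=5, out_pattern="frame_%03d.png"):
    """Return the ffmpeg argument list that writes periodic still frames."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps=1/{every_sec}",  # one frame per every_sec seconds
        out_pattern,
    ]

cmd = frame_sampling_command("viral_clip.mp4")
print(" ".join(cmd))
```

The resulting frames can then be uploaded to a reverse image search engine to locate the original footage.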

Further analysis using AI detection tools such as Hive Moderation found that portions of the audio interspersed between genuine lines are AI-generated.

Conclusion:
The claim is baseless. The video is a manipulated creation that combines genuine footage of General Dwivedi’s IIT Madras address with AI-generated audio to fabricate a false narrative. No credible source corroborates the alleged military losses.
- Claim: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Lost your phone, or had it stolen? Fear not: say hello to Sanchar Saathi, the newly launched government portal that lets you track and block your lost or stolen smartphone. The smartphone has become an essential part of daily life, and a great deal of personal data is stored on it, so losing it can be a distressing experience. The portal uses the Central Equipment Identity Register to help users block lost phones and trace them across all telecom networks. Users should keep a copy of the FIR for the lost/stolen smartphone handy, as it is required for certain features of the portal, including tracking the phone on the website.
Preventing Data Leakage and Mobile Phone Theft
When your phone is lost or stolen, there is real cause for worry: the device holds sensitive personal information such as bank account details, UPI IDs, and social media accounts like WhatsApp, raising serious concerns of data leakage and misuse. The Sanchar Saathi portal addresses this problem by serving as a platform for blocking a lost or stolen device. This protects users against financial fraud, identity theft, and data leakage by cutting off access to the device and ensuring that unauthorised parties cannot reach or abuse important information.
How the Sanchar Saathi Portal Works
To file a complaint about a lost or stolen smartphone, users must provide the device's IMEI (International Mobile Equipment Identity) number. On the portal's official website, https://sancharsaathi.gov.in/, users can access the “Citizen Centric Services” option on the homepage, click on “Block Your Lost/Stolen Mobile”, and fill out the form. The form asks for the IMEI number, contact number, model number of the smartphone, the mobile purchase invoice, and the date, time, district, and state where the device was lost or stolen. Users must keep a copy of the FIR handy and fill in personal information such as name, email address, and residence. After completing the form and selecting the ‘Complete’ tab, it is submitted, and access to the lost/stolen smartphone is blocked.
Enhancing Security with SIM Card Verification
Using this portal, users can also view the SIM card numbers registered in their name and block any unauthorised use, allowing owners to take immediate action if a SIM card is being misused by someone else. Checking the status of active SIM cards registered under an individual's name is an extra security feature of the portal. This proactive strategy helps users safeguard their personal information against possible abuse and identity theft.
Advantages of the Sanchar Saathi Portal
The Sanchar Saathi platform offers various benefits for reducing mobile phone theft and protecting personal data. The portal offers a simplified and user-friendly platform for making complaints. The online complaint tracking function keeps consumers informed of the status of their complaints, increasing transparency and accountability.
The portal allows users to block access to personal data on lost/stolen smartphones, which reduces the risk of data leakage.
The portal's SIM card verification feature acts as an extra layer of security, enabling users to monitor any unauthorised use of their personal information. This proactive approach empowers users to take immediate action if they detect suspicious activity, preventing further damage to their personal data.
Conclusion
Our smartphones store large amounts of sensitive information and data, so protecting them from unauthorised access becomes crucial, especially when a device is lost or stolen. The Sanchar Saathi portal is a commendable step by the government: by offering a comprehensive solution to combat mobile phone theft and protect personal data, it contributes to a safer digital environment for smartphone users.
The portal provides the options of blocking access to a lost/stolen device and of verifying the SIM cards registered in one's name. These features empower users to take control of their data security, and in this way the portal helps prevent mobile phone theft and data leakage.

Executive Summary:
While researching a viral social media video, we carried out a comprehensive fact check using reverse image search. The video circulated with the claim that it shows illegal Bangladeshi immigrants in Assam's Goalpara district carrying homemade spears and attacking police and government officials. Our findings confirm that this claim is false. The video was filmed in the Kishoreganj district of Bangladesh on July 1, 2025, during a political clash between two rival factions of the Bangladesh Nationalist Party (BNP). The footage has been intentionally misrepresented as an incident in Assam to disseminate false information.

Claim:
The viral video shows illegal Bangladeshi immigrants armed with spears marching in Goalpara, Assam, with the intention of attacking police or officials.

Fact Check:
To establish whether the claim was valid, we performed a reverse image search on key frames from the video and reviewed a number of news articles and social media posts from Bangladeshi sources. These reports confirmed that the events took place in Ashtagram, Kishoreganj district, Bangladesh, during a violent political confrontation between factions of the Bangladesh Nationalist Party (BNP) on July 1, 2025, which resulted in about 40 injuries.

Local media, in particular Channel i News, reported full accounts of the incident and showed images matching the viral video. The individuals seen in the video were engaged in a political fight wielding makeshift spears, not mounting a cross-border attack. The Assam Police issued an official response on X (formerly Twitter) denying the claim and noting that nothing of that nature occurred in Goalpara or in any other district of Assam.


Conclusion:
Based on our research, we conclude that the viral video does not show illegal Bangladeshi immigrants in Assam. It depicts a political clash in Kishoreganj, Bangladesh, on July 1, 2025. The claim attached to the video is entirely untrue and is intended to mislead the public about the location and nature of the incident depicted.
- Claim: Video shows illegal migrants with spears moving in groups to assault police
- Claimed On: Social Media
- Fact Check: False and Misleading

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI), in development for over two years, came into force 20 days after publication, on 1 August 2024, setting harmonized rules across all 27 EU Member States and amending key regulations and directives to ensure a robust framework for AI technologies. Enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools that threaten citizens' rights, including biometric categorization, untargeted scraping of faces, emotion-recognition systems in the workplace and schools, and social scoring systems; it also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, with various deadlines between now and then as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
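The "whichever is higher" rule means the penalty ceiling scales with company size. A quick illustration (the turnover figures are hypothetical):

```python
def max_fine_eur(worldwide_annual_turnover_eur):
    """Penalty ceiling under the Act: EUR 35 million or 7% of total
    worldwide annual turnover, whichever is higher (integer euros)."""
    return max(35_000_000, worldwide_annual_turnover_eur * 7 // 100)

# For EUR 400M turnover, 7% is EUR 28M, so the EUR 35M floor applies.
print(max_fine_eur(400_000_000))    # 35000000
# For EUR 1B turnover, 7% is EUR 70M, which exceeds the floor.
print(max_fine_eur(1_000_000_000))  # 70000000
```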
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk AI such as social scoring systems and manipulative AI, and its regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country; the obligations also reach third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above: particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
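The tiered obligations above can be summarised as a simple lookup table. The tier names and examples come from the Act's text as described in this overview; the structure itself is only an illustration, not a legal classification:

```python
# Illustrative mapping of the Act's risk tiers to example systems and the
# broad obligation attached to each; a sketch, not legal advice.
RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring", "manipulative AI"],
                     "obligation": "prohibited"},
    "high":         {"examples": ["biometrics", "law enforcement", "employment",
                                  "education", "critical infrastructure"],
                     "obligation": "data quality and anti-bias requirements"},
    "limited":      {"examples": ["chatbots", "deepfakes"],
                     "obligation": "transparency: disclose AI interaction"},
    "minimal":      {"examples": ["AI-enabled video games", "spam filters"],
                     "obligation": "free use, no specific obligations"},
}

def obligation_for(tier):
    """Look up the broad obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # prohibited
```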
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner, for instance the ban on AI systems posing unacceptable risks will apply six months after the entry into force. The Codes of Practice will apply nine months after entry into force. Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force. High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force. The expected timeline for the same is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general purpose AI providers); requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
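Since every milestone is a whole-month offset from entry into force, the timeline can be reproduced with standard-library date arithmetic. The month-shifting helper below is an illustration, not part of any official tooling:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def applies_from(months_after):
    """Shift the entry-into-force date by a whole number of months."""
    total = ENTRY_INTO_FORCE.year * 12 + (ENTRY_INTO_FORCE.month - 1) + months_after
    return date(total // 12, total % 12 + 1, ENTRY_INTO_FORCE.day)

milestones = {
    "prohibitions (Chapters I & II)": applies_from(6),   # February 2025
    "GPAI rules and governance":      applies_from(12),  # August 2025
    "bulk of the Act":                applies_from(24),  # August 2026
    "Article 6(1) high-risk duties":  applies_from(36),  # August 2027
}
for provision, start in milestones.items():
    print(f"{provision}: {start:%B %Y}")
```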
The AI Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers. All parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. The Act also applies to providers and deployers of AI systems located outside the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. It covers any AI system within the EU that is on the market, in service, or in use, reaching both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The Act has extraterritorial application: it can apply to providers not established in the EU if they make an AI system or GPAI model available on the EU market, and it applies to providers and deployers even where only the output generated by the AI system is used in the EU.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to AI governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with various regulations and stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to strike the right balance between safety and innovation while taking industry reactions into consideration.