#FactCheck: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
Executive Summary:
A viral video (archive link) claims General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Air Force jets and 250 soldiers during clashes with Pakistan. Verification revealed the footage is from an IIT Madras speech, with no such statement made. AI detection confirmed parts of the audio were artificially generated.
Claim:
The claim in question is that General Upendra Dwivedi, Chief of Army Staff (COAS), admitted to losing six Indian Air Force jets and 250 soldiers during recent clashes with Pakistan.

Fact Check:
Upon conducting a reverse image search on key frames from the video, it was found that the original footage is from IIT Madras, where the Chief of Army Staff (COAS) was delivering a speech. The video is available on the official YouTube channel of ADGPI – Indian Army, published on 9 August 2025, with the description:
“Watch COAS address the faculty and students on ‘Operation Sindoor – A New Chapter in India’s Fight Against Terrorism,’ highlighting it as a calibrated, intelligence-led operation reflecting a doctrinal shift. On the occasion, he also focused on the major strides made in technology absorption and capability development by the Indian Army, while urging young minds to strive for excellence in their future endeavours.”
A review of the full speech revealed no reference to the destruction of six jets or the loss of 250 Army personnel. This indicates that the circulating claim is not supported by the original source and may contribute to the spread of misinformation.

Further analysis using AI detection tools such as Hive Moderation found that portions of the audio are AI-generated, spliced in between genuine lines of the speech.

Conclusion:
The claim is baseless. The video is a manipulated creation that combines genuine footage of General Dwivedi’s IIT Madras address with AI-generated audio to fabricate a false narrative. No credible source corroborates the alleged military losses.
- Claim: AI-Generated Audio Falsely Claims COAS Admitted to Loss of 6 Jets and 250 Soldiers
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs
As technological advancements continue to shape the future, the rise of artificial intelligence brings significant potential benefits, yet it also raises concerns about the spread of misinformation. Recognising the need for accountability on both ends, on 5 May, during the three-day World News Media Congress 2025 in Kraków, Poland, the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) announced the five core principles of their joint initiative, News Integrity in the Age of AI. The initiative aims to foster dialogue and cooperation between media organisations and technology platforms, and the announced principles are to serve as a code of practice for all participants. With thousands of public and private media outlets around the world joining the effort, the initiative highlights the shared responsibility of AI developers to ensure that AI systems are trustworthy, safe, and supportive of a reliable news ecosystem. It represents a global call to action to uphold the integrity of news and curb the growing challenge of misinformation.
The five core principles released focus on:
1. Content originators must authorise the use of their content in generative AI tools and models
2. Third parties that benefit from high-quality, up-to-date news content must recognise and acknowledge it
3. Accuracy and attribution must be a focus, making the original sources of news apparent to the public and promoting transparency
4. The plurality of news perspectives should be harnessed, which will help AI-driven tools perform better; and
5. Tech companies are invited to an open dialogue with news outlets, facilitating collaboration to develop standards of transparency, accuracy, and safety.
As this initiative provides a unified platform to address and deliberate on issues affecting the integrity of news, there are also some other technical ways in which misinformation in news caused by AI can be curbed:
1. Encourage the usage of Smaller Generative AI Models: Large Language Models (LLMs) are trained on a wide range of topics, but businesses rarely need such an expanse of information, only the slice that is relevant to them. Sourcing from a narrower context allows better content navigation and reduces the chance of mix-ups.
2. Fighting AI hallucination: This is a phenomenon that causes generative AI (such as chatbots and computer vision tools) to present nonsensical or inaccurate outputs, as the system perceives objects or patterns that are imperceptible or non-existent to human observers. It occurs because the system tries to prioritise language fluency while stitching together information from different sources. To deal with this, one can deploy retrieval augmented generation (RAG), which connects the model to external data sources, such as academic journals or a company’s organisational data, that help it produce more accurate, domain-specific content.
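The RAG pattern can be sketched in a few lines: retrieve the most relevant documents first, then ground the prompt in them. The two-document corpus, the word-overlap retriever, and the prompt template below are all illustrative stand-ins; a production system would use vector embeddings and a real LLM.

```python
# Minimal RAG sketch: retrieval by word overlap, then a grounded prompt.
# Corpus contents and the scoring scheme are illustrative assumptions.
from collections import Counter

corpus = [
    "Company revenue for Q3 2024 grew 12% driven by cloud services.",
    "The journal study links training data quality to hallucination rates.",
]

def score(query: str, text: str) -> int:
    """Count words shared between query and document (a crude stand-in
    for embedding similarity)."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k best-matching documents."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the answer in retrieved context rather than the model's free recall."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How much did revenue grow in Q3 2024?")
```

Because the model is instructed to answer only from retrieved, domain-specific text, it has far less room to hallucinate than when generating from its parameters alone.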
Conclusion
This global call to action marks an important step toward fostering unified efforts to combat misinformation. The set of principles introduced is designed to be adaptable, providing a flexible framework that can evolve to address emerging challenges (through dialogue and discussion), including issues like copyright infringement. While AI offers powerful tools to support the news industry, it is essential to emphasise that human oversight remains crucial. These technological advancements are meant to enhance and augment the work of journalists, not replace it, ensuring that the core values of journalism, such as accuracy and integrity, are preserved in the age of AI.
References
● https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-ethics-8-biggest-concerns
● https://trilateralresearch.com/responsible-ai/using-responsible-ai-to-combat-misinformation
● https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation
● https://2024.jou.ufl.edu/page/ai-and-misinformation
● https://techxplore.com/news/2025-05-ai-counter-misinformation-fact-based.html
● https://www.advanced-television.com/2025/05/06/media-outlets-call-for-ai-companies-news-integrity-protection/
● https://www.ibm.com/think/insights/ai-misinformation

Introduction
Generative AI models are significant consumers of the computational resources and energy required for training and running them. While AI is being hailed as a game-changer, cracks are present underneath the shiny exterior that raise serious concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale AI models, particularly language and image generation systems, rely on data centers powered by electricity, often from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- During the training phase, generative AI has high power consumption: vast amounts of computational power, often drawing on extensive GPU clusters for weeks or even months at a time, consume a substantial amount of electricity. The inference phase that follows, when these models are deployed to answer real-time requests, can also be energy-intensive, especially considering the millions of users of generative AI.
- The energy used for training and deploying AI models often comes from non-renewable sources, which contributes to the carbon footprint. The data centers where the computations for generative AI take place are a significant source of carbon emissions if they rely on fossil fuels for their energy needs. According to a study by MIT, training an AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, up from 3% in 2022, as their energy demand grows by 160%.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
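The scale of these figures can be made concrete with a back-of-the-envelope calculation: energy is roughly GPUs × power draw × hours, inflated by the data center’s overhead factor, and emissions follow from the grid’s carbon intensity. Every number below is an illustrative assumption, not a measurement of any real model.

```python
# Rough training-energy estimate. All inputs are illustrative assumptions.
gpus = 1_000               # assumed cluster size
kw_per_gpu = 0.7           # assumed average draw per GPU, in kW
hours = 30 * 24            # assumed one month of continuous training
pue = 1.5                  # power usage effectiveness: data-center overhead
kg_co2_per_kwh = 0.4       # assumed carbon intensity of a fossil-heavy grid

energy_kwh = gpus * kw_per_gpu * hours * pue
emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh -> {emissions_tonnes:,.1f} tonnes CO2")
```

On these assumed inputs the run consumes about 756,000 kWh and emits roughly 300 tonnes of CO2; swapping in a low-carbon grid (say 0.05 kg CO2/kWh) cuts emissions by a factor of eight, which is why the energy source matters as much as the energy amount.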
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are several examples of companies making efforts to reduce their carbon footprint, cut energy consumption, and become more environmentally friendly in the long run. Some of these efforts are as follows:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations during periods of peak demand.
- Researchers at Microsoft, for instance, have developed a so-called “1-bit” architecture that can make LLMs 10 times more energy efficient than the current leading systems. It simplifies the models’ calculations by reducing weight values to 0 or 1, slashing power consumption without sacrificing performance.
- OpenAI has been working on optimising the efficiency of its models and exploring ways to reduce the environmental impact of AI, including using renewable energy where possible and researching more efficient training methods and model architectures.
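The low-bit idea can be illustrated with a toy sketch: weights are reduced to their signs plus one per-row scale, so dot products need only additions and subtractions with almost no multiplications. This is a generic sign-based binarization scheme for illustration; the details of Microsoft’s architecture differ.

```python
# Toy 1-bit weight quantization: signs plus a per-row scale.
# Generic illustration, not Microsoft's actual scheme.

def binarize(row):
    """Replace each weight with its sign; keep the mean magnitude as a scale."""
    scale = sum(abs(w) for w in row) / len(row)
    return [1 if w >= 0 else -1 for w in row], scale

def matvec_1bit(rows, x):
    """Matrix-vector product where signs only flip additions to subtractions."""
    out = []
    for row in rows:
        signs, scale = binarize(row)
        acc = sum(xi if s > 0 else -xi for s, xi in zip(signs, x))
        out.append(scale * acc)   # one multiply per row instead of per weight
    return out

# A 1x3 weight matrix applied to a 3-vector.
y = matvec_1bit([[0.5, -0.25, 0.75]], [1.0, 2.0, 3.0])
```

The energy saving comes from replacing per-weight floating-point multiplications with sign flips; in practice, accuracy is preserved by training the model with quantization in the loop rather than converting it afterwards.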
Policy Recommendations
We advocate for sustainable product development processes and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only benefit the environment but also contribute to the greater, sustainable development of generative AI. Some suggestions are as follows:
- AI development needs to adopt a climate justice framework, informed by diverse contexts and perspectives, that works in tandem with the UN’s Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialised AI accelerators and next-generation GPUs, can also help mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycles matter too: the World Economic Forum (WEF) projects that by 2050, the total amount of e-waste generated will have surpassed 120 million metric tonnes.
- Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy.
- Implementing federated learning approaches, where models are trained across decentralised devices rather than in centralised data centers, can distribute the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
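The decentralised training idea above is commonly known as federated learning: devices train on their own local data and only model updates travel to a server for averaging. The toy one-parameter model, the data, and the learning rate below are illustrative assumptions.

```python
# Toy federated averaging: fit y = w*x across two devices without
# centralising their data. All values are illustrative.

def local_update(weights, data, lr=0.1):
    """One gradient step on a device's private data for the model y = w*x."""
    grad = sum(2 * (weights[0] * x - y) * x for x, y in data) / len(data)
    return [weights[0] - lr * grad]

def federated_average(updates):
    """The server averages device models instead of collecting raw data."""
    return [sum(ws) / len(updates) for ws in zip(*updates)]

global_model = [0.0]
device_data = [[(1.0, 2.0)], [(2.0, 4.0)]]   # both devices sample y = 2x

for _ in range(50):                           # 50 communication rounds
    updates = [local_update(global_model, d) for d in device_data]
    global_model = federated_average(updates)
# global_model[0] converges toward the true slope 2.0
```

Only the small model update crosses the network each round, so the heavy computation stays distributed across many devices instead of concentrating in a single data center.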
Final Words
The UN Sustainable Development Goals (SDGs) are as crucial for the AI industry as for any other, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges, such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, however, AI consumes enormous amounts of power without using it efficiently. If this continues, AI and its derivatives will strain clean water resources and other power generation sources, adding to the industry’s already large carbon footprint.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/

Introduction
The development of high-speed broadband internet in the 90s triggered a growth in online gaming, particularly in East Asian countries like South Korea and China. This culminated in the proliferation of competitive video game genres, which had otherwise existed mostly in the form of high-score and face-to-face competitions at arcades. The online competitive gaming market has only become bigger over the years, with a separate domain for professional competition, called esports. This industry is projected to reach US$4.3 billion by 2029, driven by advancements in gaming technology, increased viewership, multi-million dollar tournaments, professional leagues, sponsorships, and advertising revenues. However, the industry is still in its infancy and struggles with fairness and integrity issues. It can draw lessons in regulation from the traditional sports market to address these challenges for uniform global growth.
The Growth of Esports
The appeal of online gaming lies in its design innovations, social connectivity, and accessibility. Its rising popularity has culminated in online gaming competitions becoming an industry, formally organised into leagues and tournaments with prize pools reaching millions of dollars. Professional teams now have coaches, analysts and psychologists supporting their players. For scale, the 2024 Esports World Cup (EWC) held in Saudi Arabia had the largest combined prize pool of over US$60 million. Such tournaments can be viewed in arenas and streamed online, and by 2025, around 322.7 million people are forecast to be occasional viewers of esports events.
According to Statista, esports revenue is expected to demonstrate an annual growth rate (CAGR 2024-2029) of 6.59%, resulting in a projected market volume of US$5.9 billion by 2029. Esports has even been recognised in traditional sporting events, debuting as a medal sport at the Asian Games 2022. In 2024, the International Olympic Committee (IOC) announced the Olympic Esports Games, with the inaugural event set to take place in 2025 in Saudi Arabia. Hosting esports events such as the EWC is expected to boost tourism and the host country’s local economy.
The Challenges of Esports Regulation
While the esports ecosystem provides numerous opportunities for growth and partnerships, its under-regulation presents challenges. Without a single governing body like the IOC for the Olympics or FIFA for football to lay down centralised rules, the industry faces challenges such as:
- Integrity issues: Esports are not immune to cheating attempts. Match-fixing, using advanced software hacks, doping (e.g., Adderall use), and the use of other illegal aids are common. DOTA, Counter-Strike, and Overwatch tournaments are particularly susceptible to cheating scandals.
- Players’ Rights: The teams that contractually own professional players provide remuneration and exercise significant control over athletes, who face issues like overwork, a short-lived career, stress, the absence of collective bargaining forums, instability, etc.
- Fragmented National Regulations: While multiple countries have recognised esports as a sport, policies on esports governance and allied regulation vary within and across borders. For example, age restrictions and laws on gambling, taxation, labour, and advertising differ by country. This can create confusion, risks and extra costs, impacting the growth of the ecosystem.
- Cybersecurity Concerns: The esports industry carries substantial prize pools and has growing viewer engagement, which makes it increasingly vulnerable to Distributed Denial of Service (DDoS) attacks, malware, ransomware, data breaches, phishing, and account hijacking. Tournament organisers must prioritise investments in secure network infrastructure, perform regular security audits, encrypt sensitive data, implement network monitoring, utilise API penetration testing tools, deploy intrusion detection systems, and establish comprehensive incident response and mitigation plans.
Proposals for Esports Regulation: Lessons from Traditional Sports
To address the most urgent challenges to the esports industry as outlined above, the following interventions, drawing on the governance and regulatory frameworks of traditional sports, can be made:
- Need for a Centralised Esports Governing Body: Unlike traditional sports, the esports landscape lacks a Global Sports Organisation (GSO) to oversee its governance. Instead, it is handled de facto by game publishers with industry interests different from those of traditional GSOs. Publishers’ primary source of revenue is not esports, which means they can adopt policies unsuitable for its growth but good for their core business. Appointing a centralised governing body with the power to balance the interests of multiple stakeholders and manage issues like unregulated gambling, athlete health, and integrity challenges is a logical next step for this industry.
- Gambling/Betting Regulations: While national laws on gambling/betting vary, GSOs establish uniform codes of conduct that bind participants contractually, ensuring consistent ethical standards across jurisdictions. Similar rules in esports are managed by individual publishers/tournament organisers, leading to inconsistencies and legal grey areas. The esports ecosystem needs standardised regulation to preserve fair play codes and competitive integrity.
- Anti-Doping Policies: Adderall abuse among young players seeking to enhance performance is increasing as the monetary stakes in esports rise. The industry must establish a global framework similar to the World Anti-Doping Code, which, in conjunction with eight international standards, harmonises anti-doping policies across all traditional sports and countries in the world. The esports industry should either adopt this code or develop its own policy to curb stimulant abuse.
- Norms for Participant Health: Professional players start around age 16 or 17 and tend to retire around 24. They may be subjected to rigorous practice hours and stringent contracts by the teams that own them. There is a need for international norm-setting by a federation overseeing the protection of underage players. Enforcement of these norms can be one of the responsibilities of a decentralised system comprising country and state-level bodies. This also ensures fair play governance.
- Respect and Diversity: While esports is technologically accessible, it still has room for better representation of diverse gender identities, age groups, abilities, races, ethnicities, religions, and sexual orientations. Embracing greater diversity and inclusivity would benefit the industry's growth and enhance its potential to foster social connectivity through healthy competition.
Conclusion
The development of the world’s first esports island in Abu Dhabi gives impetus to the rapidly growing esports industry with millions of fans across the globe. To sustain this momentum, stakeholders must collaborate to build a strong governance framework that protects players, supports fans, and strengthens the ecosystem. By learning from traditional sports, esports can establish centralised governance, enforce standardised anti-doping measures, safeguard athlete rights, and promote inclusivity, especially for young and diverse communities. Embracing regulation and inclusivity will not only enhance esports' credibility but also position it as a powerful platform for unity, creativity, and social connection in the digital age.
Resources
- https://www.statista.com/outlook/amo/esports/worldwide
- https://www.statista.com/statistics/490480/global-esports-audience-size-viewer-type/
- https://asoworld.com/blog/global-esports-market-report-2024/#:~:text=A%20key%20driver%20of%20this%20growth%20is%20the%20Sponsorship%20%26%20Advertising,US%24288.9%20million%20in%202024.
- https://lawschoolpolicyreview.com/2023/12/28/a-case-for-recognising-professional-esports-players-as-employees-of-their-game-publisher/
- https://levelblue.com/blogs/security-essentials/the-hidden-risks-of-esports-cybersecurity-on-the-virtual-battlefield
- https://medium.com/@heyimJoost/esports-governance-and-its-failures-9ac7b3ec37ea
- https://americanaddictioncenters.org/blog/esports-adderall-abuse#:~:text=A%202020%20piece%20by%20the,it%20because%20everyone%20was%20using