#FactCheck: False Claims of Fireworks in Dubai International Stadium celebrating India’s Champions Trophy Victory 2025
Executive Summary:
A misleading video claiming to show fireworks at Dubai International Cricket Stadium following India’s 2025 ICC Champions Trophy win has gone viral, causing confusion among viewers. Our investigation confirms that the video is unrelated to the cricket tournament. It actually depicts the fireworks display from the December 2024 Arabian Gulf Cup opening ceremony at Kuwait’s Jaber Al-Ahmad Stadium. This incident underscores the rapid spread of outdated or misattributed content, particularly in relation to significant sports events, and highlights the need for vigilance in verifying such claims.

Claim:
The circulated video claims to show fireworks and a drone display at Dubai International Cricket Stadium after India's win in the ICC Champions Trophy 2025.

Fact Check:
A reverse image search of the most prominent keyframes in the viral video traced it back to the opening ceremony of the 26th Arabian Gulf Cup, hosted at Jaber Al-Ahmad International Stadium in Kuwait on December 21, 2024. The fireworks seen in the viral video match footage from that event. A closer look at the stadium's architecture also confirms that the venue is not Dubai International Cricket Stadium, as asserted. Official sources and media outlets further verify that no such fireworks celebration took place in Dubai after India's ICC Champions Trophy 2025 win. The video has therefore been misattributed and shared with incorrect context.
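Reverse image search engines match keyframes by comparing compact perceptual fingerprints rather than raw pixels, so the same footage is recognised even after re-encoding or brightness changes. A toy sketch of one such fingerprint, the average hash, on hypothetical 8×8 grayscale frames (this illustrates the idea only, not any particular search engine's algorithm):

```python
# Toy average-hash (aHash) sketch of how keyframe matching works.
# Real systems use more robust perceptual hashes over billions of
# indexed images; this is an illustration only.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each pixel contributes one bit: brighter than average or not.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits; small distance means near-identical frames."""
    return bin(a ^ b).count("1")

# Two nearly identical keyframes (the second has a slight brightness shift)
frame_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
frame_b = [[min(255, v + 3) for v in row] for row in frame_a]

dist = hamming_distance(average_hash(frame_a), average_hash(frame_b))
print(dist)  # 0: the hashes agree despite the brightness shift
```

Because the hash depends only on each pixel's relation to the frame average, small global edits leave the fingerprint unchanged, which is why misattributed footage is usually traceable to its original source.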

Fig: Claimed Stadium Picture

Conclusion:
A viral video claiming to show fireworks at Dubai International Cricket Stadium after India's 2025 ICC Champions Trophy win is misleading. Our research confirms the video is from the December 2024 Arabian Gulf Cup opening ceremony at Kuwait’s Jaber Al-Ahmad Stadium. A reverse image search and architectural analysis of the stadium debunk the claim, with official sources verifying no such celebration took place in Dubai. The video has been misattributed and shared out of context.
- Claim: Fireworks in Dubai celebrate India’s Champions Trophy win.
- Claimed On: Social Media
- Fact Check: False and Misleading

What Is a VPN and its Significance
A Virtual Private Network (VPN) creates a secure and reliable network connection between a device and the internet. It hides your IP address by rerouting your traffic through the VPN’s host servers. For example, if you connect to a US server, you appear to be browsing from the US, even if you’re in India. It also encrypts the data being transferred in real time so that it is not decipherable by third parties such as ad companies, governments, or cybercriminals.
All online activity leaves a digital footprint that is tracked for data collection and surveillance, increasingly jeopardizing user privacy. VPNs are thus a powerful tool for enhancing the privacy and security of users, businesses, governments and critical sectors. They also help protect users on public Wi-Fi networks (for example, at airports and hotels), journalists, activists and whistleblowers, remote workers and businesses, citizens in high-surveillance states, and researchers by affording them a degree of anonymity.
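The rerouting-plus-encryption idea described above can be sketched with a toy keystream cipher. This is a deliberately simplified, insecure illustration using a hashed counter as the keystream; real VPN protocols such as WireGuard or OpenVPN use vetted ciphers (ChaCha20-Poly1305, AES-GCM) with authenticated key exchange:

```python
# Toy illustration of a VPN tunnel's encrypt-then-forward step.
# NOT secure -- it only shows why an on-path observer (ISP, open
# Wi-Fi snooper) sees ciphertext instead of the real request.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key + nonce + counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def tunnel_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

# Hypothetical session key negotiated between the device and VPN server
key, nonce = b"shared-vpn-session-key", b"unique-nonce-01"
request = b"GET /private HTTP/1.1"

ciphertext = tunnel_encrypt(key, nonce, request)    # what the ISP sees
recovered = tunnel_encrypt(key, nonce, ciphertext)  # XOR is its own inverse
print(recovered == request)  # True: only the tunnel endpoints can decrypt
```

The same property holds in real tunnels: anyone between the device and the VPN server sees only encrypted packets addressed to the server, which is also why the user's true destination and IP stay hidden from local observers.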
What VPNs Do and Don’t
- What VPNs Can Do:
- Mask your IP address to enhance privacy.
- Encrypt data to protect against hackers, especially on public Wi-Fi.
- Bypass geo-restrictions (e.g., access streaming content blocked in India).
- What VPNs Cannot Do:
- Make you completely anonymous and protect your identity (websites can still track you via cookies, browser fingerprinting, etc.).
- Protect against malware or phishing.
- Prevent law enforcement from tracing you if they have access to VPN logs.
- Free VPNs, in particular, often share logs with third parties.
VPNs in the Context of India’s Privacy Policy Landscape
In April 2022, CERT-In (the Indian Computer Emergency Response Team) released Directions under Section 70B(6) of the Information Technology (“IT”) Act, 2000, mandating VPN service providers to store customer data such as “validated names of subscribers/customers hiring the services, period of hire including dates, IPs allotted to / being used by the members, email address and IP address and time stamp used at the time of registration/onboarding, the purpose for hiring services, validated address and contact numbers, and the ownership pattern of the subscribers/customers hiring services” collected as part of their KYC (Know Your Customer) requirements, for a period of five years, even after the subscription has been cancelled. While this directive was issued to aid cybersecurity investigations, it undermines the core purpose of VPNs: anonymity and privacy. It also gave operators very little time to carry out compliance measures.
Following this, operators such as NordVPN, ExpressVPN, ProtonVPN, and others pulled their physical servers out of India, and now use virtual servers hosted abroad (e.g., in Singapore) with Indian IP addresses. While the CERT-In Directions have extra-territorial applicability, virtual servers are able to bypass them since they physically operate from a foreign jurisdiction. This means that they are effectively not liable to provide user information to Indian investigative agencies, defeating the whole purpose of the directive. To counter this, the Indian government could potentially block non-compliant VPN services in the future. Further, there are concerns about overreach since the Directions are unclear about how long CERT-In can retain the data it acquires from VPN operators, how that data will be used and safeguarded, and the procedure for holding VPN operators responsible for compliance.
Conclusion: The Need for a Privacy-Conscious Framework
The CERT-In Directions reflect a governance model which, by prioritizing security over privacy, compromises on safeguards like independent oversight or judicial review to balance the two. The policy design creates a lose-lose situation: virtual VPN services remain available, while the government loses oversight. If anything, this can make it harder for the government to track suspicious activity. It also violates the principle of proportionality established in the landmark privacy judgment, Puttaswamy v. Union of India (II), by giving government agencies the power to collect excessive VPN data on any user. These issues underscore the need for a national-level, privacy-conscious cybersecurity framework that informs other policies on data protection and cybercrime investigations. In the meantime, VPN users are advised to choose reputable providers, ensure strong encryption, and follow best practices to maintain online privacy and security.
References
- https://www.kaspersky.com/resource-center/definitions/what-is-a-vpn
- https://internetfreedom.in/top-secret-one-year-on-cert-in-refuses-to-reveal-information-about-compliance-notices-issued-under-its-2022-directions-on-cybersecurity/
- https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice and produce realistic images and videos of almost anyone doing pretty much anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). The technology relies on advanced machine-learning algorithms to create hyper-realistic videos from a person’s face, voice or likeness. The use and progression of deepfake technology holds vast potential, both benign and malicious.
One example is the NGO Malaria No More, which in 2019 used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under Section 356 of the Bharatiya Nyaya Sanhita, 2023, which governs defamation.
- The Information Technology Act, 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, aptly titled the ‘Digital India Act’, aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Mandatory disclosures for synthetic media should be introduced, informing viewers that the content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
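The disclosure mandate above implies machine-readable labels that platforms can verify before deciding how to flag a post. A minimal sketch, assuming a hypothetical shared signing key between a generator and a labelling authority (real provenance standards such as C2PA use public-key signatures instead; the function and key names here are illustrative only):

```python
# Minimal sketch of a signed synthetic-media disclosure label.
# Simplification: a shared-secret HMAC stands in for the public-key
# signatures used by real provenance schemes such as C2PA.
import hashlib
import hmac
import json

SIGNING_KEY = b"registry-shared-secret"  # hypothetical authority key

def label_content(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed manifest declaring the content AI-generated."""
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """A platform checks both the signature and the content hash."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return ok_sig and ok_hash

video = b"...synthetic video bytes..."
label = label_content(video, "example-model-v1")
print(verify_label(video, label))              # True for untampered content
print(verify_label(b"edited" + video, label))  # False once content changes
```

Binding the label to a hash of the content means a disclosure cannot simply be copied onto different footage, which is the property any workable labelling mandate would need.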
Deepfake, Global Overview
There has been growing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the distribution of deceptive deepfakes in electoral contexts, while Virginia has passed a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation by requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures highlight a global recognition of the risks that deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes being a significant source of risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake technology in specific laws and penalising the same, mandating transparency and enabling public awareness, the legislators have a challenge ahead of them. National and international efforts have highlighted the urgent need for a comprehensive framework to enable measures to curb the misuse and also promote responsible innovation. Cooperation during these trying times will be important to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first, it feels like a huge win for India’s digital generation. Students, professionals, and entrepreneurs today can tap into some of the world’s most powerful AI tools without paying a rupee. It feels like a digital revolution unfolding in real time. Yet, beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access isn’t without strings attached. It has implications for how India handles data privacy, the fairness of competition, and the pace of homegrown AI innovation.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples include Perplexity, ChatGPT Go and Gemini, which are offering free subscriptions to Indian users.
- Gathering local data: Every interaction, prompt, question, or language pattern helps these models learn from larger datasets and improve their offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product.” The same goes for these AI platforms: they monetise user data by analysing chats and behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific laws governing how such data is stored, processed, or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives, even when they become paid, will be difficult. This approach mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers by democratising knowledge and making advanced technology available to everyone, from students and professionals to small businesses. It’s changing how people learn, think and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection laws in action, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder, especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s Generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet, this growth story may become uneven if global players dominate early.
Domestic AI startups face multiple hurdles: limited funding, high compute costs, and difficulty in accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies such as offering free or heavily subsidised access could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before using their data for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers such as Sarvam AI, an Indian company chosen by the government to build the country's first homegrown large language model (LLM) under the IndiaAI Mission.
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should build national data centres to train indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html#
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/#