#FactCheck: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana
Executive Summary:
A viral video claims to show a massive cumulonimbus cloud over Gurugram, Haryana, and Delhi NCR on 3rd September 2025. However, our research reveals the claim is misleading. A reverse image search traced the visuals to Lviv, Ukraine, dating back to August 2021. The footage matches earlier reports and was even covered by the Ukrainian news outlet 24 Kanal, which published the story under the headline “Lviv Covered by Unique Thundercloud: Amazing Video”. Thus, the viral claim linking the phenomenon to a recent event in India is false.
Claim:
A viral video circulating on social media claims to show a massive cloud formation over Gurugram, Haryana, and the Delhi NCR region on 3rd September 2025. The cloud appears to be a cumulonimbus formation, which is typically associated with heavy rainfall, thunderstorms, and severe weather conditions.

Fact Check:
After conducting a reverse image search on key frames of the viral video, we found matching visuals from videos that attribute the phenomenon to Lviv, a city in Ukraine. These videos date back to August 2021, thereby debunking the claim that the footage depicts a recent weather event over Gurugram, Haryana, or the Delhi NCR region.
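Reverse image search tools typically match key frames by comparing perceptual hashes, compact fingerprints that stay stable under re-encoding and compression. The sketch below is a deliberately tiny pure-Python illustration of one such fingerprint, a difference hash (dHash); real search pipelines are far more sophisticated, and the pixel grids here are invented toy data.

```python
# Illustrative sketch of a difference hash (dHash), the kind of perceptual
# fingerprint image-matching systems use to find near-duplicate frames.
# Toy data only; real reverse image search uses much more robust pipelines.

def dhash(pixels):
    """pixels: 2D list of grayscale values. Emits one bit per horizontal
    neighbour pair: 1 if the left pixel is brighter than the right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return "".join(bits)

def hamming(a, b):
    """Number of differing bits; a small distance means near-duplicate frames."""
    return sum(x != y for x, y in zip(a, b))

frame_a = [[10, 20, 30], [90, 80, 70]]     # a key frame from the viral video
frame_b = [[12, 22, 28], [88, 82, 69]]     # same frame, lightly re-encoded
frame_c = [[200, 10, 150], [5, 240, 30]]   # an unrelated frame

print(hamming(dhash(frame_a), dhash(frame_b)))  # → 0 (match)
print(hamming(dhash(frame_a), dhash(frame_c)))  # → 2 (different)
```

Because the hash encodes only brightness gradients, a re-uploaded or recompressed copy of the 2021 Lviv footage still collides with the original, which is what lets a reverse image search surface the older source.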


Further research revealed that the Ukrainian news channel 24 Kanal had reported on the Lviv thundercloud phenomenon in August 2021. The report was published under the headline “Lviv Covered by Unique Thundercloud: Amazing Video” (original in Russian, translated into English).

Conclusion:
The viral video does not depict a recent weather event in Gurugram or Delhi NCR, but rather an old incident from Lviv, Ukraine, recorded in August 2021. Verified sources, including Ukrainian media coverage, confirm this. Hence, the circulating claim is misleading and false.
- Claim: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana.
- Claimed On: Social Media
- Fact Check: False and Misleading.

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help counter misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking of all deep synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These markers can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards let users trace the source and provenance of a piece of content.
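As a toy illustration of how an imperceptible watermark can live inside pixel data, the sketch below hides a bit string in the least significant bits (LSBs) of pixel values. This is not how SynthID or any production system actually works (those use far more robust, ML-based embeddings); it is only meant to convey the basic idea of an invisible, machine-readable mark.

```python
# Toy illustration of an imperceptible watermark: hiding a bit string in the
# least significant bits (LSBs) of 8-bit pixel values. Production systems
# such as SynthID use far more robust methods; this only conveys the idea.

def embed(pixels, bits):
    """Overwrite the LSB of each pixel with one watermark bit."""
    return [(p & ~1) | int(b) for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read back the first n LSBs as the watermark."""
    return "".join(str(p & 1) for p in pixels[:n])

original = [120, 133, 98, 201, 54, 77, 240, 16]
mark = "10110010"  # hypothetical identifier signalling "AI-generated"

watermarked = embed(original, mark)
print(extract(watermarked, len(mark)))  # → 10110010
# Each pixel changes by at most 1 out of 255, so the mark is invisible:
print(max(abs(a - b) for a, b in zip(original, watermarked)))  # → 1
```

The fragility of this scheme is also instructive: any re-quantisation of the pixels destroys the LSBs, which is exactly why robust schemes spread the signal across many redundant, perceptually insignificant features instead.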
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: skilled adversaries can use cropping, editing or AI software to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is another issue: embedding and authenticating watermarks across billions of pieces of online content requires enormous computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which verify authenticity without sacrificing privacy. These new techniques hold promise for overcoming current technical deficiencies and making watermarking more secure.
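The core idea behind blockchain-based content authentication can be sketched as a hash chain: each provenance record commits to the previous one, so altering any earlier entry invalidates everything after it. The sketch below is a hypothetical, minimal illustration of that property, not any real provenance standard or ledger.

```python
import hashlib
import json

# Minimal sketch of blockchain-style content authentication: each log entry
# hashes the previous entry, so altering any record breaks the whole chain.
# Illustration only; not a real provenance standard.

def make_entry(prev_hash, content):
    """Create a provenance record committing to both the content and the
    previous record's hash."""
    record = {
        "prev": prev_hash,
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
    }
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_chain(chain):
    """Recompute every entry hash and check each link to its predecessor."""
    for i, entry in enumerate(chain):
        body = {"prev": entry["prev"], "content_hash": entry["content_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["entry_hash"] != expected:
            return False
        if i > 0 and entry["prev"] != chain[i - 1]["entry_hash"]:
            return False
    return True

chain = [make_entry("0" * 64, "original image v1")]
chain.append(make_entry(chain[-1]["entry_hash"], "original image v2, cropped"))
print(verify_chain(chain))           # → True
chain[0]["content_hash"] = "forged"  # tamper with the history
print(verify_chain(chain))           # → False
```

The point of the design is that verification is cheap and local: anyone holding the chain can detect tampering without trusting the party who published it.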
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or understand the label. This mirrors a broader challenge in dealing with misinformation: it is not sufficient merely to flag fake content; users must also be taught to think critically about the information they consume. Public education campaigns, digital media literacy programmes and watermark labels embedded in user-friendly UI elements are necessary to make this technology genuinely effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might necessitate the embedding of metadata that will disclose the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists utilising AI tools for creative or informative reasons. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to achieve a balance by employing privacy-protection watermarking strategies that verify the origin of the content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
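One simple building block for privacy-preserving attribution (much weaker than a true zero-knowledge proof, and offered here only as intuition) is a hash commitment: the watermark carries a commitment to the creator's identity, which stays hidden unless and until the creator chooses to reveal it. Everything below, including the example identities, is hypothetical.

```python
import hashlib
import secrets

# Intuition-only sketch of privacy-preserving attribution via a salted hash
# commitment. A real zero-knowledge watermarking scheme is far stronger;
# this just shows "prove ownership later without revealing identity now".

def commit(identity):
    """Creator publishes only the commitment; salt and identity stay private."""
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + identity).encode()).hexdigest()
    return commitment, salt

def verify(commitment, salt, identity):
    """Later, the creator reveals (salt, identity) to prove authorship."""
    return hashlib.sha256((salt + identity).encode()).hexdigest() == commitment

commitment, salt = commit("journalist@example.org")  # hypothetical identity
print(verify(commitment, salt, "journalist@example.org"))  # → True
print(verify(commitment, salt, "impostor@example.org"))    # → False
```

The random salt is what protects a whistleblower or activist: without it, an adversary could simply hash a list of suspected identities and compare against the published commitment.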
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, AI-amplified misinformation about vaccines, treatments and public health interventions significantly influenced public behaviour and policy uptake. Watermarked content would have helped distinguish authentic sources from manipulated media, protecting public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework, backed by Adobe, Microsoft and others, embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed with its Imagen text-to-image model, invisibly watermarks AI-generated images in a way designed to resist tampering. The Partnership on AI (PAI) is also taking a leadership role by developing ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can guide governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media are an essential step toward a safer, more credible digital world. As synthetic media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance and accountability grows. Governments, platforms and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, enforceable and privacy-friendly. India in particular is at a turning point, with courts calling for action and regulatory agencies beginning to take on the challenge. By drawing on global lessons, adopting best-in-class watermarking frameworks and promoting public awareness, the country can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency

Misinformation is a scourge in the digital world, making the most mundane experiences fraught with risk. The threat is considerably heightened in conflict settings, especially in the modern era, where geographical borders blur and civilians and conflict actors alike can take to the online realm to discuss, and influence, conflict events. Propaganda can complicate the narrative and distract from the humanitarian crises affecting civilians, while also posing a serious threat to security operations and law and order efforts. Sensationalised reports of casualties and manipulated portrayals of military actions contribute to a cycle of violence and suffering.
A 2023 study conducted at MIT found that the mere thought of sharing news on social media reduced the ability to judge whether a story was true or false; the urge to share outweighed the consideration of accuracy. Cross-border misinformation has become a critical issue in today's interconnected world, driven by the rise of digital communication platforms. Combating it effectively requires coordinated international policy frameworks and cooperation between governments, platforms, and global institutions.
The Global Nature of Misinformation
Cross-border misinformation is false or misleading information that spreads across countries. Creators based outside a country's borders amplify such content through social media and digital platforms, making them a key source of misinformation. Misinformation can interfere with elections, create serious misconceptions about health concerns, as witnessed during the COVID-19 pandemic, or even inflame military conflicts.
The primary challenge in countering cross-border misinformation is the difference in national policies, legal frameworks and platform governance rules across jurisdictions. Examining existing international frameworks, such as cybersecurity treaties and the data-sharing agreements used against financial crimes, may help in addressing cross-border misinformation effectively. By adapting these approaches to the digital information ecosystem, nations could strengthen their collective response to misinformation that spreads across borders. Global institutions like the United Nations, or regional bodies like the EU and ASEAN, can work together to set a unified response and uniform international standards for regulation dealing specifically with misinformation.
Current National and Regional Efforts
Many countries have taken action to deal with misinformation within their borders. Some examples include:
- The EU’s Digital Services Act has been instrumental in regulating online intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, etc. The legislation aims to prevent illegal and harmful activities online and the spread of disinformation.
- The primary legislation governing cyberspace in India is the IT Act of 2000 and its corresponding rules (IT Rules, 2023), which impose strict requirements on social media platforms to counter misinformation and enable traceability of the creator responsible for the origin of misinformation. Platforms have to conduct due diligence, failing which they risk losing their safe harbour protection. The recently enacted DPDP Act of 2023 indirectly addresses the misuse of personal data that can contribute to the creation and spread of misinformation. The proposed Digital India Act is also expected to focus on “user harms” specific to the online world.
- In the U.S., the right to editorial discretion and Section 230 of the Communications Decency Act place the responsibility for regulating misinformation on private actors such as social media platforms. The US government has not created a specific framework addressing misinformation, instead encouraging platforms to adopt voluntary, independent policies for regulating misinformation on their services.
The common gap area across these policies is the absence of a standardised, global framework for addressing cross-border misinformation which results in uneven enforcement and dependence on national regulations.
Key Challenges in Achieving International Cooperation
Some of the key challenges identified in achieving international cooperation to address cross-border misinformation are as follows:
- Geopolitical tensions can emerge due to the differences in political systems, priorities, and trust issues between countries that hinder attempts to cooperate and create a universal regulation.
- The diversity in approaches to internet governance and freedom of speech across countries complicates the matters further.
- Technical and legal obstacles around sovereignty, jurisdiction and enforcement complicate the monitoring and removal of cross-border misinformation.
CyberPeace Recommendations
- The UN Global Principles for Information Integrity: Recommendations for Multi-stakeholder Action, unveiled on 24 June 2024, are a welcome step toward addressing cross-border misinformation. They can act as a stepping stone for developing a framework for international cooperation on misinformation, drawing inspiration from other successful models such as climate change agreements and the international criminal law framework.
- Collaborations such as public-private partnerships between governments, tech companies and civil society can enhance transparency, data sharing and accountability in tackling cross-border misinformation.
- Engaging in capacity building and technology transfers in less developed countries would help to create a global front against misinformation.
Conclusion
We are in an era where misinformation knows no borders and the need for international cooperation has never been more urgent. Global democracies are exploring solutions, both regulatory and legislative, to limit the spread of misinformation, however, these fragmented efforts fall short of addressing the global scale of the problem. Establishing a standardised, international framework, backed by multilateral bodies like the UN and regional alliances, can foster accountability and facilitate shared resources in this fight. Through collaborative action, transparent regulations, and support for developing nations, the world can create a united front to curb misinformation and protect democratic values, ensuring information integrity across borders.
References
- https://economics.mit.edu/sites/default/files/2023-10/A%20Model%20of%20Online%20Misinformation.pdf
- https://www.indiatoday.in/global/story/in-the-crosshairs-manufacturing-consent-and-the-erosion-of-public-trust-2620734-2024-10-21
- https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/
- https://www.article19.org/resources/un-article-19-global-principles-for-information-integrity/

Introduction
India's Computer Emergency Response Team (CERT-In) has unfurled its banner of digital hygiene, heralding the initiative 'Cyber Swachhta Pakhwada,' a clarion call to the nation's citizens to fortify their devices against the insidious botnet scourge. The government's Cyber Swachhta Kendra (CSK)—a Botnet Cleaning and Malware Analysis Centre—stands as a bulwark in this ongoing struggle. It is a digital fortress, conceived under the aegis of the National Cyber Security Policy, with a singular vision: to engender a secure cyber ecosystem within India's borders. The CSK's mandate is clear and compelling—to detect botnet infections within the subcontinent and to notify, enable cleaning, and secure systems of end users to stymie further infections.
What are Bots?
Bots are automated rogue software programs crafted with malevolent intent, lurking in the shadows of the internet. They are the harbingers of harm, capable of data theft, disseminating malware, and orchestrating cyberattacks, among other digital depredations.
A botnet infection is like a parasitic infestation within the electronic sinews of our devices—smartphones, computers, tablets—transforming them into unwitting soldiers in a hacker's malevolent legion. Once ensnared within the botnet's web, these devices become conduits for a plethora of malicious activities: the dissemination of spam, the obstruction of communications, and the pilfering of sensitive information such as banking details and personal credentials.
How, then, does one's device fall prey to such a fate? The vectors are manifold: an infected email attachment opened in a moment of incaution, a malicious link clicked in haste, a file downloaded from the murky depths of an untrusted source, or the use of an unsecured public Wi-Fi network. Each action can be the key that unlocks the door to digital perdition.
In an era where malware attacks and scams proliferate like a plague, the security of our personal devices has ascended to a paramount concern. To address this exigency and to aid individuals in the fortification of their smartphones, the Department of Telecommunications (DoT) has unfurled a suite of free bot removal tools. The government's outreach extends into the ether, dispatching SMS notifications to the populace and disseminating awareness of these digital prophylactics.
Stay Cyber Safe
'To protect your device from botnet infections and malware, the Government of India, through CERT-In, recommends downloading the Free Bot Removal Tool at csk.gov.in.' This SMS is not merely a reminder but a beacon guiding users to a safe harbour in the tumultuous seas of cyberspace.
Cyber Swachhta Kendra
The Cyber Swachhta Kendra portal emerges as an oasis in the desert of digital threats, offering free malware detection tools to the vigilant netizen. This portal, also known as the Botnet Cleaning and Malware Analysis Centre, operates in concert with Internet Service Providers (ISPs) and antivirus companies, under the stewardship of CERT-In. It is a repository of knowledge and tools, a digital armoury where users can arm themselves against the spectres of botnet infection.
To extricate your device from the clutches of a botnet or to purge the bots and malware that may lurk within, one must embark on a journey to the CSK website. There, under the 'Security Tools' tab, lies the arsenal of antivirus companies, each offering its own bot removal tool. For Windows users, the choices include stalwarts such as eScan Antivirus, K7 Security, and Quick Heal. Android users, meanwhile, can venture to the Google Play Store and seek out the 'eScan CERT-IN Bot Removal' tool or 'M-Kavach2,' a digital shield forged by C-DAC Hyderabad.
Once the chosen app is ensconced within your device, it will commence its silent vigil, scanning the digital sinews for any trace of malware, excising any infections with surgical precision. But the CSK portal's offerings extend beyond mere bot removal tools; it also proffers other security applications such as 'USB Pratirodh' and 'AppSamvid.' These tools are not mere utilities but sentinels standing guard over the sanctity of our digital lives.
USB Pratirodh
'USB Pratirodh' is a desktop guardian, regulating the ingress and egress of removable storage media. It demands authentication with each new connection, scanning for malware, encrypting data, and allowing changes to read/write permissions. 'AppSamvid,' on the other hand, is a gatekeeper for Windows users, permitting only trusted executables and Java files to run, safeguarding the system from the myriad threats that lurk in the digital shadows.
Conclusion
In this odyssey through the digital safety frontier, the Cyber Swachhta Kendra stands as a testament to the power of collective vigilance. It is a reminder that in the vast, interconnected web of the internet, the security of one is the security of all. As we navigate the dark corners of the internet, let us equip ourselves with knowledge and tools, and may our devices remain steadfast sentinels in the ceaseless battle against the unseen adversaries of the digital age.
References
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms
- https://indianexpress.com/article/technology/tech-news-technology/cyber-swachhta-kendra-free-botnet-detection-removal-tools-digital-india-8650425/