#FactCheck: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana
Executive Summary:
A viral video claims to show a massive cumulonimbus cloud over Gurugram, Haryana, and Delhi NCR on 3rd September 2025. However, our research reveals the claim is misleading. A reverse image search traced the visuals to Lviv, Ukraine, dating back to August 2021. The footage matches earlier reports and was even covered by the Ukrainian news outlet 24 Kanal, which published the story under the headline “Lviv Covered by Unique Thundercloud: Amazing Video”. Thus, the viral claim linking the phenomenon to a recent event in India is false.
Claim:
A viral video circulating on social media claims to show a massive cloud formation over Gurugram, Haryana, and the Delhi NCR region on 3rd September 2025. The cloud appears to be a cumulonimbus formation, which is typically associated with heavy rainfall, thunderstorms, and severe weather conditions.

Fact Check:
After conducting a reverse image search on key frames of the viral video, we found matching visuals from videos that attribute the phenomenon to Lviv, a city in Ukraine. These videos date back to August 2021, thereby debunking the claim that the footage depicts a recent weather event over Gurugram, Haryana, or the Delhi NCR region.
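As a rough illustration of the key-frame step, the sketch below selects evenly spaced frame indices from a clip; those frames can then be exported as images (for example with OpenCV's VideoCapture) and uploaded to a reverse image search engine. The frame rate, duration, and sampling interval here are hypothetical values for demonstration only.

```python
def keyframe_indices(total_frames, fps, every_seconds=2.0):
    """Return frame indices sampled every `every_seconds` of video.

    These indices mark the frames worth exporting as still images
    for a reverse image search.
    """
    # Number of frames between samples, at least 1.
    step = max(1, int(round(fps * every_seconds)))
    return list(range(0, total_frames, step))

# A hypothetical 30 fps clip lasting 10 seconds (300 frames), sampled every 2 s:
print(keyframe_indices(300, 30, 2.0))  # -> [0, 60, 120, 180, 240]
```

In practice, sampling one frame every couple of seconds is usually enough to catch the distinctive visuals (such as an unusual cloud formation) that a reverse image search can match against older uploads.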


Further research revealed that the Ukrainian news channel 24 Kanal had reported on the Lviv thundercloud phenomenon in August 2021. The report was published under the headline “Lviv Covered by Unique Thundercloud: Amazing Video” (original in Russian, translated into English).

Conclusion:
The viral video does not depict a recent weather event in Gurugram or Delhi NCR, but rather an old incident from Lviv, Ukraine, recorded in August 2021. Verified sources, including Ukrainian media coverage, confirm this. Hence, the circulating claim is misleading and false.
- Claim: Old Thundercloud Video from Lviv, Ukraine (2021) Falsely Linked to Delhi NCR, Gurugram and Haryana.
- Claimed On: Social Media
- Fact Check: False and Misleading.

AI systems have grown in both popularity and complexity. They are enhancing accessibility for all, including persons with disabilities (PWDs), by revolutionising sectors including healthcare, education, and public services. We are at the stage where AI-powered solutions are being created that can help people with mental, physical, visual, or hearing impairments perform both everyday and complex tasks.
Generative AI is now being used to amplify human capability. Speech-to-text and image-recognition tools are facilitating communication and interaction for visually or hearing-impaired individuals, and smart prosthetics are providing tailored support. Unfortunately, even with these developments, PWDs continue to face challenges. It is therefore important to balance innovation with ethical considerations, ensuring that these technologies are designed with privacy, equity, and inclusivity in mind.
Access to Tech: the Barriers Faced by PWDs
PWDs face several barriers while accessing technology, and identifying these challenges is important because computer access, through both hardware and software, has become a norm of daily life. Website functions that only work when users click with a mouse, self-service kiosks without accessibility features, touch screens without screen-reader software or tactile keyboards, and out-of-order equipment, such as lifts, captioning mirrors, and description headsets, are just some of the difficulties they face day to day.
While they are helpful, much of the current technology doesn’t fully address all disabilities. For example, many assistive devices focus on visual or mobility impairments, but they fall short of addressing cognitive or sensory conditions. In addition to this, these solutions often lack personalisation, making them less effective for individuals with diverse needs. AI has significant potential to bridge this gap. With adaptive systems like voice assistants, real-time translation, and personalised features, AI can create more inclusive solutions, improving access to both digital and physical spaces for everyone.
The Importance of Inclusive AI Design
Creating an inclusive AI design is important. It ensures that PWDs are not excluded from technological advancements because of their impairments. The concept of ‘inclusive’ or ‘universal’ design promotes creating products and services usable by the widest possible range of people. Tech developers have an ethical responsibility to create AI advancements that serve everyone. Accessibility features should be built into the core design and treated as standard practice rather than an afterthought. Bias in AI development, which often stems from non-representative data or flawed assumptions, can lead to systems that overlook or poorly serve PWDs. If AI algorithms are trained on limited or biased data, they risk excluding marginalised groups, making ethical, inclusive design a necessity for equity and accessibility.
Regulatory Efforts to Ensure Accessible AI
In India, the Rights of Persons with Disabilities Act, 2016 underscores the need to provide PWDs with equal access to technology. Subsequently, the DPDP Act, 2023 addresses data privacy for persons with disabilities under Section 9, which governs the processing of their data.
At the international level, the recently enacted EU AI Act mandates transparent, safe, and fair access to AI systems, including measures related to accessibility.
In the US, the Americans with Disabilities Act of 1990 and Section 508 of the Rehabilitation Act of 1973 (added by the 1998 amendment) are the primary legislation promoting digital accessibility in public services.
Challenges in Implementing Regulations for AI Accessibility for PWDs
Defining the term ‘inclusive AI’ is itself a challenge. When regulations and compliance requirements for AI accessibility are built on an undefined term, creating tools to address the issue becomes a problem in its own right. Moreover, the rapid pace of AI development has often outpaced legal frameworks, creating enforcement gaps. Countries like Canada and tech industry giants like Microsoft and Google are leading forces behind accessible AI innovations, with frameworks that focus on AI ethics, inclusivity, and collaboration with disability rights groups.
India’s efforts in creating inclusive AI include the redesign of the Sugamya Bharat app. The app was created to assist PWDs and the elderly, and it will now incorporate AI features specifically to assist its intended users.
Though AI development has opportunities for inclusivity, unregulated development can be risky. Regulation plays a critical role in ensuring that AI-driven solutions prioritise inclusivity, fairness, and accessibility, harnessing AI’s potential to empower PWDs and contribute to a more inclusive society.
Conclusion
AI development can offer PWDs unprecedented independence and accessibility in leading their lives, and developing AI with inclusivity and fairness in mind must be prioritised. AI that is free from bias, combined with robust regulatory frameworks, is essential to ensuring that AI serves everyone equitably. Collaborations between tech developers, policymakers, and disability advocates need to be supported and promoted to build AI systems that bridge accessibility gaps for PWDs. As AI continues to evolve, a steadfast commitment to inclusivity will be crucial in preventing marginalisation and advancing true technological progress for all.

Introduction
In a distressing incident that highlights the growing threat of cyber fraud, a software engineer in Bangalore fell victim to fraudsters who posed as police officials. These miscreants, operating under the guise of a fake courier service and law enforcement, employed a sophisticated scam to dupe unsuspecting individuals out of their hard-earned money. Unfortunately, this is not an isolated incident, as several cases of similar fraud have been reported recently in Bangalore and other cities. It is crucial for everyone to be aware of these scams and adopt preventive measures to protect themselves.
Bangalore Techie Falls Victim to ₹33 Lakh Scam
The software engineer received a call from someone claiming to be from FedEx courier service, informing him that a parcel sent in his name to Taiwan had been seized by the Mumbai police for containing illegal items. The call was then transferred to an impersonator posing as a Mumbai Deputy Commissioner of Police (DCP), who alleged that a money laundering case had been registered against him. The fraudsters then coerced him into joining a Skype call for verification purposes, during which they obtained his personal details, including bank account information.
Under the guise of verifying his credentials, the fraudsters manipulated him into transferring a significant amount of money to various accounts. They assured him that the funds would be returned after the completion of the procedure. However, once the money was transferred, the fraudsters disappeared, leaving the victim devastated and financially drained.
Best Practices to Stay Safe
- Be vigilant and sceptical: Maintain a healthy level of scepticism when receiving unsolicited calls or messages, especially if they involve sensitive information or financial matters. Be cautious of callers pressuring you to disclose personal details or engage in immediate financial transactions.
- Verify the caller’s authenticity: If someone claims to represent a legitimate organisation or law enforcement agency, independently verify their credentials. Look up the official contact details of the organisation or agency and reach out to them directly to confirm the authenticity of the communication.
- Never share sensitive information: Avoid sharing personal information, such as bank account details, passwords, or Aadhaar numbers, over the phone or through unfamiliar online platforms. Legitimate organisations will not ask for such information without proper authentication protocols.
- Use secure communication channels: When communicating sensitive information, prefer secure platforms or official channels that provide end-to-end encryption. Avoid switching to alternative platforms or applications suggested by unknown callers, as fraudsters can exploit these.
- Educate yourself and others: Stay informed about the latest cyber fraud techniques and scams prevalent in your region. Share this knowledge with family, friends, and colleagues to create awareness and prevent them from falling victim to similar schemes.
- Implement robust security measures: Keep your devices and software updated with the latest security patches. Utilise robust anti-virus software, firewalls, and spam filters to safeguard against malicious activities. Regularly review your financial statements and account activity to detect any unauthorised transactions promptly.
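To make the vigilance advice above concrete, here is a minimal, illustrative sketch (not a production tool) that scans an unsolicited message for phrases commonly seen in courier-and-police impersonation scams like the one described. The phrase list is a hypothetical sample assembled for demonstration, not an exhaustive rule set.

```python
# Illustrative red-flag scanner for unsolicited messages.
# The phrase list below is a hypothetical sample, not an exhaustive rule set.
RED_FLAGS = [
    "parcel seized",
    "money laundering case",
    "verify your bank account",
    "join a skype call",
    "transfer funds immediately",
]

def red_flags_in(message: str) -> list[str]:
    """Return the red-flag phrases found in a message (case-insensitive)."""
    text = message.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

msg = ("Your parcel seized by Mumbai police. A money laundering case is "
       "registered against you. Join a Skype call to verify.")
print(red_flags_in(msg))  # flags three suspicious phrases
```

A real spam filter uses far richer signals (sender reputation, URLs, machine-learned classifiers), but even a simple checklist like this captures the pattern: unsolicited contact, legal threats, and pressure to act immediately are the hallmarks of such scams.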
Conclusion:
The incident involving the Bangalore techie and other victims of cyber fraud highlights the importance of remaining vigilant and adopting preventive measures to safeguard oneself from such scams. It is disheartening to see individuals falling prey to impersonators who exploit their trust and manipulate them into sharing sensitive information. By staying informed, exercising caution, and following best practices, we can collectively minimise the risk and protect ourselves from these fraudulent activities. Remember, the best defence against cyber fraud is a well-informed and alert individual.
Introduction
In July 2025, Microsoft’s Digital Defence Report raised the alarm that India is among the top target countries for AI-powered nation-state cyberattacks, with malicious actors automating phishing, creating convincing deepfakes, and shaping opinion with the help of generative AI (Microsoft Digital Defence Report, 2025). Most of the world’s attention has remained on the United States and Europe, but the Asia-Pacific region, and especially India, has become a major target of AI-based cyber activities. This blog discusses the role of AI in espionage, how it is redefining India’s threat environment, the government’s response, and what India can learn from the examples of cyber powers worldwide.
Understanding AI-Powered Cyber Espionage
Conventional cyber-espionage aims to breach systems, steal information, or bring down networks. With the emergence of generative AI, these strategies have changed completely. It is now possible to automate reconnaissance, fabricate voices and videos of authorities, and create highly advanced phishing campaigns that can pass as genuine even to a trained expert. According to Microsoft’s report, state-sponsored groups are using AI to expand their activities and target victims with greater precision (Microsoft Digital Defence Report, 2025). According to SQ Magazine, almost 42 percent of state-backed cyber campaigns in 2025 used AI components such as adaptive malware or intelligent vulnerability scanners (SQ Magazine, 2025).
AI is altering the power dynamic of cyberspace. Tools that previously required significant technical expertise or substantial investment have become ubiquitous, allowing smaller countries and non-state actors alike to conduct sophisticated cyber operations. The outcome is an accelerating arms race in which AI serves as both the weapon and the armour.
India’s Exposure and Response
India’s exposure stems from its rapidly growing digital infrastructure and its geopolitical position. The attack surface has expanded to hundreds of millions of citizens with the integration of platforms like DigiLocker and CoWIN. Financial institutions, government portals, and defence networks are increasingly becoming targets of ever more sophisticated cyberattacks. Fake videos of prominent figures, phishing emails using official templates, and manipulation of social media are all now part of disinformation campaigns (Microsoft Digital Defence Report, 2025).
According to the Data Security Council of India (DSCI), the India Cyber Threat Report 2025 found that AI-enabled attacks are growing exponentially, particularly in the form of malware and social engineering (DSCI, 2025). CERT-In, India’s nodal cyber-response agency, has issued several warnings about AI-related scams and AI-generated fake content aimed at stealing personal information or deceiving the public. Enforcement and red-teaming activities have been intensified, but coordination between central agencies, state police, and private platforms remains uneven. India also faces an acute shortage of cybersecurity talent, with less than 20 percent of cyber defence roles filled by qualified specialists (DSCI, 2025).
Government and Policy Evolution
The government’s response to AI-enabled threats is taking three forms: regulation, institutional strengthening, and capacity building. The Digital Personal Data Protection Act, 2023 marked a major step in defining digital accountability (Government of India, 2023). Nonetheless, AI-specific threats such as data poisoning, model manipulation, and automated disinformation remain grey areas. The forthcoming National Cybersecurity Strategy is expected to remedy this by establishing AI governance guidelines and accountability standards for major sectors.
At the institutional level, organisations such as the National Critical Information Infrastructure Protection Centre (NCIIPC) and the Defence Cyber Agency are incorporating AI-based monitoring into their processes. Public-private initiatives are also emerging. For example, the CyberPeace Foundation and national universities have signed a memorandum of understanding that facilitates specialised training in AI-driven threat analysis and digital forensics (Times of India, August 2025). Despite these positive signs, India lacks a cohesive system for reporting AI incidents. A September 2025 arXiv publication underlines that countries need legal frameworks for AI-failure reporting so that AI-initiated failures in fields such as national security are handled with accountability (arXiv, 2025).
Global Implications and Lessons for India
Major economies worldwide are moving rapidly to integrate AI innovation with cybersecurity preparedness. The United States and United Kingdom are investing heavily in AI-enhanced military systems, deploying machine learning in security operations centres, and organising AI-based “red team” exercises (Microsoft Digital Defence Report, 2025). Japan is testing cross-ministry threat-sharing platforms that use AI analytics for real-time decision-making (Microsoft Digital Defence Report, 2025).
Four lessons stand out for India.
- First, cyber defence should shift from reactive investigation to proactive intelligence. AI makes it possible not only to detect adversary behaviour after an attack but to simulate it in advance.
- Second, collaboration is essential. Cybersecurity cannot be left to government enforcement alone. The private sector, which maintains the majority of India’s digital infrastructure, must be actively involved in sharing information and expertise.
- Third, there is the issue of AI sovereignty. Building or hosting defensive AI tools within India will reduce dependence on foreign vendors and minimise potential supply-chain vulnerabilities.
- Lastly, digital literacy is the first line of defence. Citizens should be trained to detect deepfakes, phishing, and other manipulated information. Building human awareness matters as much as technical defences (SQ Magazine, 2025).
Conclusion
AI has altered the logic of cyber warfare. Attacks are faster, harder to trace, and scalable as never before. For India, the challenge is no longer building better firewalls but developing the anticipatory intelligence to counter AI-powered threats. This requires a national policy that integrates technology, policy, and education.
India can transform vulnerability into strength with sustained investment, ethical AI governance, and healthy cooperation between government and business. The next phase of cybersecurity is not about who has more firewalls, but about who can learn and adapt more quickly in a world where machines are already part of the battlefield (Microsoft Digital Defence Report, 2025).
References:
- Microsoft Digital Defense Report 2025
- India Cyber Threat Report 2025, DSCI
- Lucknow-based organisations to help strengthen cybercrime research, training, policy ecosystem, Times of India
- AI Cyber Attacks Statistics 2025: How Attacks, Deepfakes & Ransomware Have Escalated, SQ Magazine
- Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India, arXiv
- The Digital Personal Data Protection Act, 2023