#FactCheck: Old Ukraine Blast Video Falsely Shared as Iran Strike on Israeli Nuclear Site
Executive Summary
A video showing a massive fire and explosion is going viral on social media. The clip shows a large plume of smoke followed by a sudden blast. It is being shared with the claim that it depicts Iran attacking a nuclear reactor in Israel amid the ongoing Iran-Israel conflict. However, research by CyberPeace found that the claim is misleading. The viral video is actually from 2017 and shows a massive explosion at an ammunition depot in Ukraine.
Claim:
On social media platform X (formerly Twitter), a user shared the video on March 21, 2026, with the caption: “Israel’s nuclear reactor was targeted with Fateh and Khyber missiles. Well done Iran! The whole world is with you.”

Fact Check:
To verify the viral claim, we extracted keyframes from the video and conducted a reverse image search. During this process, we found the same video uploaded on March 23, 2017, on a YouTube channel named “null.” According to the upload, the video shows a massive explosion at an ammunition depot in Balakliya, Ukraine. Using these clues, we performed a keyword search and found a report published on March 24, 2017, by Global News.

According to the report, a major fire and explosion broke out at a large military ammunition depot in Balakliya, located in Ukraine’s Kharkiv region. The incident resulted in one death, while nearly 20,000 people from surrounding areas were evacuated to safer locations.
Conclusion:
The claim that the video shows Iran attacking a nuclear reactor in Israel is misleading. The viral footage is actually from 2017 and depicts an explosion at an ammunition depot in Ukraine.
Related Blogs

Social media has become far more than a tool for communication, engagement and entertainment. It shapes politics, community identity and even public agendas. When misused, the consequences can be grave: communal disharmony, riots, false rumours, harassment or worse. Emphasising the need for digital Atmanirbharta (self-reliance), Prime Minister Narendra Modi recently urged India’s youth to develop homegrown social media platforms, akin to Facebook, Instagram and X, so that the nation’s technological ecosystems remain secure and independent. This growing influence of platforms has sharpened the tussle in most countries between government regulation, the independence of social media companies, and the protection of freedom of expression.
Why Government Regulation Is Especially Needed
While self-regulation has its advantages, ‘real-world harms’ show why state oversight cannot be optional:
- Incitement to violence and communal unrest: Misinformation and hate speech can inflame tensions. In Manipur (May 2023), false posts, including unverified sexual-violence claims, spread online, worsening clashes. Authorities shut down mobile internet on 3 May 2023 to curb “disinformation and false rumours,” showing how quickly harmful content can escalate and why enforceable moderation rules matter.
- Fake news and misinformation: False content about health, elections or individuals spreads far faster than corrections. During COVID-19, an “infodemic” of fake cures, conspiracy theories and religious discrimination went viral on WhatsApp and Facebook, starting with false claims that the virus came from eating bats. The WHO warned of serious knock-on effects, and a Reuters Institute study found that although public figures produced only a small share of such claims, those claims drew the highest engagement, evidence that self-regulation alone often fails to contain misinformation.
Nepal’s Example:
Nepal provides a clear example of the tussle between government regulation and platform self-regulation. In 2023, the government issued rules requiring all social media platforms, whether local or foreign, to register with the Ministry of Communication and Information Technology, appoint a local contact person, and comply with Nepali law. By 2025, major platforms such as Facebook, Instagram, and YouTube had not met the registration deadline, and the Nepal Telecommunications Authority began blocking unregistered platforms until they complied. Journalists, civil-rights groups and Gen Z protesters criticised the move as limiting free speech and stifling efforts to expose government corruption, while the government argued it was necessary to stop harmful content and misinformation. The case shows that without enforceable obligations, self-regulation can leave platforms unaccountable, but enforcement must also be balanced against protecting free speech.
Self-Regulation: Strengths and Challenges
Most social-media companies prefer to self-regulate. They write community rules, trust & safety guidelines, and give users ways to flag harmful posts, and lean on a mix of staff, outside boards and AI filters to handle content that crosses the line. The big advantage here is speed: when something dangerous appears, a platform can react within minutes, far quicker than a court or lawmaker. Because they know their systems inside out, from user habits to algorithmic quirks, they can adapt fast.
But there’s a downside. These platforms thrive on engagement, so sensational or hateful posts often keep people scrolling longer. That means the very content that makes money can also be the content that most needs moderating, a built-in conflict of interest.
Government Regulation: Strengths and Risks
Public rules make platforms answerable. Laws can require illegal content to be removed, force transparency and protect user rights. They can also stop serious harms such as fake news that might spark violence, and they often feel more legitimate when made through open, democratic processes.
Yet regulation can lag behind technology. Vague or heavy-handed rules may be misused to silence critics or curb free speech. Global enforcement is messy, and compliance can be costly for smaller firms.
Practical Implications & Hybrid Governance
For users, regulation brings clearer rights and safer spaces, but it must be carefully drafted to protect legitimate speech. For platforms, self-regulation gives flexibility but less certainty; government rules provide a level playing field but add compliance costs. For governments, regulation helps protect public safety, reduce communal disharmony, and fight misinformation, but it requires transparency and safeguards to avoid misuse.
Hybrid Approach
A combined model of self-regulation plus government regulation is likely to be most effective. Laws should establish baseline obligations: registration, local grievance officers, timely removal of illegal content, and transparency reporting. Platforms should retain flexibility in how they implement these obligations and innovate with tools for user safety. Independent audits, civil society oversight, and simple user appeals can help keep both governments and platforms accountable.
Conclusion
Social media has great power. It can bring people together, but it can also spread false stories, deepen divides and even stir violence. Acting on their own, platforms can move fast and try new ideas, but that alone rarely stops harmful content. Good government rules can fill the gap by holding companies to account and protecting people’s rights.
The best way forward is to mix both approaches, clear laws, outside checks, open reporting, easy complaint systems and support for local platforms, so the digital space stays safer and more trustworthy.
References
- https://timesofindia.indiatimes.com/india/need-desi-social-media-platforms-to-secure-digital-sovereignty-pm/articleshow/123327780.cms#
- https://www.bbc.com/news/world-asia-india-66255989
- https://nepallawsunshine.com/social-media-registration-in-nepal/
- https://www.newsonair.gov.in/nepal-bans-26-unregistered-social-media-sites-including-facebook-whatsapp-instagram/
- https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- https://www.drishtiias.com/daily-updates/daily-news-analysis/social-media-regulation-in-india

Introduction
The Digital Personal Data Protection (DPDP) Act 2023 of India is a significant transition for privacy legislation in this age of digital data. A key element of this new law is a requirement for organisations to have appropriate, user-friendly consent mechanisms in place for their customers so that collection, use or removal of an individual's personal data occurs in a clear and compliant manner. As a means of putting this requirement into practice, the Ministry of Electronics and Information Technology (MeitY) issued a comprehensive Business Requirements Document (BRD) in June 2025 to guide organizations, as well as Consent Managers, on how to create a Consent Management System (CMS). This document establishes the technical and functional framework by which organizations and individuals (Data Principals) will exercise control over the way their data is gathered, used and removed.
Understanding the BRD and Its Purpose
The BRD is an optional guide created under the “Code for Consent” programme run by MeitY. Its purpose is to guide startups, digital platforms and other enterprises in building a technology system that manages user consent as required by the DPDP Act. Although the BRD does not carry legal weight, it lays out a clear path for organisations to create their own consent mechanisms using best practices aligned with the DPDP Act’s principles of transparency, accountability and purpose limitation.
The goal is threefold:
- Enable complete consent lifecycle management from collection to withdrawal.
- Empower individuals to manage their consents actively and transparently.
- Support data fiduciaries and processors with an interoperable system that ensures compliance.
Key Components of the Consent Management System
The BRD proposes the development of a modular Consent Management System (CMS) that provides users with secure APIs and user-friendly interfaces. This system will allow for a variety of features and modules, including:
- Consent Lifecycle Management – consent should be specific, informed and tied to an explicit purpose. The CMS will manage the collection, validation, renewal, updates and withdrawal of consent. Each transaction of consent will create a tamper-proof “consent artifact,” which will include the timestamp of creation as well as an ID identifying the purpose for which it was given.
- User Dashboard – A user will be able to view and modify the status of their active, expired or withdrawn consent and revoke access at any time via the multilingual user-friendly interface. This would make the system accessible to people from different regions and cultures.
- Notification Engine – The CMS will automatically notify users, fiduciaries and processors of any action taken with respect to consent, in order to ensure real-time updates and accountability.
- Grievance Redress Mechanism – The CMS will include a complaints mechanism that allows users to submit complaints related to the misuse of consent or the denial of their rights. This will enable tracking of the complaint resolution status, and will allow for escalation if necessary.
- Audit and Logging – As part of the CMS's internal controls for compliance and regulatory purposes, the CMS must maintain an immutable record of every instance of consent for auditing and regulatory review. The records must be encrypted, time-stamped, and linked permanently to a user and purpose ID.
- Cookie Consent Management – A separate module will enable users to manage cookie consent for websites separately from any other consents.
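The “consent artifact” described above can be sketched in a few lines. The sketch below is illustrative only: the field names (`principal_id`, `purpose_id`, `action`) and the hash-chaining scheme are assumptions for demonstration, not structures specified by the BRD. It shows one way a record can be made tamper-evident: each artifact embeds the hash of the previous one, so altering any earlier record invalidates every later hash.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_consent_artifact(principal_id, purpose_id, action, prev_hash=""):
    """Create a timestamped consent record chained to the previous one,
    so later tampering with any record breaks every subsequent hash."""
    record = {
        "principal_id": principal_id,   # the Data Principal giving consent
        "purpose_id": purpose_id,       # the specific purpose consented to
        "action": action,               # e.g. "given", "updated", "withdrawn"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,         # links this artifact to the prior one
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Chain two consent actions: giving consent, then withdrawing it.
given = make_consent_artifact("user-123", "marketing-email", "given")
withdrawn = make_consent_artifact("user-123", "marketing-email",
                                  "withdrawn", prev_hash=given["hash"])
```

In a real CMS the chain would live in an append-only, encrypted store so the audit-and-logging module can verify it end to end.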
Roles and Responsibilities
The BRD identifies the various stakeholders involved and their associated responsibilities.
- Data Principals (Users): The user has full authority to give, withhold, amend, or revoke their consent for the use of their personal data, at any time.
- Data Fiduciaries (Companies): Fiduciaries must collect a Data Principal's consent for each specific purpose and may begin processing personal data only after validating that consent through the CMS. They must also provide Data Principals with any required information or notifications, and with a way to resolve their complaints.
- Data Processors: Data Processors must strictly adhere to the consent stated in the CMS, and Data Processors may only process personal data on behalf of the Data Fiduciary.
- Consent Managers: The Consent Managers are independent entities that are registered with the Data Protection Board. They are responsible for administering the CMS, allowing users to manage their consent across different platforms.
This layered structure ensures transparency and shared responsibility for the consent ecosystem.
Technical Specifications and Security
To remain compliant with the DPDP Act, the CMS must follow these technical principles:
- End-to-End Encryption: All data exchanges with users must be encrypted in transit using a minimum of TLS 1.3, with additional encryption of the data itself within that channel.
- API-First Approach: Secure APIs will allow external systems to validate, update and withdraw consent.
- Interoperability/Accessibility: The CMS must support multiple languages (e.g. Hindi, Tamil) and work across different mobile devices and levels of ability.
- Data Retention Policy: The CMS should automatically delete consent data once consent has expired or been withdrawn, in order to comply with data retention limits.
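The validation and retention principles above can be read together: a fiduciary's system refuses processing unless an active, unexpired consent exists, and the CMS purges records the moment they lapse. The function and in-memory store below are hypothetical stand-ins, not APIs defined by the BRD, meant only to show the shape of that check.

```python
from datetime import datetime, timedelta, timezone

# In-memory stand-in for a CMS consent store: (principal, purpose) -> record.
# A real deployment would use an encrypted, audited database behind secure APIs.
consent_store = {}

def record_consent(principal, purpose, ttl_days=365):
    """Register an active consent with an expiry date."""
    consent_store[(principal, purpose)] = {
        "status": "active",
        "expires": datetime.now(timezone.utc) + timedelta(days=ttl_days),
    }

def validate_consent(principal, purpose):
    """Return True only if an active, unexpired consent exists.
    Expired or withdrawn records are deleted, per the retention policy."""
    record = consent_store.get((principal, purpose))
    if record is None:
        return False
    expired = datetime.now(timezone.utc) >= record["expires"]
    if expired or record["status"] != "active":
        del consent_store[(principal, purpose)]  # automatic deletion
        return False
    return True

record_consent("user-123", "analytics", ttl_days=30)
```

The key design point is that validation and deletion happen in the same call path, so stale consent can never silently authorise processing.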
Legal Relevance and Timelines
While the BRD itself is not enforceable, it is directly aligned with the upcoming enforcement of the DPDP Act, 2023. The Act was passed in August 2023 but is expected to come into effect in stages, once officially notified by the central government. Draft implementation rules, including those defining the role of Consent Managers, were released for public consultation in early 2025.
For businesses, the BRD serves as an early compliance tool—offering both a conceptual roadmap and technical framework to prepare before the law is enforced. Legal experts have described it as a critical resource for aligning data governance systems with emerging regulatory expectations.
Implications for Businesses
Organizations that collect and process user data will be required to overhaul their consent workflows:
- No blanket consents: Every data processing activity must have explicit, separate consent.
- Granular audit logs: Companies must maintain tamper-proof logs for every consent action.
- Integration readiness: Enterprises need to integrate their platforms with third-party or in-house CMS platforms via the specified APIs.
- Grievance redress and user support: Systems must be in place to handle complaints and withdrawal requests in a timely, verifiable manner.
Failing to comply once the DPDP Act is in force may expose companies to penalties, reputational damage, and potential regulatory action.
Conclusion
India's BRD on Consent Management is a forward-looking initiative that lays out the technological framework for an essential component of the DPDP Act: user consent. Although not a legally binding document, it gives companies the discipline they need to prepare. As data protection grows in importance, consent mechanisms built on security, transparency and user needs are no longer just a regulatory requirement but a foundation for trust. Businesses should establish or adopt CMS solutions now to be better equipped for the future of data governance in India.
References
- https://d38ibwa0xdgwxx.cloudfront.net/whatsnew-docs/8d5409f5-d26c-4697-b10e-5f6fb2d583ef.pdf
- https://ssrana.in/articles/ministry-releases-business-requirement-document-for-consent-management-under-the-dpdp-act-2023/
- https://dpo-india.com/Blogs/consent-dpdpa/
- https://corporate.cyrilamarchandblogs.com/2025/06/the-ghost-in-the-machine-the-recent-business-requirement-document-on-consent/
- https://www.mondaq.com/india/privacy-protection/1660964/analysis-of-the-business-requirement-document-for-consent-management-system

A video claiming to show a Hatha yogi performing extreme penance on a snow-covered mountain amid strong icy winds is going viral on social media. In the clip, the ascetic is seen balancing on one hand in a yoga posture, while users portray the visuals as a rare example of extraordinary spiritual endurance in harsh climatic conditions.
However, an investigation by the CyberPeace Foundation has found the claim to be false. Our analysis confirms that the viral video is AI-generated and does not depict a real person or an actual event.
Claim:
An Instagram user shared the video with the caption:
“Hatha yogi, what kind of soil are these people made of?” The post suggests that the visuals show a real yogi performing intense meditation on a frozen mountain.
- https://www.instagram.com/reels/DTK32TvDGIJ/
- Archived link: https://perma.cc/H84M-MGXZ

Fact Check:
To verify the claim, the CyberPeace Foundation conducted a detailed examination of the viral video. No credible or verifiable news reports were found to support the claim that such an incident ever occurred.
The viral video was analysed using the AI detection tool Deepfake-O-Meter. Its AVSRDD (2025) module flagged the video as AI-generated, confirming that the visuals were digitally created and not recorded in real life.
Multiple indicators within the footage, such as unnatural body balance, environmental inconsistencies and visual artifacts, are consistent with AI-generated content.

Conclusion
The viral video purportedly showing a yogi meditating on a frozen mountain is not real. It has been created using artificial intelligence and is being circulated on social media with a misleading narrative. Users are advised to exercise caution and verify content before sharing such sensational claims.