#FactCheck - Old Wedding Fire Video Misleadingly Shared as Iranian Hypersonic Missile Strike in Tel Aviv
Executive Summary:
Amid the ongoing conflict involving the United States, Israel, and Iran, a video showing a building engulfed in flames is being widely circulated on social media. In the clip, a large fire can be seen inside a building while several people appear to be running in panic. The video is being shared with the claim that Iran fired a hypersonic missile targeting a ceremony in Tel Aviv, Israel, allegedly killing several Israeli military generals and other prominent figures.
However, research by CyberPeace found that the claim is false. The video being circulated as footage of an attack in Israel actually predates the current conflict and shows a fire that broke out during a wedding ceremony.
Claim
A Facebook user named “Syed Asif Raza Jafri” shared the video on March 13, 2026, claiming that an Iranian hypersonic missile had struck a grand ceremony in Tel Aviv, where several Israeli military officers, generals, soldiers, and other important personalities were present. According to the post, the attack resulted in multiple casualties.
Source:
- https://www.facebook.com/reel/902182825912364
- https://ghostarchive.org/archive/rZryr

Fact Check
To verify the claim, we began our research using the Google Lens reverse image search tool. Several key frames from the viral video were extracted and searched online.
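The frame-sampling step described above can be illustrated with a small sketch. This is a simplified, hypothetical illustration — in practice a tool such as OpenCV or ffmpeg decodes the video — so here each frame is represented as a flat list of grayscale pixel values, and a frame is kept as a "key frame" when it differs sufficiently from the last frame kept (a crude scene-change proxy). The function names and threshold are illustrative, not part of any real tool.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_key_frames(frames, threshold=10.0):
    """Keep the first frame, then every frame that differs from the
    last kept frame by more than `threshold`."""
    if not frames:
        return []
    keys = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[i], frames[keys[-1]]) > threshold:
            keys.append(i)
    return keys

# Synthetic example: three "scenes" of near-identical frames.
scene_a = [[10] * 16] * 3    # frames 0-2
scene_b = [[200] * 16] * 3   # frames 3-5
scene_c = [[90] * 16] * 2    # frames 6-7
frames = scene_a + scene_b + scene_c
print(select_key_frames(frames))  # → [0, 3, 6], one frame per scene
```

The selected frames are then the inputs to a reverse image search such as Google Lens, which matches each still against previously indexed copies of the footage.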
During the search, we found the same video shared earlier on multiple foreign social media accounts. A Facebook user named “Es de Bombero” from Chile had posted the video on January 17, 2026, describing it in Spanish as footage of a fire that broke out during a wedding celebration.

Our research shows that the viral video had been circulating on social media since at least January 15, 2026, well before the escalation of the current conflict. According to a report published on March 1, 2026, by the BBC, the large-scale attacks on Iran by the United States and Israel began on February 28, 2026, after which Iran’s Supreme Leader Ali Khamenei was reported dead.
Additionally, a March 12, 2026 report by Al Jazeera stated that a house near Tel Aviv in central Israel was damaged by a rocket reportedly fired by Hezbollah, which has previously carried out joint attacks in coordination with Iran.

Conclusion
The viral video being shared as footage of an Iranian hypersonic missile strike in Tel Aviv is misleading. The clip is an older video of a fire that reportedly broke out during a wedding ceremony and was circulating online before the current conflict began.
While the exact location of the incident shown in the video cannot be independently verified, it is clear that the footage has no connection to the ongoing war between the United States, Israel, and Iran.
Related Blogs

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. The aspect of promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the public indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. For perspective, the market size of AI in India alone is projected to reach US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users what they are for and that they are automated. This transparency ensures that people know they are interacting with automated systems.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users can feel more confident knowing that chatbot behavior and data-collection practices are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely: chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that favor one political party over another; impartiality and equity are crucial in every interaction.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india
Introduction
The digital communication landscape in India is set to change significantly as the Department of Telecommunications prepares to implement new rules for messaging apps that operate using SIM cards. This step is part of the government’s effort to tackle cybercrime at its roots by enforcing stricter verification and reducing the number of communication platforms that can be misused. One clear change users will notice is that WhatsApp Web sessions will now be automatically logged out every six hours, disrupting previously uninterrupted use across multiple devices. Although this may appear to be a simple inconvenience, the measure is part of a broader plan to address the growing problem of cyber fraud: cybercriminals exploit messaging apps like WhatsApp without keeping the registered SIM in the device, making fraud difficult to trace. These measures aim to address such challenges at the root.
The Incident: What Has Changed?
The new regulations will make it mandatory for messaging platforms to create a direct link between user accounts and verified SIM identities, so that every account on the network can be associated with a valid and traceable mobile number. Because of this requirement, WhatsApp is expected to tighten its management of device sessions. The six-hour logout cycle for WhatsApp Web is intended to prevent long-lived, unmonitored sessions that are sometimes exploited in account takeovers, device-based breaches, and remote-access scams. This change significantly affects the user experience: WhatsApp Web, often used for communication, customer support, and coordination, will now require more frequent authentication through mobile devices. Though mobile access remains uninterrupted, desktop and browser-linked sessions will be subjected to tighter security controls.
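The mechanics of a fixed session lifetime can be sketched in a few lines. WhatsApp's actual implementation is not public, so the class, names, and values below are purely illustrative: a server-side store records when each linked-device session was created and treats any session older than six hours as expired, forcing the user to re-authenticate from the phone.

```python
import time

SESSION_TTL = 6 * 60 * 60  # six hours, in seconds (illustrative value)

class SessionStore:
    """Toy server-side store that expires linked-device sessions
    after a fixed lifetime, forcing re-authentication."""

    def __init__(self, ttl=SESSION_TTL, clock=time.time):
        self.ttl = ttl
        self.clock = clock        # injectable clock, handy for testing
        self.sessions = {}        # session_id -> creation timestamp

    def login(self, session_id):
        self.sessions[session_id] = self.clock()

    def is_active(self, session_id):
        created = self.sessions.get(session_id)
        if created is None:
            return False
        if self.clock() - created > self.ttl:
            del self.sessions[session_id]  # expired: force logout
            return False
        return True

# Simulated clock: advance time past the six-hour limit.
now = [0.0]
store = SessionStore(clock=lambda: now[0])
store.login("web-abc")
print(store.is_active("web-abc"))  # True: freshly created
now[0] = SESSION_TTL + 1
print(store.is_active("web-abc"))  # False: session expired
```

The key design point is that expiry is enforced on the server at each access, so a stolen or forgotten browser session cannot outlive the fixed window regardless of client behaviour.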
Why Identity-Linked Messaging Matters
India is facing a rapidly evolving cybercrime ecosystem in which messaging applications play a central role. Scammers often rely on fake, unverified, or illegally obtained SIM cards to create temporary accounts that can be used for various illegal activities, such as sending phishing messages, impersonating government officials, and deceiving victims through call centres set up for scams.
The new rules take into consideration the following main issues:
- Account anonymity enables large-scale fraud: criminals operate bulk scams using hundreds of SIM-linked accounts.
- Disposable identities: illegally obtained SIMs are discarded after a fraud, making it difficult for police to trace the perpetrators.
- Long-lived multi-device sessions: unauthorized access to persistent WhatsApp Web sessions is a major vector for OTP theft, account hijacking, and on-device social engineering.
The government wants to disrupt these foundations by enforcing stricter traceability.
A Sector Under Strain: Misuse of Messaging Platforms
Messaging apps have become central to India's digital life, from personal communication to enterprise use. This widespread adoption has made them an easy target for cybercriminals.
The scams most frequently seen include:
- WhatsApp groups used for job and loan scams
- False communication from banks, government departments, and payment applications
- Sextortion and blackmail through unverified accounts
- Remote-access fraud with attackers who are watching WhatsApp Web sessions
- Coordinated spread of false information and distribution of deepfake videos
The use of AI-generated personas and "SIM farms" has made these systems even harder to secure. Unless users are strictly linked to authenticated SIM credentials, the platforms risk becoming uncontrollable conduits for cybercrime.
Government and Regulatory Response
The Department of Telecommunications is initiating a process of stricter compliance measures and cooperating with the Ministry of Home Affairs, along with the Indian Cyber Crime Coordination Centre. The main points of the directions include the following:
- Identity verification linked to a SIM is mandatory for the creation of messaging accounts
- Periodic device re-authentication on platforms, beginning with WhatsApp Web
- Coordination with telecom operators to detect suspicious login patterns
- Protocols for the sharing of data with law enforcement in the course of cybercrime investigations
- Compliance checks of digital platforms to verify adherence to national safety guidelines
This coordinated effort reflects the understanding that the security of communication platforms is the responsibility of both the regulators and the service providers.
The Bigger Picture: Strengthening India’s Digital Trust
The new regulations align with a worldwide trend of governments demanding greater accountability from messaging platforms. Similar discussions are underway in the EU, the UK, and parts of Southeast Asia.
For India, it is imperative to enhance identity management because:
- The nation has the world's largest base of messaging users
- Cybercrime is growing faster than traditional crime
- Digital government services rely on communications that are secure
- Identity integrity is the basis for trust in online transactions and digital payments
The six-hour logout policy for WhatsApp Web is a small step, but it signals a larger shift towards proactive regulation rather than merely reactive policing.
What Needs to Happen Next?
The implementation of SIM-linked regulations must be accompanied by several supporting measures to be effective.
- Strengthening Digital Literacy: It is necessary to educate users about the benefits of frequent logouts and security improvements.
- Ensuring Privacy Protections: The DPDP Act should act as a strong safeguard against the misuse of personal data once identity-linked messaging is implemented.
- Collaboration with Platforms: Messaging services should work with regulators to strengthen authentication without compromising the user experience.
- Monitoring SIM fraud at the source: Enforcement against illicit SIM provisioning targets the root supply that criminals depend on, rather than merely forcing them to change their methods.
- Continuous Review and Feedback: Policymaking needs to keep pace with real-world difficulties and new developments in technology.
Conclusion
India's announcement to impose regulations on messaging apps with SIM linkage is a major step forward in preventing cybercrime from occurring in the first place. Although the immediate effect, like the six-hour logout requirement for WhatsApp Web, may annoy users, it is nevertheless part of a bigger goal: to develop a more secure and trustworthy digital communication environment.
Securing the communication that links millions of people is vital as India becomes more and more digital. Through a combination of regulatory measures, technological protection, and user education, the country is headed toward a time when criminals in the cyber world will find it very difficult to operate and where consumers will be able to interact online with much more confidence and safety.
References
- https://thehackernews.com/2025/12/india-orders-messaging-apps-to-work.html
- https://indianexpress.com/article/explained/explained-sci-tech/whatsapp-web-automatic-log-out-six-hourse-reason-10394142/
- https://www.ndtv.com/india-news/explained-how-will-new-sim-binding-rule-affect-whatsapp-signal-telegram-9728710
- https://www.hindustantimes.com/india-news/no-whatsapp-without-active-sim-centre-issues-new-rules-dot-sim-binding-prevent-cyber-crimes-101764495810135.html
Introduction
Against the dynamic backdrop of Mumbai, where the intersection of age-old markets and cutting-edge innovation is a daily reality, an initiative of paramount importance has begun to take shape within the hallowed walls of the Reserve Bank of India (RBI). This is not just a tweak, a nudge in policy, or a subtle refinement of protocols. What we're observing is nothing short of a paradigmatic shift, a recalibration of systemic magnitude, that aims to recalibrate the way India's financial monoliths oversee, manage, and secure their informational bedrock – their treasured IT systems.
On the 7th of November, 2023, the Reserve Bank of India, that bastion of monetary oversight and national fiscal stability, unfurled a new doctrine – the 'Master Direction on Information Technology Governance, Risk, Controls, and Assurance Practices.' A document comprehensive in its reach, it presents not merely an update but a consolidation of all previously issued guidelines, instructions, and circulars relevant to IT governance, plaited into a seamless narrative that extols virtues of structured control and unimpeachable assurance practices. Moreover, it grasps the future potential of Business Continuity and Disaster Recovery Management, testaments to RBI's forward-thinking vision.
This novel edict has been crafted with a target audience that spans the varied gamut of financial entities – from Scheduled Commercial Banks to Non-Banking Financial Companies, from Credit Information Companies to All India Financial Institutions. These are the juggernauts that keep the economic wheels of the nation churning, and RBI's precision-guided document is an unambiguous acknowledgment of the vital role IT holds in maintaining the heartbeat of these financial bodies. Here lies a riveting declaration that robust governance structures aren't merely preferred but essential to manage the landscape of IT-related risks that balloon in an era of ever-proliferating digital complexity.
Directive Structure
The directive's structure is a combination of informed precision and intuitive foresight. Its seven chapters are not simply a grouping of topics; they are the seven pillars upon which the temple of IT governance is to be erected. The introductory chapter does more than set the stage – it defines the very reality, the scope, and the applicability of the directive, binding the reader in an inextricable covenant of engagement and anticipation. It's followed by a deep dive into the cradle of IT governance in the second chapter, drawing back the curtain to reveal the nuanced roles and defined responsibilities bestowed upon the Board of Directors, the IT Strategy Committee, the Senior Management, the IT Steering Committee, and the pivotal Head of IT Function.
As we move along to the third chapter, we encounter the nuts and bolts of IT Infrastructure & Services Management. This is not just a checklist; it is an orchestration of the management of IT services, third-party liaisons, the calculus of capacity management, and the nuances of project management. Here terms like change and patch management, cryptographic controls, and physical and environmental safeguards leap from the page – alive with earnest practicality, demanding not just attention but action.
Transparency deepens as we glide into the fourth chapter with its robust exploration of IT and Information Security Risk Management. Here, the demand for periodic dissection of IT-related perils is made clear, along with the edifice of an IT and Information Security Risk Management Framework, buttressed by the imperatives of Vulnerability Assessment and Penetration Testing.
The fifth chapter presents a tableau of circumspection and preparedness, as it waxes eloquent on the necessity and architecture of a well-honed Business Continuity Plan and a disaster-ready DR Policy. It is a paean to the anticipatory stance financial institutions must employ in a world fraught with uncertainty.
Continuing the narrative, the sixth chapter places the spotlight on Information Systems Audit, delineating the precise role played by the Audit Committee of the Board in ushering in accountability through an exhaustive IS Audit of the institution's virtual expanse.
And as we perch on the final chapter, we're privy to the 'repeal and other provisions' of the directive, underscoring the interplay of other applicable laws and the interpretation a reader may draw from the directive's breadth.
Conclusion
To proclaim that this directive is a mere step forward in the RBI's exhaustive and assiduous efforts to propel India's financial institutions onto the digital frontier would be a grave understatement. What we are witnessing is the inception of a more adept, more secure, and more resilient financial sector. This directive is nothing less than a beacon, shepherding in an epoch of IT governance marked by impervious governance structures, proactive risk management, and an unyielding commitment to the pursuit of excellence and continuous improvement. This is no ephemeral shift - this is, indisputably, a revolutionary stride into a future where confidence and competence stand as the watchwords in navigating the digital terra incognita.