#FactCheck: AI-Generated War Video Falsely Linked to Israel-Iran Tensions Goes Viral
Executive Summary
A video is being widely shared on social media linking it to the ongoing tensions between Israel and Iran. The clip shows multiple fighter jets flying across the sky while massive flames appear to rise from tall buildings below. The visuals are dramatic and alarming, creating the impression of a large-scale military strike. Users sharing the video claim that after Israel carried out an attack, Iran launched a retaliatory strike on Israel, and that the viral footage captures the aftermath of this counterattack. However, research conducted by CyberPeace found the claim to be misleading. Our research revealed that the viral video is not authentic but AI-generated.
Claim
On the social media platform Facebook, a user shared the viral video with the caption: “Iran has also carried out a retaliatory attack on Israel.”
(Post link and archive link provided above.)

Factcheck
Upon closely examining the video, we noticed several irregularities in the visuals and motion patterns, which raised suspicion that the footage may have been generated using artificial intelligence. To verify this, we analyzed the video using the AI detection tool developed by Hive Moderation. According to the analysis report, there is a 62 percent likelihood that the viral video is AI-generated.

As part of further verification, we also scanned the video using Sightengine. The results indicated an even stronger probability, suggesting that the video is 99 percent AI-generated.

Conclusion
Our research confirms that the viral video does not depict a real military attack. It is AI-generated content being falsely shared in the context of Israel-Iran tensions.

Introduction
Generative Artificial Intelligence, or GenAI, is changing the employee workday: its use is no longer limited to writing emails or debugging code, but now extends to analysing contracts, generating reports, and much more. AI tools have become commonplace in everyday work, but the speed at which companies have adopted these technologies has created a new kind of risk. Unlike threats that come from an outside attacker, Shadow AI originates inside an organisation, when a legitimate employee uses unapproved AI tools to make their work more efficient and productive. In many cases, the employee is unaware of the security, data privacy, and compliance risks involved in using such tools to perform their job duties.
What Is Shadow AI?
Shadow AI is the use of AI tools at work that are not provided by the company, such as chatbots or other AI-powered software, without the knowledge or permission of the employer. Examples of Shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete tasks at the office
- Uploading business-related documents to online AI tools for analysis or summarisation
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons that are not approved by IT or security personnel
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved AI tools, that information moves outside the company's controls. This can include employees' personal information, third-party personal information, private company information (such as source code or contracts), and internal strategies. Once data is entered into such a tool, the company loses the ability to monitor how it is stored, processed, or retained. A data leak can therefore occur without any malicious cyberattack: the biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws like GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other relevant sectoral laws require businesses to process data in accordance with the law, to minimise the amount of data they use, and to be accountable for their actions. Shadow AI often results in the unlawful use of personal data due to a lack of a legal basis for the processing, unauthorised cross-border data transfers, and not having appropriate contractual protections in place with their AI service providers. Regulators do not see the convenience of employees as an excuse for not complying with the law, and therefore, the organisation is ultimately responsible for any violations that occur.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case study exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports from engineers revealed they were using ChatGPT to create debugging processes for internal source code and to summarise meeting notes. Consequently, confidential source code related to semiconductors was inadvertently uploaded onto a public AI platform. While there were no known incursions into the company’s system due to this incident, Samsung faced a significant challenge: once sensitive information is input into a public AI tool, it exists on external servers that are outside of the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued a series of internal communications prohibiting the sharing of corporate data with public AI tools, and increased the urgency of their discussions regarding the adoption of secure, enterprise-level AI (artificial intelligence) solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has demonstrated that prohibition does not work: we saw this when organisations tried to block access to cloud storage, instant messaging, and collaboration tools. When employers block AI, employees turn to personal devices and accounts, which means employers lose real-time visibility into how these technologies are used, and friction grows between staff and the security and compliance teams trying to enforce which tools may be used. Prohibiting AI will not stop its adoption; it will only make that adoption less safe and less responsible. The challenge for effective organisations is therefore to move past denial and develop governance-first AI strategies aimed at controlling data usage, protection, and security, rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the IT department; it is a failure of governance, compliance, and law. When employees use unapproved AI tools, the organisation may process personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurpose data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceed necessity in breach of data minimisation (Article 5(1)(c)). Such tools often involve international data transfers without authorisation, in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the failure to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, therefore, Shadow AI is not accidental and is not defensible.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers should supply business-approved AI technology that helps workers stay productive while maintaining strong protections: storing data separately, not using employees' data to train models, defining how long data is kept, and setting rules for its deletion. When employees are given verified, secure AI options that fit their work processes, they rely far less on Shadow AI.
2. Enforce Zero-Trust Data Access
The governance of AI systems must follow zero-trust principles: grant data access only on a least-privilege basis, so that users can reach only the data their role requires, and continuously verify user identity and context. This supports context-aware controls that monitor and track all user activity, which will become especially important as agent-like AI systems grow increasingly autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
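The least-privilege idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the role names, `POLICY` table, and `check_access` function are assumptions for the example, not any specific product's API): nothing is allowed by default, and a role only reaches the exact resource/action pairs it has been granted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    resource: str
    action: str

# Least-privilege policy: each role is granted only the specific
# (resource, action) pairs it needs; anything not listed is denied.
POLICY = {
    "analyst": {("reports", "read")},
    "ai_agent": {("reports", "read"), ("summaries", "write")},
}

def check_access(req: AccessRequest) -> bool:
    """Deny by default; allow only explicitly granted pairs."""
    allowed = POLICY.get(req.role, set())
    return (req.resource, req.action) in allowed
```

A real zero-trust deployment would layer continuous identity verification and context signals (device, location, time) on top of such a policy check; the deny-by-default core stays the same.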
3. Apply DLP and Audit Logging
Robust data loss prevention (DLP) measures are essential to protect sensitive data that leaves the organisation. A comprehensive audit log should record which user or machine accessed the data, and when and how it was accessed. In combination with other controls, these measures create accountability, support regulatory compliance, and help detect and respond to incidents.
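As a rough sketch of how DLP scanning and audit logging fit together, the snippet below flags sensitive patterns in outbound text and emits one log entry per match. Everything here is illustrative (the two regex patterns, the `scan_outbound` name, and the log fields are assumptions); production DLP rulesets are far richer.

```python
import datetime
import re

# Illustrative patterns only -- a real ruleset covers many more data types.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_outbound(text: str, user: str) -> list:
    """Return one audit-log entry per sensitive match found in text."""
    entries = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(text):
            entries.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "type": label,
                # Log only a truncated snippet -- never the full secret.
                "snippet": match.group()[:8] + "...",
            })
    return entries
```

In practice such a scanner would sit at an egress point (email gateway, browser extension, API proxy), and the entries would feed a tamper-evident audit store rather than an in-memory list.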
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
This new threat exposes an organisation to the risk of data loss through leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI, but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, while ensuring trust, accountability, and effective oversight to protect data and support AI in reaching its full transformative potential. AI use is encouraged, but it must be done responsibly, ethically, and securely.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007

Introduction
Welcome to the second edition of our Digital Forensics blog series. In our previous blog, we discussed what digital forensics is, the process it follows, the tools used, and the challenges faced in the field. We also looked at what the future of digital forensics may hold. Today, we will explore the differences between three similar-sounding terms that vary significantly in function: copying, cloning, and imaging.
In digital forensics, the preservation and analysis of electronic evidence are central to investigations and legal proceedings. One fundamental task in this domain is replicating data and devices without compromising the integrity of the original evidence.
Three primary techniques -- copying, cloning, and imaging -- are used for this purpose. Each technique has its own strengths and is applied according to the needs of the investigation.
In this blog, we will examine the differences between copying, cloning and imaging. We will talk about the importance of each technique, their applications and why imaging is considered the best for forensic investigations.
Copying
Copying means duplicating data or files from one location to another, typically using standard copy commands. When dealing with evidence, however, a standard copy alone is risky: it can alter metadata and miss hidden or deleted data.
The characteristics of copying include:
- Speed: Copying is simpler and faster compared to cloning or imaging.
- Risk: Metadata might be altered, and not all the data might be captured.
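The metadata risk is easy to demonstrate with Python's standard library (a minimal sketch; the file names and the pinned timestamp are made up for the example): a plain copy preserves contents byte-for-byte, yet it silently replaces the original modification time, which can matter in a forensic timeline.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Hash a file's contents so we can compare copies byte-for-byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "evidence.txt")
with open(src, "w") as f:
    f.write("original file contents")
os.utime(src, (1_600_000_000, 1_600_000_000))  # pin a known mtime

plain = os.path.join(workdir, "copy_plain.txt")
meta = os.path.join(workdir, "copy_meta.txt")
shutil.copy(src, plain)   # copies contents only; mtime becomes "now"
shutil.copy2(src, meta)   # also attempts to preserve timestamps

# Contents are identical either way; only copy2 kept the original
# modification time -- and even copy2 cannot recover deleted data
# or slack space, which is why forensics prefers cloning/imaging.
```

Even the metadata-preserving variant only copies live files; it cannot capture deleted data, unallocated space, or hidden partitions, which is exactly the gap cloning and imaging close.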
Cloning
Cloning is the process of transferring the entire contents of a hard drive or storage device onto another storage device. It captures the active data as well as unallocated space and hidden partitions, thereby reproducing the whole structure of the original device. Cloning is generally performed at the sector level. A clone can be used as a working copy of a device.
Characteristics of cloning:
- Bit-for-bit replication: Cloning preserves the exact content and whole structure of the original device.
- Use cases: Cloning is used when the original device must be kept intact for further examination or legal proceedings.
- Time-consuming: Cloning usually takes longer than simple copying because it involves full, detailed replication, though the time depends on factors such as the size of the storage device, the speed of the devices involved, and the cloning method.
Imaging
Imaging is the process of creating a forensic image of a storage device: a replica of every bit of data on the source device, including allocated space, unallocated space, and slack space.
The image is then used for analysis and investigation while the original evidence is left untouched. Unlike cloning, which produces working copies, forensic images are used for analysis and investigation purposes and are not intended to serve as working copies of a device.
Characteristics of Imaging:
- Integrity: Imaging ensures the integrity and authenticity of the evidence.
- Flexibility: A forensic image can be mounted as a virtual drive, allowing analysis of the data without affecting the original evidence.
- Metadata: Imaging captures the metadata associated with the data, which supports forensic analysis.
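The integrity property above rests on cryptographic hashing: the acquired image is hashed at acquisition time, and anyone can later re-hash it to prove it is unchanged. The sketch below illustrates the idea under simplifying assumptions (the "device" is an ordinary file standing in for a raw disk, and the function names are invented for this example; real acquisitions use tools such as dd or FTK Imager on block devices and record case metadata as well).

```python
import hashlib

def acquire_image(source_path: str, image_path: str, chunk: int = 1 << 20) -> str:
    """Copy the source bit-for-bit into an image file and return the
    SHA-256 hash computed over the acquired data."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as dst:
        while block := src.read(chunk):
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()

def verify_image(image_path: str, chunk: int = 1 << 20) -> str:
    """Re-hash the image so an analyst can later prove it is unchanged."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()
```

Recording the acquisition hash in the case file, then matching it against `verify_image` output before every analysis session, is what makes the image defensible as evidence: any single flipped bit changes the hash.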
Key Differences
- Purpose: Copying suits everyday use but is unsuitable for forensic investigations requiring data integrity; cloning and imaging are designed for forensic preservation.
- Depth of replication: Cloning and imaging capture the entire storage device, including hidden, unallocated, and deleted data, whereas copying may miss crucial forensic data.
- Data integrity: Imaging and cloning preserve the integrity of the original evidence, a critical requirement for legal and forensic use.
- Forensic soundness: Imaging is considered the best approach in digital forensics due to its comprehensive and non-invasive nature.
- Output: Cloning generally duplicates one hard disk onto another, whereas imaging creates a file, often compressed, that contains a snapshot of the entire hard drive or specific partitions.
Conclusion
Copying, cloning, and imaging all involve duplicating data or storage devices, but with significant differences, especially in digital forensics. For forensic investigations, imaging is the preferred approach because it accurately preserves the state of the evidence for analysis and legal use. It is therefore essential for forensic investigators to understand these differences in order to obtain authentic, uncontaminated digital evidence for their investigations and legal arguments.
Introduction
Digitalization in India has been a transformative force; India ranks second in the world in terms of active internet users. With this adoption of digitalization and technology, the country is becoming a digitally empowered society and knowledge-based economy. However, the number of cybercrimes in the country has also seen a massive spike recently, with sophisticated cyberattacks and manipulative techniques being used by cybercriminals to lure innocent individuals and businesses.
As per recent reports, over 740,000 cybercrime cases were reported to the I4C in the first four months of 2024, raising serious concern about the growing scale of cybercrime in the country. Recently, Prime Minister Modi, in his Mann Ki Baat address, cautioned the public about a rising cyber scam known as ‘digital arrest’, highlighted the seriousness of the issue, and urged people to be aware and alert so that such scams can be countered. The government has been keen to reduce and combat cybercrime by introducing new measures and strengthening the regulatory landscape governing cyberspace in India.
Indian Cyber Crime Coordination Centre
The Indian Cyber Crime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. I4C operates the National Cyber Crime Reporting Portal (https://cybercrime.gov.in) and the 1930 cybercrime helpline. Recently, at the I4C Foundation Day celebration, Union Home Minister Amit Shah launched the Cyber Fraud Mitigation Centre (CFMC), the Samanvay platform (Joint Cybercrime Investigation Facilitation System), the ‘Cyber Commandos’ programme, and an Online Suspect Registry in an effort to combat cybercrime, build cyber resilience and awareness, and strengthen the capabilities of law enforcement agencies.
Regulatory landscape Governing Cyber Crimes
The Information Technology Act, 2000 (IT Act) and the rules made thereunder, the Intermediary Guidelines, the Digital Personal Data Protection Act, 2023, and the Bharatiya Nyaya Sanhita, 2023 are the major pieces of legislation governing cyber law in India.
CyberPeace Recommendations
There has been an alarming uptick in cybercrime in the country, highlighting the need for proactive approaches to counter these emerging threats. The government should prioritise introducing robust policies and technical measures to reduce cybercrime. Law enforcement agencies' capabilities must be strengthened with advanced technologies, especially considering the increasingly sophisticated tactics used by cybercriminals.
Netizens must be aware of the manipulative tactics used by cybercriminals to target them. Social media companies must also implement robust measures on their respective platforms to counter and prevent cybercrime. Coordinated approaches by all relevant authorities, including law enforcement, cybersecurity agencies, and regulatory bodies, along with increased awareness and proactive engagement by netizens, can significantly reduce cyber threats and online criminal activities.
References
- https://www.statista.com/statistics/1499739/india-cyber-crime-cases-reported-to-i4c/#:~:text=Cyber%20crime%20cases%20registered%20by%20I4C%20India%202019%2D2024&text=Over%20740%2C000%20cases%20of%20cyber,related%20to%20online%20financial%20fraud
- https://www.deccanherald.com/india/parliament-panel-to-examine-probe-agencies-efforts-to-tackle-cyber-crime-illegal-immigration-3270314
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2003158