# FactCheck: Viral AI video claims Iran has destroyed Israel in an airstrike
Executive Summary:
A video circulating on social media claims to show the aftermath of Iran's missile strikes on Israel, depicting destroyed buildings, damaged infrastructure, and panic among civilians. After digital verification, visual inspection, and frame-by-frame analysis, we have determined that the video is fake: it consists of AI-generated clips and is not related to any real incident.

Claim:
The viral video claims that a recent Iranian military strike destroyed parts of Israel, following an initial missile attack. The footage is presented as current and depicts significant destruction of buildings and widespread chaos in the streets.

FACT CHECK:
We analysed the viral video to determine whether it was AI-generated. We first broke the video into individual still frames; on close examination, several frames showed oddly shaped visual features, abnormal body proportions, and flickering movements that do not occur in real footage. We then ran reverse image searches on several still frames to check whether they had appeared before. The results revealed that several clips in the video had circulated previously, in separate and unrelated contexts, indicating that they are neither recent nor original.
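Frame-level analysis of the kind described above can be sketched in a few lines. This is a toy illustration only, not the tool used in this fact check: frames are modelled as flat lists of pixel intensities, and an abrupt frame-to-frame change is flagged as potential flicker. A real workflow would decode the video with a library such as OpenCV before applying such checks.

```python
# Illustrative sketch of frame-by-frame flicker analysis. Frames are
# modelled as flat lists of pixel intensities; real video would first
# be decoded into frames by a library such as OpenCV.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel change between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def flicker_scores(frames):
    """Frame-to-frame change scores; sudden spikes suggest the temporal
    inconsistencies common in AI-generated clips."""
    return [mean_abs_diff(frames[i], frames[i + 1])
            for i in range(len(frames) - 1)]

def flag_flicker(frames, threshold=50.0):
    """Indices of transitions whose change score exceeds the threshold."""
    return [i for i, s in enumerate(flicker_scores(frames)) if s > threshold]

# Toy example: three nearly identical frames, then an abrupt jump.
frames = [
    [100, 100, 100, 100],
    [101, 99, 100, 102],
    [102, 100, 101, 101],
    [200, 10, 250, 5],    # abrupt, unnatural change
]
print(flag_flicker(frames))  # transition index 2 is flagged
```

Real detectors combine many such temporal and spatial cues, but the principle is the same: natural footage changes smoothly between frames, while synthetic clips often do not.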

While examining the Instagram profile, we noticed that the account frequently shares visually dramatic AI content that appears digitally created. Many earlier posts from the same page include scenes that are unrealistic, such as wrecked aircraft in desolate areas or buildings collapsing in unnatural ways. In the current video, for instance, the fighter jets shown have multiple wings, which is not technically or aerodynamically possible in real life. The profile’s bio, which reads "Resistance of Artificial Intelligence," suggests that the page intentionally focuses on sharing AI-generated or fictional content.

We also ran the viral post through Tenorshare.AI for deepfake detection, and the result indicated a 94% likelihood of AI generation. Taken together, these findings establish that the video is synthetic and unrelated to any event in Israel, debunking a false narrative propagated on social media.

Conclusion:
Our research found that the video is fake: it contains AI-generated imagery and is not related to any real missile strike or destruction in Israel. The content appears designed to fuel panic and misinformation in a context of already heightened geopolitical tension. We urge viewers not to share unverified information and to rely on trusted sources. During sensitive international developments, the dissemination of fake imagery can spread fear, confusion, and misinformation on a global scale.
- Claim: Real Footage of Iran’s Missile Strikes on Israel
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first glance, this looks like a huge win for India’s digital generation. Students, professionals, and entrepreneurs can now tap into some of the world’s most powerful AI tools without paying a rupee; it feels like a digital revolution unfolding in real time. Yet beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access comes with strings attached: it affects how India handles data privacy, the fairness of competition, and the pace of homegrown AI innovation.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples include Perplexity, ChatGPT Go, and Gemini, which are offering free subscriptions to Indian users.
- Gathering local data: Every prompt, question, and language pattern helps these models learn from larger datasets and improve their offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product.” The same applies to these AI platforms: they monetise user data by analysing chats and behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific laws governing how such data is stored, processed, or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives will be difficult, even once the free global tools move behind paywalls. This mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers, democratising knowledge, and making advanced technology available to everyone, from students and professionals to small businesses. They are changing how people learn, think, and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection laws in action, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder, especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet this growth story may become uneven if global players dominate early.
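As a quick sanity check on the figures cited, the implied compound annual growth rate over the nine-year projection window can be computed directly:

```python
# Implied compound annual growth rate (CAGR) for the cited projection:
# USD 1.30 billion in 2024 growing to USD 5.40 billion by 2033.
start_value = 1.30   # USD billion, 2024
end_value = 5.40     # USD billion, 2033
years = 2033 - 2024  # 9-year window

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17% per year
```

A sustained growth rate of this order underlines why global players are investing so aggressively in capturing the market early.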
Domestic AI startups face multiple hurdles — limited funding, high compute costs, and difficulty in accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies such as offering free or heavily subsidised access could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before using their data for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers. Sarvam AI, for instance, is the Indian company chosen by the government to build the country's first homegrown large language model (LLM) under the IndiaAI Mission.
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should build national data centres to train indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html#
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/#

Introduction
Deepfake technology, a portmanteau of "deep learning" and "fake," uses advanced artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can lend credibility to false information, there are concerns about its misuse, including identity theft and the spread of disinformation. Cybercriminals leverage AI tools and technologies for malicious activities and cyber fraud; through the misuse of advanced technologies such as AI, deepfakes, and voice cloning, new cyber threats have emerged.
India Among the Top Destinations for Deepfake Attacks
According to the 2023 Identity Fraud Report from Sumsub, a UK-headquartered digital identity verification company, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud scene, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the misuse of deepfake technology, a trend the report expects to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud poses a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and are now ingrained in the fabric of our digital civilisation. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfakes examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By mimicking and learning from gestures, speech patterns, and facial expressions, these algorithms can extract valuable information from the data. By using sophisticated approaches, generative models create material that mixes seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry. Sophisticated detection techniques are becoming more and more necessary to separate real content from modified content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks participate in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This continuous create-and-evaluate loop steadily improves the output over time, as the discriminator adjusts to become more perceptive and the generator adapts to produce ever more convincing content.
Effect on Community
The extensive use of deepfake technology has serious ramifications across several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote ethical use, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they can spread instability and make it difficult for the public to discern the true nature of politics. In entertainment, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. And as it becomes harder to tell fake content from authentic content, it becomes simpler for attackers to deceive people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Her deepfake adopts a different strategy from earlier examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance unchanged but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The video shows her promoting a product and discussing her annual income, highlighting the worrying evolution of deepfake technology and its possible effects on prominent personalities.
Actions Considered by Authorities
A PIL was filed requesting that the Delhi High Court block access to websites that produce deepfakes. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misusing deepfake and AI technology. He also proposed that websites be required to label AI-generated content as such and be prevented from producing unlawful content. A division bench highlighted the complexity of the problem and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content, and presided over a meeting with social media companies to discuss the problem. "We will begin drafting regulation immediately, and soon we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools such as reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking claims against reputable media sources. Important steps to improve resilience against deepfake threats include putting strong security policies in place, integrating advanced deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to the constantly changing landscape, we can manage the challenges presented by deepfake technology effectively and mindfully.
Conclusion
Deepfake technology, powered by advanced artificial intelligence, produces extraordinarily lifelike computer-generated content, raising both creative and ethical questions. Its misuse presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by cases in India, including the recent deepfake video involving Priyanka Chopra. Countering this danger requires critical thinking skills, detection strategies such as analysing audio quality and facial expressions, and awareness of current trends. A thorough strategy that combines fact-checking, preventative tactics, awareness-raising, strong security policies, detection technologies, ethical AI development, and open cooperation is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/
Digitisation in Agriculture
The traditional way of doing agriculture has undergone massive digitization in recent years, whereby several agricultural processes have been linked to the Internet. This globally prevalent transformation, driven by smart technology, encompasses the use of sensors, IoT devices, and data analytics to optimize and automate labour-intensive farming practices. Smart farmers in the country and abroad now leverage real-time data to monitor soil conditions, weather patterns, and crop health, enabling precise resource management and improved yields. The integration of smart technology in agriculture not only enhances productivity but also promotes sustainable practices by reducing waste and conserving resources. As a result, the agricultural sector is becoming more efficient, resilient, and capable of meeting the growing global demand for food.
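As a toy illustration of how such real-time sensor data can drive precise resource management, a controller might apply a rule of this kind to decide irrigation time from a soil moisture reading and a rain forecast. The thresholds and names below are hypothetical, not drawn from any specific platform:

```python
# Illustrative sensor-driven irrigation rule (hypothetical thresholds):
# skip watering when the soil is wet enough or rain is expected, and
# otherwise water in proportion to the moisture deficit, with a cap.
def irrigation_minutes(soil_moisture_pct, rain_forecast_mm):
    """Decide watering time from a soil moisture reading and forecast."""
    if soil_moisture_pct >= 35 or rain_forecast_mm >= 5:
        return 0                      # soil wet enough, or rain expected
    deficit = 35 - soil_moisture_pct  # percentage points below target
    return min(60, deficit * 4)       # cap a single watering cycle

print(irrigation_minutes(20, 0))   # dry, no rain: full 60-minute cycle
print(irrigation_minutes(30, 0))   # slightly dry: 20 minutes
print(irrigation_minutes(20, 10))  # rain coming: skip watering
```

Real precision-agriculture platforms fuse many more inputs (weather models, crop stage, soil type), but the efficiency gain comes from exactly this kind of automated, data-conditioned decision replacing fixed schedules.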
Digitisation of Food Supply Chains
There has also been an increase in the digitisation of food supply chains across the globe, since it enables both suppliers and consumers to track food processing from farm to table and helps ensure the authenticity of food products. The latest generation of agricultural robots is being tested to minimise human intervention. AI-run processes are expected to mitigate labour shortages, improve warehousing and storage, and make transportation more efficient by running continuous evaluations and adjusting conditions in real time while increasing yield. The company Muddy Machines is currently trialling an autonomous asparagus-harvesting robot called Sprout that not only addresses labour shortages but also selectively harvests green asparagus, which traditionally requires careful picking. However, Chris Chavasse, co-founder of Muddy Machines, highlights that malicious actors could hack into the robot's servers and prevent it from operating, or drive it into a ditch or a hedge, thereby impeding core crop activities like seeding and harvesting. Hacking agricultural machinery can also damage a farmer’s produce and, in turn, profitability for the season.
Case Study: Muddy Machines and Cybersecurity Risks
A cyber attack on digitised agricultural processes has a cascading impact on online food supply chains. The risks are non-exhaustive and spill over into poor protection of cargo in transit, increased manufacturing of counterfeit products, manipulation of data, poor warehousing and product-specific fraud, amongst others. Suppliers are also affected, for instance when they have delivered food products but fail to receive payment. These cyber threats include malware (primarily ransomware), which accounts for 38% of attacks, Internet of Things (IoT) attacks at 29%, Distributed Denial of Service (DDoS) attacks, SQL injections, and phishing attacks.
Prominent Cyber Attacks and Their Impacts
Ransomware attacks are the most popular form of cyber threats to food supply chains and may include malicious contaminations, deliberate damage and destruction of tangible assets (like infrastructure) or intangible assets (like reputation and brand). In 2017, NotPetya malware disrupted the world’s largest logistics giant Maersk and destroyed all end-user devices in more than 60 countries. Interestingly, NotPetya was also linked to the malfunction of freezers connected to control systems. The attack led to these control systems being compromised, resulting in freezer failures and potential spoilage of food, highlighting the vulnerability of industrial control systems to cyber threats.
Further Case Studies
NotPetya also impacted Mondelez, the maker of Oreos, disrupting its email systems, file access, and logistics for weeks. Mondelez’s insurance claim was denied because NotPetya was characterised as a “war-like” action, falling outside the purview of the insurance coverage. In April 2021, over the Easter weekend, Bakker Logistiek, a logistics company based in the Netherlands that offers air-conditioned warehousing and food transportation for Dutch supermarkets, experienced a ransomware attack. This incident disrupted its supply chain for several days, resulting in empty shelves at Albert Heijn supermarkets, particularly for products such as packed and grated cheese. Despite the severity of the attack, the company restored its operations within a week by utilising backups. In the same year, JBS, one of the world’s biggest meat processing companies, had to pay USD 11 million in ransom via Bitcoin to resolve a cyber attack in which its computer networks were hacked, temporarily shutting down operations and endangering consumer data. The disruption threatened food supplies and risked higher food prices for consumers. Additional cascading impacts include reduced food security and hindrances in processing payments at retail stores.
Credible Threat Agents and Their Targets
Any cyber-attack is usually carried out by credible threat agents that can be classified as either internal or external threat agents. Internal threat agents may include contractors, visitors to business sites, former/current employees, and individuals who work for suppliers. External threat agents may include activists, cyber-criminals, terror cells etc. These threat agents target large organisations owing to their larger ransom-paying capacity, but may also target small companies due to their vulnerability and low experience, especially when such companies are migrating from analogous methods to digitised processes.
The Federal Bureau of Investigation warns that food and agricultural systems are most vulnerable to cybersecurity threats during critical planting and harvesting seasons. It noted an increase in cyber attacks against six agricultural cooperatives in 2021, with core ancillary functions such as food supply and distribution being impacted. As a result, cyber attacks may lead to mass shortages of food, not only for human consumption but also for animals.
Policy Recommendations
To safeguard digital food supply chains, food defence emerges as one of the top countermeasures to prevent and mitigate the effects of intentional incidents and threats to the food chain. While food defence vulnerability assessments earlier focused on product adulteration and food fraud, vulnerability assessments of agricultural technology are now just as relevant.
Food supply organisations must prioritise regular backups of data using air-gapped and password-protected offline copies, and ensure critical data copies cannot be modified or deleted from the main system. Blockchain-based food supply chain solutions may be deployed here: they are resilient to tampering and allow suppliers and even consumers to track produce. Companies like Ripe.io, Walmart Global Tech, Nestle and Wholechain deploy blockchain for food supply management since it provides process transparency, improves trust in transactions, enables traceable and tamper-resistant records, and allows accessibility and visibility of data provenance. Extensive recovery plans, with multiple copies of essential data and servers in secure, physically separated locations such as hard drives, storage devices, the cloud or distributed ledgers, should be adopted, alongside operational plans for critical functions in case of system outages. For core processes that are not labour-intensive, manual operation methods may be retained to reduce digital dependence. Network segmentation and timely updates or patches for operating systems, software, and firmware are additional steps to secure smart agricultural technologies.
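The "traceable and tamper-resistant records" property behind these blockchain deployments can be illustrated with a minimal hash chain. This is a sketch of the underlying mechanism only, far simpler than the production systems named above: each supply chain record is bound to the hash of the previous one, so editing any stored record breaks verification.

```python
# Minimal tamper-evident hash chain for supply chain records
# (illustrative only; production blockchain platforms add consensus,
# signatures, and distribution on top of this basic idea).
import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a record with the hash of the previous block."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    chain.append(make_block(record, prev_hash))

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "genesis"
    for block in chain:
        expected = make_block(block["record"], prev_hash)["hash"]
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append(chain, {"stage": "farm", "batch": "A17", "temp_c": 4})
append(chain, {"stage": "warehouse", "batch": "A17", "temp_c": 5})
append(chain, {"stage": "retail", "batch": "A17", "temp_c": 6})
print(verify(chain))               # True: records are consistent

chain[1]["record"]["temp_c"] = 25  # tamper with a stored record
print(verify(chain))               # False: tampering is detected
```

Because every block's hash depends on all earlier records, a consumer or auditor can detect after-the-fact manipulation of any stage from farm to table by re-running this verification.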
References
- Muddy Machines website, Accessed 26 July 2024. https://www.muddymachines.com/
- “Meat giant JBS pays $11m in ransom to resolve cyber-attack”, BBC, 10 June 2021. https://www.bbc.com/news/business-57423008
- Marshall, Claire & Prior, Malcolm, “Cyber security: Global food supply chain at risk from malicious hackers.”, BBC, 20 May 2022. https://www.bbc.com/news/science-environment-61336659
- “Ransomware Attacks on Agricultural Cooperatives Potentially Timed to Critical Seasons”, Private Industry Notification, Federal Bureau of Investigation, 20 April 2022. https://www.ic3.gov/Media/News/2022/220420-2.pdf
- Manning, Louise & Kowalska, Aleksandra. (2023). “The threat of ransomware in the food supply chain: a challenge for food defence”, Trends in Organized Crime. https://doi.org/10.1007/s12117-023-09516-y
- “NotPetya: the cyberattack that shook the world”, Economic Times, 5 March 2022. https://economictimes.indiatimes.com/tech/newsletters/ettech-unwrapped/notpetya-the-cyberattack-that-shook-the-world/articleshow/89997076.cms?from=mdr
- Abrams, Lawrence, “Dutch supermarkets run out of cheese after ransomware attack.”, Bleeping Computer, 12 April 2021. https://www.bleepingcomputer.com/news/security/dutch-supermarkets-run-out-of-cheese-after-ransomware-attack/
- Pandey, Shipra; Gunasekaran, Angappa; Kumar Singh, Rajesh & Kaushik, Anjali, “Cyber security risks in globalised supply chains: conceptual framework”, Journal of Global Operations and Strategic Sourcing, January 2020. https://www.researchgate.net/profile/Shipra-Pandey/publication/338668641_Cyber_security_risks_in_globalized_supply_chains_conceptual_framework/links/5e2678ae92851c89c9b5ac66/Cyber-security-risks-in-globalized-supply-chains-conceptual-framework.pdf
- Daley, Sam, “Blockchain for Food: 10 examples to know”, Builtin, 22 March 2023. https://builtin.com/blockchain/food-safety-supply-chain