#Factcheck-False Claims of Houthi Attack on Israel’s Ashkelon Power Plant
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring an image inaccurately asserting that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. This situation underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search reveals that the video circulating online does not refer to an attack on the Ashkelon power plant in Israel. Instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.

This incident highlights the risks associated with misinformation during sensitive geopolitical events. Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of being cautious when sharing unverified media. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: The video shows massive fire at Israel's Ashkelon power plant
- Claimed On:Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Related Blogs

Executive Summary:
A post on X (formerly Twitter) featuring an image that has been widely shared with misleading captions, claiming to show men riding an elephant next to a tiger in Bihar, India. This post has sparked both fascination and skepticism on social media. However, our investigation has revealed that the image is misleading. It is not a recent photograph; rather, it is a photo of an incident from 2011. Always verify claims before sharing.

Claims:
An image purporting to depict men riding an elephant next to a tiger in Bihar has gone viral, implying that this astonishing event truly took place.

Fact Check:
After investigation of the viral image using Reverse Image Search shows that it comes from an older video. The footage shows a tiger that was shot after it became a man-eater by forest guard. The tiger killed six people and caused panic in local villages in the Ramnagar division of Uttarakhand in January, 2011.

Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
Conclusion:
The claim that men rode an elephant alongside a tiger in Bihar is false. The photo presented as recent actually originates from the past and does not depict a current event. Social media users should exercise caution and verify sensational claims before sharing them.
- Claim: The video shows people casually interacting with a tiger in Bihar
- Claimed On:Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Artificial Intelligence (AI) is fast transforming our future in the digital world, transforming healthcare, finance, education, and cybersecurity. But alongside this technology, bad actors are also weaponising it. More and more, state-sponsored cyber actors are misusing AI tools such as ChatGPT and other generative models to automate disinformation, enable cyberattacks, and speed up social engineering operations. This write-up explores why and how AI, in the form of large language models (LLMs), is being exploited in cyber operations associated with adversarial states, and the necessity for international vigilance, regulation, and AI safety guidelines.
The Shift: AI as a Cyber Weapon
State-sponsored threat actors are misusing tools such as ChatGPT to turbocharge their cyber arsenal.
- Phishing Campaigns using AI- Generative AI allows for highly convincing and grammatically correct phishing emails. Unlike the shoddily written scams of yesteryears, these AI-based messages are tailored according to the victim's location, language, and professional background, increasing the attack success rate considerably. Example: It has recently been reported by OpenAI and Microsoft that Russian and North Korean APTs have employed LLMs to create customised phishing baits and malware obfuscation notes.
- Malware Obfuscation and Script Generation- Big Language Models (LLMs) such as ChatGPT may be used by cyber attackers to help write, debug, and camouflage malicious scripts. While the majority of AI instruments contain safety mechanisms to guard against abuse, threat actors often exploit "jailbreaking" to evade these protections. Once such constraints are lifted, the model can be utilised to develop polymorphic malware that alters its code composition to avoid detection. It can also be used to obfuscate PowerShell or Python scripts to render them difficult for conventional antivirus software to identify. Also, LLMs have been employed to propose techniques for backdoor installation, additional facilitating stealthy access to hijacked systems.
- Disinformation and Narrative Manipulation
State-sponsored cyber actors are increasingly employing AI to scale up and automate disinformation operations, especially on election, protest, and geopolitical dispute days. With LLMs' assistance, these actors can create massive amounts of ersatz news stories, deepfake interview transcripts, imitation social media posts, and bogus public remarks on online forums and petitions. The localisation of content makes this strategy especially perilous, as messages are written with cultural and linguistic specificity, making them credible and more difficult to detect. The ultimate aim is to seed societal unrest, manipulate public sentiments, and erode faith in democratic institutions.
Disrupting Malicious Uses of AI – OpenAI Report (June 2025)
OpenAI released a comprehensive threat intelligence report called "Disrupting Malicious Uses of AI" and the “Staying ahead of threat actors in the age of AI”, which outlined how state-affiliated actors had been testing and misusing its language models for malicious intent. The report named few advanced persistent threat (APT) groups, each attributed to particular nation-states. OpenAI highlighted that the threat actors used the models mostly for enhancing linguistic quality, generating social engineering content, and expanding operations. Significantly, the report mentioned that the tools were not utilized to produce malware, but rather to support preparatory and communicative phases of larger cyber operations.
AI Jailbreaking: Dodging Safety Measures
One of the largest worries is how malicious users can "jailbreak" AI models, misleading them into generating banned content using adversarial input. Some methods employed are:
- Roleplay: Simulating the AI being a professional criminal advisor
- Obfuscation: Concealing requests with code or jargon
- Language Switching: Proposing sensitive inquiries in less frequently moderated languages
- Prompt Injection: Lacing dangerous requests within innocent-appearing questions
These methods have enabled attackers to bypass moderation tools, transforming otherwise moral tools into cybercrime instruments.
Conclusion
As AI generations evolve and become more accessible, its application by state-sponsored cyber actors is unprecedentedly threatening global cybersecurity. The distinction between nation-state intelligence collection and cybercrime is eroding, with AI serving as a multiplier of adversarial campaigns. AI tools such as ChatGPT, which were created for benevolent purposes, can be targeted to multiply phishing, propaganda, and social engineering attacks. The cross-border governance, ethical development practices, and cyber hygiene practices need to be encouraged. AI needs to be shaped not only by innovation but by responsibility.
References
- https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
- https://www.bankinfosecurity.com/openais-chatgpt-hit-nation-state-hackers-a-28640
- https://oecd.ai/en/incidents/2025-06-13-b5e9
- https://www.microsoft.com/en-us/security/security-insider/meet-the-experts/emerging-AI-tactics-in-use-by-threat-actors
- https://www.wired.com/story/youre-not-ready-for-ai-hacker-agents/
- https://www.cert-in.org.in/PDF/Digital_Threat_Report_2024.pdf
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf
.webp)
Introduction
Social media has emerged as a leading source of communication and information; its relevance cannot be ignored during natural disasters since it is relied upon by governments and disaster relief organisations as a tool for disseminating aid and relief-related resources and communications instantly. During disaster times, social media has emerged as a primary source for affected populations to access information on relief resources; community forums offering aid resources and official government channels for government aid have enabled efficient and timely administration of relief initiatives.
However, given the nature of social media, misinformation risks during natural disasters has also emerged as a primary concern that severely hampers aid administration during natural disasters. The disaster-disinformation network offers some sensationalised influential campaigns against communities at their most vulnerable. Victims who seek reliable resources during natural calamities often reach out to inhospitable campaigns and may experience delayed or lack of access to necessary healthcare, significantly impacting their recovery and survival. This delay can lead to worsening medical conditions and an increased death toll among those affected by the disaster. Victims may lack clear information on the appropriate agencies to seek assistance from, causing confusion and delays in receiving help.
Misinformation Threat Landscape during Natural Disaster
During the 2018 floods in Kerala, it was noted that a fake video on water leakage from the Mullaperyar Dam created panic among the citizens and negatively impacted the rescue operations. Similarly, in 2017, reports emerged claiming that Hurricane Irma had caused sharks to be displaced onto a Florida highway. Similar stories, accompanied by the same image, resurfaced following Hurricanes Harvey and Florence. The disaster-affected nation may face international criticism and fail to receive necessary support due to its perceived inability to manage the crisis effectively. This lack of confidence from the global community can further exacerbate the challenges faced by the nation, leaving it more vulnerable and isolated in its time of need.
The spread of misinformation through social media severely hinders the administration of aid and relief operations during natural disasters since it hinders first responders' efforts to counteract and reduce the spread of misinformation, rumours, and false information and declines public trust in government, media, and non-governmental organisations (NGOs), who are often the first point of contact for both victims and officials due to their familiarity with the region and the community. In Moldova, it was noted that foreign influence has exploited the ongoing drought to create divisions between the semi-autonomous regions of Transnistria and Gagauzia and the central government in Chisinau. News coverage critical of the government leverages economic and energy insecurities to incite civil unrest in this already unstable region. Additionally, First responders may struggle to locate victims and assist them to safety, complicating rescue operations. The inability to efficiently find and evacuate those in need can result in prolonged exposure to dangerous conditions and a higher risk of injury or death.
Further, international aid from other countries could be impeded, affecting the overall relief effort. Without timely and coordinated support from the global community, the disaster response may be insufficient, leaving many needs unmet. Further, misinformation also impedes military, reducing the effectiveness of rescue and relief operations. Military assistance often plays a crucial role in disaster response, and any delays can hinder efforts to provide immediate and large-scale aid.
Misinformation also creates problems of allocation of relief resources to unaffected areas which resultantly impacts aid processes for regions in actual need. Following the April 2015 earthquake in Nepal, a Facebook post claimed that 300 houses in Dhading needed aid. Shared over 1,000 times, it reached around 350,000 people within 48 hours. The originator aimed to seek help for Ward #4’s villagers via social media. Given the average Facebook user has 350 contacts, the message was widely viewed. However, the need had already been reported on quakemap.org, a crisis-mapping database managed by Kathmandu Living Labs, a week earlier. Helping Hands, a humanitarian group was notified on May 7, and by May 11, Ward #4 received essential food and shelter. The re-sharing and sensationalisation of outdated information could have wasted relief efforts since critical resources would have been redirected to a region that had already been secured.
Policy Recommendations
Perhaps the most important step in combating misinformation during natural disasters is the increasing public education and the rapid, widespread dissemination of early warnings. This was best witnessed in the November 1970 tropical cyclone in southeastern Bangladesh, combined with a high tide, struck southeastern Bangladesh, leaving more than 300,000 people dead and 1.3 million homeless. In May 1985, when a comparable cyclone and storm surge hit the same area, local dissemination of disaster warnings was much improved and the people were better prepared to respond to them. The loss of life, while still high (at about 10,000), the numbers were about 3% of that in 1970. On a similar note, when a devastating cyclone struck the same area of Bangladesh in May 1994, fewer than 1,000 people died. In India, the 1977 cyclone in Andra Pradesh killed 10,000 people, but a similar storm in the same area 13 years later killed only 910. The dramatic difference in mortalities was owed to a new early-warning system connected with radio stations to alert people in low-lying areas.
Additionally, location-based filtering for monitoring social media during disasters is considered as another best practice to curb misinformation. However, agencies should be aware that this method may miss local information from devices without geolocation enabled. A 2012 Georgia Tech study found that less than 1.4 percent of Twitter content is geolocated. Additionally, a study by Humanity Road and Arizona State University on Hurricane Sandy data indicated a significant decline in geolocation data during weather events.
Alternatively, Publish frequent updates to promote transparency and control the message. In emergency management and disaster recovery, digital volunteers—trusted agents who provide online support—can assist overwhelmed on-site personnel by managing the vast volume of social media data. Trained digital volunteers help direct affected individuals to critical resources and disseminate reliable information.
Enhancing the quality of communication requires double-verifying information to eliminate ambiguity and reduce the impact of misinformation, rumors, and false information must also be emphasised. This approach helps prevent alert fatigue and "cry wolf" scenarios by ensuring that only accurate, relevant information is disseminated. Prioritizing ground truth over assumptions and swiftly releasing verified information or acknowledging the situation can bolster an agency's credibility. This credibility allows the agency to collaborate effectively with truth amplifiers. Prebunking and Debunking methods are also effective way to counter misinformation and build cognitive defenses to recognise red flags. Additionally, evaluating the relevance of various social media information is crucial for maintaining clear and effective communication.
References
- https://www.nature.com/articles/s41598-023-40399-9#:~:text=Moreover%2C%20misinformation%20can%20create%20unnecessary,impacting%20the%20rescue%20operations29.
- https://www.redcross.ca/blog/2023/5/why-misinformation-is-dangerous-especially-during-disasters
- https://www.soas.ac.uk/about/blog/disinformation-during-natural-disasters-emerging-vulnerability
- https://www.dhs.gov/sites/default/files/publications/SMWG_Countering-False-Info-Social-M dia-Disasters-Emergencies_Mar2018-508.pdf