#FactCheck: Iraq Religious Gathering Video Misused as Khamenei Funeral Footage
Executive Summary
A video showing a massive gathering of people dressed in black is widely circulating on social media. The clip is being shared with the claim that it shows crowds mourning at the funeral of Iran’s Supreme Leader Ayatollah Ali Khamenei following his alleged killing in February 2026. However, research by CyberPeace found that the claim is misleading and the video is unrelated to Iran.
Claim:
The viral video shows a large crowd gathered in a public square, with a mosque featuring a golden dome visible in the background. Social media posts claim that the footage captures mourners attending Ayatollah Khamenei’s funeral after his reported death in a joint US-Israel operation.

Fact Check:
To verify the claim, we extracted keyframes from the video and conducted a reverse image search. This led us to a similar clip uploaded on January 15 by an Iraqi broadcaster, Karbala TV, on Facebook. In the footage, a large crowd can be seen carrying a symbolic coffin near a shrine with a golden dome—matching the visuals seen in the viral video. According to the Arabic caption, the video shows a “symbolic funeral” procession held at the Kazimayn Shrine in Baghdad, Iraq. The event is part of an annual religious observance commemorating Imam Musa al-Kazim, the seventh Imam in Shia Islam, who is believed to have died after being poisoned in the 8th century.
Every year, large numbers of Shia devotees gather at the shrine in Baghdad to pay their respects during this commemoration. The visuals seen in the viral clip are consistent with this annual gathering.
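The reverse image search described above rests on a simple idea: each keyframe is reduced to a compact perceptual fingerprint, and two images match when their fingerprints differ in only a few bits. The sketch below illustrates this with a minimal "average hash" in pure Python; the pixel grids and threshold are invented for illustration, and real pipelines use libraries such as OpenCV and ImageHash on full-resolution frames.

```python
# Minimal sketch of the idea behind reverse image search:
# keyframes are reduced to perceptual "average hashes", and two
# images match when their hashes differ in only a few bits.
# Images here are tiny grayscale pixel grids for illustration.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A keyframe from a viral clip and a slightly re-encoded copy of the
# same frame (values are illustrative 4x4 grayscale grids).
viral_frame    = [[ 10,  20, 200, 210],
                  [ 15,  25, 205, 215],
                  [190, 195,  30,  35],
                  [200, 210,  20,  25]]
archived_frame = [[ 12,  22, 198, 208],
                  [ 14,  27, 204, 212],
                  [188, 196,  32,  33],
                  [199, 211,  18,  27]]

distance = hamming_distance(average_hash(viral_frame),
                            average_hash(archived_frame))
print(distance)       # 0 bits differ: the frames match
print(distance <= 2)  # a small threshold tolerates re-encoding noise
```

Because the hash depends only on which pixels are brighter than average, it survives the compression and re-encoding that viral clips typically undergo, which is why the same frame can be traced back to its original upload.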

Conclusion:
The claim that the video shows crowds at Ayatollah Khamenei’s funeral is false. The footage is unrelated and actually depicts a religious gathering in Baghdad, Iraq, held as part of an annual Shia ritual.

Introduction
The rapid growth of high-capability AI systems has raised concerns worldwide about safety, accountability, and governance. In response, California has passed the Transparency in Frontier Artificial Intelligence Act (TFAIA), the first state statute focused on "frontier" (highly capable) AI models. The statute is unique in that, unlike most state statutes, it does not target only the consumer-protection harms caused by AI models; rather, it addresses the catastrophic and systemic risks to society associated with large-scale AI systems. Because California is a global technology leader, the TFAIA is positioned to have a significant impact on both domestic regulation and the evolution of international legal frameworks for AI, and as such has the potential to influence corporate compliance practices and the establishment of global norms for the use of AI.
Understanding the Transparency in Frontier Artificial Intelligence Act
The Transparency in Frontier Artificial Intelligence Act provides a specific regulatory process for companies that create sophisticated AI systems with societal, economic, or national security implications. Covered developers are required to publish an extensive safety and transparency policy that details how they navigate risk throughout the artificial intelligence lifecycle. The act requires developers to notify the government of any significant incidents or failures with their deployed frontier models on a timely basis.
A significant aspect of the TFAIA is that it establishes the concept of "process transparency": it does not explicitly control how AI developers create their models, but rather holds them accountable for their internal safety governance by mandating documented safety frameworks that outline risk assessment, mitigation, and monitoring processes. The act allows developers to protect trade secrets, patents, and national defense concerns through limited opportunities for exemption and/or redaction of their documents, maintaining a balance between openness and safeguarding sensitive information.
Extraterritorial Impact on Global AI Developers
While the Act is a state law, its reach is far broader. Many of the largest AI companies have facilities, research labs, or customers in California, so they must comply with the TFAIA regardless of where they are headquartered. Rather than maintaining duplicate compliance regimes for different regions, many companies are likely to adopt a single, unified compliance model worldwide.
This same pattern has occurred in other regulatory areas, like data protection regulations; where a region's regulations effectively became global compliance benchmarks for that regulatory area. The TFAIA could similarly serve as a global standard for transparency in frontier AI and shape how companies build their governance structure globally even if they don't have explicit regulations in the regions where they operate.
Influence on International AI Regulatory Models
The TFAIA offers a distinctive contribution to global discussions about regulating AI. In contrast to legislation that assigns risk levels according to the type of AI application, the TFAIA specifically targets the most capable, highest-impact models. Other nations may see value in this model of tiered regulation based on capability and apply it to their own AI rules, with the strictest obligations placed on the systems with the greatest potential for harm.
The TFAIA may serve as a guide for international public policy makers by showing how they can reference existing standards and best practices in developing regulations, thus improving interoperability and potentially lessening regulatory barriers to cross-border AI innovations.
Corporate Governance, Compliance Costs, and Competition
From an industry perspective, the Act transforms the way companies govern themselves. Developers are now required to conduct thorough risk assessments and red-teaming exercises, maintain incident response protocols, and ensure board oversight of AI safety and regulation. These processes increase accountability, but they also impose significant compliance costs on everyone involved.
The burden of compliance will fall more lightly on large tech companies than on smaller firms and start-ups, which may allow the incumbents to solidify their dominance over frontier AI development. Smaller and newer developers may be effectively blocked from entering the market unless some form of proportional or scaled compliance mechanism emerges. These developments raise issues of innovation policy and competition law at a global scale that regulators will need to address alongside AI safety concerns.
Transparency, Public Trust, and Accountability
The TFAIA bolsters the ability of citizens, researchers, and journalists to oversee the development and use of artificial intelligence (AI) through its requirement that the safety frameworks of AI systems be publicly disclosed. These disclosures allow the public to critically evaluate corporate claims of responsible AI development. Over time, such scrutiny could increase trust in publicly regulated AI systems and expose businesses with poor risk management processes.
However, how useful this transparency is depends on the quality and comparability of the information being disclosed. Many current disclosures are either too vague or too complex, thus limiting the ability to conduct meaningful oversight. There should be a push for clearer guidance and/or the establishment of standardised disclosure forms for the purposes of public accountability (i.e., citizens) and uniformity between countries.
Conclusion
The Transparency in Frontier Artificial Intelligence Act is a transformative development in AI regulation, addressing the distinct risk profile of this new generation of highly capable AI systems. Because so many leading developers operate in California, the law will have global impact: it will change how technology companies operate, shape regulatory frameworks, and inform the standards used to govern frontier AI. The Act regulates through transparency about governance processes rather than relying solely on technical controls on the systems themselves. As other jurisdictions confront the same challenges posed by this generation of AI, California's approach will likely serve as a template for how future AI laws are written and help develop a more unified and responsible international AI regulatory framework.
References
- https://www.whitecase.com/insight-alert/california-enacts-landmark-ai-transparency-law-transparency-frontier-artificial
- https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/
- https://www.mofo.com/resources/insights/251001-california-enacts-ai-safety-transparency-regulation-tfaia-sb-53
- https://www.dlapiper.com/en/insights/publications/2025/10/california-law-mandates-increased-developer-transparency-for-large-ai-models

Executive Summary:
An image of the April 8 solar eclipse, generated by AI rather than photographed, has been spreading on social media. Despite claims of authenticity, CyberPeace’s analysis showed that the image was created using AI image-generation algorithms. The total solar eclipse on April 8 was observable only from places on the North American continent that lay in the path of totality, with partial visibility elsewhere. NASA live-streamed the eclipse for people outside the path of totality. The spread of false information about rare celestial occurrences underscores the need to rely on trustworthy sources such as NASA for accurate information.
Claims:
An image circulating on social networks purports to be a real photograph of the solar eclipse of April 8.

Fact Check:
To verify the claim, we first ran a keyword search to check whether NASA had posted any similar image, or reported any celestial event that might explain the viral photo, on its official social media accounts or website. The April 8 total eclipse was visible from parts of North America located along the eclipse path; the sky above Mazatlan, Mexico, was the first to witness it. A partial eclipse was visible to those outside the path of totality.
Next, we ran the image through Hive Moderation’s AI image detection tool, which found it to be 99.2% likely to be AI-generated.

Following that, we applied another AI image detection tool, Isitai, which rated the image 96.16% likely to be AI-generated.

With the help of AI detection tools, we came to the conclusion that the claims made by different social media users are fake and misleading. The viral image is AI-generated and not a real photograph.
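The two detection tools above each return a probability that the image is AI-generated, and the verdict rests on both agreeing strongly. A minimal sketch of how such scores can be combined is below; the aggregation function and threshold are illustrative assumptions, not the tools' actual APIs, though the two scores are the ones reported in this fact check.

```python
# Illustrative sketch (not the actual tools' APIs): combining scores
# from several AI-image detectors into a single verdict.

def combined_verdict(scores, threshold=0.9):
    """Average detector confidences and flag the image if the mean
    probability of being AI-generated exceeds the threshold."""
    mean = sum(scores.values()) / len(scores)
    return mean, mean > threshold

# Scores reported by the two detectors used in this fact check.
scores = {
    "hive_moderation": 0.992,   # 99.2% AI-generated
    "isitai": 0.9616,           # 96.16% AI-generated
}

mean, is_ai = combined_verdict(scores)
print(round(mean, 4))  # 0.9768
print(is_ai)           # True
```

Averaging independent detectors reduces the chance that a single tool's false positive drives the conclusion, which is why fact-checkers typically cross-check with more than one detector before calling an image fake.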
Conclusion:
The viral image is an AI-generated picture circulated on the internet as a real photograph of the April 8 eclipse. Despite claims to the contrary, analysis showed that the photo was created using an artificial intelligence algorithm. The total eclipse was not visible everywhere in North America, but only along a certain path, with partial visibility elsewhere. Using AI detection tools, we were able to establish that the image is fake. When discussing rare celestial phenomena, it is essential to rely on information from trusted sources such as NASA for accuracy.
- Claim: A viral image of a solar eclipse claiming to be a real photograph of the celestial event on April 08
- Claimed on: X, Facebook, Instagram, website
- Fact Check: Fake & Misleading

CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart, is an image or distorted-text challenge that users must identify or interpret to prove they are human. reCAPTCHA, launched in 2007 and later acquired by Google as a free service, became one of the most commonly used technologies for telling computers apart from humans. CAPTCHA protects websites from spam and abuse by using tests considered easy for humans but meant to be difficult for bots to solve.
But this has now changed. As AI has grown more sophisticated, it can solve CAPTCHA tests with greater accuracy than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA remains effective as a detection tool in the face of AI's advances.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
Smarter Bots and Their Rise
AI advancements like machine learning, deep learning, and neural networks have developed at a very fast pace in the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA types such as text and images with almost human-like ability. Optical Character Recognition (OCR) is one example: earlier versions of CAPTCHA relied on distorted text, which modern OCR models can now recognise and decipher, rendering that defence useless. Trained on huge datasets, AI can also perform image recognition, identifying the specific objects a CAPTCHA asks about. Bots can even mimic human habits and patterns through behavioural analysis, thereby fooling the CAPTCHA.
To defeat CAPTCHA, attackers have been known to use Adversarial Machine Learning, which refers to AI models trained specifically to defeat CAPTCHA. They collect CAPTCHA datasets and answers and create an AI that can predict correct answers. The implications that CAPTCHA failures have on platforms can range from fraud to spam to even cybersecurity breaches or cyberattacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn't just about tools; it's about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/