# Fact Check: Pakistan’s Airstrike Claim Uses Video Game Footage
Executive Summary:
A widely circulated claim on social media, including a post from the official X account of Pakistan, alleges that the Pakistan Air Force (PAF) carried out an airstrike on India, supported by a viral video. However, according to our research, the video used in these posts is actually footage from the video game Arma-3 and has no connection to any real-world military operation. The use of such misleading content contributes to the spread of false narratives about a conflict between India and Pakistan and has the potential to create unnecessary fear and confusion among the public.

Claim:
Viral social media posts, including one from the official Government of Pakistan X handle, claim that the PAF launched a successful airstrike against Indian military targets. The footage accompanying the claim shows jets firing missiles and explosions on the ground. The video is presented as recent, factual evidence of heightened military tensions.


Fact Check:
As per our research using reverse image search, the videos circulating online that claim to show Pakistan launching an attack on India under the name 'Operation Sindoor' are misleading. There is no credible evidence or reliable reporting to support the existence of any such operation. The Press Information Bureau (PIB) has also verified that the video being shared is false and misleading. During our research, we also came across footage from the video game Arma-3 on YouTube, which appears to have been repurposed to create the illusion of a real military conflict. This strongly indicates that fictional content is being used to propagate a false narrative. The likely intention behind this misinformation is to spread fear and confusion by portraying a conflict that never actually took place.
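For readers who want to reproduce this kind of check, the sketch below shows one common first step: extracting a few still frames from a viral clip so they can be uploaded to a reverse image search engine. It assumes OpenCV (`opencv-python`) is installed, and the file name `viral_clip.mp4` is only a placeholder; this is an illustrative sketch, not the exact tooling used in our investigation.

```python
# Minimal sketch: pull evenly spaced frames out of a viral video so each frame
# can be run through a reverse image search engine.
# Assumes `pip install opencv-python`; the input file name is hypothetical.
import cv2

def extract_keyframes(video_path: str, num_frames: int = 5, out_prefix: str = "frame"):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        raise ValueError("Could not read frame count; is the path correct?")
    saved = []
    for i in range(num_frames):
        # Jump to an evenly spaced position in the clip.
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = cap.read()
        if ok:
            name = f"{out_prefix}_{i}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))
```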


Conclusion:
The widely shared misinformation videos are being used, including by official Pakistani accounts, to target India with false information. There is no reliable evidence to support the claim, and the videos are misleading and unrelated to any real event. Such false information must be countered quickly because it has the potential to cause needless panic. According to authorities and fact-checking groups, no such operation is occurring.
- Claim: Viral social media posts claim PAF attack on India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of decision-making in sectors such as governance, medicine, education, security, and the economy worldwide. At the same time, there are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose, as well as doubts about the technology’s capacity to provide effective solutions, especially as threats such as misinformation, cybercrime, and deepfakes become more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Balancing unbridled access for innovators with the prevention of harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, which makes access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only people in the country where they are created but also people in other countries who do business or communicate with them.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
The misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate lies, and create false narratives. Cybercriminals are using AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social engineering tactics. For extremist groups, AI can improve the quality of their propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To counter these dangers, a global AI system must be built on the principle of safety-by-design, meaning moral safeguards are incorporated from the development phase rather than bolted on after the damage is done. Human control is just as vital: AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or the possibility of auditing can produce black-box systems in which responsibility is unclear.
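As a rough illustration of what "human oversight" and auditability can look like in practice, the sketch below routes high-risk automated decisions to a human reviewer and writes every decision to an audit log. All names, fields, and thresholds are invented for illustration; they are not drawn from any specific governance framework.

```python
# Minimal sketch of human oversight plus auditability: low-risk decisions are
# automated, high-impact ones go to a human reviewer, and everything is logged.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    timestamp: float
    input_summary: str
    model_decision: str
    risk_score: float
    final_decision: str
    decided_by: str  # "model" or "human"

def decide(input_summary, model_decision, risk_score, human_review,
           audit_log, risk_threshold: float = 0.7) -> str:
    """Route high-risk decisions to a human and record every outcome for audit."""
    if risk_score >= risk_threshold:
        final, actor = human_review(input_summary, model_decision), "human"
    else:
        final, actor = model_decision, "model"
    audit_log.append(asdict(AuditRecord(time.time(), input_summary,
                                        model_decision, risk_score, final, actor)))
    return final

if __name__ == "__main__":
    log = []
    reviewer = lambda summary, decision: "escalate"   # stand-in human reviewer
    print(decide("loan application #123", "reject", 0.9, reviewer, log))
    print(json.dumps(log, indent=2))                  # append-only audit trail
```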
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is uneven access. High-end computing capability, data infrastructure, and AI research resources remain concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and linguistically diverse communities can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will also generate new ideas and improvements in local markets around the world, narrowing the digital divide and ensuring that the AI future is not determined exclusively by wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will reshape workplaces worldwide, so societies must equip their people not only with existing job skills but also with future, technology-based skills. Large-scale AI literacy programs, digital-skills development, and cross-disciplinary education are essential. Preparing the workforce for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting genuinely inclusive growth.
- Responsible and Human-Centric Deployment
Responsible AI adoption ensures that technology is used for social good and not just for profit. Human-centred AI directs applications towards sectors such as healthcare, agriculture, education, disaster management, and public services, especially in the underserved regions of the world that need these innovations most. This approach helps ensure that technological progress improves human life rather than worsening conditions for the poor or stripping responsibility away from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Divergent national regulations create loopholes that allow bad actors to operate across borders. Global coordination and harmonisation of safety frameworks is therefore of utmost importance. A common AI governance framework should stipulate:
- Clear prohibitions on AI misuse in terrorism, deepfakes, and cybercrime.
- Mandatory transparency and algorithm audits.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this makes it clear that AI will be shaped by common values rather than by the influence of competing interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, then global mobility of talent must be made easier. The flow of innovation takes place when the interaction between researchers, engineers, and policymakers is not limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Many developing countries face poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, rather than as partners in innovation, isolates them even further from the mainstream of development. A human-centred, technology-driven approach to AI development must recognise that global progress requires the participation of the whole world. The COVID-19 pandemic has already demonstrated how technology can be a major factor in building healthcare and crisis resilience. When used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial juncture. It can either enhance human progress or magnify digital risks. Making sure that AI is a global good goes beyond sophisticated technology; it requires moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through transparency, human oversight, and responsible policy will be vital to maintaining public trust. Properly guided, AI can make society more resilient, speed up development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse.’ CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms

What are Decentralised Autonomous Organizations (DAOs)?
A Decentralised Autonomous Organisation, or DAO, is a unique take on democracy on the blockchain. It is a set of rules encoded into a self-executing contract (also known as a smart contract) that operates autonomously on a blockchain system. A DAO imitates a traditional company, although, in its more literal sense, it is a contractually created entity. In theory, DAOs have no centralised authority making decisions for the system; they are communally run systems in which all decisions (whether for internal governance or for the development of the blockchain system) are voted upon by the community members. DAOs are primarily characterised by a decentralised form of operation, where there is no one entity, group, or individual running the system. They are self-sustaining entities, with their own currency, economy, and even governance, that do not depend on a group of individuals to operate. Blockchain systems, especially DAOs, are characterised by pure autonomy created to evade external coercion or manipulation from sovereign powers. DAOs follow a mutually agreed set of rules, created by the community, that dictates all actions, activities, and participation in the system’s governance. There may also be provisions that regulate the decision-making power of the community.
The White Paper for Ethereum’s DAO described it as “[t]he first implementation of a [DAO Entity] code to automate organisational governance and decision making. [It] can be used by individuals working together collaboratively outside of a traditional corporate form. It can also be used by a registered corporate entity to automate formal governance rules contained in corporate bylaws or imposed by law.” The white paper proposes an entity that would use smart contracts to solve governance issues inherent in traditional corporations. DAOs attempt to redesign corporate governance with blockchain such that contractual terms are “formalised, automated and enforced using software.”
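To make the idea of rules "encoded into a self-executing contract" more concrete, here is a minimal Python sketch (not Solidity, and not the code of any real DAO) of the kind of governance logic such a contract might encode: token-weighted voting on proposals that execute automatically once a quorum is reached.

```python
# Toy model of DAO-style governance rules: members hold tokens, anyone can open
# a proposal, votes are tallied by token weight, and the proposal "executes"
# itself as soon as the quorum is met. Purely illustrative.
class ToyDAO:
    def __init__(self, token_balances: dict, quorum: int):
        self.balances = dict(token_balances)   # member -> token count
        self.quorum = quorum                   # token weight required to pass
        self.proposals = {}                    # id -> proposal state

    def propose(self, proposal_id: str, description: str):
        self.proposals[proposal_id] = {"desc": description, "votes": 0,
                                       "voters": set(), "executed": False}

    def vote(self, proposal_id: str, member: str):
        p = self.proposals[proposal_id]
        if member in p["voters"] or member not in self.balances:
            return                             # no double voting, members only
        p["voters"].add(member)
        p["votes"] += self.balances[member]    # token-weighted vote
        if p["votes"] >= self.quorum and not p["executed"]:
            p["executed"] = True               # self-executing once quorum is met
            print(f"Proposal '{p['desc']}' executed automatically.")

dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10}, quorum=80)
dao.propose("p1", "Fund community grant")
dao.vote("p1", "alice")
dao.vote("p1", "bob")   # 60 + 30 = 90 >= 80 -> proposal executes
```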
Cybersecurity threats under DAOs
While DAOs offer increased transparency and efficiency, they are not immune to cybersecurity threats. Cybersecurity risks in DAOs, primarily in governance, stem from vulnerabilities in the underlying blockchain technology and in the DAO's smart contracts. Smart contract exploits, code vulnerabilities, and weaknesses in the underlying blockchain protocol can be exploited by malicious actors, leading to unauthorised access, fund manipulation, or disruptions in the governance process. Additionally, DAOs may face phishing attacks, in which individuals are tricked into revealing sensitive information such as private keys, compromising the integrity of the governance structure. As DAOs continue to evolve, addressing and mitigating cybersecurity threats is crucial to ensuring the trust and reliability of decentralised governance mechanisms.
Centralisation/Concentration of Power
DAOs today actively leverage on-chain governance, where governance votes and transactions are recorded directly on the blockchain. But such governance is often plutocratic rather than democratic: only those who possess the requisite number of tokens are allowed to vote, and each additional token staked gives the same individual another vote, so influence accrues to the wealthy. This concentration of power in the hands of “whales” disadvantages newer entrants who may have deep expertise but lack the funds to cast a meaningful vote. Voting in the blockchain sphere presently lacks the principle of “one person, one vote” that is central to democratic societies, as the sketch below illustrates.
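A minimal sketch of why token-weighted voting tends to be plutocratic: the same ballots produce opposite outcomes under token weighting and under "one person, one vote". The balances and votes are invented for illustration.

```python
# One whale versus four small holders: the same ballots, two tallying rules.
holders = {"whale": 1000, "dev1": 5, "dev2": 5, "dev3": 5, "dev4": 5}
ballots = {"whale": "yes", "dev1": "no", "dev2": "no", "dev3": "no", "dev4": "no"}

def tally(weighted: bool):
    totals = {"yes": 0, "no": 0}
    for voter, choice in ballots.items():
        totals[choice] += holders[voter] if weighted else 1
    return totals

print("token-weighted:      ", tally(True))   # {'yes': 1000, 'no': 20} -> whale wins
print("one person, one vote:", tally(False))  # {'yes': 1, 'no': 4}     -> community wins
```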
Smart contract vulnerabilities and external threats
Smart contracts, self-executing pieces of code on a blockchain, are integral to decentralised applications and platforms. Despite their potential, they are susceptible to various vulnerabilities, such as coding errors that can cause funds to be locked or released erroneously. Some of these vulnerabilities are described below:
Smart contracts are particularly prone to reentrancy attacks, in which untrusted external code is executed from within a contract. This scenario occurs when a smart contract invokes an external contract and the external contract calls back into the original contract before it has finished updating its state. A reentrancy attack exploits this sequence: the attacker repeatedly re-invokes a function within the contract, potentially creating an endless loop and gaining unauthorised access to funds.
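The following Python simulation illustrates the reentrancy pattern described above: a vault that pays out via an external callback before updating its balances can be drained by a callback that re-enters the withdrawal function. It is a conceptual illustration only; real reentrancy exploits target smart-contract code such as Solidity, not Python.

```python
# Reentrancy, simulated: the external call happens BEFORE the balance update,
# so a malicious callback can withdraw the same balance several times.
class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total_funds = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total_funds += amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total_funds >= amount:
            callback(amount)                 # external call happens first (the bug)
            self.balances[user] = 0          # state update happens too late
            self.total_funds -= amount

vault = VulnerableVault()
vault.deposit("honest", 90)
vault.deposit("attacker", 10)

reentries = [0]
def malicious_callback(amount):
    # Re-enter withdraw() a few times before the attacker's balance is zeroed.
    if reentries[0] < 3:
        reentries[0] += 1
        vault.withdraw("attacker", malicious_callback)

vault.withdraw("attacker", malicious_callback)
print("funds left in vault:", vault.total_funds)  # 60, not 90: the 10 was paid out 4 times
```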
Additionally, smart contracts are also prone to oracle problems. Oracles refer to third-party services or mechanisms that provide smart contracts with real-world data. Since smart contracts on blockchain networks operate in a decentralised, isolated environment, they do not have direct access to external information, such as market prices, weather conditions, or sports scores. Oracles bridge this gap by acting as intermediaries, fetching and delivering off-chain data to smart contracts, enabling them to execute based on real-world conditions. The oracle problem within blockchain pertains to the difficulty of securely incorporating external data into smart contracts. The reliability of external data poses a potential vulnerability, as oracles may be manipulated or provide inaccurate information. This challenge jeopardises the credibility of blockchain applications that rely on precise and timely external data.
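As a sketch of one widely used mitigation for the oracle problem (a mitigation, not something described in the paragraph above), a contract can aggregate several independent feeds and take the median, so a single manipulated or faulty oracle cannot move the result. The feed names and prices below are invented.

```python
# Median-of-oracles aggregation: one bad feed cannot shift the reported price.
from statistics import median

def aggregate_price(feeds: dict) -> float:
    """feeds maps oracle name -> reported price; returns a manipulation-resistant value."""
    prices = list(feeds.values())
    if not prices:
        raise ValueError("no oracle data available")
    return median(prices)

honest_feeds    = {"oracle_a": 100.2, "oracle_b": 99.8, "oracle_c": 100.0}
one_manipulated = {"oracle_a": 100.2, "oracle_b": 99.8, "oracle_c": 5000.0}

print(aggregate_price(honest_feeds))     # 100.0
print(aggregate_price(one_manipulated))  # 100.2 -- the outlier is ignored by the median
```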
Sybil Attack: A Sybil attack involves a single node managing multiple active fake identities, known as Sybil identities, concurrently within a peer-to-peer network. The objective of such an attack is to weaken the authority or influence within a trustworthy system by acquiring the majority of control in the network. The fake identities are utilised to establish and exert this influence. A successful Sybil attack allows threat actors to perform unauthorised actions in the system.
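A minimal sketch of how a Sybil attack subverts an identity-based, one-node-one-vote system: a single operator creates many fake identities and simply outvotes the honest participants. The node names and counts are illustrative only.

```python
# Twenty Sybil identities controlled by one operator outvote five honest nodes.
honest_nodes = ["node_%d" % i for i in range(5)]    # 5 genuine participants
sybil_nodes  = ["sybil_%d" % i for i in range(20)]  # 20 identities, one real operator

def vote(nodes, choice):
    return {node: choice for node in nodes}

ballots  = {**vote(honest_nodes, "reject"), **vote(sybil_nodes, "accept")}
accepted = sum(1 for v in ballots.values() if v == "accept")
rejected = sum(1 for v in ballots.values() if v == "reject")
print(f"accept: {accepted}, reject: {rejected}")  # 20 vs 5 -> the Sybil operator wins
```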
Distributed Denial of Service Attacks: A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the regular functioning of a network, service, or website by overwhelming it with a flood of traffic. In a typical DDoS attack, multiple compromised computers or devices, often part of a botnet (a network of infected machines controlled by a single entity), are used to generate a massive volume of requests or data traffic. The targeted system becomes unable to respond to legitimate user requests due to the excessive traffic, leading to a denial of service.
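As a rough illustration (a detection heuristic, not part of the description above), the sketch below counts requests per source address within a sliding time window and flags a source that exceeds the threshold. Real DDoS mitigation is considerably more sophisticated.

```python
# Simple rate-based flood heuristic: too many requests from one source per window.
from collections import defaultdict, deque
import time

class FloodDetector:
    def __init__(self, window_seconds: float = 1.0, max_requests: int = 100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.hits = defaultdict(deque)   # source IP -> timestamps of recent requests

    def record(self, src_ip: str, now=None) -> bool:
        """Record one request; return True if the source now looks like a flood."""
        now = time.time() if now is None else now
        q = self.hits[src_ip]
        q.append(now)
        while q and now - q[0] > self.window:   # drop hits outside the window
            q.popleft()
        return len(q) > self.max_requests

detector = FloodDetector(window_seconds=1.0, max_requests=100)
for _ in range(150):                         # simulate a burst from one source
    flagged = detector.record("203.0.113.5", now=0.5)
print("flagged as flood:", flagged)          # True once the per-window limit is exceeded
```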
Conclusion
Decentralised Autonomous Organisations (DAOs) represent a pioneering approach to governance on the blockchain, relying on smart contracts and community-driven decision-making. Despite their potential for increased transparency and efficiency, DAOs are not immune to cybersecurity threats. Vulnerabilities in smart contracts, such as reentrancy attacks and oracle problems, pose significant risks, and the concentration of voting power among wealthy token holders raises concerns about democratic principles. As DAOs continue to evolve, addressing these challenges is essential to ensuring the resilience and trustworthiness of decentralised governance mechanisms. Efforts to enhance security measures, promote inclusivity, and refine governance models will be crucial in establishing DAOs as robust and reliable entities in the broader landscape of blockchain technology.
References:
- https://www.imperva.com/learn/application-security/sybil-attack/
- https://www.linkedin.com/posts/satish-kulkarni-bb96193_what-are-cybersecurity-risk-to-dao-and-how-activity-7048286955645677568-B3pV/
- https://www.geeksforgeeks.org/what-is-ddosdistributed-denial-of-service/
- Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: The DAO, Securities and Exchange Commission, Release No. 81207, July 25, 2017, https://www.sec.gov/litigation/investreport/34-81207.pdf
- https://www.legalserviceindia.com/legal/article-10921-blockchain-based-decentralized-autonomous-organizations-daos-.html

A video is being widely shared on social media showing a monkey, with users claiming that the animal is immersed in devotion to Lord Hanuman. The clip is being circulated with assertions that the monkey was seen participating in Hanuman Aarti. Cyber Peace Foundation’s research found that the viral claim is fake. Our investigation revealed that the video is not real and has been generated using artificial intelligence tools.
Claim
On January 6, 2026, Facebook users shared the viral video claiming, “A monkey was seen immersed in devotion during Hanuman Aarti.”
- Post link: https://www.facebook.com/reel/1261813845766976
- Archived link: https://archive.ph/anid5
Screenshots of the post can be seen below.

Fact Check:
When we closely examined the viral video, we noticed several visual inconsistencies. These anomalies raised suspicion that the video might be AI-generated. To verify this, we scanned the video using the AI detection tool Hive Moderation. According to the results, the video was found to be 97 percent AI-generated.

Further, we analysed the video using another AI detection tool, Sightengine. The tool’s assessment indicated that the viral video is 98 percent AI-generated.
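For readers curious about the general workflow, the sketch below shows how a single frame might be submitted to an AI-content detection API and a score read back. The endpoint URL, parameter names, and response field are all invented placeholders, not the actual APIs of Hive Moderation or Sightengine; consult those services' documentation for their real interfaces.

```python
# Hypothetical sketch of submitting a frame to an AI-content detection service.
# The endpoint, "media"/"api_key" parameters, and "ai_generated_probability"
# response field are placeholders, not any real provider's API.
import requests

def check_frame(image_path: str, api_key: str) -> float:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example-detector.com/v1/detect",   # placeholder endpoint
            files={"media": f},
            data={"api_key": api_key},
            timeout=30,
        )
    resp.raise_for_status()
    # Placeholder field name; real services report scores under their own keys.
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = check_frame("frame_0.png", api_key="YOUR_KEY")
    print(f"Estimated probability the frame is AI-generated: {score:.0%}")
```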

Conclusion
Our investigation confirms that the viral video claiming to show a monkey immersed in devotion to Lord Hanuman is AI-generated and not real. The claim circulating on social media is false and misleading.