#FactCheck: AI-Generated Video Falsely Claims Pakistan Launched a Cross-Border Airstrike on India's Udhampur Airbase
Executive Summary:
A social media video claims that India's Udhampur Air Force Station was destroyed by Pakistan's JF-17 fighter jets. Official sources confirm that the Udhampur base remains fully operational, and our research shows that the video was produced using artificial intelligence. The incident highlights the growing problem of AI-driven disinformation in the digital age.

Claim:
A viral video alleges that Pakistan's JF-17 fighter jets successfully destroyed the Udhampur Air Force Base in India. The footage shows aircraft engulfed in flames, accompanied by narration claiming the base's destruction during recent cross-border hostilities.

Fact Check:
A recent viral video claiming that the Udhampur Air Force Station was destroyed by Pakistani JF-17 fighter jets has been shown to be completely untrue. A thorough analysis using AI detection tools such as Hive Moderation conclusively identified the audio and visuals in the video as AI-generated. Hive Moderation found that the footage contained synthetic elements, confirming that the images were fabricated to deceive viewers. The Press Information Bureau (PIB) of India further undermined the false claims, clearly stating that the Udhampur Airbase remains fully operational and has not been the scene of any such attack.
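Detection tools of this kind typically report a synthetic-content confidence score per frame or segment, which a fact-checker then aggregates into an overall verdict. The sketch below is purely illustrative: the scores, the 0.5 per-frame threshold, and the 30% flagged-frame ratio are assumptions for demonstration, not the output or decision rule of Hive Moderation or any other real tool.

```python
# Illustrative sketch: aggregating hypothetical per-frame
# "AI-generated" confidence scores into an overall verdict.
# Thresholds and scores are assumptions, not real tool output.

def classify_video(frame_scores, threshold=0.5, min_flagged_ratio=0.3):
    """Flag a video as likely AI-generated when enough frames
    exceed the per-frame confidence threshold."""
    if not frame_scores:
        return "inconclusive"
    flagged = sum(1 for score in frame_scores if score >= threshold)
    if flagged / len(frame_scores) >= min_flagged_ratio:
        return "likely AI-generated"
    return "likely authentic"

# Hypothetical per-frame scores for a suspect clip
scores = [0.91, 0.88, 0.12, 0.95, 0.87, 0.90]
print(classify_video(scores))  # → likely AI-generated
```

In practice an automated score like this is only a first signal; as in this case, human verification against official sources (here, the PIB) remains the deciding step.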

The deliberate misattribution of the video to a military attack reflects a concern highlighted by our analysis of recent disinformation campaigns: AI-generated content is increasingly being weaponized to spread misinformation and incite panic.
Conclusion:
The claim that Pakistan's JF-17 fighter jets destroyed the Udhampur Air Force Station is untrue. It rests on an AI-generated video that misrepresents fabricated footage. According to official sources, the Udhampur base remains intact and fully functional. The incident underscores how crucial it is to verify information through reliable sources, particularly during periods of elevated geopolitical tension.
- Claim: Recent video footage shows destruction caused by Pakistani jets at the Udhampur Airbase.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
As we delve deeper into the intricate, almost esoteric digital landscape of the 21st century, we are confronted by a new and troubling phenomenon that threatens the very bastions of our personal security. This is not a mere subplot in some dystopian novel but a harsh and palpable reality firmly rooted in today's technologically driven society. We must grapple with the consequences of the alarming evolution of cyber threats, particularly the sophisticated use of artificial intelligence in creating face swaps—a technique now cleverly harnessed by nefarious actors to undermine the bedrock of biometric security systems.
What is GoldPickaxe?
It was amidst the hum of countless servers and data centers that the term 'GoldPickaxe' began to echo, sending shivers down the spines of cybersecurity experts. Originating from the intricate web spun by a group of Chinese hackers, as reported in Dark Reading, GoldPickaxe represents the latest in a long lineage of digital predators. It is an astute embodiment of disguise, blending into the digital environment as a seemingly harmless government service app. But behind its innocuous facade, it bears the intent to ensnare and deceive, with the elderly demographic especially susceptible to its trap.
Victims, unassuming and trustful, are cajoled into revealing their most sensitive information: phone numbers, private details, and, most alarmingly, their facial data. These virtual reflections, intended to be the safeguard of one's digital persona, are snatched away and misused in a perilous transformation. The attackers harness such biometric data, feeding it into the arcane furnaces of deepfake technology, wherein AI face-swapping crafts eerily accurate and deceptive facsimiles. These digital doppelgängers become the master keys, effortlessly bypassing the sentinel eyes of facial recognition systems that lock the vaults of Southeast Asia's financial institutions.
Through the diligent and unyielding work of the research team at Group-IB, the trajectory of one victim's harrowing ordeal—a Vietnamese individual pilfered of a life-altering $40,000—sheds light on the severity of this technological betrayal. The advancements in deepfake technology, once seen as a marvel of AI, now present a clear and present danger, outpacing the mechanisms meant to deter unauthorized access and leaving the unenlightened multitude unaware and exposed.
Adding weight to the discussion, an expert in biometric technology commented in a somber tone: 'This is why we see face swaps as a tool of choice for hackers. It gives the threat actor this incredible level of power and control.' This chilling testament to the potency of digital fraudulence further emphasizes that even seemingly impregnable ecosystems, such as Apple's, are not beyond the reach of these relentless invaders.
New Threat
Emerging from this landscape is the doppelgänger of GoldPickaxe specifically tailored for the iOS landscape—GoldDigger's mutation into GoldPickaxe for Apple's hallowed platform is nothing short of a wake-up call. It engenders not just a single threat but an evolving suite of menaces, including its uncanny offspring, 'GoldDiggerPlus,' which wields the terrifying power to piggyback on real-time communications of affected devices. Continuously refined and updated, these threats become chimeras, each iteration more elusive and more formidable than its predecessor.
One ingenious and insidious tactic exploited by these cyber adversaries is the diversionary use of Apple's TestFlight, a trusted beta testing platform, as a trojan horse for their malware. Upon clampdown by Apple, the hackers, exhibiting an unsettling level of adaptability, inveigle users to endorse MDM profiles, hitherto reserved for corporate device management, thereby chaining these unknowing participants to their will.
How To Protect
Against this stark backdrop, the question of how one might armor oneself against such predation looms large. It is a question with no simple answer, demanding vigilance and proactive measures.
- General Vigilance: Apple, aware of the Trojan's advance, is striving to devise countermeasures, yet individuals can take concrete steps to safeguard their digital lives.
- Consider Lockdown Mode: It is imperative to exhibit discernment with TestFlight installations, to warily examine MDM profiles, and to seriously consider the protective embrace of Lockdown Mode. Activating Lockdown Mode on an iPhone is akin to drawing the portcullis and manning the battlements of one's digital stronghold. The process is straightforward: a journey to the Settings menu, a descent into Privacy & Security, and finally, the activation of Lockdown Mode, followed by a device restart. It is a curtailment of convenience, yes, but a potent defense against the malevolence lurking in the unseen digital thicket.
As 'GoldPickaxe' insidiously carves its path into the iOS realm—a rare and unsettling occurrence—it flags the possible twilight of the iPhone's vaunted reputation for tight security. Should these shadow operators set their sights beyond Southeast Asia, angling their digital scalpels towards the U.S., Canada, and other English-speaking enclaves, the consequences could be dire.
Conclusion
Thus, it is imperative that as digital citizens, we fortify ourselves with best practices in cybersecurity. Our journey through cyberspace must be cautious, our digital trails deliberate and sparse. Let the specter of iPhone malware serve as a compelling reason to arm ourselves with knowledge and prudence, the twin guardians that will let us navigate the murky waters of the internet with assurance, outwitting those who weave webs of deceit. In heeding these words, we preserve not only our financial assets but the sanctity of our digital identities against the underhanded schemes of those who would see them usurped.
References
- https://www.timesnownews.com/technology-science/new-ios-malware-stealing-face-id-data-bank-infos-on-iphones-how-to-protect-yourself-article-107761568
- https://www.darkreading.com/application-security/ios-malware-steals-faces-defeat-biometrics-ai-swaps
- https://www.tomsguide.com/computing/malware-adware/first-ever-ios-trojan-discovered-and-its-stealing-face-id-data-to-break-into-bank-accounts

The United Nations passed a resolution in December 2019 establishing an open-ended ad hoc committee tasked with developing a 'comprehensive international convention on countering the use of ICTs for criminal purposes'. The UN Convention against Cybercrime is an initiative of the UN member states to foster international cooperation and establish legal frameworks that provide mechanisms for combating cybercrime. Negotiations for the convention began in early 2022, and upon its adoption by the UN General Assembly, it became the first binding international criminal justice treaty negotiated in over 20 years.
This convention addresses the limitations of the Budapest Convention on Cybercrime by encompassing a broader range of issues and perspectives from the member states. The UN Convention against Cybercrime will open for signature at a formal ceremony hosted in Hanoi, Viet Nam, in 2025, and will enter into force 90 days after ratification by the 40th signatory.
Objectives and Features of the Convention
- The UN Convention against Cybercrime addresses various aspects of cybercrime. These include prevention, investigation, prosecution and international cooperation.
- The convention aims to establish common standards for criminalising cyber offences. These include offences like hacking, identity theft, online fraud, distribution of illegal content, etc. It outlines procedural and technical measures for law enforcement agencies for effective investigation and prosecution while ensuring due process and privacy protection.
- Emphasising the importance of cross-border collaboration among member states, the convention provides mechanisms for mutual legal assistance, extradition and sharing of information and expertise. The convention aims to enhance the capacity of developing countries to combat cybercrime through technical assistance, training, and resources.
- It seeks to balance security measures with the protection of fundamental rights. The convention highlights the importance of safeguarding human rights and privacy in cybercrime investigations and enforcement.
- The Convention emphasises the importance of prevention through awareness campaigns, education, and the promotion of a culture of cybersecurity. It encourages public-private partnerships to enhance cybersecurity measures and raise awareness, such as protecting vulnerable groups, like children, from cyber threats and exploitation.
Key Provisions of the UN Cybercrime Convention
Some key provisions of the Convention are as follows:
- The convention differentiates cyber-dependent crimes like hacking from cyber-enabled crimes like online fraud. It defines digital evidence and establishes standards for its collection, preservation, and admissibility in legal proceedings.
- It defines offences against confidentiality, integrity, and availability of computer data and includes unauthorised access, interference with data, and system sabotage. Further, content-related offences include provisions against distributing illegal content, such as CSAM and hate speech. It criminalises offences like identity theft, online fraud and intellectual property violations.
- LEAs are provided with tools for electronic surveillance, data interception, and access to stored data, subject to judicial oversight. It outlines the mechanisms for cross-border investigations, extradition, and mutual legal assistance.
- The convention establishes a central body to coordinate international efforts, share intelligence, and provide technical assistance, with the involvement of experts from various fields to advise on emerging threats, legal developments, and best practices.
Comparisons with the Budapest Convention
The Budapest Convention was adopted by the Committee of Ministers of the Council of Europe at the 109th Session on 8 November 2001. This Convention was the first international treaty that addressed internet and computer crimes. A comparison between the two Conventions is as follows:
- Participation in the UNCC is inclusive of all UN member states, whereas the Budapest Convention had primarily European signatories along with some non-European ones.
- The scope of the UNCC is broader, covering a wide range of cyber threats and cybercrimes, whereas the Budapest Convention focused on specific offences like hacking and fraud.
- The UNCC strongly focuses on privacy and human rights protections, whereas the Budapest Convention had a limited focus on human rights.
- The UNCC has extensive provisions for assistance to developing countries, in contrast to the Budapest Convention, which did not focus much on capacity building.
Future Outlook
The development of the UNCC was a complex process. Diverse views on key issues have been noted, and balancing different legal systems, cultural perspectives and policy priorities has been a challenge. The rapid evolution of technology requires the Convention to remain adaptable so it can effectively address emerging cyber threats. Striking a balance between security measures and fundamental rights remains a critical concern. The Convention aims to provide a blended approach to tackling cybercrime by addressing the needs of both developed and developing countries.
Conclusion
The resolution containing the UN Convention against Cybercrime is a step forward in global cooperation to combat cybercrime. It was adopted without a vote by the 193-member General Assembly and is expected to enter into force 90 days after ratification by the 40th signatory. Negotiations and consultations for the Convention have been finalised, and it is open for adoption and ratification by member states. It seeks to provide a comprehensive legal framework that addresses the challenges posed by cyber threats while respecting human rights and promoting international collaboration.
References
- https://consultation.dpmc.govt.nz/un-cybercrime-convention/principlesandobjectives/supporting_documents/Background.pdf
- https://news.un.org/en/story/2024/12/1158521
- https://www.interpol.int/en/News-and-Events/News/2024/INTERPOL-welcomes-adoption-of-UN-convention-against-cybercrime#:~:text=The%20UN%20convention%20establishes%20a,and%20grooming%3B%20and%20money%20laundering
- https://www.cnbctv18.com/technology/united-nations-adopts-landmark-global-treaty-to-combat-cybercrime-19529854.htm

Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the MO is to help the mind in the present develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harm agent, inoculation theory seeks to teach people fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile, responding promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
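The flagging-and-labeling workflow described above can be sketched in a few lines of code. This is a deliberately simplified illustration, assuming a hypothetical claim database and naive substring matching; real platforms rely on far more sophisticated semantic matching and human review, and the URL below is a placeholder, not a real fact-check link.

```python
# Illustrative sketch of platform-side debunking: matching posts
# against claims already debunked by fact-checkers and attaching a
# label with a link to the correction. The claim database, matching
# rule, and URL are simplified assumptions for demonstration only.

DEBUNKED_CLAIMS = {
    # claim phrase -> hypothetical fact-check article
    "udhampur airbase destroyed": "https://example.org/factcheck/udhampur",
}

def label_post(text):
    """Return a fact-check label if the post repeats a debunked claim."""
    lowered = text.lower()
    for claim, correction_url in DEBUNKED_CLAIMS.items():
        if claim in lowered:
            return {"flagged": True, "correction": correction_url}
    return {"flagged": False, "correction": None}

post = "BREAKING: Udhampur airbase destroyed in overnight strike!"
print(label_post(post))
```

Even in this toy form, the sketch makes the reactive nature of debunking visible: a claim can only be labeled after fact-checkers have added it to the database, by which time it may already have spread.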
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread extensively. This reactionary method may be less successful than proactive strategies such as Prebunking in terms of total harm prevented. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Debunking may require continuous exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking effort may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may allow certain misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, creating a situation in which misinformation spreads like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to assess the accuracy of content through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can be effective in immunising receivers against subsequent exposure to misinformation and can empower people to build the competencies needed to detect it.
- Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to counter the spread of false information and deliver proper corrections. Together, these measures can help both Prebunking and Debunking methods reach a larger or better-targeted audience.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking/expert organisations and promoting such initiatives at a larger scale and ultimately fighting misinformation with joint hands initiatives.
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, alongside joint initiatives by tech/social media platforms and expert organisations, can ultimately help fight the rising tide of online misinformation and establish a resilient online information landscape.
References
- https://mark-hurlstone.github.io/THKE.22.BJP.pdf
- https://futurefreespeech.org/wp-content/uploads/2024/01/Empowering-Audiences-Through-%E2%80%98Prebunking-Michael-Bang-Petersen-Background-Report_formatted.pdf
- https://newsreel.pte.hu/news/unprecedented_challenges_Debunking_disinformation
- https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/