#FactCheck - Visual of Jharkhand Police catching a truckload of cash and gold coins is an AI-generated image
Executive Summary:
An image circulating on social media claims to show a truck carrying cash and gold coins impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing of CyberPeace has verified the image and found it to be generated using artificial intelligence. There are no credible news articles supporting claims of such a seizure by the police in Jharkhand, and AI image detection tools confirmed that the image is AI-made. It is advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the post, we began with a keyword search for relevant news articles. An incident of this scale would have been covered by most major media houses, yet we found no such reports. We then closely analysed the image for anomalies commonly found in AI-generated images, and found several.

The texture of the tree in the image appears blended, and the shadows of the people look unnatural, a common flaw in AI-generated images. A closer look at the right hand of the elderly man in white attire shows that his thumb is merged with his clothing.
We then analysed the image with an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To validate the finding, we checked the image with another AI image detection tool named ‘ContentAtScale AI Detection’, which rated it as 82% AI-generated.
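For researchers who want to script this verification step, the sketch below shows how an image could be submitted to an AI-image-detection service from Python. The endpoint URL, API key, and response field are placeholders for illustration only and do not correspond to the actual Hive Detector or ContentAtScale APIs.

```python
# Hedged sketch: automating an AI-image-detection check.
# The endpoint, credential, and response schema below are assumptions,
# not the real Hive Detector or ContentAtScale interfaces.
import requests

API_URL = "https://api.example-detector.invalid/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                       # placeholder credential

def ai_generation_score(image_path: str) -> float:
    """Return the detector's estimated probability that the image is AI-generated."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_probability": 0.82}
    return response.json()["ai_probability"]

if __name__ == "__main__":
    score = ai_generation_score("viral_truck_image.jpg")
    print(f"Estimated probability of AI generation: {score:.0%}")
```

Whichever tool is used, detector scores are probabilistic and should be read alongside manual checks of the kind described above, not treated as conclusive proof on their own.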

After validating the viral image with AI detection tools, it is apparent that the claim is fake and misleading.
Conclusion:
The viral image of a truck impounded by the Jharkhand Police is AI-generated, and no credible source supports the claim; hence, the claim is false and misleading. The Research Wing of CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify news circulating on social media with such bogus claims before sharing it further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (Formerly known as Twitter)
- Fact Check: Fake & Misleading

Introduction
Artificial Intelligence (AI) has transcended its role as a futuristic tool; it is already an integral part of decision-making in sectors such as governance, healthcare, education, security, and the economy worldwide. There are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose to the world, as well as doubts about the technology’s capacity to provide effective solutions at a time when threats such as misinformation, cybercrime, and deepfakes are becoming more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Balancing unbridled access for innovators with the prevention of harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, which makes access far more distributed than before. The use of AI in one country brings its pros and cons not only to that country but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only the people of the country where they are created but also people in other countries who do business or communicate with them.
The Growing Threat of AI Misuse
- Deepfakes, Crime, and Digital Terrorism
Misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate lies, and create false narratives. Cybercriminals are using AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social engineering tactics. Extremist groups can use AI to improve the quality of their propaganda, recruitment, and coordination.
- Solution - Human Oversight and Safety-by-Design
To counter these dangers, global AI systems must be developed on the principle of safety-by-design, incorporating ethical safeguards from the development phase rather than reacting after the damage is done. Human oversight is just as vital: AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or the possibility of auditing can produce black-box systems in which the assignment of responsibility is unclear.
Three Pillars of a Responsible AI Framework
- Equitable Access to AI Technologies
One of the major hindrances to global AI development is unequal access. High-end computing capacity, data infrastructure, and AI research resources remain concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will spur new ideas and improvements in local markets, narrowing the digital divide and ensuring that the AI future is not determined exclusively by wealthy economies.
- Population-Level Skilling and Talent Readiness
AI will reshape workplaces worldwide, so societies must equip their people not only with existing job skills but also with the technology-based skills of the future. Large-scale AI literacy programmes, enhancement of digital competencies, and cross-disciplinary education are essential. Preparing talent for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting growth that is genuinely inclusive.
- Responsible and Human-Centric Deployment
Adopting responsible AI ensures that technology serves social good and not just profit. Human-centred AI directs applications to sectors like healthcare, agriculture, education, disaster management, and public services, especially in underserved regions of the world that most need these innovations. This approach ensures that technological progress improves human life rather than worsening conditions for the poor or removing responsibility from humans.
Need for a Global AI Governance Framework
- Why International Cooperation Matters
AI governance cannot be fragmented. Divergent national regulations create loopholes that allow bad actors to operate across borders. Hence, global coordination and harmonisation of safety frameworks are of utmost importance. A common AI governance framework should stipulate:
- Clear prohibitions on the misuse of AI for terrorism, deepfakes, and cybercrime.
- Transparency and algorithm audits as compulsory requirements.
- Independent global oversight bodies.
- Ethical codes of conduct in harmony with humanitarian laws.
A framework like this makes it clear that AI will be shaped by common values rather than by the influence of competing interest groups.
- Talent Mobility and Open Innovation
If AI is to be universally accepted, then global mobility of talent must be made easier. Innovation flows when researchers, engineers, and policymakers can interact without being limited by borders.
- AI, Equity, and Global Development
The rapid concentration of technology in a few hands risks widening inequality among countries. Many developing countries struggle with poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, and not as partners in innovation, isolates them further from mainstream development. A human-centred, technology-driven approach to AI development must recognise that global progress depends on the participation of the whole world. The COVID-19 pandemic, for example, demonstrated how technology can be a major factor in building healthcare capacity and crisis resilience. Used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial juncture: it can either advance human progress or amplify digital risks. Ensuring that AI is a global good requires more than sophisticated technology; it demands moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through transparency, human oversight, and responsible policy will be vital to maintaining public trust. Properly guided, AI can make society more resilient, accelerate development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse.’ CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
References
- https://www.hindustantimes.com/india-news/ai-a-global-good-but-must-guard-against-misuse-pm-101763922179359.html
- https://www.deccanherald.com/india/g20-summit-pm-modi-goes-against-donald-trumps-stand-seeks-global-governance-for-ai-3807928
- https://timesofindia.indiatimes.com/india/need-global-compact-to-prevent-ai-misuse-pm-modi/articleshow/125525379.cms
Introduction
Pagers were commonly used in the late 1990s and early 2000s, especially in fields that needed fast, reliable alerts and information sharing. They typically offer a broader coverage range, particularly in remote areas with limited cellular signal, which enhances their dependability. They are simple electronic devices with minimal features, making them easy to use and less prone to technical issues. Their decline has been driven by the rise of mobile phones, which offer more advanced communication options such as voice calls, text messages, and internet access. Despite this, pagers are still used in some specific industries.
A shocking incident occurred on 17th September 2024, when thousands of pager devices exploded within seconds across Lebanon in a synchronized attack targeting the US-designated terror group Hezbollah. The explosions killed at least 9 people and injured over 2,800 in a country already caught up in the Israel-Palestine tensions in its backyard.
The Pager Bombs Incident
On Tuesday, 17th September 2024, hundreds of pagers carried by Hezbollah members in Lebanon exploded in an unprecedented attack, surpassing the series of covert assassinations and cyber-attacks seen in the region over recent years. The Iran-backed militant group said the wireless devices began to explode around 3:30 p.m. local time in a targeted attack on Hezbollah operatives. The pagers that exploded were new and had been purchased by Hezbollah in recent months. Experts say the explosions underscore Hezbollah's vulnerability, as its communication network was compromised to deadly effect. Several areas of the country were affected, particularly Beirut's southern suburbs, a populous area and known Hezbollah stronghold. At least 9 people were killed, including a child, and about 2,800 were wounded, overwhelming Lebanese hospitals.
Second Wave of Attack
As per the most recent reports, a second wave of blasts hit Beirut and multiple parts of Lebanon the day after the pager bombings. Wireless devices such as walkie-talkies, solar equipment, and car batteries exploded, killing at least 9 people and injuring 300, according to the Lebanese Health Ministry. The attack is said to have embarrassed Hezbollah, incapacitated many of its members, and raised fears of a greater escalation of hostilities between the Iran-backed Lebanese armed group and Israel.
A New Kind of Threat - ‘Cyber-Physical’ Attacks
The incident raises serious concerns about physical tampering with daily-use electronic devices and the possibility of a new age of warfare. It highlights the physical threat posed when an attacker gains physical access to devices, wherein even smartwatches, earbuds, and pacemakers could be vulnerable to tampering. We are potentially looking at a new age of ‘cyber-physical’ threats in which the boundaries between the digital and the physical are blurring rapidly. The incident raises questions about unauthorised access to and manipulation of the physical security of such electronic devices, and it is cause for concern for global supply chains across sectors if even seemingly innocuous devices can be weaponised to such devastating effect. Attacks of this kind can cause significant disruption and casualties, as demonstrated by the pager bombings in Lebanon, which resulted in numerous deaths and injuries. It also raises questions about regulatory mechanisms and oversight checks at every stage of the electronic device lifecycle, from component manufacturing to final assembly and shipment. This is a grave issue because adversaries who embed explosives or make malicious modifications can turn such electronic devices into weapons.
CyberPeace Outlook
The pager bombing attack marks a new era of warfare tactics, revealing the advanced coordination and technical capabilities of adversaries who have weaponised everyday electronic devices. By targeting the hardware of these devices, they have exposed a serious new threat to hardware security, one that has understandably raised widespread apprehension globally. Such gross weaponisation of daily-use devices, especially in a conflict context, also triggers concerns about violations of International Humanitarian Law principles. It further raises serious questions about the liability of the companies, suppliers, and manufacturers of such devices, who are subject to regulatory checks and responsible for ensuring the authenticity of their products.
The incident highlights the need for a more robust regulatory landscape, with stricter supply chain regulations, as we adjust to the realities of a possible new era of weaponisation and conflict. CyberPeace recommends incorporating stringent tracking and vetting processes into product supply chains, along with strengthening international cooperation mechanisms to ensure compliance with protocols on the responsible use of technology. These measures will go a long way towards establishing peace in global cyberspace and restoring trust and safety with regard to everyday technologies.
References:
- https://indianexpress.com/article/what-is/what-is-a-pager-9573113/
- https://www.theguardian.com/world/2024/sep/18/hezbollah-pager-explosion-lebanon-israel-gold-apollo

Starting in mid-December 2024, a series of attacks has targeted Chrome browser extensions. Cyberhaven, a California-based data protection company, fell victim to one of these attacks. Though identified in the U.S., the geographical extent and full impact of the attacks are yet to be determined. Assessing these cases can help us prepare for similar incidents in the near future.
The Attack
Browser extensions are small software applications that add functionality or features to a web browser. They are written in HTML, CSS, or JavaScript and, like other software, can be coded to deliver malware. Also known as plug-ins, they have access to their own set of Application Programming Interfaces (APIs). They can also be used to remove unwanted elements, such as pop-up advertisements and auto-play videos, when one lands on a website. Examples of browser extensions include ad blockers (for blocking ads and content filtering) and StayFocusd (which limits the time users spend on a particular website).
In the aforementioned attack, the publisher of the browser extension at Cyberhaven received a phishing email from an attacker posing as Google Chrome Web Store Developer Support. It claimed that their extension did not comply with Chrome Web Store policies and encouraged the recipient to click the “Go to Policy” action item, which led to a page that granted permissions to a malicious OAuth application called “Privacy Policy Extension” (Open Authorisation, or OAuth, is a widely adopted standard for authorising secure, temporary token-based access). Once the permission was granted, the attacker was able to inject malicious code into the target’s Chrome browser extension and steal user access tokens and session cookies. Further investigation revealed that logins for certain AI and social media platforms were targeted.
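To illustrate the mechanism being abused, the hedged sketch below constructs a standard OAuth 2.0 authorisation URL of the kind a consent page is built from. The client ID, redirect URI, and scope are placeholders chosen for illustration; this is not a reconstruction of the actual phishing link used in the attack.

```python
# Conceptual sketch of an OAuth 2.0 authorisation request, showing what the
# consent screen encodes. All identifiers below are placeholders, not the
# values used in the Cyberhaven incident.
from urllib.parse import urlencode

params = {
    "client_id": "REQUESTING_APP_CLIENT_ID",             # identifies the app asking for access
    "redirect_uri": "https://app.example.com/callback",  # where the authorisation code is sent
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/chromewebstore",  # assumed scope, for illustration
    "state": "random-anti-csrf-value",
}
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(consent_url)
# The consent screen displays only the app's name and the requested scopes.
# Approving it hands that app a token carrying those privileges, regardless
# of who controls the app, which is why the fake "Privacy Policy Extension"
# prompt was effective.
```

The practical lesson for developers is to scrutinise the application name, publisher, and requested scopes on any consent screen before approving it.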
CyberPeace Recommendations
As attacks of this kind continue to occur, companies and developers are encouraged to take active measures to make their browser extensions less susceptible to them. Google also provides guidelines on how developers can safeguard their extensions from their end. These include:
- Minimal Permissions For Extensions- Extensions should request only the APIs and websites they depend on, as limiting extension privileges reduces the surface area an attacker can exploit (see the sketch after this list).
- Prioritising Protection Of Developer Accounts- A breach of a developer account could compromise all users' data, as it would allow attackers to push malicious code through the extension. Enabling 2FA (two-factor authentication), ideally with a security key, is endorsed.
- HTTPS over HTTP- HTTPS should be preferred over HTTP as it requires a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificate from an independent certificate authority (CA), creating an encrypted connection between the server and the web browser.
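As an illustration of the first recommendation above, the sketch below writes out a hypothetical least-privilege Manifest V3 manifest.json. The extension name, host origin, and script filename are invented for the example; the point is simply that the permissions and host_permissions lists stay as narrow as possible and use HTTPS-only origins.

```python
# Illustrative sketch (hypothetical extension): a least-privilege
# Manifest V3 manifest.json, written out from Python for readability.
import json

manifest = {
    "manifest_version": 3,
    "name": "Example Popup Hider",                  # invented example name
    "version": "1.0",
    "description": "Hides pop-up banners on one site, nothing more.",
    "permissions": ["storage"],                     # no "cookies", "tabs", or other broad APIs
    "host_permissions": ["https://example.com/*"],  # a single HTTPS origin, not "<all_urls>"
    "content_scripts": [
        {"matches": ["https://example.com/*"], "js": ["content.js"]}
    ],
}

with open("manifest.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)
print("Wrote manifest.json")
```

Keeping the grant list this small means that even if a developer account or update pipeline is compromised, any injected code inherits far fewer privileges than it would in an extension that requested broad host and API access.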
Lastly, as Cyberhaven did after its attack, organisations are encouraged to be transparent when such incidents take place so that they can be dealt with more effectively.
References
- https://indianexpress.com/article/technology/tech-news-technology/hackers-hijack-companies-chrome-extensions-cyberhaven-9748454/
- https://indianexpress.com/article/technology/tech-news-technology/google-chrome-extensions-hack-safety-tips-9751656/
- https://www.techtarget.com/whatis/definition/browser-extension
- https://www.forbes.com/sites/daveywinder/2024/12/31/google-chrome-2fa-bypass-attack-confirmed-what-you-need-to-know/
- https://www.cloudflare.com/learning/ssl/why-use-https/