#FactCheck - Visuals of Jharkhand Police catching a truckload of cash and gold coins are AI-generated
Executive Summary:
An image circulating on social media claims to show a truck carrying cash and gold coins impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing of CyberPeace verified the image and found it to be generated using artificial intelligence. No credible news reports support the claim that the police made such a seizure in Jharkhand, and AI image detection tools confirmed that the image is AI-made. Readers are advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the post, we began with a keyword search for news articles related to the claim. An incident of this scale would have been covered by most media houses, yet we found no such reports. We then closely analysed the image for the anomalies commonly found in AI-generated images, and found several.

The texture of the tree in the image appears blended. The shadows of the people are also inconsistent, a common flaw in AI-generated images that makes the picture more suspicious. A close look at the right hand of the elderly man in white attire shows his thumb blending into his clothing.
We then analysed the image with an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To validate the AI fabrication, we checked the image with another AI image detection tool named ‘ContentAtScale AI detection’, which rated it as 82% AI-generated.

After validation of the viral post using AI detection tools, it is apparent that the claim is misleading and fake.
Conclusion:
The viral image of a truck impounded by the Jharkhand Police is AI-generated, and no credible source supports the claim made. Hence, the claim is false and misleading. The Research Wing of CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify news circulating on social media with bogus claims before sharing it further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (formerly Twitter)
- Fact Check: Fake & Misleading

Introduction
As various technological developments enable our phones to take on a greater role, these devices, along with the applications they host, also become susceptible to greater risks. Recently, Zimperium, a tech company that secures mobiles and applications against threats like malware and phishing, announced that it had identified a malware aimed at stealing information from customers of Indian banks. The Indian Express reports that data from over 25 million devices has been exfiltrated, making the threat increasingly dangerous going by the scale it has reached so far.
Understanding the Threat: The Case of FatBoyPanel
Malware is malicious software: a file or program intentionally harmful to a network, server, computer, or other device. It comes in various types; in the case at hand, it is a Trojan horse, i.e., a file or program designed to trick the victim into believing it is a legitimate software program. Trojans execute malicious functions on a device as soon as they are activated after installation.
FatBoyPanel, as it is called, is a malware management system behind a massive cyberattack targeting Indian mobile users and their bank details. The attackers' modus operandi relied on social engineering: posing as bank officials, they called their targets and warned that their account would be suspended immediately unless they updated their bank details. When the panicked target asked for instructions, they were told to download a banking application from a link sent as an Android Package Kit (APK) file (which requires enabling “Install from Unknown Sources”) and install it. Other attackers acted out variations of this script, all designed to trick the target into downloading the file.

The apps sent through these links are fake. Once installed, they immediately ask for critical permissions: access to contacts, device storage, overlay permissions (to show fake login pages over real apps), and access to SMS messages (to steal OTPs and banking alerts). This lets the malware capture text messages (especially bank OTPs), read stored files, monitor app usage, and more. The stolen data is sent to the FatBoyPanel backend, a command-and-control (C&C) server that acts as a centralised control room, where the attackers can view it in real time on a dashboard, then download and sell it.
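The social-engineering flow described above (urgency wording paired with an APK download link) can be illustrated with a toy message filter. This is a minimal sketch; the keyword list and URL pattern are illustrative assumptions, not the detection logic of any real security product:

```python
import re

# Red flags drawn from the attack flow described above: urgency wording
# paired with a direct link to a sideloaded Android package (.apk).
URGENCY = re.compile(r"suspend|urgent|immediate|verify", re.IGNORECASE)
APK_LINK = re.compile(r"https?://\S+\.apk\b", re.IGNORECASE)

def looks_like_smishing(message: str) -> bool:
    """Flag an SMS that combines urgency language with an APK download link."""
    return bool(URGENCY.search(message)) and bool(APK_LINK.search(message))

print(looks_like_smishing(
    "Your account will be suspended! Update details: http://bank-update.example/app.apk"
))  # True: urgency + APK link
print(looks_like_smishing("Your OTP is 482913"))  # False: no link, no urgency
```

A real filter would of course be far more sophisticated, but the example captures why unsolicited APK links in urgent-sounding messages deserve immediate suspicion.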
Protecting Yourself: Essential Precautions in the Digital Realm
Although there are various other types of malware, how one must deal with them remains the same. Following are a few instructions that one can practice in order to stay safe:
- Be cautious with app downloads: Only download apps from official app stores (Google Play Store, Apple App Store). Even then, check the developer's reputation, app permissions, and user reviews before installing.
- Keep your operating system and apps updated: Updates often include security patches that protect against known vulnerabilities.
- Be wary of suspicious links and attachments: Avoid clicking on links or opening attachments in unsolicited emails, SMS messages, or social media posts. Verify the sender's authenticity before interacting.
- Enable multi-factor authentication (MFA) wherever possible: While malware like FatBoyPanel can sometimes bypass OTP-based MFA, it still adds an extra layer of security against many other threats.
- Use strong and unique passwords: Employ a combination of uppercase and lowercase letters, numbers, and symbols for all your online accounts. Avoid reusing passwords across different platforms.
- Install and maintain a reputable mobile security app: These apps can help detect and remove malware, as well as warn you about malicious websites and links (Bitdefender, etc.)
- Regularly review app permissions and give access judiciously: Check what permissions your installed apps have and revoke any that seem unnecessary or excessive.
- Educate yourself and stay informed: Keep up-to-date with the latest cybersecurity threats and best practices.
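The "review app permissions" advice above can be partially automated for readers comfortable with developer tools. Below is a minimal sketch that flags apps holding permissions commonly abused by banking trojans; the app names and granted permissions in the example data are hypothetical, though the Android permission strings themselves are real:

```python
# Android permissions commonly abused by banking trojans like the one described above.
RISKY = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.SYSTEM_ALERT_WINDOW",  # overlay permission (fake login pages)
}

def flag_risky(app_permissions: dict) -> dict:
    """Return, per app, the granted permissions that intersect the risky set."""
    return {
        app: perms & RISKY
        for app, perms in app_permissions.items()
        if perms & RISKY
    }

# Hypothetical example data; in practice this could be built from a
# per-app permission dump obtained via Android's developer tooling.
apps = {
    "com.example.notes": {"android.permission.INTERNET"},
    "com.example.fakebank": {
        "android.permission.READ_SMS",
        "android.permission.SYSTEM_ALERT_WINDOW",
    },
}
print(flag_risky(apps))  # only com.example.fakebank is flagged
```

Any app that requests SMS access or overlay permissions without an obvious need for them is worth uninstalling or reporting.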
Conclusion
The emergence of malware management systems shows just how sophisticated attackers have become over the years. Vigilance on the part of the general public is recommended, as are greater efforts to raise awareness of such methods of crime, since people remain vulnerable in matters of cybersecurity. With sensitive information at stake, we must take steps to sensitise and better prepare the public to deal with the risks of the growing digital landscape.
References
- https://zimperium.com/blog/mobile-indian-cyber-heist-fatboypanel-and-his-massive-data-breach
- https://indianexpress.com/article/technology/tech-news-technology/fatboypanel-new-malware-targeting-indian-users-what-is-it-9965305/
- https://www.techtarget.com/searchsecurity/definition/malware

In 2023, PIB reported that up to 22% of young women in India are affected by Polycystic Ovarian Syndrome (PCOS). However, access to reliable information regarding the condition and its treatment remains a challenge. A study by the PGIMER Chandigarh conducted in 2021 revealed that approximately 37% of affected women rely on the internet as their primary source of information for PCOS. However, it can be difficult to distinguish credible medical advice from misleading or inaccurate information online since the internet and social media are rife with misinformation. The uptake of misinformation can significantly delay the diagnosis and treatment of medical conditions, jeopardizing health outcomes for all.
The PCOS Misinformation Ecosystem Online
PCOS is one of the most common disorders diagnosed in the female endocrine system, characterized by the swelling of ovaries and the formation of small cysts on their outer edges. This may lead to irregular menstruation, weight gain, hirsutism, possible infertility, poor mental health, and other symptoms. However, there is limited research on its causes, leaving most medical practitioners in India ill-equipped to manage the issue effectively and pushing women to seek alternate remedies from various sources.
This creates space for the proliferation of rumours, unverified cures, and superstitions on social media. For example, content on YouTube, Facebook, and Instagram may promote “miracle cures” like detox teas or restrictive diets, or viral myths claiming PCOS can be “cured” through extreme weight loss or herbal remedies. Such misinformation not only creates false hope for women but also delays treatment and may worsen symptoms.
How Tech Platforms Amplify Misinformation
- Engagement vs. Accuracy: Social media algorithms are designed to reward viral content, even if it’s misleading or incendiary since it generates advertisement revenue. Further, non-medical health influencers often dominate health conversations online and offer advice with promises of curing the condition.
- Lack of Verification: Although platforms like YouTube try to surface verified health-related videos through content shelves and label unverified content, the sheer volume of material online means that a significant chunk escapes the net of content moderation.
- Cultural Context: In India, discussions around women’s health, especially reproductive health, are stigmatized, making social media the go-to source for private, albeit unreliable, information.
Way Forward
a. Regulating Health Content on Tech Platforms: Social media is a significant source of health information to millions who may otherwise lack access to affordable healthcare. Rather than rolling back content moderation practices as seen recently, platforms must dedicate more resources to identify and debunk misinformation, particularly health misinformation.
b. Public Awareness Campaigns: Governments and NGOs should run nationwide digital literacy campaigns that educate people on women’s health issues in vernacular languages, and use online platforms for culturally sensitive messaging that reaches rural and semi-urban populations. This is vital for countering the stigma and lack of awareness that enable misinformation to proliferate.
c. Empowering Healthcare Communication: Several studies suggest a widespread dissatisfaction among women in many parts of the world regarding the information and care they receive for PCOS. This is what drives them to social media for answers. Training PCOS specialists and healthcare workers to provide accurate details and counter misinformation during patient consultations can improve the communication gaps between healthcare professionals and patients.
d. Strengthening the Research for PCOS: The allocation of funding for research in PCOS is vital, especially in the face of its growing prevalence amongst Indian women. Academic and healthcare institutions must collaborate to produce culturally relevant, evidence-based interventions for PCOS. Information regarding this must be made available online since the internet is most often a primary source of information. An improvement in the research will inform improved communication, which will help reduce the trust deficit between women and healthcare professionals when it comes to women’s health concerns.
Conclusion
In India, the PCOS misinformation ecosystem is shaped by a mix of local and global factors such as health communication failures, cultural stigma, and tech platform design prioritizing engagement over accuracy. With millions of women turning to the internet for guidance regarding their conditions, they are increasingly vulnerable to unverified claims and pseudoscientific remedies which can lead to delayed diagnoses, ineffective treatments, and worsened health outcomes. The rising number of PCOS cases in the country warrants the bridging of health research and communications gaps so that women can be empowered with accurate, actionable information to make the best decisions regarding their health and well-being.
Sources
- https://pib.gov.in/PressReleasePage.aspx?PRID=1893279
- https://www.thinkglobalhealth.org/article/india-unprepared-pcos-crisis
- https://www.bbc.com/news/articles/ckgz2p0999yo
- https://pmc.ncbi.nlm.nih.gov/articles/PMC9092874/
Introduction
Artificial Intelligence (AI) driven autonomous weapons are reshaping military strategy, acting as force multipliers that can independently assess threats, adapt to dynamic combat environments, and execute missions with minimal human intervention, pushing the boundaries of modern warfare tactics. AI has become a critical component of modern warfare and has simultaneously impacted many other spheres of a technology-driven world. Nations often prioritise defence for significant investment, supporting its growth and modernisation, and AI has become a prime area of investment and development for technological superiority in defence forces. India’s focus on defence modernisation is evident through initiatives like the Defence AI Council and the Task Force on Strategic Implementation of AI for National Security.
The defining requirement of Autonomous Weapon Systems (AWS) is “autonomy”: the ability to perform their functions without direction or input from a human actor. AI is not a prerequisite for the functioning of AWS, but, when incorporated, it can further enable such systems. As militaries seek to apply increasingly sophisticated AI and automation to weapons technologies, several questions arise. Many states, international organisations, civil society groups, and distinguished figures have raised ethical concerns as the most prominent issue surrounding AWS.
Ethical Concerns Surrounding Autonomous Weapons
The delegation of life-and-death decisions to machines is the ethical dilemma that surrounds AWS. A major concern is the lack of human oversight, raising questions about accountability. What if AWS malfunctions or violates international laws, potentially committing war crimes? This ambiguity fuels debate over the dangers of entrusting lethal force to non-human actors. Additionally, AWS poses humanitarian risks, particularly to civilians, as flawed algorithms could make disastrous decisions. The dehumanisation of warfare and the violation of human dignity are critical concerns when AWS is in question, as targets become reduced to mere data points. The impact on operators’ moral judgment and empathy is also troubling, alongside the risk of algorithmic bias leading to unjust or disproportionate targeting. These ethical challenges are deeply concerning.
Balancing Ethical Considerations and Innovations
It is immaterial how advanced a computer becomes in simulating human emotions like compassion, empathy, or altruism, as the machine will only be imitating them, not experiencing them as a human would. A potential solution to this ethical predicament is using a ‘human-in-the-loop’ or ‘human-on-the-loop’ semi-autonomous system, which would act as a compromise between autonomy and accountability.
A “human-on-the-loop” system is designed to provide human operators with the ability to intervene and terminate engagements before unacceptable levels of damage occur. For example, defensive weapon systems could autonomously select and engage targets based on their programming, during which a human operator retains full supervision and can override the system within a limited period if necessary.
In contrast, a “human-in-the-loop” system is intended to engage individual targets or specific target groups pre-selected by a human operator. Examples include homing munitions that, once launched to a particular target location, search for and attack preprogrammed categories of targets within the area.
International Debate and Regulatory Frameworks
The regulation of autonomous weapons that employ AI is a pressing global issue due to the ethical, legal, and security concerns it raises. Several international efforts to regulate such weapons are under discussion. One example is the initiative under the United Nations Convention on Certain Conventional Weapons (CCW), where member states, India being an active participant, debate the limits of AI in warfare. Meanwhile, existing international laws, such as the Geneva Conventions, offer legal protection by prohibiting indiscriminate attacks and mandating the distinction between combatants and civilians. The key challenge lies in achieving global consensus, as nations have varied interests and levels of technological advancement. Some countries advocate a preemptive ban on fully autonomous weapons, while others prioritise military innovation. The complexity of defining human control and accountability further complicates efforts to establish binding regulations, making global cooperation both essential and challenging.
The Future of AI in Defence and the Need for Stronger Regulations
The evolution of autonomous weapons poses complex ethical and security challenges. As AI-driven systems become more advanced, the risk of their misuse in warfare grows, with lethal decisions potentially made without human oversight. Proactive regulation is crucial to prevent unethical uses of AI, such as indiscriminate attacks or violations of international law. Setting clear boundaries on autonomous weapons now can help avoid future humanitarian crises. India’s defence policy already recognises the importance of regulating the use of AI and AWS, as evidenced by the formation of bodies like the Defence AI Project Agency (DAIPA) for enabling AI-based processes in defence organisations. Global cooperation is essential for creating robust regulations that balance technological innovation with ethical considerations. Such collaboration would ensure that autonomous weapons are used responsibly, protecting civilians and combatants alike, while encouraging innovation within a framework that prioritises human dignity and international security.
Conclusion
AWS and AI in warfare present significant ethical, legal, and security challenges. While these technologies promise enhanced military capabilities, they raise concerns about accountability, human oversight, and humanitarian risks. Balancing innovation with ethical responsibility is crucial, and semi-autonomous systems offer a potential compromise. India’s efforts to regulate AI in defence highlight the importance of proactive governance. Global cooperation is essential in establishing robust regulations that ensure AWS is used responsibly, prioritising human dignity and adherence to international law, while fostering technological advancement.
References
- https://indianexpress.com/article/explained/reaim-summit-ai-war-weapons-9556525/