#FactCheck - Visual of Jharkhand Police seizing a truckload of cash and gold coins is an AI-generated image
Executive Summary:
An image circulating on social media claims to show a truck carrying cash and gold coins impounded by the Jharkhand Police during the 2024 Lok Sabha elections. The Research Wing of CyberPeace verified the image and found it to be generated using artificial intelligence. No credible news reports support the claim that the police made such a seizure in Jharkhand. AI image detection tools confirmed that the image is AI-generated. Readers are advised to verify the authenticity of any image or content before sharing it.

Claims:
The viral social media post depicts a truck intercepted by the Jharkhand Police during the 2024 Lok Sabha elections. It was claimed that the truck was filled with large amounts of cash and gold coins.



Fact Check:
On receiving the post, we began with a keyword search for news articles related to the claim. An incident of this scale would have been covered by most major media houses, yet we found no such reports. We then closely analysed the image for the anomalies typically found in AI-generated images, and found several.

The texture of the tree in the image appears blended. The shadows of the people also look unnatural, which makes the image more suspicious; such shadow errors are a common flaw in AI-generated images. A close look at the right hand of the elderly man in white attire shows his thumb blending into his clothing.
We then analysed the image with an AI image detection tool named ‘Hive Detector’, which found the image to be AI-generated.

To validate this finding, we checked the image with another AI detection tool, ‘ContentAtScale AI Detection’, which rated it 82% AI-generated.

After validating the viral post with multiple AI detection tools, it is apparent that the claim is fake and misleading.
Conclusion:
The viral image of the truck impounded by Jharkhand Police is fake and misleading: it is AI-generated, and no credible source supports the claim made. The Research Wing of CyberPeace has previously debunked similar AI-generated images carrying misleading claims. Netizens must verify news circulating on social media with bogus claims before sharing it further.
- Claim: The photograph shows a truck intercepted by Jharkhand Police during the 2024 Lok Sabha elections, which was allegedly loaded with huge amounts of cash and gold coins.
- Claimed on: Facebook, Instagram, X (formerly known as Twitter)
- Fact Check: Fake & Misleading

Executive Summary
A digitally manipulated image of World Bank President Ajay Banga has been circulating on social media, falsely portraying him as holding a Khalistani flag. The image was shared by a Pakistan-based X (formerly Twitter) user, who also incorrectly identified Banga as the President of the International Monetary Fund (IMF), thereby fuelling misleading speculation that he supports the Khalistani movement against India.
The Claim
On February 5, an X user with the handle @syedAnas0101010 posted an image allegedly showing Ajay Banga holding a Khalistani flag. The user misidentified him as the IMF President and captioned the post, “IMF president sending signals to INDIA.” The post quickly gained traction, amplifying false narratives and political speculation. Here is the link and archive link to the post, along with a screenshot:
Fact Check:
To verify the authenticity of the image, the CyberPeace Fact Check Desk conducted detailed research. The image was first subjected to a reverse image search using Google Lens, which led to a Reuters news report published on June 13, 2023. The original photograph, captured by Reuters photojournalist Jonathan Ernst, showed Ajay Banga arriving at the World Bank headquarters in Washington, D.C., on June 2, 2023, marking his first day in office. In the authentic image, Banga is seen holding a coffee cup, not a flag.
Further analysis confirmed that the viral image had been digitally altered to replace the coffee cup with a Khalistani flag, thereby misrepresenting the context and intent of the original photograph. Here is the link to the report, along with a screenshot.

To strengthen the findings, the altered image was also analysed using the Hive Moderation AI detection tool. The tool’s assessment indicated a high likelihood that the image contained AI-generated or manipulated elements, reinforcing the conclusion that the image was not genuine. Below is a screenshot of the result.

Conclusion
The viral image claiming to show World Bank President Ajay Banga holding a Khalistani flag is fake. The photograph was digitally manipulated to spread misinformation and provoke political speculation. In reality, the original Reuters image from June 2023 shows Banga holding a coffee cup during his arrival at the World Bank headquarters. The claim that he supports the Khalistani movement is false and misleading.

Introduction
Microsoft has unveiled an ambitious roadmap for developing a quantum supercomputer with AI features, acknowledging the transformative power of quantum computing in solving complex societal challenges. Quantum computing has the potential to revolutionise AI by enhancing its capabilities and enabling breakthroughs in different fields. This article examines Microsoft’s announcement, the potential applications of a quantum supercomputer, and the implications for the future of artificial intelligence (AI). It also considers the need for regulation in the realms of quantum computing and AI, along with the significant policies and considerations associated with these transformative technologies and the potential benefits and challenges of their implementation.
What is Quantum Computing?
Quantum computing is an emerging field of computer science and technology that utilises principles from quantum mechanics to perform complex calculations and solve certain types of problems more efficiently than classical computers. While classical computers store and process information using bits, quantum computers use quantum bits or qubits.
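The difference between bits and qubits can be illustrated with a tiny classical simulation (a sketch only, not real quantum hardware): a qubit's state is a pair of amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared amplitudes.

```python
import math
import random

# Illustrative classical simulation of a single qubit.
# State = (alpha, beta) with |alpha|^2 + |beta|^2 = 1;
# measurement returns 0 with probability |alpha|^2, else 1.

def measure(alpha: complex, beta: complex) -> int:
    """Simulate one computational-basis measurement."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# Equal superposition: (|0> + |1>) / sqrt(2)
alpha = beta = 1 / math.sqrt(2)

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1

print(counts)  # roughly even split between 0 and 1
```

A classical bit would always yield the same value, whereas the superposed qubit yields both outcomes with equal probability. Real quantum speedups additionally rely on interference and entanglement, which this sketch does not model.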
Interconnected Future
Quantum computing and artificial intelligence (AI) are two rapidly evolving fields with the potential to revolutionise technology and reshape entire industries. This section explores their interdependence, highlighting how integrating the two technologies could significantly expand AI’s capabilities beyond current limitations and lead to profound advancements across sectors such as healthcare, finance, and cybersecurity.
- Enhancing AI Capabilities:
Quantum computing holds the promise of significantly expanding the capabilities of AI systems. Traditional computers, based on classical physics and binary logic, struggle with complex problems because their computational requirements grow exponentially. Quantum computing, on the other hand, leverages the principles of quantum mechanics to perform computations on quantum bits, or qubits, which can exist in multiple states simultaneously. This inherent parallelism and superposition could potentially accelerate AI algorithms and enable more efficient processing of vast amounts of data.
- Solving Complex Problems:
The integration of quantum computing and AI has the potential to tackle complex problems that are currently beyond the reach of classical computing methods. Quantum machine learning algorithms, for example, could leverage quantum superposition and entanglement to analyse and classify large datasets more effectively. This could have significant applications in healthcare, where AI-powered quantum systems could aid in drug discovery, disease diagnosis, and personalised medicine by processing vast amounts of genomic and clinical data.
- Advancements in Finance and Optimisation:
The financial sector can benefit significantly from integrating quantum computing and AI. Quantum algorithms can be employed to optimise portfolios, improve risk analysis models, and enhance trading strategies. By harnessing the power of quantum machine learning, financial institutions can make more accurate predictions and informed decisions, leading to increased efficiency and reduced risks.
- Strengthening Cybersecurity:
Quantum computing can also play a pivotal role in bolstering cybersecurity defences. Quantum techniques can be employed to develop new cryptographic protocols that are resistant to quantum attacks. In conjunction with quantum computing, AI can further enhance cybersecurity by analysing massive amounts of network traffic and identifying potential vulnerabilities or anomalies in real time, enabling proactive threat mitigation.
- Quantum-Inspired AI:
Beyond the direct integration of quantum computing and AI, quantum-inspired algorithms are also being explored. These algorithms, designed to run on classical computers, draw inspiration from quantum principles and can improve performance in specific AI tasks. Quantum-inspired optimisation algorithms, for instance, can help solve complex optimisation problems more efficiently, enabling better resource allocation, supply chain management, and scheduling in various industries.
How Quantum Computing and AI Should be Regulated-
As quantum computing and artificial intelligence (AI) continue to advance, questions arise regarding the need for regulations to govern these technologies. Debate surrounds their regulation, which must weigh potential risks, ethical implications, and the balance between innovation and societal protection.
- Assessing Potential Risks: Quantum computing and AI bring unprecedented capabilities that can significantly impact various aspects of society. However, they also pose potential risks, such as unintended consequences, privacy breaches, and algorithmic biases. Regulation can help identify and mitigate these risks, ensuring these technologies’ responsible development and deployment.
- Ethical Implications: AI and quantum computing raise ethical concerns related to privacy, bias, accountability, and the impact on human autonomy. For AI, issues such as algorithmic fairness, transparency, and decision-making accountability must be addressed. Quantum computing, with its potential to break current encryption methods, requires regulatory measures to protect sensitive information. Ethical guidelines and regulations can provide a framework to address these concerns and promote responsible innovation.
- Balancing Innovation and Regulation: Regulating quantum computing and AI involves striking a balance between fostering innovation and protecting society’s interests. Excessive regulation could stifle technological advancements, hinder research, and impede economic growth. On the other hand, a lack of regulation may lead to the proliferation of unsafe or unethical applications. A thoughtful and adaptive regulatory approach is necessary, one that accounts for the dynamic nature of these technologies and allows for iterative improvements as understanding and risks evolve.
- International Collaboration: Given the global nature of quantum computing and AI, international collaboration in regulation is essential. Harmonising regulatory frameworks can avoid fragmented approaches, ensure consistency, and facilitate ethical and responsible practices across borders. Collaborative efforts can also address data privacy, security, and cross-border data flow challenges, enabling a more unified and cooperative approach towards regulation.
- Regulatory Strategies: Regulatory strategies for quantum computing and AI should adopt a multidisciplinary approach involving stakeholders from academia, industry, policymakers, and the public. Key considerations include:
- Risk-based Approach: Regulations should focus on high-risk applications while allowing low-risk experimentation and development space.
- Transparency and Explainability: AI systems should be transparent and explainable to enable accountability and address concerns about bias, discrimination, and decision-making processes.
- Privacy Protection: Regulations should safeguard individual privacy rights, especially in quantum computing, where current encryption methods may be vulnerable.
- Testing and Certification: Establishing standards for the testing and certification of AI systems can ensure their reliability, safety, and adherence to ethical principles.
- Continuous Monitoring and Adaptation: Regulatory frameworks should be dynamic, regularly reviewed, and adapted to keep pace with the evolving landscape of quantum computing and AI.
Conclusion:
Integrating quantum computing and AI holds immense potential for advancing technology across diverse domains. Quantum computing can enhance the capabilities of AI systems, enabling the solution of complex problems, accelerating data processing, and revolutionising industries such as healthcare, finance, and cybersecurity. As research and development in these fields progress, collaborative efforts among researchers, industry experts, and policymakers will be crucial in harnessing the synergies between quantum computing and AI to drive innovation and shape a transformative future.

The regulation of quantum computing and AI is a complex and ongoing discussion. Striking the right balance between fostering innovation, protecting societal interests, and addressing ethical concerns is crucial. A collaborative, multidisciplinary approach to regulation, considering international cooperation, risk assessment, transparency, privacy protection, and continuous monitoring, is necessary to ensure these transformative technologies' responsible development and deployment.

Introduction
In today’s digital world, where everything revolves around data, the more data a company owns, the more insight into and influence over its market it has, which is why companies are looking for ways to use data to improve their business. At the same time, they have to make sure they are protecting people’s privacy, and striking a balance between the two is tricky. Imagine baking a cake where you need all the ingredients to make it taste great, but no one should be able to tell what’s in it. That is roughly what companies face when it comes to data. Here, ‘pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
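The GDPR definition above hinges on keeping the "additional information" (the re-identification key) separate from the dataset. A minimal Python sketch, with illustrative field names, shows one common approach: a lookup table of random tokens.

```python
import secrets

# Illustrative pseudonymisation via a lookup table. The token->identifier
# mapping is the GDPR's "additional information": it must be stored
# separately from the dataset and protected by its own controls.

def pseudonymise(records, key_field="email"):  # field name is illustrative
    mapping = {}   # identifier -> token; keep apart from the data
    out = []
    for rec in records:
        identifier = rec[key_field]
        if identifier not in mapping:
            mapping[identifier] = secrets.token_hex(8)
        pseud = dict(rec)
        pseud[key_field] = mapping[identifier]
        out.append(pseud)
    return out, mapping

records = [
    {"email": "a@example.com", "purchase": 120},
    {"email": "b@example.com", "purchase": 75},
    {"email": "a@example.com", "purchase": 30},
]
pseud, mapping = pseudonymise(records)
# The dataset keeps its analytical structure (same person -> same token),
# but re-identification now requires access to `mapping`.
```

Because the same individual always receives the same token, the data stays useful for analysis, while attribution to a natural person is possible only for whoever holds the separately stored mapping.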
Its legal treatment varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of the GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects: it protects personal data by reducing the risk of identifying individuals during processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance, emphasising the importance of defining the “pseudonymisation domain”, which specifies who is prevented from attributing data to specific individuals, and of ensuring that technical and organisational measures are in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. The Act, in Section 2(t), broadly defines personal data as “any data about an individual who is identifiable by or in relation to such data,” suggesting that pseudonymised information, being reversible, would continue to require compliance with data protection obligations.
Further, the DPDP Act, 2023 also incorporates the principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here as a recognised technical safeguard, meaning companies can use it as one method in their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk to be mitigated. In practice, organisations may combine pseudonymisation with other safeguards to strengthen overall compliance and security.
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules, but it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can utilise this data for their research or other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal details such as names or IDs with artificial values or codes, reducing the risk of accidental privacy breaches.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: Pseudonymised data is easier to share with partners or researchers because it protects privacy while remaining useful.
However, using pseudonymisation is not straightforward. Companies have to deal with tricky technical issues, such as choosing the right methods (for example, encryption or tokenisation) and managing security keys safely, and they must implement strong policies to stop anyone from re-identifying the data subjects. This can get expensive and complicated, especially with large volumes of data, and it often requires expert help and regular upkeep.
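As a concrete illustration of the tokenisation approach mentioned above, here is a minimal keyed-hash sketch in Python. The key value and truncation length are illustrative; in practice the secret key would live in a key-management system, which is exactly the key-management burden described here.

```python
import hashlib
import hmac

# Illustrative keyed-hash tokenisation. The same input always yields the
# same token, so records can still be joined across datasets, but linking
# a token back to a person requires the secret key, which is why key
# management (rotation, separate storage) matters.

SECRET_KEY = b"example-key-keep-in-a-kms"  # placeholder, not a real key

def pseudonym(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncation length is illustrative

token = pseudonym("a@example.com")
```

Unlike a lookup table, this scheme needs no stored mapping, but anyone who obtains the key can regenerate every token, so the key itself becomes the critical asset to protect.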
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and maintaining the utility of the data. To get this right, companies need to weigh several factors, such as the purpose of the processing, the sophistication of potential attackers, and the type of data involved.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and restricted data use, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like GDPR and India’s DPDP Act. For companies, it is not only regulatory adherence but also ensuring that it builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915