#FactCheck - Viral video of UP Police patrolling on e-rickshaw is AI-generated
Executive Summary
A video is being widely shared on social media showing a police officer driving an e-rickshaw, while two other policemen are seen in the back seat. Users sharing the clip claim that, due to a shortage of petrol, this is a new initiative by the Uttar Pradesh Police. However, research by CyberPeace found the viral claim to be false. Our research also confirms that the video is not real but AI-generated.
Claim
An Instagram user shared the viral video claiming that due to fuel shortages, Uttar Pradesh Police has started patrolling using e-rickshaws.
- Post link: https://www.instagram.com/reel/DWepKWXAeiE/
- Archive: https://archive.ph/QBNXs

Fact Check
To verify the claim, we first conducted a keyword search on Google but found no credible media reports supporting this claim.

Next, we extracted keyframes from the viral video and performed a reverse image search using Google Lens. During this process, we found the same video uploaded on an Instagram channel on March 28, 2026. The uploader clearly mentioned that the video was created purely for entertainment purposes.
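The keyframe-extraction step described above can be sketched in code. The following is a minimal, illustrative sketch only (not the exact tooling used by the Desk): it flags "keyframes" as frames whose pixel content differs sharply from the previous frame. Real workflows decode frames with a video library such as OpenCV or ffmpeg; to stay self-contained, this sketch models frames as flat lists of grayscale values, and the `threshold` value is an assumed, tunable parameter.

```python
# Illustrative keyframe selection by frame differencing.
# Frames are modelled as flat lists of grayscale pixel values (0-255);
# a real pipeline would decode them from video with OpenCV or ffmpeg.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Return indices of frames that differ sharply from their predecessor.

    The first frame is always a keyframe; later frames qualify when the
    mean absolute pixel difference exceeds `threshold` (an assumed value).
    """
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[i - 1]) > threshold:
            keyframes.append(i)
    return keyframes

# Three synthetic 4-pixel "frames": a static scene, then a hard cut.
frames = [
    [10, 10, 10, 10],
    [12, 11, 10, 13],      # near-identical -> not a keyframe
    [200, 190, 210, 205],  # scene change -> keyframe
]
print(select_keyframes(frames))  # -> [0, 2]
```

The selected frames would then be fed individually into a reverse image search tool such as Google Lens.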

We further analyzed the video using AI detection tools. When scanned with Hive Moderation, the results indicated that the video is approximately 94% AI-generated.

In the next step, we also tested the clip using DeepAI. According to its analysis, the video is about 97% AI-generated.

Conclusion
Our research clearly shows that the viral video is not authentic. It is an AI-generated clip created for entertainment purposes, and the claim that Uttar Pradesh Police has started e-rickshaw patrolling due to petrol shortage is false.
Related Blogs
Introduction
MeitY’s Indian Computer Emergency Response Team (CERT-In), in collaboration with SISA, a global leader in forensics-driven cybersecurity, launched the ‘Certified Security Professional for Artificial Intelligence’ (CSPAI) program on 23rd September. This initiative marks the first ANAB-accredited AI security certification of its kind. The CSPAI also complements global AI governance efforts: international frameworks such as the OECD AI Principles and the European Union's AI Act, which aim to regulate AI technologies to ensure fairness, transparency, and accountability in AI systems, serve as the sounding board for this initiative.
About the Initiative
The Certified Security Professional for Artificial Intelligence (CSPAI) is the world’s first ANAB-accredited certification program that focuses on Cyber Security for AI. The collaboration between CERT-In and SISA plays a pivotal role in shaping AI security policies: such public-private partnerships bridge the gap between government regulatory needs and the technological expertise of private players, creating comprehensive and enforceable AI security policies. The CSPAI has been specifically designed to integrate AI and GenAI into business applications while aligning security measures with the unique challenges that AI systems pose. The program emphasises the strategic application of Generative AI and Large Language Models (LLMs) in future AI deployments and highlights the significant advantages of integrating LLMs into business applications.
The program is tailored for security professionals to understand the do’s and don’ts of AI integration into business applications, with a comprehensive focus on sustainable practices for securing AI-based applications. This is achieved through comprehensive risk identification and assessment frameworks recommended by ISO and NIST. The program also emphasises continuous assessment and conformance to AI laws across various nations, ensuring that AI applications adhere to standards for trustworthy and ethical AI practices.
Aim of the Initiative
As AI becomes an intrinsic part of business operations, a growing need for AI security expertise is visible across industries. With this in focus, the accreditation program has been created to equip professionals with the knowledge and tools to secure AI systems. The CSPAI program aims to build a safer digital future while fostering innovation and responsibility in the evolving cybersecurity landscape, with a focus on Generative AI (GenAI) and Large Language Models (LLMs).
Conclusion
This Public-Private Partnership between CERT-In and SISA, which led to the creation of the Certified Security Professional for Artificial Intelligence (CSPAI), represents a groundbreaking initiative towards AI and its responsible usage. CSPAI addresses the growing demand for cybersecurity expertise in AI technologies. As AI becomes more embedded in business operations, the program aims to equip security professionals with the knowledge to assess, manage, and mitigate risks associated with AI applications. By aligning with frameworks from ISO and NIST and ensuring adherence to AI laws globally, CSPAI promotes trustworthy and ethical AI usage. The approach is a significant step towards creating a safer digital ecosystem while fostering responsible AI innovation. The certification will be particularly significant for the healthcare, finance, and defence sectors, where AI is rapidly becoming indispensable. By ensuring that AI applications in these sectors meet security and ethical standards, CSPAI can help build public trust and encourage broader AI adoption.
References
- https://pib.gov.in/PressReleasePage.aspx?PRID=2057868
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://timesofindia.indiatimes.com/business/india-business/cert-in-and-sisa-launch-ai-security-certification-program-to-integrate-ai-into-business-applications/articleshow/113622067.cms

Executive Summary
A digitally manipulated image of World Bank President Ajay Banga has been circulating on social media, falsely portraying him as holding a Khalistani flag. The image was shared by a Pakistan-based X (formerly Twitter) user, who also incorrectly identified Banga as the President of the International Monetary Fund (IMF), thereby fuelling misleading speculation that he supports the Khalistani movement against India.
The Claim
On February 5, an X user with the handle @syedAnas0101010 posted an image allegedly showing Ajay Banga holding a Khalistani flag. The user misidentified him as the IMF President and captioned the post, “IMF president sending signals to INDIA.” The post quickly gained traction, amplifying false narratives and political speculation. Here is the link and archive link to the post, along with a screenshot:
Fact Check:
To verify the authenticity of the image, the CyberPeace Fact Check Desk conducted detailed research. The image was first subjected to a reverse image search using Google Lens, which led to a Reuters news report published on June 13, 2023. The original photograph, captured by Reuters photojournalist Jonathan Ernst, showed Ajay Banga arriving at the World Bank headquarters in Washington, D.C., on June 2, 2023, marking his first day in office. In the authentic image, Banga is seen holding a coffee cup, not a flag.
Further analysis confirmed that the viral image had been digitally altered to replace the coffee cup with a Khalistani flag, thereby misrepresenting the context and intent of the original photograph. Here is the link to the report, along with a screenshot.
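Digital alterations of this kind can often be surfaced programmatically by comparing perceptual hashes of the viral image against the original. The sketch below is purely illustrative (not the Desk's actual method): it implements a tiny average-hash (aHash) over images modelled as 2D grayscale grids, so that an edited region flips hash bits and raises the Hamming distance between the two images. Production tools would compute such hashes over real image files with a library such as `imagehash`.

```python
# Illustrative average-hash (aHash) comparison of two grayscale images,
# modelled as 2D lists of pixel values (0-255). An edited region changes
# the bit pattern, so the Hamming distance between the hashes rises.

def average_hash(pixels):
    """Hash each pixel as 1 if it is >= the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
]
# Same image, but the bright top-left patch has been "edited" dark.
altered = [
    [20, 20, 50, 50],
    [20, 20, 50, 50],
]

d = hamming_distance(average_hash(original), average_hash(altered))
print(d)  # identical images give 0; this edit flips bits -> 8
```

A large Hamming distance only signals that the images differ; locating *what* was changed (here, the coffee cup swapped for a flag) still requires the side-by-side comparison described above.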

To strengthen the findings, the altered image was also analysed using the Hive Moderation AI detection tool. The tool’s assessment indicated a high likelihood that the image contained AI-generated or manipulated elements, reinforcing the conclusion that the image was not genuine. Below is a screenshot of the result.

Conclusion
The viral image claiming to show World Bank President Ajay Banga holding a Khalistani flag is fake. The photograph was digitally manipulated to spread misinformation and provoke political speculation. In reality, the original Reuters image from June 2023 shows Banga holding a coffee cup during his arrival at the World Bank headquarters. The claim that he supports the Khalistani movement is false and misleading.

Introduction
On 30th October 2023, US President Biden signed a key executive order to manage the risks posed by Artificial Intelligence (AI). The order sets rules for a rapidly growing technology that holds great potential but also poses risks, and it represents strong action by the US president on AI safety and security. It requires developers of the most powerful AI models to share their safety test results with the government before releasing their products to the public. It also calls for standards for the ethical use of AI and for detecting and labelling AI-generated content. As AI rapidly advances, it poses risks such as displacing human workers, spreading misinformation, and stealing people's data. The White House has made clear that this is not just America’s problem: the US needs to work with the world to set standards and ensure the responsible use of AI. It is also urging Congress to do more and pass comprehensive privacy legislation. The order includes new safety guidelines for AI developers, standards for disclosing AI-generated content, and requirements for federal agencies that use AI. The White House says it is the strongest action that any government has taken on AI safety and security. In related recent events, India has reported its biggest-ever data breach, in which the data of 815 million Indians was leaked from the Indian Council of Medical Research (ICMR), the country's premier medical research institution.
Key highlights of the presidential order
The presidential order requires developers to share safety test results and focuses on developing standards, tools, and tests to ensure safe AI. It seeks to protect Americans from AI-enabled fraud, safeguard their privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, protect against the risks of using AI to engineer dangerous material, and provide guidelines for detecting AI-generated content and establishing overall standards for AI safety and security.
Online content authentication and labelling
The Biden administration has asked the Department of Commerce to develop guidelines for authenticating content coming from the government, so that the American people can trust official government documents. Alongside content authentication, the order also addresses labelling AI-generated content, making it possible to distinguish a real, authentic piece of content from something that has been manipulated or generated using AI.
ICMR Breach
On 31 October 2023, an American intelligence and cybersecurity agency flagged the biggest-ever data breach, putting the data of 81.5 crore Indians at stake and at potential risk of making its way to the dark market. The agency reported that a ‘threat actor’ known as ‘pwn001’ shared a thread on Breach Forums, which describes itself as the ‘premier Databreach discussion and leaks forum’, claiming a breach of the data of 81.5 crore Indians. As of today, ICMR has not issued any official statement, but the government has been informed that the Central Bureau of Investigation (CBI) will take on the investigation and apprehend the cybercriminals behind the attack. In a post on X (formerly Twitter), ‘pwn001’ stated that the leaked data includes Aadhaar and passport information along with personal details such as names, phone numbers, and addresses, and claimed it was extracted from the COVID-19 test records of citizens registered with ICMR. This exposes Indian netizens to a serious risk of cybercrime from anywhere in the world.
Conclusion:
The US presidential order on AI is a move towards making Artificial Intelligence safe and secure. It is a major step by the Biden administration to protect both Americans and the wider world from the considerable dangers of AI. The order requires developing standards, tools, and tests to ensure AI safety, and the US administration will work with allies and global partners, including India, to develop a strong international framework to govern the development and responsible use of AI. With legislation such as the Digital Personal Data Protection Act, 2023 now in place, it is pertinent that the Indian government works on precautionary and preventive measures to protect Indian data. As cyber laws continue to evolve, we need to keep an eye on emerging technologies and update our digital routines and hygiene to stay safe and secure.
References:
- https://m.dailyhunt.in/news/india/english/lokmattimes+english-epaper-lokmaten/biden+signs+landmark+executive+order+to+manage+ai+risks-newsid-n551950866?sm=Y
- https://www.hindustantimes.com/technology/in-indias-biggest-data-breach-personal-information-of-81-5-crore-people-leaked-101698719306335-amp.html?utm_campaign=fullarticle&utm_medium=referral&utm_source=inshorts