#FactCheck - Viral video of UP Police patrolling on e-rickshaw is AI-generated
Executive Summary
A video is being widely shared on social media showing a police officer driving an e-rickshaw, while two other policemen are seen in the back seat. Users sharing the clip claim that, due to a shortage of petrol, this is a new initiative by the Uttar Pradesh Police. However, research by CyberPeace found the viral claim to be false. Our research also confirms that the video is not real but AI-generated.
Claim
An Instagram user shared the viral video claiming that due to fuel shortages, Uttar Pradesh Police has started patrolling using e-rickshaws.
- Post link: https://www.instagram.com/reel/DWepKWXAeiE/
- Archive: https://archive.ph/QBNXs

Fact Check
To verify the claim, we first conducted a keyword search on Google but found no credible media reports supporting this claim.

Next, we extracted keyframes from the viral video and performed a reverse image search using Google Lens. During this process, we found the same video uploaded on an Instagram channel on March 28, 2026. The uploader clearly mentioned that the video was created purely for entertainment purposes.

We further analyzed the video using AI detection tools. When scanned with Hive Moderation, the results indicated that the video is approximately 94% AI-generated.

In the next step, we also tested the clip using DeepAI. According to its analysis, the video is about 97% AI-generated.

Conclusion
Our research clearly shows that the viral video is not authentic. It is an AI-generated clip created for entertainment purposes, and the claim that Uttar Pradesh Police has started e-rickshaw patrolling due to petrol shortage is false.
Related Blogs

Introduction
In recent times, the evolution of cyber law has picked up momentum, primarily because of new and emerging technologies. However, as with any other body of law, it is also strengthened and substantiated by judicial precedents and judgements. Recently, the Delhi High Court heard a matter between Tata Sky and LinkedIn, in which the court asked LinkedIn to present the details of its Chief Grievance Officer along with its SoP, as required under the Intermediary Guidelines, 2021.
Furthermore, in other news, officials from the RBI and MeitY have been summoned by the Parliamentary Standing Committee to address the rising issues of cybersecurity and cybercrime in India. This came on the very first day of this year's monsoon session of Parliament. As we move towards a digital India, addressing these concerns is of utmost importance to safeguard the Indian netizen.
The Issue
Tata Sky changed its name to Tata Play last year and has since made its advent into the OTT sector as well. As the rebranding took place, the company was cautious about anyone using the name Tata Sky in a bad light. Tata Play found that many people on LinkedIn had listed work experience at Tata Sky spanning multiple years, claims that a new recruiter cannot verify. This amounts to a misappropriation of the brand's name. Officials of Tata Play reported the issue to LinkedIn multiple times, but no significant action was taken. The dispute between the two brands was therefore brought before the Hon'ble Delhi High Court. The court took due cognisance of the issue and, in accordance with the Intermediary Guidelines, 2021, directed LinkedIn to publish the details of its Chief Grievance Officer in the public domain and to share its SoP for the redressal of issues and grievances. The guidelines make it mandatory for all intermediaries to set up a dedicated office in India and appoint a Chief Grievance Officer responsible for the effective and efficient redressal of platform-related offences and grievances within the stipulated period.
The job platform has also been ordered to share its SoPs and the various requirements and safety checks users must satisfy to create profiles on LinkedIn. LinkedIn's policy is focused on users as well as the companies on the platform, in order to create a synergy between the two.
RBI and MeitY Officials at Parliament
As we go deeper into cyberspace, especially after the pandemic, we have seen an exponential rise in cybercrime. Based on statistics, 4 out of 10 people were victims of cybercrime in 2022-23, and it is estimated that 70% of the population has been subjected to direct or indirect cybercrime. As per the latest statistics, 85% of Indian children have been subjected to cyberbullying in some form or another.
The government has taken note of the rising number of such crimes and threats, and hence the Parliamentary Committee summoned officials from the RBI and the Ministry of Electronics and Information Technology to Parliament on July 20, 2023, the first day of the monsoon session. This comes at a crucial time, as the Digital Personal Data Protection Bill is to be tabled in Parliament this session, marking a revamp of the legislation and regulations governing Indian cyberspace. As emerging technologies increasingly surround us, it is pertinent to create legal safeguards and practices to protect the Indian netizen at large.
Conclusion
The legal crossroads between Tata Sky and LinkedIn will go a long way towards establishing the mandates under the Intermediary Guidelines in the form of legal precedent. Compliance with the rule of law is the most crucial aspect of any democracy; hence the separation of power between the Legislature, the Judiciary and the Executive has been fundamental in safeguarding basic and fundamental rights. Similarly, the summoning of RBI and MeitY officials to Parliament shows the transparency in the system and reflects the true spirit of democracy, which will contribute towards creating a safe and secure Indian cyberspace.

Introduction
Phishing-as-a-Service (PhaaS) platform 'LabHost' has been a significant player in cybercrime targeting North American banks, particularly financial institutions in Canada. LabHost offers turnkey phishing kits, infrastructure for hosting pages, email content generation, and campaign overview services to cybercriminals in exchange for a monthly subscription. The platform's popularity surged after it introduced custom phishing kits for Canadian banks in the first half of 2023. Fortra reports that LabHost has overtaken Frappo, cybercriminals' previous favorite PhaaS platform, and is now the primary driving force behind most phishing attacks targeting Canadian bank customers.
In the digital realm, the barriers to entry for nefarious activities are crumbling, and the tools of the trade are being packaged and sold with the same customer service one might expect from a legitimate software company. This is the world of Phishing-as-a-Service (PhaaS), and at the forefront of this ominous trend is LabHost, a platform that has been instrumental in escalating attacks on North American banks, with a particular focus on Canadian financial institutions.
LabHost is not a newcomer to the cybercrime scene, but its ascent to infamy was catalyzed by the introduction of custom phishing kits tailored for Canadian banks in the first half of 2023. The platform operates on a subscription model, offering turnkey solutions that include phishing kits, infrastructure for hosting malicious pages, email content generation, and campaign overview services. For a monthly fee, cybercriminals are handed the keys to a kingdom of deception and theft.
Emergence of Labhost
The rise of LabHost has been meticulously chronicled by various cybersecurity firms, which report that LabHost has dethroned the previously favored PhaaS platform, Frappo. LabHost has become the primary driving force behind the majority of phishing attacks targeting customers of Canadian banks. Despite suffering a disruptive outage in early October 2023, LabHost has rebounded with vigor, orchestrating several hundred attacks per month.
Investigations into LabHost's operations reveal a tiered membership system: Standard, Premium, and World, with monthly fees of $179, $249, and $300, respectively. Each tier offers an escalating scope of targets, from Canadian banks to 70 institutions worldwide, excluding North America. The phishing templates provided by LabHost are not limited to financial entities; they also encompass online services like Spotify, postal delivery services like DHL, and regional telecommunication service providers.
LabRat
The true ingenuity of LabHost lies in its integration with 'LabRat,' a real-time phishing management tool that enables cybercriminals to monitor and control an active phishing attack. This tool is a linchpin in man-in-the-middle style attacks, designed to capture two-factor authentication codes, validate credentials, and bypass additional security measures. In essence, LabRat is the puppeteer's strings, allowing the phisher to manipulate the attack with precision and evade the safeguards that are the bulwarks of our digital fortresses.
LabSend
In the aftermath of its October disruption, LabHost unveiled 'LabSend,' an SMS spamming tool that embeds links to LabHost phishing pages in text messages. This tool orchestrates a symphony of automated smishing campaigns, randomizing portions of text messages to slip past the vigilant eyes of spam detection systems. Once the SMS lure is cast, LabSend responds to victims with customizable message templates, a Machiavellian touch to an already insidious scheme.
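The evasion technique LabSend relies on, randomizing message text so that no two lures share an exact phrase, also explains why simple blocklist-style spam detection fails against it. The toy sketch below illustrates the principle only; the phrases, filter, and message templates are hypothetical stand-ins, and real smishing kits and real filters are far more sophisticated:

```python
import random

# A naive exact-phrase blocklist, standing in for a simplistic SMS spam filter.
BLOCKLIST = {"your package is on hold", "verify your account now"}

def naive_filter(message: str) -> bool:
    """Return True if the message matches a known spam phrase."""
    return any(phrase in message.lower() for phrase in BLOCKLIST)

def randomized_lure(link: str) -> str:
    # Randomizing greeting and pretext means no two messages share an
    # exact phrase the blocklist can memorize.
    greeting = random.choice(["Hi", "Hello", "Dear customer"])
    pretext = random.choice(["a delivery needs confirmation", "your parcel is waiting"])
    return f"{greeting}, {pretext}: {link}"

print(naive_filter("Your package is on hold, click here"))       # True: exact match
print(naive_filter(randomized_lure("https://example.test/x")))   # False: slips past
```

This is why modern spam defenses score messages on many weaker signals (sender reputation, link destinations, statistical text features) rather than exact phrases alone.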
The Proliferation of PhaaS
The proliferation of PhaaS platforms like LabHost, 'Greatness,' and 'RobinBanks' has democratized cybercrime, lowering the threshold for entry and enabling even the most unskilled hackers to launch sophisticated attacks. These platforms are catalysts for an exponential increase in the pool of threat actors, thereby magnifying the impact of cybercrime on a global scale.
The ease with which these services can be accessed and utilized belies the complexity and skill traditionally required to execute successful phishing campaigns. Stephanie Carruthers, who leads an IBM X-Force phishing research project, notes that crafting a single phishing email can consume upwards of 16 hours, not accounting for the time and resources needed to establish the infrastructure for sending the email and harvesting credentials.
PhaaS platforms like LabHost have commoditized this process, offering a buffet of malevolent tools that can be customized and deployed with a few clicks. The implications are stark: the security measures that businesses and individuals have come to rely on, such as multi-factor authentication (MFA), are no longer impenetrable. PhaaS platforms have engineered ways to circumvent these defenses, rendering them vulnerable to exploitation.
Emerging Cyber Defense
In the face of this escalating threat, a multi-faceted defense strategy is imperative. Cybersecurity solutions like SpamTitan employ advanced AI and machine learning to identify and block phishing threats, while end-user training platforms like SafeTitan provide ongoing education to help individuals recognize and respond to phishing attempts. However, with phishing kits now capable of bypassing MFA, it is clear that more robust solutions, such as phishing-resistant MFA based on FIDO/WebAuthn authentication or Public Key Infrastructure (PKI), are necessary to thwart these advanced attacks.
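The reason FIDO/WebAuthn resists the man-in-the-middle phishing that defeats one-time codes comes down to origin binding: the authenticator signs over the origin the browser actually connected to, so a response relayed through a look-alike domain fails verification at the real server. The sketch below is a deliberately simplified model, not the real WebAuthn wire format, and the HMAC stands in for the authenticator's asymmetric signature; the domain names are invented:

```python
import hashlib
import hmac
import json
import os

SECRET = os.urandom(32)  # toy stand-in for the authenticator's private key

def authenticator_sign(challenge: bytes, origin: str) -> dict:
    # A FIDO authenticator signs over both the server's challenge AND the
    # origin reported by the browser -- the user cannot override this binding.
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return {"payload": payload,
            "sig": hmac.new(SECRET, payload, hashlib.sha256).digest()}

def relying_party_verify(response: dict, challenge: bytes, expected_origin: str) -> bool:
    data = json.loads(response["payload"])
    ok_sig = hmac.compare_digest(
        response["sig"],
        hmac.new(SECRET, response["payload"], hashlib.sha256).digest())
    return ok_sig and data["challenge"] == challenge.hex() and data["origin"] == expected_origin

challenge = os.urandom(16)
legit = authenticator_sign(challenge, "https://bank.example")
# A phishing proxy relays the same challenge, but the browser reports the fake origin:
phish = authenticator_sign(challenge, "https://bank-login.example")

print(relying_party_verify(legit, challenge, "https://bank.example"))  # True
print(relying_party_verify(phish, challenge, "https://bank.example"))  # False
```

A TOTP code, by contrast, carries no notion of origin, which is exactly why a proxy can capture and replay it in real time.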
Conclusion
The emergence of PhaaS platforms represents a significant shift in the landscape of cybercrime, one that requires a vigilant and sophisticated response. As we navigate this treacherous terrain, it is incumbent upon us to fortify our defenses, educate our users, and remain ever-watchful of the evolving tactics of cyber adversaries.
References
- https://www-bleepingcomputer-com.cdn.ampproject.org/c/s/www.bleepingcomputer.com/news/security/labhost-cybercrime-service-lets-anyone-phish-canadian-bank-users/amp/
- https://www.techtimes.com/articles/302130/20240228/phishing-platform-labhost-allows-cybercriminals-target-banks-canada.htm
- https://www.spamtitan.com/blog/phishing-as-a-service-threat/
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and payment services ranging from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier—and more profitable—to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deep fake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn last year, the study said 63 percent of participants spoke of experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms were encountered hosting deepfake porn of stars ranging from Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna to TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business association and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. However, the irony of these disclaimers is not lost on anyone, especially when they host thousands of non-consensual deepfake pornography.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whomever a user wants, taking requests for celebrities. To encourage creators further, these platforms enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and celebrities are not the only targets; common people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutions and banks are not paying much attention to this legal question. During the investigation, many such websites were found accepting payments through services like VISA, Mastercard, and Stripe.
Those who have failed to register/partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees/stakeholders, which potentially violates the platform's terms of use that ban the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a boundaryless crime that can involve several countries in the process.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. The Union Government recently issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/