#FactCheck - "AI-Generated Image of UK Police Officers Bowing to Muslims Goes Viral"
Executive Summary:
A viral picture on social media showing UK police officers bowing to a group of Muslims has led to debate and discussion. An investigation by the CyberPeace Research Team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
A reverse image search of the viral image did not lead to any credible news source or original post confirming its authenticity. Image analysis revealed a number of anomalies commonly found in AI-generated images, such as inconsistencies in the officers' uniforms and facial expressions. The shadows and reflections on the officers' uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.
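The core idea behind such reverse-image matching can be illustrated with a short, self-contained sketch. The average-hash function below is a toy implementation on hand-made 4x4 grayscale grids, not the actual tooling used in this investigation: near-duplicate images produce fingerprints that differ in only a few bits, while unrelated images produce very different fingerprints.

```python
# Toy sketch of perceptual (average) hashing, one principle behind
# reverse image search. Real services use far more robust features;
# the 4x4 grayscale "images" below are illustrative data only.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 28, 225],
            [11, 198, 33, 218]]

# A lightly edited copy (small brightness changes) and an unrelated image.
edited    = [[12, 202, 31, 219],
             [14, 208, 27, 214],
             [13, 204, 29, 224],
             [10, 199, 34, 217]]
unrelated = [[200, 10, 220, 30],
             [210, 15, 215, 25],
             [205, 12, 225, 28],
             [198, 11, 218, 33]]

h0, h1, h2 = (average_hash(img) for img in (original, edited, unrelated))
print(hamming_distance(h0, h1))  # small distance: likely the same image
print(hamming_distance(h0, h2))  # large distance: a different image
```

Search engines index billions of such fingerprints so that a query image can be matched against previously published copies; when no close match exists anywhere, as here, the image has no verifiable origin.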

We then analysed the image using an AI detection tool named True Media. The tool indicated that the image was highly likely to have been generated by AI.



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. The CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading
Related Blogs
The 2020s mark the emergence of deepfakes in general media discourse. The rise in deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person’s voice and produce realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such materials can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). This technology works on an advanced algorithm that creates hyper-realistic videos by using a person’s face, voice or likeness utilising techniques such as machine learning. The utilisation and progression of deepfake technology holds vast potential, both benign and malicious.
An example is the NGO Malaria No More, which used deepfake technology in 2019 to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but the existing legal provisions offer some safeguards against mischief caused.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita of 2023. It deals with the consequences of such acts under Section 356, governing defamation law.
- The Information Technology Act of 2000 is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can become a violation of their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed legislation, aptly titled the ‘Digital India Act’, aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising the creation and distribution of harmful deepfakes would act as a deterrent.
- Disclosure requirements for synthetic media should be mandated, informing viewers that the content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
There has been growing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the malicious distribution of deepfake images in electoral contexts, and Virginia has enacted a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring the explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation, requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures highlight a global recognition of the risks that deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts have highlighted the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation will be essential to shield truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation

Executive Summary:
Recent reports circulating on various social media platforms have falsely claimed that an air taxi prototype is operational and providing services between Amritsar, Chandigarh, Delhi, and Jaipur. These claims, accompanied by images and videos, have been widely shared, attracting significant public attention. However, a thorough examination using reverse image search determined that the information is misleading and inaccurate. These assertions do not reflect the current reality and are not substantiated by credible sources.

Claim:
The claim suggests that an air taxi prototype is already operational, servicing routes between Amritsar, Chandigarh, Delhi, and Jaipur. This assertion is accompanied by images of a futuristic aircraft, implying that such technology is currently being used to transport commercial passengers.

Fact Check:
The claim of an operational air taxi service between Amritsar, Chandigarh, Delhi, and Jaipur has been found to be misleading. So far, neither the Indian government nor the relevant aviation authorities have made any public announcement of such a launch, nor have industry insiders. A keyword-based search led us to a news report published in The Times of India on January 20, 2025, accompanied by an image similar to the one in the viral video. It stated that the Bengaluru-based aerospace startup Sarla Aviation unveiled its prototype air taxi, “Shunya”, at the Bharat Mobility Global Expo, with plans to launch electric flying taxis in Bangalore by 2028. The viral posts appear to be based on this prototype and planned programme, not on an operational service.

Conclusion:
The viral claim saying that there is an air taxi service in India between Amritsar, Chandigarh, Delhi, and Jaipur is entirely false. The pictures and information going viral are misleading and do not relate to any progress or implementation of air taxi technology in India. To date, there is no official confirmation or credible evidence that supports such a service. Information must be verified from reliable sources before it is believed or shared in order to prevent the spread of misinformation.
- Claim: A viral post claims an air taxi is operational between Amritsar, Chandigarh, Delhi, and Jaipur.
- Claimed On: Social Media
- Fact Check: False and Misleading

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first, it feels like a huge win for India’s digital generation. Students, professionals, and entrepreneurs today can tap into some of the world’s most powerful AI tools without paying a rupee. It feels like a digital revolution unfolding in real time. Yet, beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access isn’t without strings attached: it affects how India handles data privacy, the fairness of competition, and the pace of homegrown AI innovation.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples are Perplexity, ChatGPT Go and Gemini AI, which are offering free subscriptions to Indian users.
- Gathering local data: Every interaction, every prompt, question, or language pattern helps these models learn from larger datasets to improve their product offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product”. The same goes for these AI platforms: they monetise user data by analysing chats and user behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific laws governing how such data is stored, processed, or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives will be difficult, even once the free access ends and paid plans begin. This approach mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers by democratising knowledge and making advanced technology available to everyone, from students and professionals to small businesses. It’s changing how people learn, think and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection laws in action, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder, especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s Generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet, this growth story may become uneven if global players dominate early.
Domestic AI startups face multiple hurdles: limited funding, high compute costs, and difficulty in accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies, such as offering free or heavily subsidised access, could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before using their data for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers such as Sarvam AI, the Indian company chosen by the government to build the country's first homegrown large language model (LLM) under the IndiaAI Mission.
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should invest in national data centres to train indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html#
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/#