#FactCheck - Viral Post of Gautam Adani’s Public Arrest Found to Be AI-Generated
Executive Summary:
A viral post on X (formerly Twitter) was shared with misleading captions claiming that Gautam Adani had been arrested in public for fraud, bribery, and corruption. The charges accuse him, his nephew Sagar Adani, and six others associated with his group of allegedly defrauding American investors and orchestrating a bribery scheme to secure a multi-billion-dollar solar energy project awarded by the Indian government. The image turned out to be AI-generated, so always verify claims before sharing posts or photos.

Claim:
An image circulating online purports to show the public arrest of Gautam Adani after a US court accused him and other Adani Group executives of bribery.
Fact Check:
There are multiple anomalies in the picture attached below. The police officer grabbing Adani’s arm (highlighted in the red circle) has six fingers, while Adani’s other hand is completely absent. The left eye of one officer (marked in blue) is inconsistent with the right. The faces of the officers marked with the yellow and green circles appear distorted, and another officer (shown in the pink circle) appears to have a fully covered face. Taken together, these distortions make it clear that the image could not have been captured by a camera.


A thorough examination utilizing AI detection software concluded that the image was synthetically produced.
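The report does not name the specific detection tool used. As an illustrative complement to visual inspection, the Python sketch below shows one simple, do-it-yourself heuristic: checking whether an image file carries camera EXIF metadata, which AI-generated pictures typically lack. The file name `viral_image.jpg` is a placeholder, and a missing EXIF block is only a weak signal, never proof of synthesis on its own.

```python
# Illustrative heuristic only: genuine photographs usually carry camera EXIF
# metadata (make, model, timestamp), while AI-generated images typically do not.
# Absence of EXIF is a weak signal and must be combined with other checks.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return the image's EXIF tags keyed by their human-readable names."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("viral_image.jpg")  # placeholder file name
    if not tags:
        print("No EXIF metadata found - consistent with, but not proof of, a synthetic image.")
    else:
        for name in ("Make", "Model", "DateTime"):
            print(name, "=", tags.get(name, "missing"))
```

With Pillow installed (`pip install Pillow`), a photograph taken directly on a phone or camera will usually report a make, model, and timestamp, whereas images exported by generative tools, or re-encoded by social media platforms, often report nothing.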
Conclusion:
A viral image claims to show the public arrest of Gautam Adani after a US court accused him of bribery. Analysis shows the picture to be AI-generated, and no credible news article reports any such arrest. Such misinformation spreads fast and can confuse and harm public perception. Always verify an image by checking for visual inconsistencies and by using trusted sources to confirm authenticity.
- Claim: Gautam Adani arrested in public by law enforcement agencies
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading
Related Blogs

Artificial Intelligence (AI) provides a varied range of services and continues to attract intrigue and experimentation. It has altered how we create and consume content: specific prompts can now be used to create desired images, enhancing storytelling and even education. However, because this content can influence public perception, its potential to cause misinformation must also be noted. The realistic nature of these images can make it hard for the untrained eye to discern that they are artificially generated. Since AI models generate output by analysing the data they were trained on, their lack of contextual knowledge and the human biases embedded in prompts also come into play. The stakes are higher when dealing with subjects such as history, as there is a fine line between content created for mere entertainment and the spread of misinformation when biases and accuracy are left unchecked. For instance, an AI-generated image of London during the Black Death might include inaccurate details, misleading viewers about the past.
The Rise of AI-Generated Historical Images as Entertainment
Recently, AI-generated images and videos of various historical events, often presented from the point of view of people present, have been circulating all over the internet. Examples include the streets of London during the Black Death in 1300s England and the eruption of Mount Vesuvius at Pompeii. Hogne and Dan, two creators who run the TikTok accounts POV Lab and Time Traveller POV, say they make such videos because seeing the past through a first-person perspective is an engaging way to bring history back to life, highlight its most striking moments, and help audiences learn something new. Mostly sensationalised for visual impact and storytelling, such content has been called out by historians for inconsistencies in period-specific details. The creators themselves admit their work is inaccurate, describing it as artistic interpretation rather than fact-checked documentary.
It is important to note that AI models may inaccurately depict objects (issues with lateral inversion), people (anatomical implausibilities), or scenes due to "present-ist" bias. As noted by Lauren Tilton, an associate professor of digital humanities at the University of Richmond, many AI models primarily rely on data from the last 15 years, making them prone to modern-day distortions, especially when analysing and creating historical content. The idea is to spark interest rather than replace genuine historical facts, and engagement with these images and videos is assumed to be partly a product of fascination with emerging AI tools. Beyond images, chatbots such as Hello History and Character.ai, which simulate interactions with historical figures, have also piqued curiosity.
Although it makes for an interesting perspective, one cannot ignore that our inherent biases play a role in how we perceive the information presented. Dangerous consequences include feeding conspiracy theories and the erasure of facts, since such content is geared primarily toward garnering attention and providing entertainment. Furthermore, exposure of such content to an impressionable audience with shorter attention spans increases the gravity of the matter. In such cases, information about the sources used in creation becomes an important factor.
Acknowledging the risks posed by AI-generated images and their susceptibility to create misinformation, the Government of Spain has taken a step toward regulating AI-generated content. It has passed a bill that mandates the labelling of AI-generated images; failure to comply would attract massive fines of up to $38 million or 7% of a company’s turnover. The idea is that content creators labelling their work would make it easier to spot images that are artificially created.
The Way Forward: Navigating AI and Misinformation
While AI-generated images make for exciting possibilities for storytelling and enabling intrigue, their potential to spread misinformation should not be overlooked. To address these challenges, certain measures should be encouraged.
- Media Literacy and Awareness – In this day and age, critical thinking and media literacy among consumers of content are imperative. Awareness, understanding, and access to tools that aid in detecting AI-generated content can prove helpful.
- AI Transparency and Labeling – Implementing regulations similar to Spain’s labelling bill could guide people who have yet to learn to distinguish AI-generated content from authentic material.
- Ethical AI Development – AI developers must prioritize ethical considerations by training on diverse and historically accurate datasets and sources, which would minimise biases.
As AI continues to evolve, balancing innovation with responsibility is essential. By taking proactive measures early, we can harness AI's potential while safeguarding the integrity of, and trust in, the sources used to generate images.
References:
- https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images
- https://www.nbcnews.com/tech/tech-news/ai-image-misinformation-surged-google-research-finds-rcna154333
- https://www.bbc.com/news/articles/cy87076pdw3o
- https://newskarnataka.com/technology/government-releases-guide-to-help-citizens-identify-ai-generated-images/21052024/
- https://www.technologyreview.com/2023/04/11/1071104/ai-helping-historians-analyze-past/
- https://www.psypost.org/ai-models-struggle-with-expert-level-global-history-knowledge/
- https://www.youtube.com/watch?v=M65IYIWlqes&t=2597s
- https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
- https://www.reuters.com/technology/artificial-intelligence/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11/
- https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines

Introduction
There is a rising demand for artificial intelligence (AI) laws that limit threats to public safety and protect human rights while allowing for a flexible and innovative environment. Most AI policies prioritize the use of AI for the public good. The most compelling reason to treat AI innovation as a valid goal of public policy is its promise to enhance people's lives by helping resolve some of the world's most difficult challenges and inefficiencies, and to emerge as a transformational technology, much like mobile computing. This blog explores the complex interplay between AI and internet governance from an Indian standpoint, examining the challenges, the opportunities, and the necessity for a well-balanced approach.
Understanding Internet Governance
Before delving into an examination of their connection, let's establish a comprehensive grasp of Internet Governance. This entails the regulations, guidelines, and criteria that influence the global operation and management of the Internet. With the internet being a shared resource, governance becomes crucial to ensure its accessibility, security, and equitable distribution of benefits.
The Indian Digital Revolution
India has witnessed an unprecedented digital revolution, with a massive surge in internet users and a burgeoning tech ecosystem. The government's Digital India initiative has played a crucial role in fostering a technology-driven environment, making technology accessible to even the remotest corners of the country. As AI applications become increasingly integrated into various sectors, the need for a comprehensive framework to govern these technologies becomes apparent.
AI and Internet Governance Nexus
The intersection of AI and Internet governance raises several critical questions. How should data, the lifeblood of AI, be governed? What role does privacy play in the era of AI-driven applications? How can India strike a balance between fostering innovation and safeguarding against potential risks associated with AI?
- AI's Role in Internet Governance:
Artificial Intelligence has emerged as a powerful force shaping the dynamics of the internet. From content moderation and cybersecurity to data analysis and personalized user experiences, AI plays a pivotal role in enhancing the efficiency and effectiveness of Internet governance mechanisms. Automated systems powered by AI algorithms are deployed to detect and respond to emerging threats, ensuring a safer online environment.
A comprehensive strategy for managing the interaction between AI and the internet is required to stimulate innovation while limiting hazards. Multistakeholder models, which draw input from governments, industry, academia, and civil society, are gaining appeal as viable tools for developing comprehensive governance frameworks.
The usefulness of multistakeholder governance stems from its adaptability and its flexibility in requiring collaboration from every player with a possible stake in an issue. Though imperfect, the approach allows its flaws to be remedied over time as stakeholders build shared knowledge. As AI advances, this trait will become increasingly important in ensuring that all conceivable aspects are covered.
The Need for Adaptive Regulations
While AI's potential for good is essentially endless, so is its potential for harm, whether intentional or unintentional. The technology's highly disruptive nature demands a strong, human-led governance framework and rules that ensure it is used in a positive and responsible manner. The fast emergence of GenAI, in particular, underscores the critical need for robust frameworks. Concerns about the use of GenAI may strengthen efforts to resolve issues around digital governance and hasten the development of risk management measures.
Several AI governance frameworks have been published around the world in recent years with the goal of offering high-level guidelines for safe and trustworthy AI development. Multinational organizations have released their own principles, including the OECD's "Principles on Artificial Intelligence" (OECD, 2019), the EU's "Ethics Guidelines for Trustworthy AI" (EU, 2019), and UNESCO's "Recommendations on the Ethics of Artificial Intelligence" (UNESCO, 2021). The advancement of GenAI has since prompted additional guidance, such as the OECD's newly released "G7 Hiroshima Process on Generative Artificial Intelligence" (OECD, 2023).
Several guidance documents and voluntary frameworks have also emerged at the national level in recent years, including the "AI Risk Management Framework" from the United States National Institute of Standards and Technology (NIST), voluntary guidance published in January 2023, and the White House's "Blueprint for an AI Bill of Rights," a set of high-level principles published in October 2022 (The White House, 2022). These voluntary policies and frameworks are frequently used as guidelines by regulators and policymakers around the world. More than 60 nations across the Americas, Africa, Asia, and Europe had issued national AI strategies as of 2023 (Stanford University).
Conclusion
Monitoring AI will be one of the most daunting tasks confronting the international community for generations to come. As vital as the need to govern AI is the need to regulate it appropriately. Current AI policy debates too often fall into a false dichotomy of progress versus doom (or geopolitical and economic benefits versus risk mitigation). Instead of thinking creatively, solutions all too often resemble paradigms for yesterday's problems. It is imperative that we foster a relationship that prioritizes innovation, ethical considerations, and inclusivity. Striking the right balance will empower us to harness the full potential of AI within the boundaries of responsible and transparent Internet Governance, ensuring a digital future that is secure, equitable, and beneficial for all.
References
- The Key Policy Frameworks Governing AI in India - Access Partnership
- AI in e-governance: A potential opportunity for India (indiaai.gov.in)
- India and the Artificial Intelligence Revolution - Carnegie India - Carnegie Endowment for International Peace
- Rise of AI in the Indian Economy (indiaai.gov.in)
- The OECD Artificial Intelligence Policy Observatory - OECD.AI
- Artificial Intelligence | UNESCO
- Artificial intelligence | NIST

A video circulating on social media claims that British Prime Minister Keir Starmer was forcibly thrown out of a pub by its owner. The clip has been widely shared by users, many of whom are drawing political comparisons and questioning democratic norms. However, research conducted by Cyber Peace Foundation has found that the viral claim is misleading. Our research reveals that the video dates back to 2021, a time when Keir Starmer was not the Prime Minister of the United Kingdom, but the leader of the opposition Labour Party.
Claim
On January 12, 2026, a video was shared on social media platform X (formerly Twitter) with the claim that British Prime Minister Sir Keir Starmer was asked to leave a pub by its owner. The post suggests that the pub owner was unhappy with Starmer’s performance and contrasts the incident with how political dissent is allegedly handled in India. The viral video, approximately 32 seconds long, shows a man angrily confronting Keir Starmer in English, stating that he had supported the Labour Party all his life but was disappointed with Starmer’s leadership. The man is then heard asking Starmer to leave the pub.
Links to the viral post and its archived version were reviewed as part of the research.

Fact Check
To verify the claim, we extracted key frames from the viral video and conducted a Google reverse image search. During this process, we found the same video posted on an X account on April 19, 2021. The visuals in the 2021 post matched the viral video exactly, clearly indicating that the footage is not recent. The original post described the incident as involving Labour Party leader Keir Starmer during his visit to the Raven pub in Bath, and included a warning about strong language used by the pub owner, Rod Humphries. Here is the link to the original video, along with a screenshot:

Further keyword searches led us to a report published by NBC News on April 19, 2021. According to the report, Keir Starmer, then the leader of the UK’s opposition Labour Party, was confronted and asked to leave a pub in the city of Bath. The pub owner reportedly accused Starmer of failing to oppose COVID-19 lockdown measures strongly enough at a time when strict restrictions were in place across the UK.
- https://www.nbcnews.com/video/anti-lockdown-pub-landlord-screams-at-u-k-labour-party-leader-to-get-out-of-his-pub-110466117702
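For readers who want to replicate the frame-extraction step described in this fact check, the sketch below shows one possible approach using OpenCV; the input file name and the one-frame-per-second sampling rate are illustrative assumptions rather than the exact workflow used in this research. Each saved frame can then be uploaded to a reverse image search such as Google Lens.

```python
# A minimal sketch of pulling key frames from a video clip so they can be run
# through a reverse image search. Requires OpenCV (pip install opencv-python);
# the file name and one-frame-per-second sampling are illustrative choices.
import cv2

def extract_key_frames(video_path: str, out_prefix: str = "frame") -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unreported
    step = max(1, int(fps))                  # roughly one frame per second
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = extract_key_frames("viral_clip.mp4")  # placeholder file name
    print(f"Saved {count} frames for reverse image search")
```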

We also verified who held the office of British Prime Minister in 2021. Official UK government records confirm that Boris Johnson was the Prime Minister at that time, while Keir Starmer served as the Leader of the Opposition.

Conclusion
Our research confirms that the viral video is old and misleadingly presented. The footage is from 2021, when Keir Starmer was not the Prime Minister of the United Kingdom, but the opposition Labour Party leader. Sharing the video with the claim that it shows a current British Prime Minister being thrown out of a pub is factually incorrect.