#FactCheck! Viral Image Claiming Virat Kohli and Rohit Sharma Visited Kedarnath Is AI-Generated
A photo featuring Indian cricketers Virat Kohli and Rohit Sharma is being widely shared on social media. In the image, both players are seen holding a Shivling, with the Kedarnath temple visible in the background. Users sharing the image claim that Virat Kohli and Rohit Sharma recently visited Kedarnath.
However, CyberPeace Foundation’s investigation found the claim to be false. Our verification established that the viral image is not real but has been created using Artificial Intelligence (AI) and is being circulated with a misleading narrative.
The Claim
An Instagram user shared the viral image on December 22, 2025, with a caption stating that Rohit Sharma and Virat Kohli were in Kedarnath. The post has since been widely reshared by other users, who assumed the image to be authentic.

Fact Check
On closely examining the viral image, the Desk noticed visual inconsistencies suggesting that it may be AI-generated. To verify this, the image was scanned using the AI detection tool HIVE Moderation. According to the results, the image was found to be 99 per cent AI-generated.

Further verification was conducted using another AI detection tool, Sightengine. The analysis revealed that the image was 93 per cent likely to be AI-generated, reinforcing the findings from the previous tool.
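The fact-check relies on two independent detectors agreeing before calling the image AI-generated. As a rough illustration of that logic, the sketch below applies a conservative "both detectors must exceed a threshold" rule to the two published scores. This is an assumption for illustration only: it does not call HIVE Moderation or Sightengine, and the aggregation rule is generic, not either tool's actual methodology.

```python
# Illustrative sketch only: the scores are the percentages reported in
# the article, and the AND-style aggregation rule is an assumption,
# not the methodology of HIVE Moderation or Sightengine.

def classify_ai_generated(scores, threshold=0.9):
    """Flag an image as likely AI-generated only when every detector's
    confidence meets or exceeds the threshold (a conservative AND rule)."""
    return all(s >= threshold for s in scores)

# Scores reported in the fact-check: HIVE 99%, Sightengine 93%
detector_scores = [0.99, 0.93]
print(classify_ai_generated(detector_scores))  # True: both exceed 0.9
```

Requiring agreement from multiple detectors reduces the chance that a single tool's false positive drives the verdict, which is why fact-checkers typically cross-verify with a second tool.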

Conclusion
CyberPeace Foundation’s research confirms that the viral image claiming Virat Kohli and Rohit Sharma visited Kedarnath is fabricated. The image has been generated using AI technology and is being falsely shared on social media as a real photograph.
Related Blogs
The Delhi High Court, vide order dated 21st November 2024, directed the Centre to nominate members to a committee constituted to examine the issue of deepfakes. The court was informed by the Union Ministry of Electronics and Information Technology (MeitY) that a committee on deepfake matters had been formed on 20th November 2024. The Delhi High Court passed the order while hearing two writ petitions against the non-regulation of deepfake technology in the country and the threat of its potential misuse. The Centre submitted that it was actively taking measures to address and mitigate issues related to deepfake technology. The court directed the central government to nominate the members within a week.
The court further stated that the committee shall examine and take into consideration the suggestions filed by the petitioners and consider the regulations and statutory frameworks of foreign jurisdictions such as the European Union. The court directed the committee to invite the experiences and suggestions of stakeholders such as intermediary platforms, telecom service providers, victims of deepfakes, and websites that provide and deploy deepfakes. Counsel for the petitioners stated that delays in the detection and removal of deepfakes are causing immense hardship to the public at large. Further, the court directed the committee to submit its report as expeditiously as possible, preferably within three months. The matter is next listed on 24th March 2025.
CyberPeace Outlook
With the growing misuse of deepfakes by bad actors, it has become increasingly difficult for users to differentiate between genuine content and deepfake-altered content. This misuse has led to a rise in cybercrime and endangers users' privacy. Bad actors use random pictures and images collected from the internet to create such non-consensual deepfake content. Deepfake videos also fuel misinformation and fake-news campaigns, with the potential to sway elections, cause confusion, and breed mistrust in authorities.
Dedicated legislation governing deepfakes is the need of the hour. It is important to foster regulated, ethical and responsible use of technology, and comprehensive legislation on the issue can help ensure technology is used well. Dedicated deepfake regulation, combined with ethical practices deployed through a coordinated approach by the concerned stakeholders, can effectively manage the problems presented by the misuse of deepfake technology. Legal frameworks need to be equipped to handle the challenges posed by deepfakes and AI. Accountability in AI is also a complex issue that requires comprehensive legal reform. The government should draft policies and regulations that balance innovation with oversight. Through a multifaceted approach and a comprehensive regulatory landscape, we can mitigate the risks posed by deepfakes and safeguard privacy, trust, and security in the digital age.
References
- https://www.devdiscourse.com/article/law-order/3168452-delhi-high-court-calls-for-action-on-deepfake-regulation
- https://images.assettype.com/barandbench/2024-11-23/w63zribm/Chaitanya_Rohilla_vs_Union_of_India.pdf

Introduction
The United Nations General Assembly (UNGA) has unanimously adopted the first global resolution on Artificial Intelligence (AI), encouraging countries to protect human rights, keep personal data safe, and monitor AI for risks. The non-binding resolution, proposed by the United States and co-sponsored by China and over 120 other nations, advocates stronger privacy policies. The step is crucial as governments across the world shape how AI grows, given the dangers it carries for the protection and promotion of human dignity and fundamental freedoms. The resolution emphasizes respecting human rights and fundamental freedoms throughout the life cycle of AI systems, highlighting the benefits of digital transformation and of safe AI systems.
Key highlights
● This is indeed a landmark move by the UNGA, which adopted the first global resolution on AI. This resolution encourages member countries to safeguard human rights, protect personal data, and monitor AI for risks.
● Global leaders have shown consensus on safe, secure and trustworthy AI systems that advance sustainable development and respect fundamental freedoms.
● The resolution is the latest in a series of initiatives by governments around the world to shape AI. AI will therefore have to be created and deployed through the lens of humanity and dignity, safety and security, and human rights and fundamental freedoms throughout the life cycle of AI systems.
● The UN resolution encourages global cooperation, warns against improper AI use, and emphasizes human rights.
● The resolution aims to protect people from potential harm and ensure that everyone can enjoy AI's benefits. The United States negotiated the text of the adopted resolution with over 120 countries at the United Nations, including Russia, China, and Cuba.
Brief Analysis
AI has become increasingly prevalent in recent years, with chatbots such as ChatGPT taking the world by storm. AI steadily attempts to replicate human-like thinking and problem-solving. Machine learning, a key aspect of AI, involves learning from experience and identifying patterns to solve problems autonomously. The rapid emergence of AI has, however, raised questions about its ethical implications, its potential negative impact on society, and whether it is too late to control it.
While AI can solve problems quickly and perform varied tasks with ease, it brings its own set of problems. As AI continues to grow, global leaders have called for regulation to prevent the significant harm an unregulated AI landscape could cause and to encourage the use of trustworthy AI. The European Union (EU) has adopted its own AI law, the EU AI Act. Recently, a Senate bill called the "AI Consent Bill" was introduced in the US. Similarly, India is proactively setting the stage for a more regulated AI landscape by fostering dialogue and taking significant measures. Recently, the Ministry of Electronics and Information Technology (MeitY) issued an advisory on AI requiring explicit permission before under-testing or unreliable AI models are deployed on India's internet. The advisory also outlines measures to combat deepfakes and misinformation.
AI has thus become a powerful tool that has raised concerns about its ethical implications and the potential negative influence on society. Governments worldwide are taking action to regulate AI and ensure that it remains safe and effective. Now, the groundbreaking move of the UNGA, which adopted the global resolution on AI, with the support of all 193 U.N. member nations, shows the true potential of efforts by countries to regulate AI and promote safe and responsible use globally.
New AI tools have emerged in the public sphere that may threaten humanity in unexpected ways. Through machine learning, AI can improve itself, and developers are often surprised by the emergent abilities and qualities of these tools. The ability to manipulate and generate language, whether in words, images, or sounds, is the most important aspect of the current phase of the ongoing AI revolution. AI will have far-reaching implications in the future. Hence, it is high time to regulate AI and promote its safe, secure and responsible use.
Conclusion
The UNGA has approved its global resolution on AI, marking significant progress towards creating global standards for the responsible development and employment of AI. The resolution underscores the critical need to protect human rights, safeguard personal data, and closely monitor AI technologies for potential hazards. It calls for more robust privacy regulations and recognises the dangers associated with improper AI systems. This profound resolution reflects a unified stance among UN member countries on overseeing AI to prevent possible negative effects and promote safe, secure and trustworthy AI.
Introduction
In a major policy shift aimed at synchronizing India's fight against cyber-enabled financial crimes, the government has taken a landmark step by bringing the Indian Cyber Crime Coordination Centre (I4C) under the ambit of the Prevention of Money Laundering Act (PMLA). In a notification published in the official gazette on 25th April, 2025, the Department of Revenue, Ministry of Finance, included the Indian Cyber Crime Coordination Centre (I4C) under Section 66 of the Prevention of Money Laundering Act, 2002 (hereinafter "PMLA"). The step is a significant attempt to resolve the asynchronous approach of the different government agencies (Enforcement Directorate (ED), State Police, CBI, CERT-In, RBI) that are responsible for preventing cyber and financial crimes and that often possess key information about them. As it is aptly put, "When criminals sprint and the administration strolls, the finish line is lost."
The gazetted notification dated 25th April, 2025, read as follows:
“In exercise of the powers conferred by clause (ii) of sub-section (1) of section 66 of the Prevention of Money-laundering Act, 2002 (15 of 2003), the Central Government, on being satisfied that it is necessary in the public interest to do so, hereby makes the following further amendment in the notification of the Government of India, in the Ministry of Finance, Department of Revenue, published in the Gazette of India, Extraordinary, Part II, section 3, sub-section (i) vide number G.S.R. 381(E), dated the 27th June, 2006, namely:- In the said notification, after serial number (26) and the entry relating thereto, the following serial number and entry shall be inserted, namely:— “(27) Indian Cyber Crime Coordination Centre (I4C).”.
Outrunning Crime: Strengthening Enforcement through Rapid Coordination
The use of cyberspace to commit sophisticated financial and white-collar crimes is a criminal crossover that no one was looking forward to. The disenchanting reality of today's world is that the internet is used for as much bad as good, and it has now entered the financial domain, facilitating various financial crimes. Money laundering is a financial crime covering all processes and activities connected with the concealment, possession, acquisition, or use of proceeds of crime while projecting them as untainted money. Money laundering involves an intricate web and trail of financial transactions that are hard to track as it is; with the advent of the internet, the transactions are often digital, and the absence of crucial information hampers the evidentiary chain. With this new step, the Enforcement Directorate (ED) can now make headway in investigations through information exchange with I4C under the PMLA, removing the obstacles that existed before this notification.
Impact
The decision of the finance ministry has to be seen in terms of all that is happening around the globe, with the rapid increase in sophisticated financial crimes. By formally empowering the I4C to share and receive information with the Enforcement Directorate under PMLA, the government acknowledges the blurred lines between conventional financial crime and cybercrime. It strengthens India’s financial surveillance, where money laundering and cyber fraud are increasingly two sides of the same coin. The assessment of the impact can be made from the following facilitations enabled by the decision:
- Quicker internet detection of money laundering
- Money trail tracking in real time across online platforms
- Rapid freeze of cryptocurrency wallets or assets obtained fraudulently
Another important aspect of this decision is that it serves as a signal that India is finally equipping itself and treating cyber-enabled financial crimes with the gravitas that is the need of the hour. This decision creates a two-way intelligence flow between cybercrime detection units and financial enforcement agencies.
Conclusion
To counter the fragmented approach to handling cyber-enabled white-collar crimes and money laundering, the Indian government has fortified its legal and enforcement framework by extending the PMLA's reach to the Indian Cyber Crime Coordination Centre (I4C). The deliberations that led up to this notification come at a crucial time for the cybercrime framework India needs in order to be on par with other countries. Although India has come a long way in building a robust cybercrime intelligence structure, it will remain ineffective as long as its agencies work in isolation. The current decision should therefore be only the beginning of a broader policy evolution. The government must further integrate its systems, devise a dedicated mechanism to track "digital footprints", and incorporate a real-time red-flag mechanism for digital transactions suspected of links to laundering or fraud.
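To make the recommended real-time red-flag mechanism concrete, the sketch below shows what a minimal rule-based flagger for suspicious transactions might look like. Everything here is a hypothetical illustration: the `Transaction` fields, the thresholds, and the rules are assumptions for the sake of example, not any agency's actual criteria.

```python
from dataclasses import dataclass

# Hypothetical sketch of a real-time red-flag rule set of the kind the
# article recommends; fields, thresholds, and rules are illustrative
# assumptions, not actual regulatory criteria.

@dataclass
class Transaction:
    amount_inr: float
    is_new_beneficiary: bool
    hops_in_last_hour: int  # rapid pass-through transfers through the account

def red_flags(txn: Transaction) -> list:
    """Return the list of red flags raised by a single transaction."""
    flags = []
    if txn.amount_inr >= 1_000_000:
        flags.append("high-value transfer")
    if txn.is_new_beneficiary and txn.amount_inr >= 100_000:
        flags.append("large transfer to a new beneficiary")
    if txn.hops_in_last_hour >= 3:
        flags.append("layering pattern: rapid pass-through transfers")
    return flags

suspicious = Transaction(amount_inr=1_500_000,
                         is_new_beneficiary=True,
                         hops_in_last_hour=4)
print(red_flags(suspicious))  # all three illustrative rules fire
```

A production system would evaluate such rules (or learned risk scores) inline in the payment pipeline so that flagged transactions can be held or reported before funds move on, which is the "real-time" property the recommendation stresses.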