#FactCheck: Old video of Ranveer Singh at Kashi Vishwanath Temple falsely linked to ‘Dhurandhar 2’ success
Executive Summary
Following the reported box office success of ‘Dhurandhar 2: The Revenge’, released on March 19, 2026, a video of Ranveer Singh visiting a temple is being widely shared on social media. Users claim that the actor visited the Kashi Vishwanath Temple to offer prayers after the film’s success. Research by CyberPeace found that the viral claim is misleading. The video of Ranveer Singh visiting the Kashi Vishwanath Temple is not recent. It dates back to 2024, when he visited the temple with Kriti Sanon, and is unrelated to the release or success of ‘Dhurandhar 2: The Revenge’.
Claim
An Instagram user “newsbharatplus” shared the video on March 26, 2026, with a caption stating that after the massive success of Dhurandhar 2, Ranveer Singh visited the temple and performed rituals.

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. This led us to a report published by Dainik Jagran on April 14, 2024. According to the report, Ranveer Singh had visited the Kashi Vishwanath Temple along with Kriti Sanon and noted fashion designer Manish Malhotra. During the visit, the trio was seen offering prayers, wearing traditional attire, and applying sandalwood tilak.
- https://www.jagran.com/entertainment/bollywood-ranveer-singh-and-kriti-sanon-visits-kashi-vishwanath-temple-with-manish-malhotra-see-photos-here-23696781.html
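Reverse image search engines match keyframes by perceptual similarity rather than exact bytes. As a rough illustration of that idea (not the algorithm any particular search engine actually uses), here is a minimal average-hash sketch in pure Python, operating on toy 8×8 grayscale grids:

```python
# Minimal average-hash (aHash) sketch: hash an 8x8 grayscale grid by
# thresholding each pixel against the mean, then compare two hashes by
# Hamming distance. Real reverse-image-search systems are far more
# sophisticated; this only illustrates perceptual matching.

def average_hash(pixels):
    """pixels: 8x8 list of lists of grayscale values (0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy "frame": bright left half, dark right half.
frame = [[200] * 4 + [30] * 4 for _ in range(8)]
# A re-encoded copy of the same frame (small pixel noise).
copy = [[195] * 4 + [35] * 4 for _ in range(8)]
# A visually different image: dark top half, bright bottom half.
other = [[20] * 8 for _ in range(4)] + [[220] * 8 for _ in range(4)]

print(hamming(average_hash(frame), average_hash(copy)))   # → 0 (match)
print(hamming(average_hash(frame), average_hash(other)))  # → 32 (no match)
```

A near-duplicate frame hashes to the same bits despite pixel noise, while a different image lands far away, which is why lightly re-encoded or re-uploaded copies of an old video can still be traced back to the original report.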

We also found a video report on the official YouTube channel of Times Now Navbharat, uploaded on April 15, 2024, showing Ranveer Singh and Kriti Sanon at the temple. The report also featured visuals from a fashion event held in Varanasi.
- https://www.youtube.com/watch?v=OMuW_SVbfb4

Conclusion
The viral claim is misleading. The video of Ranveer Singh visiting the Kashi Vishwanath Temple is not recent. It dates back to 2024, when he visited the temple with Kriti Sanon, and is unrelated to the release or success of ‘Dhurandhar 2: The Revenge’.

A 2024 report by MarketsandMarkets projects the global AI market to grow from USD 214.6 billion in 2024 to USD 1,339.1 billion by 2030, at a CAGR of 35.7%. AI has become an enabler of productivity and innovation. A Forbes Advisor survey conducted in 2023 reported that 56% of businesses use AI to optimise their operations and drive efficiency. Further, 51% use AI for cybersecurity and fraud management, 47% employ AI-powered digital assistants to enhance productivity, and 46% use AI to manage customer relationships.
AI has revolutionised business functions. According to a Forbes survey, 40% of businesses rely on AI for inventory management, 35% harness AI for content production and optimisation and 33% deploy AI-driven product recommendation systems for enhanced customer engagement. This blog addresses the opportunities and challenges posed by integrating AI into operational efficiency.
Artificial Intelligence and its resultant Operational Efficiency
AI offers strong optimisation and efficiency capabilities and is widely used to automate repetitive tasks. These tasks include payroll processing, data entry, inventory management, patient registration, invoicing, claims processing, and others. AI has been incorporated into such tasks because, using NLP, machine learning, and deep learning, it can uncover complex patterns beyond human capability. It has also shown promise in improving decision-making for businesses in time-critical, high-pressure situations.
AI-driven efficiency is visible across industries: in manufacturing for predictive maintenance, in healthcare for streamlining diagnostics, and in logistics for route optimisation. Some of the most common real-world examples of AI increasing operational efficiency are self-driving cars (Tesla), facial recognition (Apple Face ID), language translation (Google Translate), and medical diagnosis (IBM Watson Health).
Harnessing AI has advantages as it helps optimise the supply chain, extend product life cycles, and ultimately conserve resources and cut operational costs.
Policy Implications for AI Deployment
Some policy implications for AI deployment are as follows:
- Develop clear and adaptable regulatory frameworks for ongoing and future developments in AI. These frameworks must manage potential risks without hindering innovation.
- AI systems rely on high-quality, accessible and interoperable data to function effectively; without proper data governance, they may produce biased, inaccurate and unreliable results. Ensuring data privacy is therefore essential to maintain trust and prevent harm to individuals and organisations.
- Policy developers need to focus on policies that upskill the workforce, so that workers complement AI development rather than being displaced by it.
- Pursue international cooperation when developing AI policies, so that standards are consistent and applicable across borders.
Addressing Challenges and Risks
Some of the main challenges that emerge with the development of AI are algorithmic bias, cybersecurity threats, and dependence on proprietary AI solutions in which a single company retains exclusive control over the source code. Some policy approaches that can mitigate these challenges are:
- Establishing a robust accountability mechanism.
- Establishing identity and access management policies with technical controls such as authentication and authorisation mechanisms.
- Ensuring that the training data AI systems use meets ethical requirements such as data privacy, fairness in decision-making, transparency, and the interpretability of AI models.
Conclusion
AI provides opportunities to drive operational efficiency in businesses. It can optimise productivity and costs and foster innovation across industries. But this power comes with its own considerations and must therefore be balanced with proactive policies that address emerging challenges such as data governance, algorithmic bias and cybersecurity risks. Establishing adaptable regulatory frameworks, fostering workforce upskilling and promoting international collaboration can help overcome these challenges. As businesses integrate AI into core functions, it becomes necessary to leverage its potential while safeguarding fairness, transparency, and trust. AI is not just an efficiency tool; it has become a catalyst for organisations operating in a rapidly evolving digital world.
References
- https://indianexpress.com/article/technology/artificial-intelligence/ai-indian-businesses-long-term-gain-operational-efficiency-9717072/
- https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
- https://www.forbes.com/councils/forbestechcouncil/2024/08/06/smart-automation-ais-impact-on-operational-efficiency/
- https://www.processexcellencenetwork.com/ai/articles/ai-operational-excellence
- https://www.leewayhertz.com/ai-for-operational-efficiency/
- https://www.forbes.com/councils/forbestechcouncil/2024/11/04/bringing-ai-to-the-enterprise-challenges-and-considerations/

Executive Summary:
A viral post currently circulating on various social media platforms claims that Reliance Jio is offering a ₹700 Holi gift to its users, accompanied by a link for individuals to claim the offer. This post has gained significant traction, with many users engaging with it in good faith, believing it to be a legitimate promotional offer. However, careful investigation has confirmed that the post is a phishing scam designed to steal personal and financial information from unsuspecting users. This report examines the facts surrounding the viral claim, confirms its fraudulent nature, and provides recommendations to minimise the risk of falling victim to such scams.
Claim:
Reliance Jio is offering a ₹700 reward as part of a Holi promotional campaign, accessible through a shared link.

Fact Check:
Upon review, it has been verified that this claim is misleading. Reliance Jio has not announced any Holi promotional offer. The link being forwarded is a phishing attempt designed to steal users’ personal and financial details. There are no reports of this offer on Jio’s official website or verified social media accounts. The URL included in the message does not end in the official Jio domain, indicating a fake website. The website requests users’ personal information, which could then be used for cybercriminal activities. Additionally, we checked the link with the ScamAdviser website, which flagged it as suspicious and unsafe.
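The domain check described above can be automated. The sketch below is a naive illustration in Python: the scam URLs are made-up placeholders, `jio.com` is used as the assumed official domain, and this simple suffix match is not production-grade (real checks should consult a public-suffix list and handle lookalike Unicode domains):

```python
# Naive phishing self-check: does the link's hostname actually belong to
# the official domain? Hypothetical URLs for illustration only.
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "jio.com"  # assumed official domain for this sketch

def looks_official(url, official=OFFICIAL_DOMAIN):
    host = urlparse(url).hostname or ""
    # Accept "jio.com" itself or a true subdomain like "www.jio.com";
    # the leading dot prevents "notjio.com" from slipping through.
    return host == official or host.endswith("." + official)

print(looks_official("https://www.jio.com/offers"))           # → True
print(looks_official("https://jio-holi-gift.example.com/x"))  # → False
print(looks_official("https://notjio.com/claim"))             # → False
```

Note the design choice of matching on the parsed hostname rather than the raw string: a scam URL can freely contain the word “jio” in its path or subdomain, so only the registered domain at the end of the hostname is meaningful.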


Conclusion:
The viral post claiming that Reliance Jio is offering a ₹700 Holi gift is a phishing scam. There is no legitimate offer from Jio, and the link provided leads to a fraudulent website designed to steal personal and financial information. Users are advised not to click on the link and to report any suspicious content. Always verify promotions through official channels to protect personal data from cybercriminal activities.
- Claim: Users can claim ₹700 by participating in Jio's Holi offer.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what they can trust online. Synthetic media can have serious consequences, from virally spreading election disinformation and medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh after investing money on the strength of a deepfake video of Infosys founder Narayana Murthy. In another case, that of ‘Babydoll Archi’, a woman from Assam had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image or video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in the image, checking for inconsistent patterns, lighting differences, observing the lip movements of the speaker in a video or counting the number of fingers on a person’s hand. Unfortunately, as the technology improves, such folk advice might not always help users identify synthetic and manipulated media.
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in this trust-building space is the Coalition for Content Provenance and Authenticity (C2PA). Formed in 2021 by Adobe, Microsoft and other major players in AI, social media, journalism, and photography, C2PA set out to create a standard that lets publishers prove the authenticity of digital media and track changes as they occur.
When photos and videos are captured, they generally store metadata like the date and time of capture, the location, the device it was taken on, etc. C2PA developed a standard for sharing and checking the validity of this metadata, and adding additional layers of metadata whenever a new user makes any edits. This creates a digital record of any and all changes made. Additionally, the original media is bundled with this metadata. This makes it easy to verify the source of the image and check if the edits change the meaning or impact of the media. This standard allows different validation software, content publishers and content creation tools to be interoperable in terms of maintaining and displaying proof of authenticity.
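The chain-of-edits idea can be illustrated with a toy provenance log. To be clear, this is a conceptual sketch using plain SHA-256 hashing, not the actual C2PA manifest format, which uses cryptographically signed manifests embedded in the media file; the actors and edit descriptions below are invented:

```python
# Toy provenance chain: each edit entry records who did what plus a hash
# of the previous entry, so any tampering with history breaks the chain.
# Conceptual only -- real C2PA manifests are cryptographically signed.
import hashlib
import json

def entry_hash(entry):
    # Canonical JSON so the hash is stable regardless of key order.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def add_edit(chain, actor, action):
    prev = entry_hash(chain[-1]) if chain else None
    chain.append({"actor": actor, "action": action, "prev": prev})

def verify(chain):
    # Recompute each link; any mismatch means the record was altered.
    return all(
        chain[i]["prev"] == entry_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_edit(chain, "camera", "captured 2024-04-14, Varanasi")
add_edit(chain, "editor", "cropped and colour-corrected")
add_edit(chain, "publisher", "resized for web")

print(verify(chain))                   # → True
chain[1]["action"] = "no edits made"   # tamper with the edit history
print(verify(chain))                   # → False
```

The point of the structure is that the record of changes is tamper-evident: rewriting any earlier entry invalidates every later link, which is what lets a consumer trust the displayed edit history.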

The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
Could C2PA be Useful in an Indian Context?
The World Economic Forum’s Global Risks Report 2024 identifies India as a significant hotspot for misinformation. The recent AI Regulation report by MeitY indicates an interest in tools for watermarking AI-generated synthetic content to make harmful outcomes easier to detect and track. C2PA could be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not involved.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people increasingly use platforms like WhatsApp and Instagram as sources of information; both are owned by Meta and have not yet implemented the standard in their products.
India also has low digital literacy rates and low resistance to misinformation. Part of the challenge would be teaching people how to read this ‘nutrition label’, empowering them to make better decisions online. As such, C2PA is just one part of an online trust-building strategy; education around digital literacy and policy around organisational adoption of the standard must also be part of it.
The standard is also not foolproof. Current iterations may still struggle when presented with screenshots of digital media and other non-technical digital manipulation. Linking media to their creator may also put journalists and whistleblowers at risk. Actual use in context will show us more about how to improve future versions of digital provenance tools, though these improvements are not guarantees of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to editing together clips of a politician so it sounds like they are singing a song, we have grown accustomed to the media we consume being altered in some way. The C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating to be more transparent about the media we consume.
References
- https://c2pa.org/
- https://contentauthenticity.org/
- https://indianexpress.com/article/technology/tech-news-technology/kate-middleton-9-signs-edited-photo-9211799/
- https://photography.tutsplus.com/articles/fakes-frauds-and-forgeries-how-to-detect-image-manipulation--cms-22230
- https://www.media.mit.edu/projects/detect-fakes/overview/
- https://www.youtube.com/watch?v=qO0WvudbO04&pp=0gcJCbAJAYcqIYzv
- https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- https://indianexpress.com/article/technology/tech-news-technology/ai-law-may-not-prescribe-penal-consequences-for-violations-9457780/
- https://thesecretariat.in/article/meity-s-ai-regulation-report-ambitious-but-no-concrete-solutions
- https://www.ndtv.com/lifestyle/assam-what-babydoll-archi-viral-fame-says-about-india-porn-problem-8878689
- https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf