#FactCheck: Old video of Ranveer Singh at Kashi Vishwanath Temple falsely linked to ‘Dhurandhar 2’ success
Executive Summary
Following the reported box office success of ‘Dhurandhar 2: The Revenge’, released on March 19, 2026, a video of Ranveer Singh visiting a temple is being widely shared on social media. Users claim that the actor visited the Kashi Vishwanath Temple to offer prayers after the film’s success. Research by CyberPeace found that the viral claim is misleading. The video of Ranveer Singh visiting the Kashi Vishwanath Temple is not recent. It dates back to 2024, when he visited the temple with Kriti Sanon, and is unrelated to the release or success of ‘Dhurandhar 2: The Revenge’.
Claim
An Instagram user “newsbharatplus” shared the video on March 26, 2026, with a caption stating that after the massive success of Dhurandhar 2, Ranveer Singh visited the temple and performed rituals.

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. This led us to a report published by Dainik Jagran on April 14, 2024. According to the report, Ranveer Singh had visited the Kashi Vishwanath Temple along with Kriti Sanon and noted fashion designer Manish Malhotra. During the visit, the trio was seen offering prayers, wearing traditional attire, and applying sandalwood tilak.
- https://www.jagran.com/entertainment/bollywood-ranveer-singh-and-kriti-sanon-visits-kashi-vishwanath-temple-with-manish-malhotra-see-photos-here-23696781.html
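As a side note on method: keyframe extraction for a reverse image search typically means sampling one still frame every few seconds of video. The sampling arithmetic behind that step can be sketched in plain Python; `sample_frame_indices` is a hypothetical helper introduced here for illustration (a real tool such as OpenCV or ffmpeg would do the actual frame decoding).

```python
def sample_frame_indices(total_frames: int, fps: float, interval_s: float = 2.0) -> list[int]:
    """Return frame indices spaced roughly `interval_s` seconds apart,
    suitable for grabbing still keyframes from a short clip."""
    if total_frames <= 0 or fps <= 0 or interval_s <= 0:
        return []
    step = max(1, round(fps * interval_s))  # frames between consecutive samples
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled every 2 seconds:
print(sample_frame_indices(300, 30.0, 2.0))  # [0, 60, 120, 180, 240]
```

Each sampled frame is then saved as an image and uploaded to a reverse image search engine, which is how the 2024 coverage of the temple visit surfaced.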

We also found a video report on the official YouTube channel of Times Now Navbharat, uploaded on April 15, 2024, showing Ranveer Singh and Kriti Sanon at the temple. The report also featured visuals from a fashion event held in Varanasi.
- https://www.youtube.com/watch?v=OMuW_SVbfb4

Conclusion
The viral claim is misleading. The video of Ranveer Singh visiting the Kashi Vishwanath Temple is not recent. It dates back to 2024, when he visited the temple with Kriti Sanon, and is unrelated to the release or success of ‘Dhurandhar 2: The Revenge’.

Introduction
Words come easily, but not necessarily the consequences that follow. Imagine a 15-year-old on the internet hoping that the world will be kind to him and help him gain confidence; instead, someone chooses to be mean, or the child becomes the victim of a newer kind of cyberbullying: online trolling. Trolling can have serious repercussions, including anxiety, depression, social isolation, eating disorders, substance abuse, conduct issues, body dysmorphia, negative self-esteem, and, in tragic cases, self-harm and suicide attempts in vulnerable individuals. This is one example, but hate speech and online abuse can touch anyone, regardless of age, background, or status. The damage may take different forms, but its impact is far-reaching. In today’s digital age, hate speech spreads rapidly through online platforms, often amplified by AI algorithms.
As we mark the International Day for Countering Hate Speech on 18th June, let us pledge: if we have ever been mean to someone on the internet, never to repeat that behaviour; and if we have been a victim, to stand against the perpetrator and report the abuse.
This year, the theme for the International Day for Countering Hate Speech is “Hate Speech and Artificial Intelligence Nexus: Building coalitions to reclaim inclusive and secure environments free of hatred.” UN Secretary-General António Guterres, in his statement, said, “Today, as this year’s theme reminds us, hate speech travels faster and farther than ever, amplified by Artificial Intelligence. Biased algorithms and digital platforms are spreading toxic content and creating new spaces for harassment and abuse.”
Coded Convictions: How AI Reflects and Reinforces Ideologies
Algorithms have swiftly taken the place of human judgment; they shape our tastes quietly and invisibly. They have become a central component of social media user interaction and content distribution. While these tools are designed to improve the user experience, they frequently, if inadvertently, spread divisive ideologies and push extremist propaganda. This amplification can strengthen extremist organisations, spread misinformation, and deepen societal tensions. The phenomenon, known as “algorithmic radicalisation,” describes how social media platforms’ selective content-curation strategies can draw people down ideological rabbit holes and shape their beliefs. AI-driven algorithms often prioritise engagement over ethics, enabling divisive and toxic content to trend and placing vulnerable groups, especially youth and minorities, at risk. The UN’s Strategy and Plan of Action on Hate Speech, launched on June 18, 2019, recognises that while AI holds promise for early detection and prevention of harmful speech, it also demands stringent human rights safeguards. Without regulation, these tools can themselves become purveyors of bias and exclusion.
India’s Constitutional Resolve and Civilizational Ethos against Hate
India has always taken pride in being inclusive and united rather than divided, and its stand on hate speech is no different: like the United Nations, India believes in these same values. Although India has won many battles against hate speech, the war is not over and is now more prominent than ever due to advances in communication technologies. In India, while the right to freedom of speech and expression is protected under Article 19(1)(a), its exercise is subject to reasonable restrictions under Article 19(2). Landmark rulings such as Ramji Lal Modi v. State of U.P. and Amish Devgan v. UOI have clarified that speech can be curbed if it incites violence or undermines public order. Section 69A of the IT Act, 2000, empowers the government to block content, and these principles are also reflected in Section 196 of the BNS, 2023 (Section 153A IPC) and Section 299 of the BNS, 2023 (Section 295A IPC). Platforms are also required to trace the originators of harmful content, remove such content within prescribed timelines, and fulfil their due diligence obligations under the IT Rules.
There is no denying that India needs to be normatively well-equipped to tackle hate propaganda and divisive forces. At the same time, India’s rich culture and history, rooted in philosophies of Vasudhaiva Kutumbakam (the world is one family) and pluralistic traditions, have long stood as a beacon of tolerance and coexistence. By revisiting these civilizational values, we can resist divisive forces and renew our collective journey toward harmony and peaceful living.
CyberPeace Message
The ultimate goal is to create internet and social media platforms that are better, safer, and more harmonious for every individual, irrespective of their social and cultural background. CyberPeace stands resolute in promoting digital media literacy and cyber resilience, and in consistently pushing for greater accountability from social media platforms.
References
- https://www.un.org/en/observances/countering-hate-speech
- https://www.artemishospitals.com/blog/the-impact-of-trolling-on-teen-mental-health
- https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/

Introduction
The United Nations (UN) has unveiled a set of principles, known as the ‘Global Principles for Information Integrity’, to combat the spread of online misinformation, disinformation, and hate speech. These guidelines aim to address the widespread harm caused by false information on digital platforms. The Global Principles rest on five pillars: societal trust and resilience; independent, free, and pluralistic media; healthy incentives; transparency and research; and public empowerment. The UN chief emphasized that the threats to information integrity are not new but are now spreading at unprecedented speeds due to digital platforms and artificial intelligence technologies.
These principles aim to enhance global cooperation to create a safer online environment. It was further highlighted that the spread of misinformation, disinformation, hate speech, and other risks in the information environment poses threats to democracy, human rights, climate action, and public health. This impact is intensified by rapidly advancing artificial intelligence (AI) technologies, which pose a growing threat to vulnerable groups in information environments.
The Highlights of Key Principles
- Societal Trust and Resilience: Trust in information sources and the ability and resilience to handle disruptions are critical for maintaining information integrity. Both are at risk from state and non-state actors exploiting the information ecosystem.
- Healthy Incentives: Current business models reliant on targeted advertising threaten information integrity. The complex, opaque nature of digital advertising benefits large tech companies and requires reform to ensure transparency and accountability.
- Public Empowerment: People require the capability to manage their online interactions, the availability of varied and trustworthy information, and the capacity to make informed decisions. Media and digital literacy are crucial, particularly for marginalized populations.
- Independent, Free, and Pluralistic Media: A free press supports democracy by fostering informed discourse, holding power accountable, and safeguarding human rights. Journalists must operate safely and freely, with access to diverse news sources.
- Transparency and Research: Technology companies must be transparent about how information is propagated and how personal data is used. Research and privacy-preserving data access should be encouraged to address information integrity gaps while protecting those investigating and reporting on these issues.
Stakeholders Called for Action
Stakeholders, including technology companies, AI actors, advertisers, media, researchers, civil society organizations, state and political actors, and the UN, have been called to take action under the UN Global Principles for Information Integrity. These principles should be used to build and participate in broad cross-sector coalitions that bring together diverse expertise from civil society, academia, media, government, and the international private sector, focussing on capacity-building and meaningful youth engagement through dedicated advisory groups. Additionally, collaboration is required to develop multi-stakeholder action plans at regional, national, and local levels, engaging communities in grassroots initiatives and ensuring that youth are fully and meaningfully involved in the process.
Implementation and Monitoring
Effective implementation of the UN Global Principles requires developing multi-stakeholder action plans at the regional, national, and local levels. These plans should be informed by advice and counsel from a wide range of communities, including grassroots initiatives with a deep understanding of regional challenges and specific needs. Monitoring and evaluation are also essential components of the implementation process. Regular assessment of progress, combined with the flexibility to adapt strategies as needed, will help ensure that the principles are effectively translated into practice.
Challenges and Considerations
Implementing the UN Global Principles will face certain challenges. The complexity of the digital landscape, the rapid pace of technological change, and the diversity of cultural and political contexts all present significant hurdles. Furthermore, efforts to combat misinformation must be balanced with protecting fundamental rights, including the rights to freedom of expression and privacy. Addressing these threats to information integrity will require ongoing collaboration and constant dialogue among stakeholders, along with a commitment to innovation and continuous learning. It is also important to recognise and address power imbalances within the information ecosystem, ensuring that all voices are heard and that no one, particularly marginalised communities, is cast aside.
Conclusion
The UN Global Principles for Information Integrity provide a comprehensive framework for addressing the critical challenges facing information integrity today. By promoting societal trust, healthy incentives, public empowerment, independent media, and transparency, these principles offer a path towards a more resilient and trustworthy digital environment. Their future success depends on the collaborative efforts of all stakeholders, working together to safeguard the integrity of information for everyone.
References
- https://www.business-standard.com/world-news/un-unveils-global-principles-to-combat-online-misinformation-hate-speech-124062500317_1.html
- https://www.un.org/sustainabledevelopment/blog/2024/06/global-principles-information-integrity-launch/
- https://www.un.org/sites/un2.un.org/files/un-global-principles-for-information-integrity-en.pdf
- https://www.un.org/en/content/common-agenda-report/assets/pdf/Common_Agenda_Report_English.pdf

Introduction
We live in a time where technological change is no longer slow or subtle. Robotics, automation, artificial intelligence, and digital systems are transforming the way we work, think, and even imagine the future. This is often celebrated as great progress. But a deeper question quietly waits behind the noise. Is every advancement truly an uplift when seen through the lens of scriptures, culture, and Indian philosophical thought? Are we creating for the good of humanity, or are we only chasing convenience and speed? And what kind of future are we actually preparing, not just for ourselves, but for those who will be born into this world shaped by these tools from the very beginning?
India has long been seen as a land that values balance, purity, and harmony with nature. Its rivers, mountains, forests, and traditions are not just geography or history, they are part of a civilizational way of thinking that connects life, duty, and responsibility. In this context, it becomes important to ask what the long-term cost of our technological appetite might be. Every invention has a footprint. Industries change landscapes. Energy demands reshape ecosystems. Convenience today often hides consequences that only appear years later. Progress, when measured only in speed and output, forgets to ask what it takes away in silence.
There is also a quieter change happening inside the human mind. As tools become smarter, humans begin to feel more powerful. The thought slowly shifts from “I can use this” to “I control this.” With artificial intelligence, the language becomes even bolder. We start hearing phrases like “we can create worlds, faces, voices, even minds.” But history has always warned us about overreach. Not because power is evil, but because pride blinds judgment. When ability grows faster than wisdom, imbalance follows. We can already see early signs of this in concerns about shrinking attention spans, weakening cognitive habits, and a growing dependence on systems that think for us before we learn to think deeply for ourselves.
None of this is an argument to reject innovation. The idea is not to blacklist technology or romanticise the past. The real question is about direction and responsibility. Advancements are not only for the comfort of the present generation. They shape the mental, moral, and emotional world of future generations who will grow up surrounded by these systems as something normal and unavoidable. What values will guide that world? What habits will it encourage? What will it quietly take away?
This is where the richness of Indian thought becomes relevant, not as nostalgia, but as guidance. Ideas of dharma, restraint, balance, and ethical action were never anti-progress. They were reminders that power without responsibility becomes dangerous, and that ability without humility leads to decline. In modern terms, we talk about safety by design, ethical innovation, and human-centred technology. In older language, we talked about duty, limits, and the consequences of unchecked desire. The words have changed, but the concern is the same.
Perhaps the real question is not whether we are becoming creators, but whether we remember that we are also caretakers. We do not bring existence out of nothing. We reshape what already exists. And in that reshaping, the line between wisdom and arrogance, between progress and pride, becomes the most important line of all.
The futuristic impact of AI, robotics, and technologies
In every yuga, humans have extended the limits of what they can do. What changes is not the desire, but the form it takes. Our ancient texts speak of extraordinary abilities, not as fantasies, but as reminders of how power tests character. Figures like Naradmuni (a prominent divine sage, or Rishiraja, in Hinduism) are described as moving from one place to another in moments. Others gained immense strength, knowledge, or influence through years of discipline and tapasya. Ravana (a figure from the Ramayana) was himself learned and powerful, far beyond ordinary human measure. Sanjaya (the charioteer and advisor of King Dhritarashtra in the Mahabharata) received the gift of divya drishti and narrated the events of the battlefield without being physically present, seeing and speaking across distance in a way that still feels remarkable today.
In this yuga, that ancient search for power and reach has not disappeared; it has only changed its language. Today it speaks through robotics, artificial intelligence, and advanced technologies, making us ask whether we are truly creators or only very advanced arrangers of what already exists.
In this age, science and technology are attempting something similar in a different language. We may not travel like Naradmuni, but we send our voices, images, and thoughts across the world in seconds. We build machines that can see, listen, respond, and even imitate human thinking. Artificial intelligence and robotics promise comfort, speed, and efficiency, and in many ways, they truly improve human life. Yet the old question remains. Not just what can we do, but how far should we go, and at what cost.
When we primarily build for human convenience, we often fail to thoroughly examine the long-term consequences. The environmental impact of large-scale technology is already visible in the pressure on resources, the growth of waste, and the slow damage to air, water, and soil. Nature does not recover at the pace of human ambition. What feels like small compromises today can become heavy burdens for tomorrow.
There is also the impact on the human mind. As systems become more capable, humans risk becoming more dependent. When answers arrive instantly, patience weakens. When machines start deciding for us, the habit of deep thinking slowly fades. Over time, this can affect attention, memory, and judgment. Knowledge becomes easier to reach, but wisdom becomes harder to build. Just as in old stories, the danger is not in having power, but in losing clarity while using it.
Future generations will not encounter these technologies as new inventions. They will be born into them. What we treat today as tools, they will experience as the normal environment of life. This makes responsibility unavoidable. The real question is not only whether these systems work, but what kind of humans they will shape.
The purpose of this reflection is not to reject progress. It is to ask for balance. Building for human comfort is important, but building without studying long-term impact is risky. If this age has the power to create intelligent systems, it must also have the wisdom to protect the environment, care for future generations, and preserve the depth of the human mind. Otherwise, advancement becomes speed without direction, and power without responsibility.
The Acceleration of the Technological Age
The current era has reached a point where technological progress occurs almost instantaneously, transforming how we work, make decisions, and plan for the future. Robotics, automation, and artificial intelligence are often viewed as signs of progress, yet a quieter question persists: does every technological advancement enhance human existence, or are we merely pursuing efficient, easy solutions without thinking through their implications? Indian philosophical thought offers a useful lens here, one that does not reject progress but asks whether it aligns with balance, responsibility, and long-term harmony. On this view, intelligence extends beyond computational skill and pattern imitation; it requires awareness, intent, and genuine understanding. Current machines can mimic human reasoning, produce language, and replicate decision-making processes, but they lack both consciousness and lived experience.
Power, Responsibility, and Ethical Imbalance
Every new technological capability brings ethical responsibilities that society must shoulder; history shows that human beings must take on new ethical duties to match their growing capabilities. Today, however, we create faster than we can reflect on what we have created. Systems built to enhance performance increasingly shape human actions and extend human power, yet their full impact is rarely evaluated. Indian traditions emphasise dharma, the principle of balance and rightful action, which holds that power without ethical grounding becomes a destructive force. Such imbalance is not always visible; it develops gradually through environmental degradation, social inequality, and the slow erosion of human agency.
This transformation is already visible in today’s society. Algorithms now shape what we consume and how we understand the world around us. They provide personalised comfort, but they also create hidden patterns that steer our preferences. What begins as decision assistance progresses to decision influence and, eventually, decision conditioning. The concept of swatantrata, inner freedom, becomes complicated in such an environment: when it is easier to select from what is offered than to choose for ourselves, freedom quietly narrows. People begin to measure their work and identity through systems built on optimisation and digital validation, shrinking the space for independent thought and reflection.
Technology, Ecology, and Civilizational Values
Alongside these social transformations stands the environmental impact of technological demand. Every system needs power, every piece of infrastructure leaves an environmental footprint, and the products we use carry hidden costs that become apparent only after many years. India’s civilizational values have long revered nature, seeing rivers, forests, and ecosystems as essential parts of existence. Modern society, by contrast, measures success chiefly in output, and much of what is truly valuable disappears from the reckoning. The future requires us not only to create new things but also to decide what must be kept intact.
This calls for a different definition of progress, one measured by careful stewardship rather than continuous rapid expansion. The question is no longer only how far technology can advance, but how wisely it can be guided. As machines grow more capable, it becomes ever more important for people to preserve what is essentially human: the capacity to make ethical decisions and to understand life’s meaning and purpose. The future depends on two factors: the innovations that will emerge and the values that will guide their development.
Conclusion
It is high time we pause and honestly examine the path we are taking. The question is not whether technology should grow, but whether its overreach should be allowed to shape the future without restraint. We are building faster than ever, developing systems that touch every part of life. That makes it even more important to study their long-term impact, not only on markets or productivity, but on nature, on the human mind, and on the generations who will inherit this tech-driven world.
Progress should benefit those who come after us, not quietly weaken them. A future where people are born into pure convenience, surrounded by tools that think, decide, and act for them, may look comfortable, but comfort alone does not build strong, aware, or responsible human beings. Growth without effort and ease without discipline slowly takes away depth, resilience, and clarity. Technology should support human potential, not replace it.
This is why morality, ethics, and balance cannot be treated as optional ideas. They must guide innovation, not follow it. We do not need to overcreate. We need to create ‘wisely’. We need to build systems that remain under human control, not systems that slowly train humans to surrender their judgment, attention, and responsibility. Tools should remain tools. They should serve life, not define it.
Indian thought has always placed intention at the centre of action. Karma is not judged only by outcome, but by the spirit in which an act is performed. A tool in itself is neither pure nor impure. It becomes one or the other through the hand that uses it. This is a lens through which modern technology can also be examined. Artificial intelligence can help doctors read scans faster, help farmers predict weather patterns, and help students in remote areas access knowledge. At the same time, it can be used to watch, to sort, to exclude, and to reduce human beings to data points that fit neatly into a system. The difference lies not in the machine, but in the values of those who design and deploy it.
The purpose of this reflection is simple. We should build, but we should build with responsibility. We should innovate, but with awareness of consequences. True progress is not just about what is possible today. It is about what remains healthy, meaningful, and sustainable tomorrow. If this age can combine intelligence with humility, and power with restraint, then technology will not become a symbol of overreach. It will become a sign of maturity.