#FactCheck - Viral Video of Anti-RSS Slogans Is From 2022 Telangana, Not Uttar Pradesh
Executive Summary
A video showing a group of people wearing Muslim caps raising provocative slogans against the Rashtriya Swayamsevak Sangh (RSS) is being widely shared on social media. Users sharing the clip claim that the incident took place recently in Uttar Pradesh. However, CyberPeace research found the claim to be false. The probe established that the video is neither recent nor related to Uttar Pradesh. In fact, the footage dates back to 2022 and is from Telangana. The slogans heard in the video were raised during a protest against Goshamahal MLA T. Raja Singh, and the clip is now being circulated with a misleading claim.
Claim
On January 21, 2026, a user on social media platform X (formerly Twitter) shared the video claiming it showed people in Uttar Pradesh chanting slogans such as, “Kaat daalo saalon ko, RSS walon ko” and “Gustakh-e-Nabi ka sar chahiye.” The post suggested that such slogans were being raised openly in Uttar Pradesh despite strict law enforcement. Links to the post and its archive are provided below.

Fact Check
To verify the claim, CyberPeace research conducted a reverse image search using keyframes from the viral video. The same footage was found on a Facebook account where it had been uploaded on August 26, 2022, indicating that the video is not recent.
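The keyframe-based approach described above can be illustrated with a minimal sketch. This is not the actual tooling used in the investigation: a real workflow would decode the video with a library such as OpenCV or ffmpeg, then feed the selected frames into a reverse image search engine. Here, tiny synthetic grayscale frames stand in for decoded video frames, and the scene-cut threshold is an illustrative choice.

```python
# Sketch: pick "keyframes" where the frame content changes sharply from
# its predecessor -- the frames a fact-checker would submit to a reverse
# image search. Real video decoding (OpenCV/ffmpeg) is assumed to have
# produced the per-frame pixel lists; these synthetic frames are stand-ins.

def mean_abs_diff(a, b):
    """Average absolute pixel difference between two equal-size frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30.0):
    """Return indices of frames that differ sharply from the previous one.

    Frame 0 is always a keyframe; a later frame qualifies when the mean
    absolute pixel difference crosses `threshold` (a likely scene cut).
    """
    keyframes = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[i - 1]) > threshold:
            keyframes.append(i)
    return keyframes

# Three near-identical dark frames, then a scene cut to bright frames.
frames = [[10] * 16, [12] * 16, [11] * 16, [200] * 16, [198] * 16]
print(select_keyframes(frames))  # → [0, 3]
```

Searching only these representative frames, rather than every frame, is what makes reverse image search over video practical.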

Further verification led the team to a report published by news portal OpIndia on August 25, 2022, which featured identical visuals from the viral clip. According to the report, the video showed a protest march organised against BJP MLA T. Raja Singh following his alleged controversial remarks about Prophet Muhammad. The report identified one of the individuals in the video as Kaleem Uddin, who was allegedly heard raising the slogan “Kaat daalo saalon ko,” to which the crowd responded “RSS walon ko.” The slogan was linked to incitement against RSS members.

To confirm the location, the video was examined closely. A shop sign reading “Royal Time House” was visible in the footage. Using Google Street View, the same shop was located in Nalgonda, Telangana, conclusively establishing that the video was filmed there and not in Uttar Pradesh.

Conclusion
CyberPeace research confirmed that the viral video is from 2022 and was recorded in Telangana, not Uttar Pradesh. The clip is being falsely circulated with a misleading claim to give it a communal and political angle.

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths seek to control data because it accords them influence and power. However, this increased reliance on data raises a fundamental challenge: how to strike a balance between privacy protection on one hand and innovation and utility on the other.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. The initiative urges governments to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from crossing moral and security bounds, particularly in areas like political manipulation, mass surveillance, cyberattacks, and threats to democratic institutions.
One way to address the threat to privacy is pseudonymization, which keeps data usable for research and innovation by replacing personal identifiers with artificial ones. Pseudonymization thus directly advances the AI Red Lines initiative's mission of facilitating technological advancement while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe by identifying clear Red Lines for the use of AI tools. What unites the risks of unchecked AI is the absence of global safeguards. Some of these Red Lines can be understood as follows:
- Cybersecurity breaches, such as the exposure of financial and personal data through AI-driven hacking and surveillance.
- Privacy invasions caused by pervasive, unending tracking.
- Generative AI producing realistic fake content that undermines trust in public discourse and fuels misinformation.
- Algorithmic amplification of polarising content, which threatens civic stability and leads to democratic disruption.
Legal Frameworks and Regulatory Landscape
The regulation of artificial intelligence remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union’s Artificial Intelligence Act 2024 bans “unacceptable” AI practices, and the US-China agreement ensures that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI usage, but a binding global treaty remains elusive.
On the data protection front, the EU’s General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process in which personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as “personal data” under the GDPR. India’s Digital Personal Data Protection Act (DPDP) 2023 adopts a similar stance: it does not explicitly define pseudonymisation, but its broad definition of “personal data” covers potentially reversible identifiers. Under Section 8(4) of the Act, companies must adopt appropriate technical and organisational measures. International instruments such as the OECD Principles on AI and the Council of Europe’s Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
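As a rough illustration of the Article 4(5) idea, the sketch below replaces a direct identifier with a keyed pseudonym; the key plays the role of the "additional information" that must be held securely and separately from the dataset. The key value, names, and record fields are all hypothetical, and a production system would add key management and rotation.

```python
import hmac
import hashlib

# Illustrative placeholder only -- in practice this key is the
# "additional information" GDPR Art. 4(5) requires to be stored
# securely and separately from the pseudonymised dataset.
SECRET_KEY = b"stored-separately-under-access-control"

def pseudonymise(identifier: str) -> str:
    """Derive a stable keyed pseudonym for a direct identifier.

    Without the separately held key, the output alone cannot be
    attributed back to the individual.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical record: only the direct identifier is replaced.
record = {"name": "Asha Rao", "city": "Hyderabad", "purchase": 499}
pseudonymised = {**record, "name": pseudonymise(record["name"])}

# The same input always maps to the same pseudonym, so joins and
# longitudinal analysis still work on the pseudonymised data.
assert pseudonymise("Asha Rao") == pseudonymised["name"]
print(pseudonymised["city"], pseudonymised["purchase"])
```

Because the mapping is deterministic under the key, records about the same person remain linkable for analysis; because the key sits outside the dataset, the data remains "personal data" under the GDPR rather than anonymous data.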
Strategies for Corporate Implementation
For a company, pseudonymisation is not just about compliance; it is also a practical measure that offers tangible benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers such as names or IDs, thereby reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
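The data-utility point above can be illustrated with a small sketch: once customer identifiers have been replaced by opaque pseudonyms (the tokens `p1`, `p2` below are hypothetical), per-customer analytics still work unchanged, with no access to real identities.

```python
from collections import defaultdict

# Hypothetical pseudonymised transaction log: the "customer" field holds
# opaque tokens, not names or account numbers.
transactions = [
    {"customer": "p1", "amount": 120},
    {"customer": "p2", "amount": 80},
    {"customer": "p1", "amount": 60},
]

def spend_per_customer(rows):
    """Total spend per (pseudonymous) customer -- no identities needed."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["customer"]] += row["amount"]
    return dict(totals)

print(spend_per_customer(transactions))  # → {'p1': 180, 'p2': 80}
```

Fully anonymised data, by contrast, would typically break this kind of per-person linkage, which is why pseudonymisation is the preferred trade-off when analytical utility must be preserved.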
These benefits translate into competitive advantage: customers are more likely to trust organisations that prioritise data protection, while pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
Balancing the two is the central dilemma. On one side lies the need for data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise by enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions about safety, fairness, and accountability. The Global Call for AI Red Lines is a bold step towards setting universal boundaries. Yet, while a binding global treaty remains pending, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR and increasingly relevant under India’s DPDP Act, and it balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and aligning with broader ethical responsibilities in the digital age. Even as the future of AI remains uncertain, the guiding principles must be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.

Overview of the India-UK Joint Tech Security Initiative
India and the UK have been deepening their technological and security ties through various initiatives and agreements. One of the key developments in this partnership is the India-UK Joint Tech Security Initiative, which focuses on enhancing collaboration in areas like cybersecurity, artificial intelligence (AI), telecommunications, and critical technologies. Building upon the bilateral cooperation agenda set out in the India-UK Roadmap 2030, which seeks to bolster cooperation across various sectors including trade, climate change, and defence, the UK and India launched the Joint Tech Security Initiative (TSI) on July 24, 2024. The initiative will prioritise collaboration in critical and emerging technologies across priority sectors. Coordinating with the national security agencies of both countries, the TSI will set priority areas and identify interdependencies for cooperation on critical and emerging technologies, helping build meaningful technology value chain partnerships between India and the UK.
The TSI will be coordinated by the National Security Advisors (NSAs) of both countries through existing and new dialogues. Progress on the initiative will be reviewed half-yearly at the Deputy NSA level. A bilateral mechanism, led by India's Ministry of External Affairs and the UK government, will be established to promote trade in critical and emerging technologies, including the resolution of relevant licensing or regulatory issues. Both countries view the TSI as a platform and a strong signal of intent to build and grow sustainable, tangible partnerships across priority tech sectors. They will explore how to build a deeper strategic partnership between UK and Indian research and technology centres and incubators, enhance cooperation across the two countries' tech and innovation ecosystems, and create a channel for industry and academia to help shape the TSI.
The UK and India are launching new bilateral initiatives to expand and deepen their technology security partnership. These initiatives will focus on various domains, including telecoms, critical minerals, semiconductors, and energy security.
In telecoms, the UK and India will build a new Future Telecoms Partnership, focusing on joint research into future telecoms, open RAN systems, testbed link-ups, telecoms security, spectrum innovation, and software and systems architecture. This will include collaboration between the UK's SONIC Labs, India's Centre for Development of Telematics (C-DOT), and DoT's Telecoms Startup Mission.
In critical minerals, the UK and India will expand their collaboration, working together to improve supply chain resilience, explore research, development, and technology partnerships along the complete critical minerals value chain, and share best practices on ESG standards. They will establish a roadmap for cooperation and build a UK-India ‘critical minerals’ community of academics, innovators, and industry.
Key Areas of Collaboration:
- Strengthening cybersecurity defences and enhancing resilience through joint cybersecurity exercises, information-sharing, and the development of common standards and best practices, in collaboration with their respective organisations, i.e., CERT-In and the NCSC.
- Promotion of ethical AI development and deployment with AI ethics guidelines and frameworks, and efforts encouraging academic collaborations. Support for new partnerships between UK and Indian research organizations alongside existing joint programmes using AI to tackle global challenges.
- Building secure and resilient telecom infrastructure, with a focus on security, exchange of expertise, and regulatory cooperation; collaboration on Open Radio Access Network (Open RAN) technology is one example.
- Critical and emerging technologies development by advancing research and innovation in quantum technologies, semiconductors, and biotechnology; promoting and investing in tech startups and innovation ecosystems; and engaging in policy dialogues on tech governance and standards.
- Digital economy and trade facilitation to promote economic growth by strengthening the relevant frameworks and agreements, collaborating on digital payment systems and fintech solutions, and, most importantly, promoting data protection and privacy standards.
Outlook and Impact on the Industry
The initiative sets out a new approach for how the UK and India work together on the defining technologies of this decade, including telecoms, critical minerals, AI, quantum, health and biotechnology, advanced materials, and semiconductors. While the initiative looks promising, several challenges need to be addressed, such as putting robust regulatory frameworks in place and developing a balanced approach to data privacy and information exchange in cross-border data flows. It is imperative to install mechanisms that protect intellectual property without hampering technology transfer. Above all, geopolitical risks need to be navigated so that tensions are reduced and a stable partnership can grow. The initiative builds on a series of partnerships between India and the UK, as well as between industry and academia. A bilateral mechanism, led by India’s Ministry of External Affairs and the UK government, will promote trade in critical and emerging technologies, including the resolution of relevant licensing or regulatory issues.
Conclusion
This initiative, at its core, will drive forward a bilateral partnership that is framed on boosting economic growth and deepening cooperation across key issues including trade, technology, education, culture and climate. By combining their strengths, the UK and India are poised to create a robust framework for technological innovation and security that could serve as a model for international cooperation in tech.

Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse, and avoid manipulative and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the approach helps the mind develop resistance now to influence it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach, because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is also critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required so that individuals retain the skills and information gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so Prebunking interventions must be flexible and agile, responding promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times: at least some netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread extensively, and this may make it less effective than proactive strategies such as Prebunking in terms of total harm done. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Repeated exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold, implying that a single Debunking effort may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may allow some misinformation to go unchecked, with unintended consequences. Finally, misinformation on social media can go viral faster than the pieces or articles that debunk it, creating a situation in which falsehood spreads like a virus while the corrective struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives, empowering users to recognise manipulative messaging through Prebunking and to assess the accuracy of information through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can immunise receivers against subsequent exposure to misinformation and empower people to build the competencies needed to detect it.
- Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platform operators should also prioritise the visibility of Debunking content in order to counter the spread of false information and deliver proper corrections. Together, these mechanisms can help Prebunking and Debunking materials reach larger or more targeted audiences.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking/expert organisations and promoting such initiatives at a larger scale and ultimately fighting misinformation with joint hands initiatives.
Conclusion
The threat of online misinformation grows with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions build resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking corrects particular pieces of misinformation, with a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can help fight the rising tide of online misinformation and establish a resilient online information landscape.