#FactCheck - Viral Clip Claiming PM Modi Pushed BJP President Nitin Nabin Is Misleading
Executive Summary:
A video clip featuring Prime Minister Narendra Modi and the newly elected Bharatiya Janata Party (BJP) national president, Nitin Nabin, is going viral on social media. In the clip, PM Modi appears to push Nitin Nabin, prompting claims that Nabin had accidentally stepped between the Prime Minister and the camera, after which Modi allegedly pushed him out of the frame. CyberPeace’s research found that the viral clip is misleading and cropped. The original, unedited video shows Prime Minister Modi gesturing for Nitin Nabin to move ahead and offer floral tributes to the statues of Bharatiya Jana Sangh founder Syama Prasad Mukherjee and Pandit Deendayal Upadhyaya at the BJP headquarters in Delhi.

It is pertinent to note that on 20 January 2026, BJP leader Nitin Nabin was elected as the party’s national president. Several senior BJP leaders, including Prime Minister Narendra Modi, were present at the event. During his address, PM Modi remarked, “Nitin Nabin ji is my boss, and I am a party worker.” The statement received widespread attention, after which multiple videos linked to the remark began circulating on social media. A Facebook user shared the viral clip with a Hindi caption alleging that despite calling himself a “party worker,” PM Modi pushed his “boss” out of the camera frame. The post further mocked the position of BJP president, claiming it to be merely ceremonial. (Archived link)
To verify the claim, we conducted a reverse image and video search, which led us to a longer version of the video uploaded on news agency INS’s official X handle on 20 January 2026. The caption stated that PM Modi, BJP president Nitin Nabin, Defence Minister Rajnath Singh, Home Minister Amit Shah, Union Minister Nitin Gadkari and senior leader J.P. Nadda paid tributes to Syama Prasad Mukherjee and Pandit Deendayal Upadhyaya at the BJP headquarters.

In the full video, PM Modi and Nitin Nabin are seen walking together. PM Modi then requests Nitin Nabin to proceed first for the floral tribute, placing his hand on Nabin’s back as a gesture to move forward. The viral clip selectively cuts this moment out of context and loops it to create a misleading impression. The complete footage clearly shows that PM Modi asked Nitin Nabin to offer tributes first, after which other leaders followed. There is no indication whatsoever that Nitin Nabin was pushed out of the camera frame, as claimed in the viral posts. We also found the live broadcast of the ‘Bharatiya Janata Party Sangathan Parv’ on BJP’s official YouTube channel. The same visuals appear at the end of the live stream, further confirming that PM Modi was merely gesturing for Nitin Nabin to proceed first.
Additionally, photographs available on Nitin Nabin’s official X handle show him offering floral tributes ahead of PM Modi, who is seen standing behind and waiting.

Conclusion:
CyberPeace research confirms that the viral clip has been cropped and shared with a false narrative. In the original context, Prime Minister Narendra Modi was respectfully inviting BJP national president Nitin Nabin to move ahead and pay tributes, not pushing him out of the camera frame.
Related Blogs

Introduction
The rapid evolution of the digital age has transformed how information spreads, bringing both advantages and difficulties. The terms "misinformation" and "disinformation" are commonly used in conversations about information inaccuracy. Countering these prevalent threats is important, especially in light of how they affect countries like India, which makes it essential to investigate the practical ramifications of misinformation, disinformation, and other common digital threats. Like many other nations, India had to deal with the fallout of fraudulent online activities in 2023, which highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of rival chatbots such as Google's Bard (launched in 2023). These chatbots represent a significant breakthrough in artificial intelligence (AI): driven by Large Language Models (LLMs), they produce replies by drawing on information learned from huge datasets. Similarly, AI image generators, which rely on diffusion models trained on existing datasets, attracted a lot of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation and disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were created as pornographic or entertainment content. Social turmoil, political instability, and financial losses were among the outcomes. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems caused by synthetic media, especially AI-generated audio and video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical issues, security risks, difficulties with identification, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to swaying elections and intensifying intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), became a major threat in 2023. With the AePS's weaknesses being exploited by cybercriminals, many depositors had their hard-earned assets stolen through fraudulent activity. This demonstrates the real effects of biometric fraud on those whose Aadhaar-linked data has been manipulated to grant unauthorized access. The use of biometric data in financial systems not only endangers individual financial stability but also raises questions about the security and integrity of the nation's digital payment systems.
Government Strategies to Counter Digital Threats
- The Indian Union Government has issued an advisory to the country's largest social media platforms, highlighting the importance of exercising caution when identifying and responding to deepfake and false material. The advisory directs intermediaries to delete reported content within 36 hours, disable access in compliance with the IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has recently come up with an advisory to social media intermediaries to identify misinformation and deepfakes and to make sure of the compliance of Information Technology (IT) Rules 2021. It is the legal obligation of online platforms to prevent the spread of misinformation and exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 were amended in 2023, requiring the online gaming industry to abide by a set of rules. These include not hosting harmful or unverified online games, not promoting games without approval from a self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winnings policies, setting up a quick and effective grievance redressal process, verifying user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and transparent behaviour throughout the online gaming industry.
- With an emphasis on Personal Data Protection, the government enacted the Digital Personal Data Protection Act, 2023. It is a brand-new framework for digital personal data protection which aims to protect the individual's digital personal data.
- The "Cyber Swachhta Kendra" (Botnet Cleaning and Malware Analysis Centre) is part of the Government of India's Digital India initiative under the Ministry of Electronics and Information Technology (MeitY) to create a secure cyberspace. It tackles cybersecurity through malware analysis and botnet detection, and works with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfakes and misinformation/disinformation on their networks. YouTube prioritizes removing content that violates its policies, reducing recommendations of questionable information, endorsing reliable news sources, and supporting reputable creators. YouTube relies on unambiguous facts and expert consensus to thwart misrepresentation. To quickly remove policy-violating content, enforcement combines content reviewers with machine learning, and policies are designed in partnership with external experts and creators. To improve the overall quality of information users can access, the platform also lets users flag material, places a strong emphasis on media literacy, and prioritizes providing context.
Meta’s policies address different misinformation categories, aiming for a balance between expression, safety, and authenticity. Content directly contributing to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation, the efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
For 2024, the vision for "Tech for Good" has expanded to include programs that help people navigate the ever more complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defences and combat dishonest practices. This entails encouraging digital literacy and equipping users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrime. Furthermore, the focus is on promoting and showcasing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's positive aspects to build a digital environment that values security, honesty, and ethical behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment with the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

Introduction
On 13 December 2023, Rajeev Chandrasekhar, the Union Minister of State for Information Technology (IT), announced that the Global Partnership on Artificial Intelligence (GPAI) Summit, which brings together 29 member governments including the European Union, had adopted the New Delhi Declaration. The declaration committed members to jointly developing AI applications for healthcare and agriculture and to taking the needs of the Global South into account when developing AI.
In addition, signatory countries committed to leveraging the GPAI infrastructure to establish a worldwide framework for AI safety and trust, and to make AI's benefits and approaches accessible to all. India also submitted a proposal to host the GPAI Global Governance Summit, with the aim of completing the recommended framework within six months.
“The New Delhi Declaration, which aims to place GPAI at the forefront of defining the future of AI in terms of both development and building cooperative AI across the partner states, has been unanimously endorsed by 29 GPAI member countries. Nations have come to an agreement to develop AI applications in healthcare, agriculture, and numerous other fields that affect all of our nations and citizens,” Chandrasekhar stated.
The statement highlights GPAI's critical role in tackling modern AI difficulties, such as generative AI, through submitted AI projects meant to maximize benefits and minimize related risks while solving community problems and worldwide difficulties.
GPAI
Global Partnership on Artificial Intelligence (GPAI) is an organisation of 29 countries from the Americas (North and South), Europe and Asia. It has important players such as the US, France, Japan and India, but it excludes China. The previous meeting took place in Japan. In 2024, India will preside over GPAI.
This forum was established in 2020 to promote and steer the responsible implementation of artificial intelligence based on human rights, diversity, gender equality, innovation, economic growth, the environment, and social impact. Its goal is to bring together elected officials and experts in order to make tangible contributions to the 2030 Agenda and the UN Sustainable Development Goals (SDGs).
Given the quick and significant advancements in artificial intelligence over the previous year, the meeting in New Delhi attracted particular attention. They have sparked worries about its misuse as well as enthusiasm about its possible advantages.
The Summit
The G20 summit, which India hosted in September 2023, set the stage for the discussions at the GPAI summit. There, participants of that esteemed worldwide economic conference came to an agreement on how to safely use AI for "Good and for All."
In order to safeguard people's freedoms and security, member governments pledged to address AI-related issues "in a responsible, inclusive, and human-centric manner."
The key tactic devised is to distribute AI's advantages fairly while reducing its hazards. Promoting international collaboration and discourse on global management for AI is the first step toward accomplishing this goal.
A major milestone in that approach was the GPAI summit.
The conversation on AI was started by India's Prime Minister Narendra Modi, who is undoubtedly one of the most tech-aware and tech-conscious international authorities.
He noted that for any system to be sustainable, it must be transformative, transparent, and trusted.
"There is no doubt that AI is transformative, but it is up to us to make it more and more transparent." He added that trust will increase when the associated social, ethical, and financial concerns are appropriately addressed.
After extensive discussions, the summit attendees decided on a strategy to establish global collaboration on a number of AI-related issues. The proclamation pledged to place GPAI at the leading edge of defining AI in terms of creativity and cooperation while expanding possibilities for AI in healthcare, agriculture, and other areas of interest, according to Union Minister Rajeev Chandrasekhar.
There was an open discussion of a number of issues, including disinformation, joblessness and bias, protection of sensitive information, and violations of human rights. The participants reaffirmed their dedication to fostering dependable, safe, and secure AI within their respective domains.
Concerns raised by AI
- The issue of legislation comes first. There are now three methods in use. In order to best promote inventiveness, the UK government takes a "less is more" approach to regulation. Conversely, the European Union (EU) is taking a strong stance, planning to propose a new Artificial Intelligence Act that might categorize AI 'in accordance with use-case situations based essentially on the degree of interference and vulnerability'.
- Second, analysts say that India has the potential to lead the world in discussions about AI. For example, India has an advantage when it comes to AI discussions because of its personnel, educational system, technological stack, and populace, according to Markham Erickson of Google's Centers for Excellence. However, he voiced the hope that Indian regulations will be “interoperable” with those of other countries in order to maximize the benefits for small and medium-sized enterprises in the nation.
- Third, there is a general fear about how AI will affect jobs, just as there was in the early years of the Internet's development. Most people appear to agree that while many jobs won't be impacted, certain jobs might be lost as artificial intelligence develops and gets smarter. According to Erickson, the solution to the new circumstances is to create "a more AI-skilled workforce."
- Finally, a major concern relates to deepfakes defined as 'digital media, video, audio and images, edited and manipulated, using Artificial Intelligence (AI).'
Need for AI Strategy in Commercial Businesses
Firstly, astute corporate executives, such as Shailendra Singh, managing director of Peak XV Partners, feel that all organisations must now have "an AI strategy".
Second, it is now impossible to separate the influence of digital technology and artificial intelligence from the study of international relations (IR), foreign policy, and diplomacy. Academics have been contemplating and writing about "the geopolitics of AI."
Combat Strategies
"We will talk about how to combine OECD capabilities to maximize our capacity to develop the finest approaches to the application and management of AI for the benefit of our people," the French Minister of Digital Transition and Telecommunications, Jean-Noël Barrot, told reporters.
Vice-Minister of International Affairs for Japan's Ministry of Internal Affairs and Communications Hiroshi Yoshida stated, "We particularly think GPAI should be more inclusive so that we encourage more developing countries to join." Mr Chandrasekhar stated, "Inclusion of lower and middle-income countries is absolutely core to the GPAI mission," and added that Senegal has become a member of the steering group.
A paragraph of the declaration covers India's role in bringing agriculture into the AI agenda. It states, "We embrace the use of AI innovation in supporting sustainable agriculture as a new thematic priority for GPAI."
Conclusion
The New Delhi Declaration, adopted at the GPAI Summit, highlights the cooperative determination of 29 member nations to use AI for the benefit of all people. GPAI, which will be led by India in 2024, intends to influence AI development with an emphasis on healthcare, agriculture, and resolving ethical issues. Prime Minister Narendra Modi stressed the need to use AI responsibly and to build transparency and trust. Legislative concerns, India's potential for leadership, effects on employment, and the challenge of deepfakes were noted. The conference emphasized the importance of an AI strategy for enterprises and discussed strategies to combat AI's risks, with a focus on GPAI's mission of inclusiveness toward developing nations. Taken as a whole, the summit presents GPAI as an essential forum for navigating the rapidly changing AI field.
References
- https://www.thehindu.com/news/national/ai-summit-adopts-new-delhi-declaration-on-inclusiveness-collaboration/article67635398.ece
- https://www.livemint.com/news/india/gpai-meet-adopts-new-delhi-ai-declaration-11702487342900.html
- https://startup.outlookindia.com/sector/policy/global-partnership-on-ai-member-nations-unanimously-adopt-new-delhi-declaration-news-10065
- https://gpai.ai/

Starting in mid-December 2024, a series of attacks targeted Chrome browser extensions. Cyberhaven, a California-based data protection company, fell victim to one of these attacks. Though identified in the U.S., the geographical extent and full impact of the attacks are yet to be determined. Assessing these cases can help us prepare better for similar incidents in the future.
The Attack
Browser extensions are small software applications that add functionality or a capability (feature) to a web browser. They are written in HTML, CSS, and JavaScript and, like other software, can be coded to deliver malware. Also known as plug-ins, they have access to their own set of Application Programming Interfaces (APIs). They can also be used to remove unwanted elements, such as pop-up advertisements and auto-play videos, when one lands on a website. Some examples of browser extensions include ad blockers (for blocking ads and filtering content) and StayFocusd (which limits the time users spend on a particular website).
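To make the content-filtering example concrete, the sketch below shows, in plain JavaScript, the kind of blocklist matching that sits at the core of an ad blocker. The blocklist entries and the `shouldBlock` function are hypothetical; a real extension would wire such a predicate into the browser's request-interception APIs rather than call it directly.

```javascript
// Hypothetical blocklist of ad-serving hostnames (illustrative only).
const BLOCKLIST = ["ads.example.com", "tracker.example.net"];

// Decide whether a request URL should be blocked.
// An ad-blocking extension would apply a check like this to each
// outgoing request before the browser is allowed to load it.
function shouldBlock(url) {
  const host = new URL(url).hostname;
  // Block exact matches and any subdomain of a blocked host.
  return BLOCKLIST.some(
    (blocked) => host === blocked || host.endsWith("." + blocked)
  );
}

console.log(shouldBlock("https://ads.example.com/banner.js")); // true
console.log(shouldBlock("https://news.example.org/article"));  // false
```

Real ad blockers use far richer rule formats (community-maintained filter lists, for instance), but the principle of matching each request against a set of rules is the same.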
In the aforementioned attack, the publisher of the browser extension at Cyberhaven received a phishing mail from an attacker posing as Google Chrome Web Store Developer Support. It claimed that their extension's policies were not compliant and encouraged the recipient to click a "Go to Policy" action item, which led to a page granting permissions to a malicious OAuth application called "Privacy Policy Extension" (OAuth, or Open Authorisation, is an open standard used to authorise secure, delegated access via temporary tokens). Once the permission was granted, the attacker was able to inject malicious code into the target's Chrome browser extension and steal user access tokens and session cookies. Further investigation revealed that logins for certain AI and social media platforms were targeted.
CyberPeace Recommendations
As attacks of this scale continue to occur, companies and developers are encouraged to take active measures to make their browser extensions less susceptible to such attacks. Google also has a few guidelines on how developers can safeguard their extensions from their end. These include:
- Minimal Permissions For Extensions- Extensions should request only the permissions, APIs, and websites they actually depend on, as limiting extension privileges limits the surface area an attacker can exploit.
- Prioritising Protection Of Developer Accounts- A security breach of a developer account could compromise all users' data, as it would allow attackers to tamper with extensions by injecting malicious code. Enabling 2FA (two-factor authentication), ideally by setting a security key, is endorsed.
- HTTPS over HTTP- HTTPS should be preferred over HTTP as it requires a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificate from an independent certificate authority (CA). This creates an encrypted connection between the server and the web browser.
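The first guideline above can be illustrated with a hypothetical Chrome Manifest V3 file; the extension name, host, and script file below are made up for the example. The point is that the extension declares only the one permission and the one host it actually needs, rather than broad access such as all URLs:

```json
{
  "manifest_version": 3,
  "name": "Example Notes Helper",
  "version": "1.0.0",
  "permissions": ["storage"],
  "host_permissions": ["https://notes.example.com/*"],
  "content_scripts": [
    {
      "matches": ["https://notes.example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

If an extension scoped like this were compromised, injected code could only read and modify pages on the single declared host, rather than every site the user visits.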
Lastly, as Cyberhaven did after the attack, organisations are encouraged to practise transparency when such incidents take place, so that the wider community can respond to them better.
References
- https://indianexpress.com/article/technology/tech-news-technology/hackers-hijack-companies-chrome-extensions-cyberhaven-9748454/
- https://indianexpress.com/article/technology/tech-news-technology/google-chrome-extensions-hack-safety-tips-9751656/
- https://www.techtarget.com/whatis/definition/browser-extension
- https://www.forbes.com/sites/daveywinder/2024/12/31/google-chrome-2fa-bypass-attack-confirmed-what-you-need-to-know/
- https://www.cloudflare.com/learning/ssl/why-use-https/