#FactCheck - Viral Video of Argentina Football Team Dancing to Bhojpuri Song is Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song is being circulated on social media. After analyzing its origin, the CyberPeace Research Team discovered that the video was altered and the audio was edited. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022, showing Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the viral video, the song playing in the original video is not in Bhojpuri. The viral clip was cropped from a part of Aguero’s upload, and its audio was replaced with the Bhojpuri song. It is therefore concluded that the claim that the Argentinian team danced to a Bhojpuri song is misleading.

Claims:
A video of the Argentina football team dancing to a Bhojpuri song after victory.


Fact Check:
On receiving these posts, we split the video into frames, performed a reverse image search on one of these frames, and found a video uploaded to the SKY SPORTS website on 19 December 2022.
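For illustration, here is a minimal sketch of how a clip can be split into frames before running a reverse image search, assuming Python with OpenCV; the file name and output folder are hypothetical, not the actual files used in this fact check.

```python
# Minimal sketch: extract frames from a video clip so that individual
# frames can be uploaded to a reverse image search engine.
# Requires OpenCV (pip install opencv-python); paths are hypothetical.
import os
import cv2

VIDEO_PATH = "viral_clip.mp4"   # hypothetical input file
OUTPUT_DIR = "frames"
SAMPLE_EVERY_N = 30             # roughly one frame per second for 30 fps video

os.makedirs(OUTPUT_DIR, exist_ok=True)
capture = cv2.VideoCapture(VIDEO_PATH)

frame_index = 0
saved = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break                   # end of video
    if frame_index % SAMPLE_EVERY_N == 0:
        cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{frame_index:05d}.jpg"), frame)
        saved += 1
    frame_index += 1

capture.release()
print(f"Saved {saved} frames to '{OUTPUT_DIR}' for reverse image search")
```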

We found that this is the same clip as the one in the viral video, but the audio accompanying the celebration differs. Upon further analysis, we also found a live video uploaded by Argentinian footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video is a clip taken from this live video, and the song playing in it is not a Bhojpuri song.

This proves that the news circulating on social media regarding the viral video of the Argentina football team dancing to a Bhojpuri song is false and misleading. People should always check the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip celebrating their 2022 FIFA World Cup victory, with the audio altered to include a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video of the Argentina football team dancing to a Bhojpuri song after victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In recent years, the online gaming sector has seen tremendous growth and is one of the fastest-growing components of the creative economy, contributing significantly to innovation, employment generation, and export earnings. India possesses a large pool of skilled young professionals, strong technological capabilities, and a rapidly growing domestic market, which together provide an opportunity for the country to assume a leadership role in the global value chain of online gaming. At the same time, the online gaming industry has also faced an environment of exploitation and abuse, with notable cases of fraud, money laundering, and other emerging cybercrimes. In order to protect the interests of players, ensure fair play and competition, and provide a safe and secure online gaming environment, dedicated gaming regulation was the need of the hour.
On 20 August 2025, the Union government introduced a new bill, the ‘Promotion and Regulation of Online Gaming Bill, 2025’, in the Lok Sabha, which seeks to prohibit online money gaming, including advertisements and financial transactions related to such platforms. The Lok Sabha passed the Bill at 5 PM on the same day it was introduced, and the Rajya Sabha passed it on 21 August 2025. The Bill can be seen as a progressive step towards building safer online gaming spaces for everyone, especially our youth, and combating the emerging cybercrime threats present in the online gaming landscape.
Key Highlights of the Bill
The Bill extends to the whole of India. It also applies to any online money gaming service offered within India or operated from outside the country but accessible in India.
- Definition of E-sports:
Section 2(1)(c) of the Bill defines e-sports as an online game which:-
(i) is played as part of multi-sports events;
(ii) involves organised competitive events between individuals or teams, conducted in multiplayer formats governed by predefined rules;
(iii) is duly recognised under the National Sports Governance Act, 2025, and registered with the Authority or agency under section 3;
(iv) has outcome determined solely by factors such as physical dexterity, mental agility, strategic thinking or other similar skills of users as players;
(v) may include payment of registration or participation fees solely for the purpose of entering the competition or covering administrative costs and may include performance-based prize money by the player; and
(vi) shall not involve the placing of bets, wagers or any other stakes by any person, whether or not such person is a participant, including any winning out of such bets, wagers or any other stakes;
- Prohibition of Online Money Gaming and Advertisement thereof
The Bill prohibits the offering of online money games and online money gaming services. It also bans all forms of advertisements or promotions connected to online money games. This includes endorsements by individuals or entities.
- Financial Transactions
Banks, financial institutions, and other intermediaries are barred from facilitating transactions related to online money gaming services.
- Criminal Liability
Violation of the provisions on online money gaming can result in imprisonment for up to three years, or a fine of up to ₹1 crore, or both. Repeat offenders face stricter punishment with higher fines and longer jail terms.
- Cognizable and Non-Bailable Offences
Offences relating to offering online money gaming services and facilitating financial transactions for such games are categorised as cognizable and non-bailable. This gives law enforcement agencies greater power to act without requiring prior approval.
In conversation with CyberPeace ~
Shailendra Vikram Singh, Former Deputy Secretary (Cyber & Information Security), Ministry of Home Affairs, GOI, highlighted that:
"The passage of the Promotion and Regulation of Online Gaming Bill, 2025 in the Lok Sabha highlights the government’s growing priority on national security, public safety, and health in digital regulation. Unfortunately, the real money gaming industry, despite its growth and promise, did not take proactive steps to address these concerns. The absence of safeguards and engagement left the government with no choice but to adopt a blanket ban."Having worked on this issue from both the government and industry side, the clear lesson is that in sensitive digital sectors, early regulatory alignment and constructive dialogue are not optional but essential. Going forward, collaboration is the only way to achieve a balance between innovation and responsibility.”
CyberPeace Outlook
The Promotion and Regulation of Online Gaming Bill, 2025, marks a decisive policy shift by simultaneously fostering the growth of e-sports and educational and social gaming, while imposing an absolute prohibition on online money games. By recognising e-sports as legitimate, skill-based competitive sports under the National Sports Governance Act, 2025, and establishing a central Authority for oversight, registration, and regulation, the Bill creates an institutional framework for the safe and responsible development of the sector. The Bill completely bans real money games (RMGs), regardless of whether they are skill-based, chance-based, or both, which raises significant questions about the legal standing of RMG companies, a point on which the gaming industry has voiced its concerns. Further, it addresses urgent threats such as cybercrime, gaming addiction, online betting, money laundering, and the misuse of gaming platforms for illicit activities. The move reflects a balanced approach, encouraging innovation and digital skill-building while safeguarding public order, consumer interests, and financial integrity.
References
- https://prsindia.org/files/bills_acts/bills_parliament/2025/Bill_Text-Online_Gaming_Bill_2025.pdf
- https://prsindia.org/billtrack/the-promotion-and-regulation-of-online-gaming-bill-2025
- https://www.hindustantimes.com/india-news/rajya-sabha-clears-online-gaming-bill-a-day-after-lok-sabha-approval-101755766847840.html

Introduction
Data breaches have become one of the rising issues in cyberspace; they result in personal data making its way to cybercriminals who put it to no good use. As netizens, it is our digital responsibility to be cognizant of our own data and the data of our organisations. The increase in internet and technology penetration has moved people into cyberspace at a rapid pace; however, awareness needs to be inculcated alongside it to maximise the data safety of netizens. The recent AIIMS cyber breach has many organisations worried about their cyber safety and security. According to the HIPAA Journal, 66% of healthcare organizations reported ransomware attacks on them. Data management and security is a prime concern for clients across industries and is now growing into a concern for many. Data is primarily classified into three broad categories-
- Personally Identifiable Information (PII) - Any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.
- Non-Public Information (NPI) - The personal information of an individual that is not and should not be available to the public. This includes Social Security Numbers, bank information, other personal identifiable financial information, and certain transactions with financial institutions.
- Material Non-Public Information (MNPI) - Data relating to a company that has not been made public but could have an impact on its share price. It is against the law for holders of nonpublic material information to use the information to their advantage in trading stocks.
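Purely as an illustrative sketch, the classification above can be represented in code so that handling rules follow the sensitivity of each field; the field names and the field-to-category mapping below are assumptions for illustration, not drawn from any standard.

```python
# Illustrative sketch only: tag data fields with the classification described above.
# The field-to-category mapping is a hypothetical example, not a legal standard.
from enum import Enum

class DataClass(Enum):
    PII = "Personally Identifiable Information"
    NPI = "Non-Public Information"
    MNPI = "Material Non-Public Information"

# Hypothetical mapping of record fields to their classification.
FIELD_CLASSIFICATION = {
    "full_name": DataClass.PII,
    "email": DataClass.PII,
    "bank_account_number": DataClass.NPI,
    "unpublished_quarterly_results": DataClass.MNPI,
}

def fields_requiring_extra_controls(record: dict) -> list[str]:
    """Return fields in a record that carry NPI or MNPI and need stricter handling."""
    return [
        field for field in record
        if FIELD_CLASSIFICATION.get(field) in (DataClass.NPI, DataClass.MNPI)
    ]

if __name__ == "__main__":
    sample = {"full_name": "A. User", "bank_account_number": "XXXX-1234"}
    print(fields_requiring_extra_controls(sample))  # ['bank_account_number']
```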
This classification of data allows the industry to manage and secure data effectively and efficiently; at the same time, it allows users to understand the uses of their data and the severity of a breach involving it. Organisations process data that is a combination of the above-mentioned classifications, and hence in instances of a data breach this becomes a critical aspect. Coming back to the AIIMS data breach, it is a known fact that AIIMS is also an educational and research institution. So, one might assume that the reason for any attack on AIIMS could be either to exfiltrate patient data or to get hold of R&D data, including research-related intellectual property. If we postulate the latter, we could also imagine that other educational institutes of higher learning such as IITs, IISc, ISI, IISERs, IIITs, NITs, and some of the significant state universities could also be targeted. In 2021, the Ministry of Home Affairs, through the Ministry of Education, sent a directive to IITs and many other institutes to take certain steps related to cyber security measures and to create SOPs to establish efficient data management practices. The following sectors are critical in terms of data protection-
- Health sector
- Financial sector
- Education sector
- Automobile sector
These sectors are generally targeted by bad actors, and data breaches from these sectors often result in cybercrimes as the data is soon made available on the dark web. These institutions need to practice compliance like any other corporate house, as the end user here is the netizen and his/her data is of utmost importance in terms of protection. Organisations today need to keep pace with advancements in cyberspace, identify the shortcomings and vulnerabilities they may face, and subsequently create safeguards for them. The AIIMS breach is an example to learn from so that we can protect other organisations from such cyber attacks. To demonstrate strong cyber security, every organisation should be able to answer these questions-
- Do you have a centralized cyber asset inventory?
- Do you have human resources that are trained to model possible cyber threats and cyber risk assessment?
- Have you ever undertaken a business continuity and resilience study of your institutional digitalized business processes?
- Do you have a formal vulnerability management system that enumerates vulnerabilities in your cyber assets and a patch management system that patches freshly discovered vulnerabilities?
- Do you have a formal configuration assessment and management system that checks the configuration of all your cyber assets and security tools (firewalls, antivirus management, proxy services) regularly to ensure they are most securely configured?
- Do you have a segmented network such that your most critical assets (servers, databases, HPC resources, etc.) are in a separate, access-controlled network that only people with proper permission can access?
- Do you have a cyber security policy that spells out the policies regarding the usage of cyber assets, protection of cyber assets, monitoring of cyber assets, authentication and access control policies, and asset lifecycle management strategies?
- Do you have a business continuity and cyber crisis management plan in place which is regularly exercised like fire drills so that in cases of exigencies such plans can easily be followed, and all stakeholders are properly trained to do their part during such emergencies?
- Do you have multi-factor authentication for all users implemented?
- Do you have a supply chain security policy for applications that are supplied by vendors? Do you have a vendor access policy that disallows providing network access to vendors for configuration, updates, etc?
- Do you have regular penetration testing of the cyberinfrastructure of the organization with proper red-teaming?
- Do you have a bug-bounty program for students who could report vulnerabilities they discover in your cyber infrastructure and get rewarded?
- Do you have an endpoint security monitoring tool mandatory for all critical endpoints such as database servers, application servers, and other important cyber assets?
- Do you have a continuous network monitoring and alert generation tool installed?
- Do you have a comprehensive cyber security strategy that is reflected in your cyber security policy document?
- Do you regularly receive cyber security incident updates (including low, medium, or high severity incidents, network scanning attempts, etc.) from your cyber security team, so that top management is aware of the situation on the ground?
- Do you have regular cyber security skills training for your cyber security team and your IT/OT engineers and employees?
- Does your top management show adequate support and hold the cyber security team accountable on a regular basis?
- Do you have a proper and vetted backup and restoration policy and practice?
If an organisation has affirmative answers to these questions, it is safe to say that it has strong cyber security. These questions should not be taken as a comparison but as a checklist for organisations to stay up to date on the technical measures and policies related to cyber security. A strong cyber security posture does not drive cyber security risk to zero, but it helps to reduce the risk and improves the fighting chance. Further, if a proper risk assessment is regularly carried out and high-risk cyber assets are properly protected, then the damage resulting from cyber attacks can be contained to a large extent.
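As an illustrative aid only, a checklist like the one above can be tracked in machine-readable form so that gaps surface automatically; the sketch below is a hypothetical, abridged example, not an official assessment tool.

```python
# Hypothetical sketch: track a cyber security self-assessment checklist in code
# so that unanswered or negative items surface as gaps. Item names are abridged.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    satisfied: bool

CHECKLIST = [
    ChecklistItem("Centralised cyber asset inventory", True),
    ChecklistItem("Trained threat-modelling and risk-assessment staff", False),
    ChecklistItem("Formal vulnerability and patch management system", True),
    ChecklistItem("Multi-factor authentication for all users", False),
    ChecklistItem("Vetted backup and restoration policy", True),
]

def report_gaps(items: list[ChecklistItem]) -> None:
    """Print how many controls are in place and list the outstanding gaps."""
    gaps = [item.question for item in items if not item.satisfied]
    print(f"{len(items) - len(gaps)}/{len(items)} controls in place")
    for question in gaps:
        print(f"  GAP: {question}")

if __name__ == "__main__":
    report_gaps(CHECKLIST)
```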

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate aspect. This promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
The Root of the Problem
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. Such transparency ensures that people are aware they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
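To make the consent point concrete, here is a minimal, purely hypothetical sketch of an opt-in gate a chatbot could apply before storing conversation data; the class, function names, and messages are assumptions for illustration, not a description of any existing platform.

```python
# Hypothetical sketch: a chatbot stores conversation data for profiling or
# advertising only if the user has explicitly opted in, and can opt out anytime.
class ConsentManager:
    def __init__(self) -> None:
        self._opted_in: set[str] = set()

    def opt_in(self, user_id: str) -> None:
        self._opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:
        self._opted_in.discard(user_id)

    def has_consent(self, user_id: str) -> bool:
        return user_id in self._opted_in

def handle_message(user_id: str, message: str, consent: ConsentManager, store: list) -> str:
    # Store the message for later analysis only if the user consented.
    if consent.has_consent(user_id):
        store.append((user_id, message))
    return "Thanks for your message!"  # the reply itself does not depend on consent

if __name__ == "__main__":
    consent, store = ConsentManager(), []
    handle_message("user-1", "Hello", consent, store)         # not stored
    consent.opt_in("user-1")
    handle_message("user-1", "Hello again", consent, store)   # stored
    print(store)  # [('user-1', 'Hello again')]
```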
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In every interaction, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is constantly working towards a policy framework for AI. The NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india