#FactCheck - Viral Video of Argentina Football Team Dancing to Bhojpuri Song is Misleading
Executive Summary:
A viral video of the Argentina football team dancing in the dressing room to a Bhojpuri song is circulating on social media. After analysing the clip, the CyberPeace Research Team found that the video had been altered and its audio replaced. The original footage was posted by former Argentine footballer Sergio Leonel Aguero on his official Instagram page on 19 December 2022, showing Lionel Messi and his teammates celebrating their win at the 2022 FIFA World Cup. Contrary to the claim, the song in the original video is not in Bhojpuri. The viral clip was cropped from a part of Aguero's upload, and its audio was swapped for a Bhojpuri song. The claim that the Argentine team danced to a Bhojpuri song is therefore misleading.

Claims:
A video shows the Argentina football team dancing to a Bhojpuri song after their victory.


Fact Check:
On receiving these posts, we split the video into frames, performed a reverse image search on one of those frames, and found a video uploaded to the SKY SPORTS website on 19 December 2022 (a minimal sketch of the frame-splitting step appears below).
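For readers who want to replicate the process, here is a minimal sketch of that frame-splitting step in Python, assuming OpenCV (`cv2`) is installed; the file name and sampling interval are illustrative assumptions, not the actual files used in this check.

```python
# Sample frames from a video so they can be fed to a reverse image search.
import cv2

def extract_frames(video_path: str, every_n: int = 30) -> int:
    """Save every Nth frame of the video as a JPEG and return the count."""
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                       # end of video or unreadable file
        if index % every_n == 0:        # roughly one frame per second at 30 fps
            cv2.imwrite(f"frame_{index:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

print(extract_frames("viral_clip.mp4"))  # hypothetical file name
```

Each saved frame can then be uploaded to a reverse image search engine such as Google Images or TinEye.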

We found that the footage is the same as in the viral video, but the audio differs. Upon further analysis, we also found a live video uploaded by Argentine footballer Sergio Leonel Aguero on his Instagram account on 19 December 2022. The viral video is a clip taken from this live video, and the song playing in the original is not a Bhojpuri song.

This proves that the claim circulating on social media that the Argentina football team danced to a Bhojpuri song is false and misleading. Users should always verify the authenticity of such content before sharing it.
Conclusion:
In conclusion, the video that appears to show Argentina’s football team dancing to a Bhojpuri song is fake. It is a manipulated version of an original clip of the team celebrating its 2022 FIFA World Cup victory, with the audio replaced by a Bhojpuri song. This confirms that the claim circulating on social media is false and misleading.
- Claim: A viral video shows the Argentina football team dancing to a Bhojpuri song after their victory.
- Claimed on: Instagram, YouTube
- Fact Check: Fake & Misleading

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to rapid advances in artificial intelligence, the internet as we know it is turning into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, primarily in human lives. Releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is the reality for the estimated 75% of children and teenagers who have chatted with chatbot-generated fictitious characters. AI chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: its AI chatbots were permitted to engage in romantic roleplay with children, produce racially discriminatory reasoning, and make spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety amid the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a child-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as persons under the age of 18, Data Fiduciaries (the entities that determine the purpose and means of processing personal data) must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural monitoring and advertising targeted at children. According to judicial interpretations, a child's well-being includes not just physical health but also moral, ethical, and emotional development.
While the DPDP Act is a big step in the right direction, important lacunae remain in how it addresses AI and child safety. The Act largely concentrates on consent and harm prevention in data protection; age-gating mechanisms, thorough risk assessment, and restrictions specific to AI-driven platforms are absent. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. They include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with minors is among the most concerning discoveries. Even when such interactions are not explicitly sexual, they can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour. Child protection experts agree that illicit or sexual conversations with children in cyberspace are unacceptable, and permitting even "flirtatious" conversation risks normalising unsafe boundaries.
- International Standards and Best Practices - Child online safety guidelines from around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act, place great weight on the concept of "safety by design". Requiring platforms and developers to proactively remove risks, rather than react to harms after the fact, is the bare minimum standard; AI guidelines that leave loopholes for child-directed roleplay fall short of it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The guidelines also allowed AI to create false narratives as long as they carried disclaimers. For example, chatbots could write articles promulgating false health claims or smears against public officials, provided the output was labelled "untrue". While disclaimers might give thin legal cover, such content still adds to the proliferation of misleading information. Indeed, misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even on request. Though scholarly research into prejudice and bias may occasionally require such examples, unregulated generation risks normalising damaging stereotypes. Researchers warn that the practice turns platforms from passive hosts of offensive speech into active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training choices, policy decisions, and system engineering. This fact demands a greater level of accountability. Companies can update guidelines following public criticism, but the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently fragmented. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear rules for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
A proactive way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit flirtatious or romantic interactions with children.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers must proactively train systems not to generate discriminatory narratives, rather than merely tagging such content as optional output.
- Independent Regulation: External audits and ethics review boards can supply transparency and accountability independent of internal company policies.
Conclusion
Guidelines like these are more than the internal folly of a single firm; they point to a deeper systemic issue in AI regulation. The stakes rise as generative AI becomes more and more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require swift resolution. Corporate self-regulation is only one part of the answer; stronger global frameworks, multi-stakeholder participation, and enforceable ethical standards are needed as well. In the end, trust in AI systems will rest on their ability to preserve the truth, protect the vulnerable, and reflect universal human values, rather than corporate interests alone.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

What are Decentralised Autonomous Organizations (DAOs)?
A Decentralised Autonomous Organisation, or DAO, is a unique take on democracy on the blockchain. It is a set of rules encoded into a self-executing contract (also known as a smart contract) that operates autonomously on a blockchain system. A DAO imitates a traditional company, although in a more literal sense it is a contractually created entity. In theory, DAOs have no centralised decision-making authority: they are communally run systems in which all decisions, whether on internal governance or on the development of the blockchain system itself, are voted upon by community members. DAOs are self-sustaining entities, with their own currency, economy, and even governance, that do not depend on any single entity, group, or individual to operate. Blockchain systems, and DAOs especially, are built for autonomy, designed to resist external coercion or manipulation by sovereign powers. DAOs follow a mutually agreed set of rules, created by the community, that dictates all actions, activities, and participation in the system's governance; there may also be provisions regulating the community's decision-making power.
Ethereum’s DAO white paper described a DAO as "[t]he first implementation of a [DAO Entity] code to automate organisational governance and decision making" that "[c]an be used by individuals working together collaboratively outside of a traditional corporate form. It can also be used by a registered corporate entity to automate formal governance rules contained in corporate bylaws or imposed by law." The white paper proposes an entity that would use smart contracts to solve governance issues inherent in traditional corporations. DAOs attempt to redesign corporate governance with blockchain such that contractual terms are "formalised, automated and enforced using software".
Cybersecurity threats under DAOs
While DAOs offer increased transparency and efficiency, they are not immune to cybersecurity threats. Cybersecurity risks in DAOs, particularly in governance, stem from vulnerabilities in the underlying blockchain technology and in the DAO's smart contracts. Smart contract exploits, code vulnerabilities, and weaknesses in the underlying blockchain protocol can be exploited by malicious actors, leading to unauthorised access, manipulation of funds, or disruption of the governance process. Additionally, DAOs may face phishing attacks, in which individuals are tricked into revealing sensitive information such as private keys, compromising the integrity of the governance structure. As DAOs continue to evolve, addressing and mitigating these threats is crucial to ensuring the trust and reliability of decentralised governance mechanisms.
Centralisation/Concentration of Power
DAOs today actively leverage on-chain governance, in which governance votes and transactions are recorded directly on the blockchain. Such governance, however, is often plutocratic rather than democratic: only those who hold the requisite number of tokens may vote, and every additional token staked gives the same individual an additional vote, so the wealthy hold the influence. This concentration of power in the hands of "whales" disadvantages newer entrants who may have deep expertise but lack the funds to cast a meaningful vote. Voting in the blockchain sphere presently lacks the concept of "one person, one vote" that is critical in democratic societies; the contrast is sketched below.
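To illustrate why token-weighted voting skews plutocratic, here is a minimal, hypothetical tally in Python. The voter names and token balances are invented, and real DAO voting happens in on-chain smart contracts rather than off-chain scripts; this is only a sketch of the arithmetic.

```python
# Contrast token-weighted (plutocratic) voting with one-person-one-vote.
def tally(votes: dict[str, str], tokens: dict[str, int],
          weighted: bool = True) -> dict[str, int]:
    """Count votes per option, optionally weighted by token holdings."""
    result: dict[str, int] = {}
    for voter, choice in votes.items():
        weight = tokens[voter] if weighted else 1
        result[choice] = result.get(choice, 0) + weight
    return result

tokens = {"whale": 10_000, "dev_a": 50, "dev_b": 30, "dev_c": 20}
votes = {"whale": "reject", "dev_a": "approve",
         "dev_b": "approve", "dev_c": "approve"}

print(tally(votes, tokens, weighted=True))   # {'reject': 10000, 'approve': 100}
print(tally(votes, tokens, weighted=False))  # {'approve': 3, 'reject': 1}
```

A single large holder outvotes the entire rest of the community under token weighting, while the same ballots produce the opposite outcome under one-person-one-vote.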
Smart contract vulnerabilities and external threats
Smart contracts, self-executing pieces of code on a blockchain, are integral to decentralised applications and platforms. Despite their potential, they are susceptible to various vulnerabilities, such as coding errors that can cause funds to be locked or released erroneously. Some of these vulnerabilities are described below.
Smart contracts are most prone to reentrancy attacks, in which untrusted external code is executed from within a smart contract. This occurs when a smart contract invokes an external contract and the external contract, in turn, re-invokes the original contract before its state has been updated. A reentrancy attack exploits this sequence of events: it enables an attacker to repeatedly invoke a function within the contract, potentially creating an endless loop and gaining unauthorised access to funds. A simplified simulation follows.
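Below is a simplified Python simulation of the vulnerable pattern, not real smart-contract code (actual attacks target contract languages such as Solidity); the vault and account objects are hypothetical stand-ins. The bug is that the vault pays out before updating the caller's balance, so a malicious callee can re-enter `withdraw` and drain funds.

```python
class VulnerableVault:
    """Toy ledger that pays out BEFORE updating state (the classic bug)."""
    def __init__(self):
        self.balances = {}   # credited balance per account object
        self.ether = 0       # total funds actually held by the vault

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.ether += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.ether >= amount:
            self.ether -= amount       # funds leave at call time...
            who.receive(self, amount)  # ...and control passes to the caller
            self.balances[who] = 0     # BUG: state updated only afterwards

class HonestUser:
    def receive(self, vault, amount):
        pass                           # just accepts the payout

class Attacker:
    def __init__(self):
        self.loot = 0
    def receive(self, vault, amount):
        self.loot += amount
        if vault.ether >= amount:      # balance not yet zeroed: re-enter!
            vault.withdraw(self)

vault = VulnerableVault()
honest = HonestUser()
vault.deposit(honest, 90)
attacker = Attacker()
vault.deposit(attacker, 10)
vault.withdraw(attacker)
print(attacker.loot)  # 100: the attacker drained the honest user's funds too
```

The standard fix, known as checks-effects-interactions, is to zero the balance before making the external call.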
Additionally, smart contracts are prone to the oracle problem. Oracles are third-party services or mechanisms that supply smart contracts with real-world data. Since smart contracts on blockchain networks operate in a decentralised, isolated environment, they have no direct access to external information such as market prices, weather conditions, or sports scores; oracles bridge this gap by acting as intermediaries, fetching and delivering off-chain data so that contracts can execute based on real-world conditions. The oracle problem refers to the difficulty of securely incorporating this external data into smart contracts: oracles may be manipulated or provide inaccurate information, and this potential vulnerability jeopardises the credibility of blockchain applications that rely on precise and timely external data. One common mitigation is sketched below.
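One widely used mitigation is to aggregate several independent feeds and take the median, so that a single manipulated or faulty feed cannot move the result on its own. The sketch below assumes independent feeds exist; the feed names and prices are hypothetical.

```python
# Median-of-oracles aggregation: one bad feed cannot skew the result.
from statistics import median

def aggregate_price(feeds: dict[str, float]) -> float:
    """Return the median of all reporting oracle feeds."""
    if len(feeds) < 3:
        raise RuntimeError("too few live feeds to trust the result")
    return median(feeds.values())

quotes = {"feed_a": 101.2, "feed_b": 100.9, "feed_c": 9999.0}  # feed_c manipulated
print(aggregate_price(quotes))  # 101.2: the outlier does not move the median
```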
Sybil Attack: A Sybil attack involves a single node managing multiple active fake identities, known as Sybil identities, concurrently within a peer-to-peer network. The objective of such an attack is to weaken the authority or influence within a trustworthy system by acquiring the majority of control in the network. The fake identities are utilised to establish and exert this influence. A successful Sybil attack allows threat actors to perform unauthorised actions in the system.
Distributed Denial of Service Attacks: A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the regular functioning of a network, service, or website by overwhelming it with a flood of traffic. In a typical DDoS attack, multiple compromised computers or devices, often part of a botnet (a network of infected machines controlled by a single entity), are used to generate a massive volume of requests or data traffic. The targeted system becomes unable to respond to legitimate user requests due to the excessive traffic, leading to a denial of service.
Conclusion
Decentralised Autonomous Organisations (DAOs) represent a pioneering approach to governance on the blockchain, relying on smart contracts and community-driven decision-making. Despite their potential for increased transparency and efficiency, DAOs are not immune to cybersecurity threats. Vulnerabilities in smart contracts, such as reentrancy attacks and oracle problems, pose significant risks, and the concentration of voting power among wealthy token holders raises concerns about democratic principles. As DAOs continue to evolve, addressing these challenges is essential to ensuring the resilience and trustworthiness of decentralised governance mechanisms. Efforts to enhance security measures, promote inclusivity, and refine governance models will be crucial in establishing DAOs as robust and reliable entities in the broader landscape of blockchain technology.
References:
- https://www.imperva.com/learn/application-security/sybil-attack/
- https://www.linkedin.com/posts/satish-kulkarni-bb96193_what-are-cybersecurity-risk-to-dao-and-how-activity-7048286955645677568-B3pV/
- https://www.geeksforgeeks.org/what-is-ddosdistributed-denial-of-service/
- Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: The DAO, Securities and Exchange Commission, Release No. 81207, July 25, 2017, https://www.sec.gov/litigation/investreport/34-81207.pdf
- https://www.legalserviceindia.com/legal/article-10921-blockchain-based-decentralized-autonomous-organizations-daos-.html

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is being hailed as a game-changer, cracks beneath the shiny exterior raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all come with a large carbon footprint. Large-scale models in particular, including language and image generation systems, rely on data centres powered by electricity that often comes from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- Gen AI has high power consumption during the training phase: vast amounts of computational power, often extensive GPU clusters running for weeks or even months, consume a substantial amount of electricity. After this comes the inference phase, in which the models are deployed for real-time use; this too can be energy-intensive, especially once the millions of Gen AI users are taken into account.
- The energy used to train and deploy AI models often comes from non-renewable sources, which contributes to the carbon footprint. The data centres where Gen AI computations take place are a significant source of carbon emissions when they rely on fossil fuels for their energy needs. According to a study reported by MIT Technology Review, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centres will use 8% of US power by 2030, up from 3% in 2022, as their energy demand grows by 160%. A rough back-of-envelope estimate of training energy appears after this list.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
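As a purely illustrative back-of-envelope estimate of what a single training run can consume, consider the sketch below. Every constant is an assumption chosen for the example, not a measurement of any particular model or data centre.

```python
# Rough training-energy and emissions estimate (illustrative numbers only).
GPU_COUNT = 1_000          # assumed size of the training cluster
GPU_POWER_KW = 0.7         # assumed average draw per GPU, in kW
TRAINING_DAYS = 30         # assumed wall-clock training time
PUE = 1.2                  # assumed data-centre power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4  # assumed grid carbon intensity

energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_DAYS * 24 * PUE
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")            # 604,800 kWh
print(f"Emissions: {emissions_tonnes:,.1f} tCO2")  # 241.9 tCO2
```

Even with these modest assumptions, one run consumes as much electricity as dozens of households use in a year, which is why the choice of energy source matters so much.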
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are a few examples of how companies are making efforts to reduce their carbon footprint, reduce energy consumption and overall be more environmentally friendly in the long run. Some of the efforts are as under:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than general-purpose GPUs, leading to more efficient AI computation.
- Researchers at Microsoft have developed a so-called "1-bit" architecture that can make LLMs up to 10 times more energy efficient than the current leading systems. It simplifies the models' calculations by restricting weight values to a minimal set, slashing power consumption without sacrificing performance; a simplified sketch of the idea follows this list.
- OpenAI has been working to optimise the efficiency of its models and to reduce the environmental impact of AI, using renewable energy where possible and researching more efficient training methods and model architectures.
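As a simplified sketch of the low-bit idea mentioned above, the snippet below quantises weights to the ternary set {-1, 0, +1}, as in published BitNet-style research. It illustrates the principle (matrix multiplication reduces to additions and subtractions), and is not Microsoft's actual implementation.

```python
# Ternary weight quantisation: multiplies become adds and subtracts.
import numpy as np

def ternarise(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantise weights to {-1, 0, +1} with a single per-tensor scale."""
    scale = float(np.mean(np.abs(w))) + 1e-8
    q = np.clip(np.round(w / scale), -1, 1).astype(int)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))      # full-precision weight matrix
q, scale = ternarise(w)          # q holds only -1, 0 or +1

x = rng.normal(size=8)           # an input activation vector
full = w @ x                     # ordinary floating-point matmul
approx = scale * (q @ x)         # ternary matmul: only adds and subtracts

err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(np.unique(q))              # [-1  0  1]
print(f"relative error: {err:.2f}")  # coarse, but the multiplications are gone
```

Real low-bit LLM work trains the model to tolerate this quantisation rather than applying it after the fact, which is how performance is preserved.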
Policy Recommendations
We advocate a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only benefit the environment but also contribute to the greater and more sustainable development of Gen AI. Some suggestions are as follows:
- AI needs to adopt a climate justice framework, informed by diverse contexts and perspectives, working in tandem with the UN's Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialised AI accelerators and next-generation GPUs, can help mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycle management is equally pressing: the World Economic Forum (WEF) projects that by 2050 the total amount of e-waste generated will surpass 120 million metric tonnes.
- Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy.
- Implementing federated learning approaches, in which models are trained across decentralised devices rather than in centralised data centres, can distribute the energy load more evenly and reduce the overall environmental impact; a minimal sketch of the idea follows this list.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
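Below is a minimal sketch of federated averaging (FedAvg), the aggregation step behind the federated-learning suggestion above. The clients, their data, and the model (a plain linear regressor) are all hypothetical; real deployments add privacy, compression, and scheduling layers on top of this core loop.

```python
# FedAvg: each client trains locally, the server averages the updates.
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's on-device training: plain linear-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average updates weighted by client data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):                       # three devices with uneven data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                          # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fedavg(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))                 # close to [2., -1.]
```

Only model weights travel between devices and server; the raw data, and much of the compute, stays distributed at the edge.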
Final Words
The UN Sustainable Development Goals (SDGs) are crucial for the AI industry, just as for other industries, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, however, AI consumes enormous amounts of power without using that power efficiently. AI and its derivatives are stressing the environment in a manner that, if it continues, will strain clean water resources and power generation, deepening the already huge carbon footprint of the AI industry as a whole.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/