#FactCheck - AI-Generated Video of Peacock ‘Rescue’ Falsely Shared as Real
Executive Summary:
A video showing a peacock allegedly trapped in ice has gone viral on social media. In the clip, the peacock appears to be frozen in a snow-covered area; moments later, a man approaches with a hammer and breaks the ice to rescue the bird. Social media users are sharing the video as a real-life incident, praising the peacock’s resilience and describing the scene as inspiring. However, CyberPeace research found the claim to be misleading: the video was created using Artificial Intelligence (AI) and is being falsely circulated as a real incident.
Claim:
Facebook user ‘Ras Bihari Pathak’ shared the viral video on January 25, 2026, with the caption: “This peacock is not standing on ice, but on courage. It reminds us that no matter how harsh the circumstances are, hope always returns in colours.” The archived version of the post can be accessed here.

Fact Check:
To verify the claim, we first conducted a keyword search on Google to check whether any such real incident involving a peacock trapped in ice had been reported. However, no credible or verified media reports were found. Next, we closely examined the viral video. Upon observation, the peacock’s movements and reactions appeared unnatural and artificial. The motion lacked realistic physical behaviour, raising suspicion that the video might have been digitally generated. To confirm this, we analysed the clip using the AI video detection tool Hive Moderation, which indicated a 99 per cent or higher likelihood that the video was AI-generated.
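The last step of this workflow, treating a high detector score as strong evidence of synthetic media, can be sketched in a few lines. This is a minimal illustration, not Hive Moderation’s actual API: the `ai_generated` score key and the response shape are hypothetical, and only the 99 per cent threshold comes from the analysis above.

```python
def is_likely_ai_generated(class_scores, threshold=0.99):
    """Return True when the detector's 'ai_generated' score meets the
    threshold. `class_scores` is a hypothetical {label: probability}
    mapping of the kind a moderation API might return."""
    return class_scores.get("ai_generated", 0.0) >= threshold

# Example: a response resembling the result reported for the viral clip
scores = {"ai_generated": 0.997, "not_ai_generated": 0.003}
print(is_likely_ai_generated(scores))  # prints True
```

A score this close to 1.0 is a strong signal, but detector output is still best treated as one input among several, alongside keyword searches and manual inspection of unnatural motion, as in the workflow above.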

Conclusion:
CyberPeace research confirms that the viral video showing a peacock allegedly trapped in ice is not real. The clip has been created using Artificial Intelligence and is being shared on social media with a false and misleading claim.

Introduction
India's Computer Emergency Response Team (CERT-In) has unfurled its banner of digital hygiene, heralding the initiative 'Cyber Swachhta Pakhwada,' a clarion call to the nation's citizens to fortify their devices against the insidious botnet scourge. The government's Cyber Swachhta Kendra (CSK)—a Botnet Cleaning and Malware Analysis Centre—stands as a bulwark in this ongoing struggle. It is a digital fortress, conceived under the aegis of the National Cyber Security Policy, with a singular vision: to engender a secure cyber ecosystem within India's borders. The CSK's mandate is clear and compelling—to detect botnet infections within the subcontinent and to notify, enable cleaning, and secure systems of end users to stymie further infections.
What are Bots?
Bots are automated rogue software programs crafted with malevolent intent, lurking in the shadows of the internet. They are the harbingers of harm, capable of data theft, disseminating malware, and orchestrating cyberattacks, among other digital depredations.
A botnet infection is like a parasitic infestation within the electronic sinews of our devices—smartphones, computers, tablets—transforming them into unwitting soldiers in a hacker's malevolent legion. Once ensnared within the botnet's web, these devices become conduits for a plethora of malicious activities: the dissemination of spam, the obstruction of communications, and the pilfering of sensitive information such as banking details and personal credentials.
How, then, does one's device fall prey to such a fate? The vectors are manifold: an infected email attachment opened in a moment of incaution, a malicious link clicked in haste, a file downloaded from the murky depths of an untrusted source, or the use of an unsecured public Wi-Fi network. Each action can be the key that unlocks the door to digital perdition.
In an era where malware attacks and scams proliferate like a plague, the security of our personal devices has ascended to a paramount concern. To address this exigency and to aid individuals in the fortification of their smartphones, the Department of Telecommunications (DoT) has unfurled a suite of free bot removal tools. The government's outreach extends into the ether, dispatching SMS notifications to the populace and disseminating awareness of these digital prophylactics.
Stay Cyber Safe
'To protect your device from botnet infections and malware, the Government of India, through CERT-In, recommends downloading the Free Bot Removal Tool at csk.gov.in.' This SMS is not merely a reminder but a beacon guiding users to a safe harbor in the tumultuous seas of cyberspace.
Cyber Swachhta Kendra
The Cyber Swachhta Kendra portal emerges as an oasis in the desert of digital threats, offering free malware detection tools to the vigilant netizen. This portal, also known as the Botnet Cleaning and Malware Analysis Centre, operates in concert with Internet Service Providers (ISPs) and antivirus companies, under the stewardship of CERT-In. It is a repository of knowledge and tools, a digital armoury where users can arm themselves against the specters of botnet infection.
To extricate your device from the clutches of a botnet or to purge the bots and malware that may lurk within, one must embark on a journey to the CSK website. There, under the 'Security Tools' tab, lies the arsenal of antivirus companies, each offering their own bot removal tool. For Windows users, the choice includes stalwarts such as eScan Antivirus, K7 Security, and Quick Heal. Android users, meanwhile, can venture to the Google Play Store and seek out the 'eScan CERT-IN Bot Removal' tool or 'M-Kavach2,' a digital shield forged by C-DAC Hyderabad.
Once the chosen app is ensconced within your device, it will commence its silent vigil, scanning the digital sinews for any trace of malware, excising any infections with surgical precision. But the CSK portal's offerings extend beyond mere bot removal tools; it also proffers other security applications such as 'USB Pratirodh' and 'AppSamvid.' These tools are not mere utilities but sentinels standing guard over the sanctity of our digital lives.
USB Pratirodh
'USB Pratirodh' is a desktop guardian, regulating the ingress and egress of removable storage media. It demands authentication with each new connection, scanning for malware, encrypting data, and allowing changes to read/write permissions. 'AppSamvid,' on the other hand, is a gatekeeper for Windows users, permitting only trusted executables and Java files to run, safeguarding the system from the myriad threats that lurk in the digital shadows.
Conclusion
In this odyssey through the digital safety frontier, the Cyber Swachhta Kendra stands as a testament to the power of collective vigilance. It is a reminder that in the vast, interconnected web of the internet, the security of one is the security of all. As we navigate the dark corners of the internet, let us equip ourselves with knowledge and tools, and may our devices remain steadfast sentinels in the ceaseless battle against the unseen adversaries of the digital age.
References
- https://timesofindia.indiatimes.com/gadgets-news/five-government-provided-botnet-and-malware-cleaning-tools/articleshow/107951686.cms
- https://indianexpress.com/article/technology/tech-news-technology/cyber-swachhta-kendra-free-botnet-detection-removal-tools-digital-india-8650425/

Introduction
Earlier this month, lawmakers in Colorado, a U.S. state, were summoned to a special legislative session to rewrite their newly passed Artificial Intelligence (AI) law before it even takes effect. Although the discussion taking place in Denver may seem distant, evolving regulations like this one directly address issues that India will soon encounter as we forge our own course for AI governance.
The Colorado Artificial Intelligence Act
Colorado became the first U.S. state to pass a comprehensive AI accountability law, set to come into force in 2026. It aims to protect people from bias, discrimination, and harm caused by predictive algorithms, since AI tools have been known to reproduce societal biases by sidelining women in hiring processes, penalising loan applicants from poor neighbourhoods, or wrongly denying citizens their welfare benefits. But the law met resistance from tech companies, which threatened to pull out from the state, claiming it is too broad in its current form and would stifle innovation. This brings critical questions about AI regulation to the forefront:
- Who should be responsible when AI causes harm? Developers, deployers, or both?
- How should citizens seek justice?
- How can tech companies be incentivised to develop safe technologies?
Colorado’s governor has called a special session to update the law before it kicks in.
What This Means for India
India is on the path towards framing a dedicated AI-specific law or directions, and discussions are underway through the IndiaAI Mission, the proposed Digital India Act, and a committee set up by the Delhi High Court on deepfakes, among other measures. But the dilemmas Colorado is wrestling with are also relevant here.
- AI uptake is growing in public service delivery in India. Facial recognition systems are expanding in policing, despite accuracy and privacy concerns. Fintech apps using AI-driven credit scoring raise questions of fairness and transparency.
- Accountability is unclear. If an Indian AI-powered health app gives faulty advice, who should be liable: the global developer, the Indian startup deploying it, or the regulator who failed to set safeguards?
- India has more than 1,500 AI startups (NASSCOM), which, like Colorado’s firms, fear that onerous compliance could choke growth. But weak guardrails could undermine public trust in AI altogether.
Lessons for India
India’s Ministry of Electronics and Information Technology (MeitY) favours a light-touch approach to AI regulation and is exploring ways to frame future-proof guidelines. Lessons from other global frameworks can also guide its way.
- Colorado’s case shows us the necessity of incorporating feedback loops in the policy-making process. India should utilise regulatory sandboxes and open, transparent consultation processes before locking in rigid rules.
- It will also need to explore proportionate obligations, lighter for low-risk applications and stricter for high-risk use cases such as policing, healthcare, or welfare delivery.
- Europe’s AI Act is heavy on compliance, the U.S. federal government leans toward deregulation, and Colorado is somewhere in between. India has the chance to create a middle path, grounded in our democratic and developmental context.
Conclusion
As AI becomes increasingly embedded in hiring, banking, education, and welfare, opportunities for ordinary Indians are being redefined. To shape how this pans out, states like Tamil Nadu and Telangana have taken early steps to frame AI policies, and lessons will emerge from these initiatives in AI governance. Policy and regulation will always be contested, but contestation is part of the process.
The Colorado debate shows us how participative law-making, with room for debate, revision, and iteration, is not a weakness but a necessity. For India’s emerging AI governance landscape, the challenge will be to embrace this process while ensuring that citizen rights and inclusion are balanced well with industry concerns. CyberPeace advocates for responsible AI regulation that balances innovation and accountability.
References
- https://www.cbsnews.com/colorado/news/colorado-lawmakers-look-repeal-replace-controversial-artificial-intelligence-law/
- https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
- https://carnegieendowment.org/research/2024/11/indias-advance-on-ai-regulation?lang=en
- https://the-captable.com/2024/12/india-ai-regulation-light-touch/
- https://indiaai.gov.in/article/tamilnadu-s-ai-policy-six-step-tamdef-guidance-framework-and-deepmax-scorecard

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with such fidelity that the average individual may not be able to distinguish real from fake. The ability to provide creative solutions is indeed a beneficial aspect of this technology. However, its misuse has been escalating rapidly in recent years, threatening privacy and dignity and facilitating the creation of dis/misinformation. Its real-world consequences include the manipulation of elections, threats to national security, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the Rashmika Mandanna incident, in which a deepfake of the actress created a scandal throughout the country. In the viral video, her face was superimposed on the body of another woman, fooling many viewers and creating outrage among those who had been deceived. The incident even led law enforcement agencies to issue warnings to the public about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders and ordinary people have fallen victim to deepfake pornography, deepfake speech scams, financial fraud, and other malicious uses of deepfake technology. The rapid proliferation of deepfake technology is outpacing lawmakers’ efforts to regulate its use. In this regard, a Private Member’s Bill was introduced in the Lok Sabha during its Winter Session. Even though Private Member’s Bills have historically had a low rate of success in being passed into law, they provide an opportunity for the government to take notice of and respond to emerging issues. In fact, such Bills have been the catalyst for government action on many important matters and have also provided an avenue for parliamentary discussion and future policy creation. The introduction of this Bill underscores the public concern surrounding digital impersonation and shows that Parliament acknowledges deepfakes to be a significant problem in need of a legislative framework.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution and use of deepfake content in India. Its five core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before deepfake media featuring them, including digital representations of their faces, images, likenesses and voices, can be produced or distributed. This aims to protect women, celebrities, minors, and everyday citizens against the use of their identities to harm their reputations or to harass them.
2. Penalties for Malicious Deepfakes: The Bill prescribes serious criminal consequences for creating or sharing deepfake media, particularly when it is intended to cause harm by defaming, harassing, impersonating, deceiving or manipulating another person. It also addresses financially fraudulent uses of deepfakes, political misinformation, election interference and explicit AI-generated media.
3. Establishment of a Deepfake Task Force: To examine the potential impact of deepfakes on national security, elections, public order, public safety and privacy. The task force would work with academic institutions, AI research labs and technology companies to create advanced deepfake-detection tools and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: To assist with the development of tools for detecting deepfakes, increasing the capacity of law enforcement agencies to investigate cybercrime, promoting public awareness of deepfakes through national campaigns, and funding research on artificial intelligence safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Many States in the United States, including California and Texas, have enacted laws to prohibit the use of politically deceptive deepfakes during elections. Additionally, the Federal Government is currently developing regulations requiring that AI-generated content be clearly labelled. Social Media Platforms are also being encouraged to implement a requirement for users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union:
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China:
China has among the most rigorous regulations regarding deepfakes anywhere on the planet. All AI-manipulated media will have to be marked with a visible watermark, users will have to authenticate their identities prior to being allowed to use advanced AI tools, and online platforms have a legal requirement to take proactive measures to identify and remove synthetic materials from circulation.
Conclusion
Deepfake technology has the potential to be one of the most powerful, and most dangerous, innovations of AI. There is much to learn from incidents such as the one involving Rashmika Mandanna, as well as from the proliferation of deepfake abuse globally, which demonstrates how easily truth can be altered in the digital realm. The new Private Member's Bill seeks to provide a comprehensive framework to address these abuses, based on prior consent, penalties that actually work, technical preparedness, and public education and awareness. With other nations moving towards increased regulation of AI, proposals such as this offer India a path to becoming a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent.
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/