#FactCheck - Viral Videos of Mutated Animals Debunked as AI-Generated
Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. Upon thorough investigation, these claims were found to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and spectators' shoes merging together. AI-content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated videos. Thus, these viral videos are conclusively identified as AI-generated and not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.



Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos thoroughly and found certain anomalies that are typical of AI-manipulated media.



Taking a cue from this, we checked all the videos with an AI video detection tool named TrueMedia. The tool found the audio of the video to be AI-generated. We then divided the video into keyframes, and the tool found the depicted imagery to be AI-generated as well.


In the same way, we investigated the second video: we divided it into keyframes and analysed them with TrueMedia.

The tool found it to be suspicious, so we analysed the individual frames of the video.

The detection tool found it to be AI-generated, so we are confident that the video is AI-manipulated. We then analysed the third and final video, which the detection tool also flagged as suspicious.


The detection tool found the frames of this video to be AI-manipulated as well, confirming that the video is AI-generated. Hence, the claim made in all three videos is misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated and not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies characteristic of AI-generated content, and AI-content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading

CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart, is a challenge, typically an image or distorted text, that users must identify or interpret to prove they are human. In 2007, reCAPTCHA, a free service later acquired by Google, launched and became one of the most commonly used technologies for telling computers and humans apart. CAPTCHA protects websites from spam and abuse by using tests that are easy for humans but difficult for bots to solve.
This has now changed. As AI has become more sophisticated, it can now solve CAPTCHA tests more accurately than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA remains an effective detection tool in the face of AI's advancement.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
Smarter Bots and Their Rise
AI advancements like machine learning, deep learning, and neural networks have developed at a very fast pace over the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA challenges such as text and images with almost human-like accuracy. Optical Character Recognition (OCR) is one example: earlier versions of CAPTCHA relied on distorted text, which OCR models can now recognise and decipher, rendering those challenges useless. Trained on huge datasets, modern models also perform image recognition, identifying the specific objects a challenge asks for, and bots can mimic human habits and interaction patterns to fool behavioural analysis.
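The weakness of rule-based text challenges is easy to demonstrate. Below is a toy sketch; the function name and challenge format are invented for illustration, and real attacks rely on OCR and trained image models rather than a regex:

```python
import re

def solve_text_captcha(challenge: str) -> str:
    """Toy solver for arithmetic challenges like 'What is 3 + 4?'.

    Real attacks use OCR and trained image models; this only shows why
    rule-based text challenges offer no protection against automation.
    """
    match = re.search(r"(\d+)\s*([+\-*])\s*(\d+)", challenge)
    if not match:
        raise ValueError("unsupported challenge")
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    return str({"+": a + b, "-": a - b, "*": a * b}[op])

print(solve_text_captcha("What is 3 + 4?"))  # answered instantly: 7
```

A dozen lines of script answer such a challenge faster than any human, which is why modern CAPTCHAs moved to image tasks and behavioural signals, and why AI progress now threatens those too.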
To defeat CAPTCHA, attackers have been known to use adversarial machine learning: AI models trained specifically to beat CAPTCHA. They collect CAPTCHA datasets along with their answers and train a model that can predict the correct responses. The implications of CAPTCHA failure for platforms range from spam and fraud to cybersecurity breaches and cyberattacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives like AI-based anomaly detection, biometric authentication, and blockchain verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn't just about tools; it's about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will be designing solutions that balance security, user experience and ethical responsibility. It’s time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/

Introduction
Generative AI models are significant consumers of the computational resources and energy required to train and run them. While AI is hailed as a game-changer, cracks beneath the shiny exterior raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all carry a large carbon footprint. Large-scale language and image-generation models rely on data centers powered by electricity, often from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
- Gen AI has high power consumption during the training phase, when vast amounts of computational power, often extensive GPU clusters running for weeks or even months, consume substantial electricity. After training, the inference phase, in which the models serve users in real time, can also be energy-intensive, especially considering the millions of Gen AI users.
- The energy used for training and deploying AI models often comes from non-renewable sources, which contributes to the carbon footprint. The data centers where Gen AI computations take place are a significant source of carbon emissions if they rely on fossil fuels. According to a study cited by MIT, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, up from 3% in 2022, as their energy demand grows by 160%.
- The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
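The emissions figures cited above come from back-of-the-envelope estimates of a training run's energy use: energy equals GPU count times per-GPU power times hours times the data center's power usage effectiveness (PUE), and emissions equal energy times the grid's carbon intensity. A minimal sketch, where every parameter value is an illustrative assumption rather than a measured figure:

```python
def training_emissions_kg(num_gpus, gpu_power_kw, hours,
                          pue=1.2, grid_kgco2_per_kwh=0.4):
    """Back-of-the-envelope CO2 estimate for one training run.

    energy (kWh) = GPUs x per-GPU power (kW) x hours x data-center PUE
    emissions    = energy x grid carbon intensity (kg CO2 per kWh)
    The default PUE and grid-intensity values are illustrative assumptions.
    """
    energy_kwh = num_gpus * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# e.g. a hypothetical 512-GPU cluster at 0.4 kW per GPU running 30 days
print(round(training_emissions_kg(512, 0.4, 30 * 24)), "kg CO2")
```

Even this rough model makes the levers visible: emissions fall linearly with training time, hardware efficiency, PUE, and the carbon intensity of the grid, which is why renewable-powered data centers matter so much.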
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are a few examples of how companies are making efforts to reduce their carbon footprint, reduce energy consumption and overall be more environmentally friendly in the long run. Some of the efforts are as under:
- Google's Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, enabling more efficient AI computations.
- Researchers at Microsoft, for instance, have developed a so-called "1-bit" architecture that can make LLMs up to 10 times more energy efficient than current leading systems. It simplifies the models' calculations by restricting weight values to a tiny discrete set, slashing power consumption without sacrificing performance.
- OpenAI has been working on optimizing the efficiency of its models and exploring ways to reduce the environmental impact of AI and using renewable energy as much as possible including the research into more efficient training methods and model architectures.
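The low-bit idea mentioned above can be illustrated conceptually: once weights are restricted to plus or minus one with a single shared scale, a dot product needs no multiplications at all, only additions and subtractions. The sketch below is a toy illustration of that principle, not Microsoft's actual architecture:

```python
def binarize(weights):
    """Map each weight to its sign (+1 or -1) plus one shared scale
    (the mean absolute value), so overall magnitude is roughly kept."""
    scale = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return signs, scale

def binary_dot(signs, scale, inputs):
    """With weights restricted to +/-1, a dot product needs no
    multiplications: just add or subtract each input, then scale once."""
    acc = sum(x if s > 0 else -x for s, x in zip(signs, inputs))
    return acc * scale

signs, scale = binarize([0.9, -1.1, 1.0, -0.8])
print(binary_dot(signs, scale, [1.0, 2.0, 3.0, 4.0]))  # sign-only approximation
```

Multiplication circuits dominate the energy cost of matrix math, so replacing them with additions, at some cost in precision, is where the claimed efficiency gains come from.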
Policy Recommendations
We advocate for a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only benefit the environment but also contribute to the greater and sustainable development of Gen AI. Some suggestions are as follows:
- AI needs to adopt a climate-justice framework informed by diverse contexts and perspectives while working in tandem with the UN's Sustainable Development Goals (SDGs).
- Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialised AI accelerators and next-generation GPUs, can help mitigate the environmental impact.
- Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycles matter too: the World Economic Forum (WEF) projects that total e-waste generated will surpass 120 million metric tonnes by 2050.
- Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations. Optimized models are faster and require less hardware, thus consuming less energy.
- Implementing federated learning approaches, where models are trained across decentralised devices rather than in centralised data centers, can distribute the energy load more evenly and reduce the overall environmental impact.
- Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
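Model compression, one of the suggestions above, can be as simple as post-training quantization: storing each weight as an 8-bit integer plus one shared floating-point scale, roughly a fourfold memory saving over 32-bit floats. A minimal pure-Python sketch (the helper names are ours; production systems use library-provided quantizers):

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: each float weight becomes an int in
    [-127, 127] plus one shared float scale (about a 4x memory saving)."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)  # round-trip error is bounded by scale / 2
```

Smaller weights mean less memory traffic and smaller hardware footprints per inference, which is where the energy saving comes from.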
Final Words
The UN Sustainable Development Goals (SDGs) are crucial for the AI industry, as for other industries, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, AI consumes enormous power without using it efficiently. AI and its derivatives are stressing the environment in a way that, if it continues, will strain clean water resources and power generation, adding to the already large carbon footprint of the AI industry as a whole.
References
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/ais-hunger-for-power-can-be-tamed/111302991
- https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
- https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
- https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/
- https://insights.grcglobalgroup.com/the-environmental-impact-of-ai/

Background
Cyber slavery and online trafficking have become alarming challenges in Southeast Asia. Against this backdrop, India successfully rescued 197 of its citizens from Mae Sot in Thailand on November 10, 2025, using two Indian Air Force flights. The evacuees had fled Myanmar's Myawaddy region in October after intense military operations. This was India's second rescue effort within a week, following the November 6 mission that brought back 270 nationals from similar conditions. The operations were coordinated by the Indian Embassy in Bangkok and the Consulate in Chiang Mai, with crucial assistance from the Royal Thai Government.
The Operation and Bilateral Cooperation
The operation was carried out under the supervision of Thai Prime Minister Anutin Charnvirakul and Indian Ambassador Nagesh Singh, both of whom were present at the ceremony in Mae Sot. In doing so, the two countries not only demonstrated but cemented their commitment to fighting these crimes, and pledged to facilitate communication between their authorities. Prime Minister Charnvirakul thanked India for the quick intervention and added that Thailand would provide the needed support for the repatriation of the remaining victims as well.
“Both parties reaffirmed their strong commitment to the fight against cross-border crimes, including cyber scams and human trafficking, in the region and to improving cooperation among the relevant agencies in both countries.” (Embassy of India, Bangkok)
The Cyber Scam Network
The Myawaddy area in Myanmar has quickly become a global hotspot for cybercrime, driven largely by organised criminal groups that exploit foreign nationals. After the Myanmar military imposed restrictions in late October, over 1,500 people from 28 nations crossed into Thailand as the KK Park cyber hub and other centres were raided.
A UN report (2025) indicated that this fraud activity is part of a larger network spanning multiple countries, in which the trafficked workers, not the organisers, are the ones who end up tortured. The trafficked persons often belong to the local population or come from neighbouring countries and are recruited with the promise of high salaries as IT or customer-service agents, only to be imprisoned in compounds where they are forced to perform phishing, investment fraud, and cryptocurrency scams aimed at victims all over the globe. These centres operate in border territories with weak governance, easy-to-cross borders, and little police presence, making human trafficking a major factor contributing to cybercrime.
India’s Response and Preventive Measures
The Indian Embassy in Thailand worked hand in hand with the Thai government to repatriate the Indian citizens who had entered Thailand illegally while escaping Myanmar.
The embassy has also issued an advisory urging citizens to:
- Verify the authenticity of job offers and recruitment agents before taking up employment abroad.
- Avoid taking up such employment on tourist or visa-free entry permits, as these allow only short-duration visits or tourism.
- Be careful of ads claiming high pay for online or remote work in Southeast Asia.
The embassy reiterated the Government of India’s commitment to ensuring easy access to assistance for citizens overseas and to addressing the growing intersection between cyber fraud and human trafficking.
CyberPeace Analysis and Advisory
The case of Myawaddy demonstrates that cybercrime and human trafficking have converged into a complicated global threat. Scam centres increasingly depend on trafficked labour, with people forced to commit digital fraud under coercion. This underlines the need for cybersecurity measures that consider human rights and the protection of victims, not only technical defence.
- Cybercrime–Human Trafficking Convergence:
Cybercrime has escalated into a human trafficking operation. Unwilling participants in these fraud schemes fear for their lives and often have no reliable way out. It has become difficult to tell where cyber exploitation ends and forced labour begins.
- Cross-Border Enforcement Challenges:
Criminals exploit legal and jurisdictional loopholes across borders to carry out their unlawful acts. Dismantling such networks requires regional cooperation among India, Thailand, and ASEAN countries.
- Socioeconomic Vulnerability:
Persistent unemployment and low public awareness make people, especially the youth, highly vulnerable to overseas hiring scams. Sustained digital-literacy education and the habit of verifying job offers are needed to keep job seekers from falling prey to fraudsters.
- Public–Private Coordination:
The scammers' mode of operation usually includes online recruitment through social media and encrypted platforms, where victims are found and contacted. Cooperation among government institutions, tech platforms, and civil society is therefore imperative to shut down these digital trafficking channels.
CyberPeace Expert Advisory
To lessen the possibility of such incidents, CyberPeace suggests the following preventive and policy measures:
Individuals:
- Trust but verify: Before accepting any offer, verify it through official embassy websites or MEA-approved recruiting agencies.
- Watch out for red flags: If a recruiter offers a very high salary for almost no work, asks for tourist visas, or gives no written contract, be very careful and pull out immediately.
- Protect your documents: Keep digital and physical copies of your passport and visa with a trusted person, and register your travel on the MADAD portal.
- Report if in doubt: If an agent looks suspicious, contact the nearest Indian Embassy or Consulate or report it to cybercrime.gov.in or the 1930 Helpline.
Policymakers and Agencies:
- Strengthen Bilateral Task Forces: Set up joint cybercrime and anti-trafficking enforcement units with South and Southeast Asian countries.
- Support Regional Awareness Campaigns: Run targeted advisories in local languages, reaching the most vulnerable job seekers in Tier-2 and Tier-3 cities.
- Regulate Overseas Employment Advertising: All digital job postings should meet transparency standards, and fraudulent recruitment should be punished with heavy fines.
- Invest in Digital Forensics and Intelligence Sharing: Create common databases for monitoring international cybercriminal groups.
Conclusion
The return of Indian citizens from Thailand represents a significant humanitarian and diplomatic milestone and highlights that cybercrime, though carried out through digital channels, remains deeply human in nature. International cooperation, well-informed citizens, and a rights-based cybersecurity approach are the minimum requirements for a global campaign against this new breed of cybercrime, in which fraud and trafficking work hand in hand. CyberPeace reminds everyone that digital vigilance, verification, and collaboration across borders are the most effective ways to prevent online abuse and such crimes.
References
- https://www.ndtv.com/india-news/197-indians-repatriated-from-thailand-by-special-indian-air-force-flights-9611934
- https://www.thehindu.com/news/national/india-airlifts-citizens-who-worked-in-myanmar-cybercrime-hub-from-thailand/article70264322.ece
- https://www.mea.gov.in/Images/attach/03-List-4-2024.pdf