#FactCheck - Viral Videos of Mutated Animals Debunked as AI-Generated
Executive Summary:
Several videos claiming to show bizarre, mutated animals with features such as a seal's body and a cow's head have gone viral on social media. Upon thorough investigation, these claims were found to be false. No credible source for such creatures was found, and closer examination revealed anomalies typical of AI-generated content, such as unnatural leg and head movements and the merged shoes of spectators. AI-content detectors confirmed the artificial nature of these videos, and digital creators were found posting similar fabricated videos. These viral videos are therefore conclusively identified as AI-generated, not real depictions of mutated animals.

Claims:
Viral videos show sea creatures with the head of a cow and the head of a tiger.

Fact Check:
On receiving several videos of bizarre mutated animals, we searched for credible news coverage of such creatures but found none. We then watched the videos closely and noticed anomalies of the kind generally seen in AI-manipulated footage.

Taking a cue from this, we ran all the videos through the AI-detection tool TrueMedia. The tool found the video's audio to be AI-generated. We then divided the video into keyframes, and the detector found the extracted images to be AI-generated as well.
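To make the keyframe step concrete, below is a minimal sketch, assuming OpenCV and an illustrative file name, of how a video can be split into roughly one frame per second for submission to a detector; TrueMedia's own upload interface is not reproduced here, so the frames are simply saved to disk.

```python
# Minimal keyframe-extraction sketch (assumes OpenCV; file name is illustrative).
# Saves roughly one frame per second so each can be checked by an AI-content detector.
import cv2

def extract_keyframes(video_path: str, out_prefix: str = "frame") -> list[str]:
    cap = cv2.VideoCapture(video_path)
    # Fall back to saving every frame if FPS metadata is missing.
    fps = max(int(cap.get(cv2.CAP_PROP_FPS)), 1)
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:            # end of stream
            break
        if index % fps == 0:  # roughly one frame per second
            path = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        index += 1
    cap.release()
    return saved

frames = extract_keyframes("viral_video.mp4")
print(f"Extracted {len(frames)} keyframes for detector review")
```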

In the same way, we investigated the second video: we divided it into keyframes and analyzed them with TrueMedia.

The tool flagged the video as suspicious, so we analyzed its individual frames. The detector found them to be AI-generated, leaving us certain that the video is AI-manipulated. We then analyzed the third and final video, which the detection tool likewise flagged as suspicious.

The detection tool found the frames of this video to be AI-manipulated as well, making it certain that the video is AI-generated. Hence, the claim made in all three videos is misleading and fake.
Conclusion:
The viral videos claiming to show mutated animals with features like a seal's body and a cow's head are AI-generated, not real. A thorough investigation by the CyberPeace Research Team found multiple anomalies typical of AI-generated content, and AI-content detectors confirmed the fabrication. Therefore, the claims made in these videos are false.
- Claim: Viral videos show sea creatures with the head of a cow, the head of a tiger, and the head of a bull.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Related Blogs

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to the "Nano Banana" AI tool in Google Gemini, uploading ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts about privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the prompt. Users then share the transformed portraits on Instagram, WhatsApp, and other social media platforms, fuelling the trend's viral spread.
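For readers curious about the mechanics, the sketch below shows the general shape of such an image-editing request using Google's google-genai Python SDK; the model identifier, prompt, and response handling are assumptions based on public documentation, not a confirmed reproduction of the Nano Banana pipeline.

```python
# Hedged sketch of a selfie-to-portrait edit request (google-genai SDK assumed;
# model name, file names, and response handling are illustrative assumptions).
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder credential

selfie = Image.open("selfie.jpg")
prompt = ("Transform this photo into a 1990s Bollywood-style portrait: "
          "flowing chiffon saree, golden-hour light, grainy film texture.")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed identifier for the image model
    contents=[prompt, selfie],
)

# Write any returned image bytes to disk.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None):
        with open("saree_portrait.png", "wb") as f:
            f.write(part.inline_data.data)
```

Note that the privacy questions discussed below arise precisely because the selfie leaves the user's device in this step.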
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participating in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to many scams and frauds.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms about how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe consequences for individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by attackers to fabricate identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that can be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement follows from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends can expose users to several layers of privacy threats that go far beyond the instant gratification of pleasing images. Harvesting of biometric data is the most critical issue, since facial recognition information posted on these platforms becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept for longer periods if used for feedback or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps either had non-transparent data policies or obscured users' ability to opt out of data gathering, opening the door for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves using uploaded personal photos to improve AI models without notifying or compensating users; although Google lets users turn off data sharing in its privacy settings, most users are unaware of these options. Cross-platform data integration heightens privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. Finally, the lack of informed consent remains a major problem, with users joining trends without understanding the full extent of the information they share: studies show that 68% of individuals express concern about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
Protection of personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built around facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures to the extent that this satisfies the app's creative features without compromising biometric security. Strong privacy settings should also be configured on every social media platform and AI app to limit access to one's data and content. A simple pre-upload step is sketched below.
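This is a minimal sketch, assuming the Pillow library and illustrative file names, of stripping metadata (EXIF, including any GPS tags) and downscaling a photo before it is shared with a third-party app; it reduces, but does not eliminate, biometric exposure.

```python
# Pre-upload hygiene sketch (assumes Pillow; file names are illustrative):
# downscale the photo and rebuild it pixel-by-pixel so no EXIF metadata survives.
from PIL import Image

def sanitise_photo(src: str, dst: str, max_side: int = 512) -> None:
    img = Image.open(src).convert("RGB")
    img.thumbnail((max_side, max_side))  # downscale in place, preserving aspect ratio
    clean = Image.new("RGB", img.size)   # fresh image object carries no metadata
    clean.putdata(list(img.getdata()))
    clean.save(dst)

sanitise_photo("selfie.jpg", "selfie_clean.jpg")
```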
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies on employees' use of AI tools, particularly regarding the upload of personal data. Companies should carry out due diligence before adopting any commercially available AI product, ensuring its privacy and security safeguards meet the standards the company intends. Training should also familiarise employees with deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used; tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI offer real-time detection with reported accuracy above 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it far harder to pass off deepfake content as the original; a hashing sketch follows below.
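As a minimal sketch of the fingerprinting idea behind such verification (the ledger layer is out of scope, and file names are illustrative), the snippet records a SHA-256 hash of the original asset and later re-hashes a circulating copy to check for alteration.

```python
# Content-fingerprint sketch: hash the original once, then re-hash any copy.
# A real system would anchor these hashes in a tamper-proof ledger.
import hashlib

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream in 8 KiB chunks
            h.update(chunk)
    return h.hexdigest()

original = fingerprint("original_video.mp4")
# ...later, after the file has circulated online...
if fingerprint("downloaded_copy.mp4") != original:
    print("Copy differs from the registered original: possible manipulation")
```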
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banking and identity verification systems, authentication should include voice and face liveness checks to confirm that the person is real and not using fake or manipulated media. Digital literacy programmes should be implemented to equip users with knowledge of AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting suspected AI-enabled crimes and assisting in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulators should require transparent data policies in AI applications and give users clear rights and choices over biometric and other data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree-editing trend illustrate both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers must confront. By establishing comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to its malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/
Introduction and Brief Analysis
A movie named "The Artifice Girl" portrayed a law enforcement agency developing an AI-based personification of a 12-year-old girl who appears exactly like a real person. Believing her to be an actual child, perpetrators of child sexual exploitation were caught attempting to solicit sexual favours. The movie showed how AI aided law enforcement, but in reality the emergence of artificial intelligence has posed numerous challenges in multiple directions. This example illustrates both the promise and the complexity of using AI in sensitive areas like law enforcement, where technological innovation must be carefully balanced with ethical and legal considerations.
Detection and protection tools are constantly competing with technologies that generate content, automate grooming, and test legal boundaries. These technological advancements have provided fertile ground for the proliferation of Child Sexual Exploitation and Abuse Material (CSEAM), known as child pornography under Section 2(da) of the Protection of Children from Sexual Offences Act, 2012, which defines it as "any visual depiction of sexually explicit conduct involving a child which includes a photograph, video, digital or computer-generated image indistinguishable from an actual child and image created, adapted, or modified, but appears to depict a child."
Artificial intelligence is a category of technologies that attempt to replicate human thought and behaviour using algorithms and input datasets. Two primary applications are relevant in the context of CSEAM: classifiers and content generators. Classifiers are programs that learn from large datasets, labelled or unlabelled, and then classify content, for example as restricted or illegal. Generative AI is likewise trained on large datasets, but uses that knowledge to create new content. The majority of current AI research relevant to CSEAM uses artificial neural networks (ANNs), a type of AI that can be trained both to identify connections between items (classification) and to generate novel combinations of items (e.g., elements of a picture) based on its training data.
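Purely as an illustration of the classifier idea (not any agency's actual system), the sketch below, assuming PyTorch, shows a small artificial neural network that maps an image to a probability that it belongs to a restricted class; production moderation classifiers are vastly larger and trained on curated, lawfully held datasets.

```python
# Illustrative binary image classifier (assumes PyTorch); weights are untrained,
# so the output is meaningless until the network is fitted to labelled data.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # RGB image in
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
    nn.Flatten(),
    nn.Linear(32, 1),         # single logit: restricted vs. not restricted
)

image = torch.randn(1, 3, 224, 224)             # stand-in for a preprocessed image
probability = torch.sigmoid(classifier(image))  # logit -> probability
print(f"P(restricted) = {probability.item():.3f}")
```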
Current Legal Landscape
The legal landscape for AI is still unclear and evolving, with different nations trying to track the evolution of AI and develop laws, but some laws directly address CSEAM. The International Centre for Missing and Exploited Children (ICMEC) combats illegal sexual content involving children and publishes "Model Legislation" recommending sanctions and sentencing. According to research conducted in 2018, illegal sexual content involving children is criminalised in 118 of the 196 Interpol member states; this figure represents countries whose legislation meets four or five of the five criteria defined by the ICMEC.
CSEAM in India can be reported on various portals, such as the National Cyber Crime Reporting Portal. Online crimes against children, including CSEAM, can be reported there by visiting cybercrime.gov.in; the portal allows anonymous reporting, automatic FIR registration, and tracking of complaints. The I4C Sahyog Portal, managed by the Indian Cyber Crime Coordination Centre (I4C), is another platform, and it integrates with social media platforms.
The Indian legal front for AI is evolving, and CSEAM is well addressed in Indian law and through judicial pronouncements. The Supreme Court judgment in Just Rights for Children Alliance and Anr v. S. Harish and Ors is a landmark in this regard. The following principles were highlighted in the judgment:
- The term "child pornography" should be substituted with "Child Sexual Exploitation and Abuse Material" (CSEAM) and should not be used in any further judicial proceeding, order, or judgment; Parliament should amend POCSO accordingly so that the term CSEAM is endorsed.
- Parliament should consider amending Section 15(1) of POCSO to make reporting more convenient for the general public by way of an online portal.
- Sex education programmes should be implemented to give young people a clear understanding of consent and the consequences of exploitation. To help prevent problematic sexual behaviour (PSB), schools should teach students about consent, healthy relationships, and appropriate behaviour.
- Support services to the victims and rehabilitation programs for the offenders are essential.
- Early identification of at-risk individuals and implementation of intervention strategies for youth.
Distinctive Challenges
According to a report by the National Centre for Missing and Exploited Children (NCMEC), a significant number of reports of child sexual exploitation and abuse material (CSEAM) are linked to perpetrators based outside the country, highlighting major jurisdictional and anonymity challenges in addressing such crimes. Since the issue concerns children, and given the cross-border nature of the internet and the emergence of AI, nations across the globe need to come together to solve this matter. Delays in extradition procedures and inconsistent legal processes across jurisdictions hinder the apprehension of offenders and the delivery of justice to victims.
CyberPeace Recommendations
For effective regulation of AI-generated CSEAM, laws must be strengthened so that AI developers and trainers are required to prevent misuse of their tools. AI should be designed with ethical considerations built in, ensuring respect for privacy, consent, and child rights. AI models could also carry a self-regulation mechanism that recognises and restricts red flags related to CSEAM and indicators of grooming or potential abuse.
A distinct Indian CSEAM reporting portal is urgently needed as cybercrime rises throughout the nation; depending on the integrated portal alone risks AI-based CSEAM cases being overlooked, whereas a dedicated portal would enable faster response and focused tracking. Since AI-generated content is detectable, the portal should also include an automated AI-content detection system linked directly to law enforcement for swift action; a sketch of one common matching technique follows.
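One widely deployed technique for such a detection layer is perceptual hashing: comparing an upload against a database of hashes of previously verified material, the approach behind tools like PhotoDNA. The sketch below is illustrative only; the library choice (imagehash), the distance threshold, and the hash entries are assumptions.

```python
# Perceptual-hash matching sketch (assumes Pillow and the imagehash library;
# the hash list and threshold are illustrative placeholders).
from PIL import Image
import imagehash

KNOWN_HASHES = {imagehash.hex_to_hash("f0e4d2c6a89b1357")}  # placeholder entry
MAX_DISTANCE = 5  # Hamming-distance threshold; tuning is deployment-specific

def matches_known_material(path: str) -> bool:
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two imagehash values yields their Hamming distance.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if matches_known_material("upload.jpg"):
    print("Match found: escalate to the law-enforcement queue")
```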
Furthermore, international cooperation is of utmost importance to meet AI-enabled challenges and close jurisdictional gaps; a united global effort is required. Common technology and harmonised international laws are essential to tackle AI-driven child sexual exploitation across borders and protect children everywhere. CSEAM is an extremely serious issue, and children are the most vulnerable to such harmful content. This threat must be addressed without delay through stronger policies, dedicated reporting mechanisms, and swift action to protect children from exploitation.
References:
- https://www.sciencedirect.com/science/article/pii/S2950193824000433?ref=pdf_download&fr=RR-2&rr=94efffff09e95975
- https://aasc.assam.gov.in/sites/default/files/swf_utility_folder/departments/aasc_webcomindia_org_oid_4/portlet/level_2/pocso_act.pdf
- https://www.manupatracademy.com/assets/pdf/legalpost/just-rights-for-children-alliance-and-anr-vs-sharish-and-ors.pdf
- https://www.icmec.org
- https://www.missingkids.org/theissues/generative-ai
Introduction
The judiciary as an institution has always been placed on a pedestal and is often seen as the embodiment of justice. From dictatorship to democracy, the judiciary plays a central role; even where the judiciary is controlled, the legitimacy of policies is, in one sense or another, derived from it. In democracies around the world, the independence and well-being of the judiciary are seen as a barometer of democracy's strength. In this global age, where technology is omnipresent, the judiciary is no exception. Now more than ever, with the judiciary at the centre of evaluative focus, it becomes imperative to make it transparent. Digitisation of the judiciary is not just an administrative reform; it is an extension of constitutionalism into the technological realm, an effort to ensure that justice is accessible, transparent, and efficient. July 25, the International Day of Judicial Well-being, is commemorated every year with a clear message: judicial well-being supports "anti-corruption, access to justice, and sustainable peace."
Digitisation by Design: Justice in the Age of Transformation
The Prime Minister of India envisioned the future of the Indian legal system in alignment with the digitised world when he said, "Technology will integrate police, forensics, jails, and courts, and will speed up their work as well. We are moving towards a justice system that will be fully future-ready." Although this future faces many challenges, various initiatives are easing the transition. India is streamlining operations, reducing delays, and enhancing access to justice for all by integrating AI into legal research, case management, judicial procedures, and law enforcement. Machine Learning (ML), Natural Language Processing (NLP), Optical Character Recognition (OCR), and predictive analytics are among the AI-powered technologies currently being used to strengthen crime prevention, automate administrative duties, and improve case monitoring.
The digitisation of Indian courts is a structural necessity rather than a matter of contemporary convenience. The growing backlog of cases, challenges in maintaining records, and the loss of physical files have frequently resulted in miscarriages of justice. In the seminal case of State of U.P. v. Abhay Raj Singh, the court acknowledged that missing records alone could lead to a conviction being overturned. With millions of legal documents at risk, digitisation becomes both a shield against such collapse and a tool for preserving judicial memory; the OCR step behind that effort is sketched below.
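The OCR step at the core of record digitisation can be sketched in a few lines; the snippet assumes the pytesseract wrapper around Tesseract, with English and Hindi language packs installed, and the file names are illustrative.

```python
# Record-digitisation sketch (assumes pytesseract + Tesseract with 'eng' and
# 'hin' language data installed; file names are illustrative).
from PIL import Image
import pytesseract

page = Image.open("scanned_order_page1.png")
text = pytesseract.image_to_string(page, lang="eng+hin")  # OCR the scanned page

with open("order_page1.txt", "w", encoding="utf-8") as f:
    f.write(text)  # searchable text preserves the record even if the paper is lost
print(f"Recovered {len(text.split())} words from the scan")
```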
Judicial Digitalisation in India: Institutional Initiatives and Infrastructural Advancements
For centuries, towering bundles of courtroom files stood as dusty monuments to knowledge: sacred, chaotic, and accessible to a select few. But in 2025, the physical boundaries of the traditional courtroom have blurred, and the Indian government is actively transforming the legal system. The e-Courts Mission Mode Project is a flagship initiative that uses Information and Communication Technology (ICT) to modernise and advance the Indian judiciary. This effort, led by the Department of Justice, Government of India, is being carried out in close coordination with the Supreme Court of India's e-Committee. As a news report notes, the Supreme Court held 7.5 lakh hearings through video conferencing between 2020 and 2024, as stated by the Ministry of Law and Justice in response to a query in the Rajya Sabha. Technological tools such as the Supreme Court Vidhik Anuvaad Software (SUVAS), the Case Information Software (CIS), and the Supreme Court Portal for Assistance in Court's Efficiency (SUPACE) were established to make all pertinent case facts easily available. In another move, the Supreme Court Registry, in close coordination with IIT Madras, has created and deployed AI- and ML-based technologies integrated with the Registry's electronic filing software. This underscores that cybersecurity and digital infrastructure are no longer administrative add-ons but essential building blocks for judicial transparency, efficiency, and resilience.
E-Governance and Integrity: The Judiciary in Transition
The United Nations recognises the fundamentals of the judiciary's well-being and how corruption, like water to rust, taints not a single judge but the perception of the whole institution. This threat is recognised by the United Nations Convention against Corruption (UNCAC), particularly Article 11, which urges the protection of the judiciary's independence and integrity. Digitisation cannot operate in a vacuum, but it acts as a structural antidote to corruption by embedding transparency into the fabric of justice delivery: automated registry systems, e-filing, and real-time access to case data drastically reduce discretionary power and the potential for behind-the-scenes manipulation. However, digital systems are only as ethical as the people who design, maintain, and oversee them, and they bring their own limitations.
Conclusion: CyberPeace and the Future of Ethical Digital Justice
The potential of digitalisation resides not just in efficiency but also in equity, as India’s judiciary balances tradition and change. A robust democracy, where justice is lit by code rather than hidden under files, is built on a foundation of an open, accessible, and technologically advanced court. This change is not risk-free, though. Secure justice must also be a component of digital justice. The very values that digitisation seeks to preserve are at risk from algorithmic opacity, data breaches, and insecure technologies.
Our vision is not just of a digitalised court system but of a digitally just society, one where judicial data is protected, legal processes are democratised, and innovation upholds constitutionalism. Therefore, as a step forward, CyberPeace resolves to support AI upskilling for legal professionals, advocate for secure-by-design court infrastructure, and facilitate dialogue between technologists and judicial actors to build trust in the digital justice ecosystem. CyberPeace is dedicated to cyber transparency, privacy protection, and ethical AI.
References
- https://www.un.org/en/observances/judicial-well-being
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2106239
- https://www.barandbench.com/view-point/facilitating-legal-access-digitalization-of-supreme-court-high-court-records
- https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2085127
- https://www.medianama.com/2024/12/223-supreme-court-seven-lakh-video-conferences-four-year-rajya-sabha/