#FactCheck - AI-Generated Video Falsely Shared as ‘Multi-Hooded Snake’ Sighting in Vrindavan
A video being widely shared on social media shows devotees seated in a boat, visibly stunned, as a massive multi-hooded snake resembling the mythical Sheshnag suddenly emerges from the middle of a water body.
The video captures visible panic and astonishment among the devotees. Social media users are sharing the clip claiming that it is from Vrindavan, with some portraying the sight as a divine or supernatural event. However, research conducted by the CyberPeace Foundation found the viral claim to be false: the video is not authentic and was generated using artificial intelligence (AI).
Claim
On January 17, 2026, a user shared the viral video on Instagram with a caption suggesting that God had appeared again in the age of Kalyug. The post claims that a terrifying video from Vrindavan has surfaced in which devotees sitting in a boat were shocked to see a massive multi-hooded snake emerge from the water. The caption further states that devotees hailed the creature as an incarnation of Sheshnag or Vasuki Nag, raised religious slogans, and questioned whether the sight represents a divine sign. (The link to the post and its archive link are given below.)
- https://www.instagram.com/reel/DTngN9FkoX0/?igsh=MTZvdTN1enI2NnFydA%3D%3D
- https://archive.ph/UuAqB
Fact Check:
Upon closely examining the viral video, we suspected that it might be AI-generated. To verify this, we scanned the video with the AI detection tool Sightengine, which indicated that the visuals are 99 per cent likely to be AI-generated.

In the next step of the research, the video was analysed using another AI detection tool, Hive Moderation, which assessed the video as 62 per cent likely to be AI-generated.
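For readers who want to reproduce this kind of check, the workflow is simple: submit the media to a detection service and read back its AI-generated score. The sketch below is a minimal Python example under stated assumptions: the Sightengine endpoint, the `genai` model name, and the `type.ai_generated` response field reflect our understanding of that vendor's public image API and should be verified against its current documentation, and the credentials and file name are placeholders.

```python
# Minimal sketch: submitting a frame to an AI-content detection API.
# Endpoint, model name, and response fields are assumptions based on
# Sightengine's public image API; check the vendor docs before use.
import requests

API_USER = "your_api_user"      # placeholder credentials
API_SECRET = "your_api_secret"

def ai_generated_score(image_path: str) -> float:
    """Return the service's 0-1 'AI-generated' score for one frame."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": f},
            data={
                "models": "genai",
                "api_user": API_USER,
                "api_secret": API_SECRET,
            },
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json().get("type", {}).get("ai_generated", 0.0)

# A score of 0.99 corresponds to the "99 per cent AI-generated"
# reading reported above.
print(ai_generated_score("frame_from_viral_video.jpg"))
```

Hive Moderation exposes a comparable service; only the endpoint, authentication, and response schema differ.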

Conclusion
Our research clearly establishes that the viral video claiming to show a multi-hooded snake in Vrindavan is not real. The clip has been created using artificial intelligence and is being falsely shared on social media with religious and sensational claims.
Related Blogs

The Ghibli trend has been in the news for the past couple of weeks for multiple reasons, both good and bad. Nostalgia for the art form has led many people to overlook what the trend means for the artists who painstakingly create it. Generative platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading artists' rights. The artistic community has reached the point of questioning the value of work that this software can recreate in seconds, with no thought for what it is doing. OpenAI's update to ChatGPT makes it simple for users to turn anything from personal pictures to movie scenes into illustrations in the style of Hayao Miyazaki. These advances in AI-generated art, including Ghibli-style imagery, may raise critical questions about artistic integrity, intellectual property, and data privacy risks.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers and enabled more people to engage in artistic expression. AI allows people to create appealing art regardless of their artistic capabilities. The ChatGPT update has, in effect, democratized art creation: the user's skill no longer matters. It makes art accessible, efficient, and open to creative experimentation for many.
Unfortunately, these developments also pose challenges to original artistry and the labour of human creators. The concern is not just that AI may replace artists, but also the potential misuse it enables, including unauthorized replication of distinct styles and deepfake applications. Used ethically, AI can enhance artistic processes: it can assist with repetitive tasks, improve efficiency, and enable creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue human artists' work, create copyright issues, and even pose data privacy risks. Models trained on art without authorization can also be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe that AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts, producing distorted or unethical imagery that contradicts artistic intent and cultural values, further highlighting the need for human oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement. A further issue, one most users do not consider, is whether the personal content (personal pictures, in this case) uploaded to AI models puts their privacy at risk. A related concern is that AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art and that AI models are trained in a manner fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be institutionalised through ethical AI usage guidelines for platforms that host AI-generated art.
Conclusion: What Can Potentially Be Done?
AI should not replace human effort; it should ease it. We need to promote a balanced approach to AI that protects the integrity of artists while continuing to foster innovation, and we need to strengthen copyright laws to address AI-generated content. Labelling AI content and ensuring that it is disclosed as AI-generated is the first step. Furthermore, human artists whose work is used to train AI models should be fairly compensated. There is a growing need for global AI ethics guidelines to ensure transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industries to work collaboratively with regulators to ensure the responsible use of AI.
References
- https://medium.com/@haileyq/my-experience-with-studio-ghibli-style-ai-art-ethical-debates-in-the-gpt-4o-era-b84e5a24cb60
- https://www.bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement

Introduction
Rising consumer demand has resulted in a sharp increase in digital financing in India. The reputation of the digital lending sector has suffered as bad actors increasingly deploy illicit platforms such as fraudulent loan and trading apps. As millions of Indians download quick-loan applications to meet their financial needs, fraudulent apps expose them to cybercrimes, including financial fraud. Consumers need to be vigilant about dubious trading or loan applications, as bad actors frequently use illegitimate apps to trick victims with limited-period offers and pressure tactics.
Recently, the Indian Cyber Crime Coordination Centre (I4C)-led handle CyberDost issued a cybercrime alert against the ‘CashExpand-U’ finance assistant app, which has now been removed from the Google Play Store. The app was found to be associated with hostile foreign entities and facilitated small loans. Such loan apps are seldom credible and may compromise financial information.
Rising Cases of Fraudulent Loan Apps
The Finance Minister has stated that the government is constantly engaged with the Reserve Bank of India and other regulators and stakeholders to control fraudulent loan apps. In FY23, there were 1,062 complaints against such apps, the Finance Minister shared during a Lok Sabha session. Google removed around 134 fake apps from the Play Store in a single week in September 2023 after multiple complaints were registered against them. The Reserve Bank of India (RBI) also issued regulatory guidelines on digital lending in April 2023 to bring transparency to the digital lending space.
CyberPeace Policy Wing Advisory for Users
- Be Cautious of App Permissions
Fake lending apps collect data by fraudulently obtaining numerous app permissions from consumers and misusing them later. Users must manage their app permissions carefully and deny any unnecessary permissions, such as access to contacts, location, and photos, because fraudulent digital lenders use personal data to extort additional money even after loan repayment.
- Practice Due Diligence
Consumers must exercise care & caution before applying for a loan from digital lending platforms. Before applying for a loan or downloading any such apps, consumers must conduct due diligence by verifying the app's name, rating, reviews, physical address, and contact information. Always double-verify the paperwork before signing any agreement or contract. Always apply for loans from RBI-approved and compliant banking and financial services providers.
- Download from Official Sources
To avoid counterfeit apps, download lending apps only from official stores such as the Google Play Store or the Apple App Store, and avoid downloading apps from web links sent via SMS, email, or social media, even if shared by people you know.
- Be Sceptical of Too-Good-to-Be-True Offers
Be cautious of deals that seem too good to be true, such as hassle-free instant loans; they can be fraudulent. An offer that sounds too good to be true is a red flag, so always conduct your own research to verify the lender and avoid making hasty decisions.
- Reporting Mechanism
In case of a scam by such fraudulent apps, victims can file a complaint with the National Cyber Crime Reporting Portal or the Cyber Crime Helpline 1930, or contact the CyberPeace Helpline at +919570000066 or helpline@cyberpeace.net for assistance in reporting their cases.
Final Words
Illegitimate loan and trading apps have been raising concerns by defrauding innocent consumers seeking financial assistance. The Centre has recently warned against the CashExpand-U app, which has now been removed from the Google Play Store. Users are advised to exercise due care and caution while downloading loan apps and applying for loans to prevent potential scams. Keep up to date with news from the concerned authorities about common scams and fraudulent practices in the lending space, and stay safe in the online world.
References:
- https://www.livemint.com/news/beware-govt-issues-cybercrime-alert-against-loan-app-cashexpand-u-finance-assistant-11720338996430.html
- https://timesofindia.indiatimes.com/technology/tech-news/government-has-issued-an-important-warning-for-this-loan-app/articleshow/111541577.cms

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit videos and images, and there has been an alarming increase in their use for sextortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently there has been a significant increase in AI sextortion cases. Advances in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have grown sophisticated enough to produce seamless, realistic manipulations. At the same time, AI tools and resources have become more accessible, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
Technological solutions are needed: advanced AI-based detection tools must be developed and deployed to identify and flag AI-generated deepfake content on platforms and services, in collaboration with the technology companies that can integrate them.
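To illustrate what such a detection tool involves, the sketch below samples frames from a video and scores each with a binary real-vs-synthetic classifier. This is a minimal outline under stated assumptions, not a production detector: the ResNet-18 head used here is a placeholder that would need fine-tuning on labelled deepfake datasets before its scores meant anything.

```python
# Sketch of frame-level deepfake screening for a video file.
# Assumption: a binary classifier fine-tuned on real-vs-synthetic
# faces is available; an untrained ResNet-18 head stands in here.
import cv2                      # pip install opencv-python
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [real, synthetic]
model.eval()  # placeholder head: fine-tune on labelled deepfake data first

def synthetic_score(video_path: str, every_n: int = 30) -> float:
    """Average 'synthetic' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample one frame per `every_n`
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Platform-scale systems typically add face detection, temporal consistency checks, and artefact-specific models on top of this basic loop.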
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.