#FactCheck - Viral Video of ‘Hatha Yogi’ Meditating on Snowy Mountain Is AI-Generated
A video claiming to show a Hatha yogi performing extreme penance on a snow-covered mountain amid strong icy winds is going viral on social media. In the clip, the ascetic is seen balancing on one hand in a yoga posture, while users portray the visuals as a rare example of extraordinary spiritual endurance in harsh climatic conditions.
However, an investigation by the CyberPeace Foundation has found the claim to be false. Our analysis confirms that the viral video is AI-generated and does not depict a real person or an actual event.
Claim:
An Instagram user shared the video with the caption:
“Hatha yogi, what kind of soil are these people made of?” The post suggests that the visuals show a real yogi performing intense meditation on a frozen mountain.
- https://www.instagram.com/reels/DTK32TvDGIJ/
- (Archive link as provided) https://perma.cc/H84M-MGXZ

Fact Check:
To verify the claim, the CyberPeace Foundation conducted a detailed examination of the viral video. No credible or verifiable news reports were found to support the claim that such an incident ever occurred.
The viral video was analysed using the AI detection tool Deepfake-O-Meter. Its AVSRDD (2025) module flagged the video as AI-generated, confirming that the visuals were digitally created and not recorded in real life.
Multiple indicators within the footage, such as unnatural body balance, environmental inconsistencies, and visual artifacts, are consistent with AI-generated content.

Conclusion
The viral video purportedly showing a yogi meditating on a frozen mountain is not real. It has been created using artificial intelligence and is being circulated on social media with a misleading narrative. Users are advised to exercise caution and verify content before sharing such sensational claims.

Introduction
The courts in India have repeatedly emphasised the importance of “enhanced customer protection” and “limited liability” for banking customers. The rationale behind such imperatives is to guard against exploitation by institutions that are equipped with all the means to manipulate customers. India, with its looming financial literacy gaps, needs to curb any manipulation on the part of banking institutions. Various studies have highlighted this gap in recent times; for example, according to the National Centre for Financial Education, only 27% of Indians are financially literate, much less than the global average of 42%. With only 19% of millennials exhibiting sufficient financial awareness yet expressing high trust in their own financial skills, the issue is worrisome. The increasing number of financial frauds only intensifies the problem.
Zero Liability in Cyber Frauds: Regulatory Safeguards for Digital Banking Customers
In light of the growing emphasis on financial inclusion and consumer protection, and in response to the recent rise in complaints regarding unauthorised debits from customer accounts and cards, the framework for assessing customer liability in such cases has been re-evaluated. The RBI’s circular dated July 6, 2017 titled “Customer Protection-Limited Liability of Customers in Unauthorised Electronic Banking Transactions” serves as the foundation for regulatory protections for Indian customers of digital banking. A clear and organised framework for determining customer accountability is outlined in the circular, which acknowledges the exponential increase in electronic transactions and related scams. It assigns proportional obligations for unauthorised transactions resulting from system-level breaches, customer carelessness, and the bank’s contributory negligence. Most importantly, it establishes the zero liability principle, which protects customers from monetary losses when the bank or another part of the system is at fault and the customer promptly reports the breach.
This directive’s sophisticated approach to consumer protection is what makes it unique. It requires banks to set up strong fraud prevention systems, proactive alerting systems, and round-the-clock reporting channels. Furthermore, it significantly alters the power dynamics between financial institutions and customers by placing the onus of demonstrating customer negligence entirely on the bank. The circular emphasises prompt reversal of funds to impacted customers and requires banks to implement Board-approved policies on customer liability and grievance redress. As a result, it is a consumer rights charter rather than just a compliance document, promoting confidence and financial accountability in India’s digital banking sector.
Judicial Endorsement in Reinforcing the Zero Liability Principle
In Suresh Chandra Negi & Anr. v. Bank of Baroda & Ors. (Writ (C) No. 24192 of 2022), the Allahabad High Court reaffirmed that the burden of proving customer negligence rests firmly on the banking institution, thereby reinforcing the zero liability principle in cases of unauthorised electronic banking transactions. The Division Bench emphasised the regulatory requirement that banks provide adequate proof before assigning blame to customers, citing Clause 12 of the RBI’s circular dated July 6, 2017, Customer Protection—Limited Liability of Customers in Unauthorised Electronic Banking Transactions. In a similar scenario, the Bombay HC held that a customer is entitled to zero liability when an unauthorised transaction occurs due to a third-party breach, where the deficiency lies neither with the bank nor the customer, provided the fraud is promptly reported.
The zero liability principle, as envisaged under Clause 8 of the RBI circular, has emerged as a cornerstone of consumer protection in India’s digital banking ecosystem.
Another landmark judgment that has brought this principle to the fore in addressing banking frauds is Hare Ram Singh vs RBI & Ors. (W.P. (C) 13497/2022), decided by the Delhi HC, which marks an important legal turning point in the development of the zero liability principle under the RBI’s 2017 framework. The court reiterated the need to evaluate customer diligence in light of new fraud tactics like phishing and vishing, holding the State Bank of India (SBI) liable for a cyber fraud incident even though the transactions were authenticated by OTP. The ruling made it clear that when complex social engineering or technical manipulation is used, banks remain accountable even if they relied solely on OTP validation. The legal protection provided to victims of unauthorised electronic banking transactions is strengthened by the court’s emphasis on the bank bearing the burden of proof in accordance with RBI standards.
Importantly, this ruling lays the full burden of securing digital banking systems on financial organisations and supports the judiciary’s increasing acknowledgement of the digital asymmetry between banks and consumers. It emphasises that prompt consumer reporting, banks’ failure to disclose important credentials, and their own operational errors must all be taken into consideration when determining culpability. As a result, this decision establishes a strong precedent that will increase consumer confidence, promote systemic advancements in digital risk management, and better integrate the zero liability standard into Indian digital banking law. In a time when cyber vulnerabilities are growing, it acts as a beacon for financial accountability.
Conclusion
The Zero Liability Principle serves as a vital safety net for customers navigating an increasingly intricate and precarious financial environment in a time when digital transactions are the foundation of contemporary banking. In addition to codifying strong safeguards against unauthorized electronic transactions, the RBI’s 2017 framework rebalanced the fiduciary relationship by putting financial institutions squarely in charge. Through significant rulings, the courts have upheld this protective culture and emphasised that banks, not the victims of cybercrime, bear the burden of proof.
It will be crucial to implement these principles consistently, review them frequently, and raise public awareness as India transitions to a more digital economy. Ensuring that consumers are not only protected but also empowered must become more than just a policy on paper.
References
- https://www.business-standard.com/content/specials/making-money-vs-managing-money-india-s-critical-financial-literacy-gap-125021900786_1.html
- https://www.livelaw.in/high-court/allahabad-high-court/allahabad-high-court-ruling-bank-liability-unauthorized-electronic-transaction-and-customer-fault-297962
- https://www.mondaq.com/india/white-collar-crime-anti-corruption-fraud/1635616/cyber-law-series-2-issue-10-the-zero-liability-principle-in-cyber-fraud-hare-ram-singh-v-reserve-bank-of-india-ors-case

Introduction
The Telecommunications Act, 2023 was passed by Parliament in December 2023, receiving the President's assent and being published in the official Gazette on December 24, 2023. The Act is divided into 11 chapters, 62 sections and 3 schedules. Sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61 and 62 took effect on June 26, 2024.
On July 04, 2024, the Centre issued a Gazette Notification under which sections 6-8, 48 and 59(b) came into effect from July 05, 2024. The Act aims to amend and consolidate the laws relating to telecommunication services, telecommunication networks and spectrum assignment, and it repeals colonial-era legislation such as the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The new law was enacted in response to rapid technological advancements in the telecom sector.
On Thursday, 18 July 2024, the telecom minister, while launching the theme of the India Mobile Congress (IMC), announced that all rules and provisions of the new Telecom Act would be notified within the next 180 days, making the Act fully operational.
Important definitions under Telecommunications Act, 2023
- Authorisation: Section 2(d) defines “authorisation” as a permission, by whatever name called, granted under the Act for (i) providing telecommunication services; (ii) establishing, operating, maintaining or expanding telecommunication networks; or (iii) possessing radio equipment.
- Telecommunication: Section 2(p) defines “telecommunication” as the transmission, emission or reception of any messages, by wire, radio, optical or other electro-magnetic systems, whether or not such messages have been subjected to rearrangement, computation or other processes by any means in the course of their transmission, emission or reception.
- Telecommunication Network: Section 2(s) defines “telecommunication network” as a system or series of systems of telecommunication equipment or infrastructure, including terrestrial or satellite networks or submarine networks, or a combination of such networks, used or intended to be used for providing telecommunication services, but does not include such telecommunication equipment as notified by the Central Government.
- Telecommunication Service: Section 2(t) defines “telecommunication service” as any service for telecommunication.
Measures for Cyber Security for the Telecommunication Network/Services
Section 22 of the Telecommunication Act, 2023 talks about the protection of telecommunication networks and telecommunication services. The section specifies that the centre may provide rules to ensure the cybersecurity of telecommunication networks and telecommunication services. Such measures may include the collection, analysis and dissemination of traffic data that is generated, transmitted, received or stored in telecommunication networks. ‘Traffic data’ can include any data generated, transmitted, received, or stored in telecommunication networks – such as type, duration, or time of a telecommunication.
Section 22 further empowers the central government to declare any telecommunication network, or part thereof, as Critical Telecommunication Infrastructure. It may further provide for standards, security practices, upgradation requirements and procedures to be implemented for such Critical Telecommunication Infrastructure.
CyberPeace Policy Wing Outlook:
The Telecommunications Act, 2023 marks a significant change and growth in the telecom sector by providing a robust regulatory framework, encouraging research and development, promoting infrastructure development, and introducing measures for consumer protection. The Central Government is empowered to authorise persons to (a) provide telecommunication services, (b) establish, operate, maintain, or expand telecommunication networks, or (c) possess radio equipment. Section 48 of the Act provides that no person shall possess or use any equipment that blocks telecommunication unless permitted by the Central Government.
The Central Government will protect users through measures such as requiring the prior consent of users for receiving specified messages, preparing and maintaining a 'Do Not Disturb' register to ensure users do not receive specified messages or classes of specified messages without prior consent, and providing a mechanism for users to report any malware or specified messages received. The authorised entity providing telecommunication services will also be required to create an online platform for users to raise grievances pertaining to telecommunication services.
In certain limited circumstances, such as national security measures, disaster management and public safety, the Act empowers the Government to take temporary possession of telecom services or networks from an authorised entity, and to direct the interception or disclosure of messages, with safeguards to be specified through rulemaking. This means the government gains additional controls in emergencies to ensure security and public order. However, this has to be balanced with appropriate measures protecting individual privacy rights and avoiding unintended arbitrary actions.
Taking into account the cyber security in the telecommunication sector, the government is empowered under the act to introduce standards for cyber security for telecommunication services and telecommunication networks; and encryption and data processing in telecommunication.
The Act also promotes research and development and pilot projects under the Digital Bharat Nidhi. It further adopts a digital-by-design approach by introducing online dispute resolution and other frameworks. Overall, the government's approach is noteworthy: it recognises the need to update colonial-era legislation in view of technological advancements and to keep pace with the digital revolution in the telecommunication sector.
References:
- The Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:88cb04ff-2cce-4663-ad41-88aafc81a416
- https://pib.gov.in/PressReleasePage.aspx?PRID=2031057
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2027941
- https://economictimes.indiatimes.com/industry/telecom/telecom-news/new-telecom-act-will-be-notified-in-180-days-bsnl-4g-rollout-is-monitored-on-a-daily-basis-scindia/articleshow/111851845.cms?from=mdr
- https://www.azbpartners.com/wp-content/uploads/2024/06/Update-Staggered-Enforcement-of-Telecommunications-Act-2023.pdf
- https://telecom.economictimes.indiatimes.com/blog/analysing-the-impact-of-telecommunications-act-2023-on-digital-india-mission/111828226
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio and images. As a result, it has become increasingly difficult to differentiate between genuine, altered or fake content, as these AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women and children, with India ranking 6th among the nations most affected by the misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual's reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
Deepfake videos rely on algorithms that iteratively refine the fake content: the generator is built and trained to produce the desired output, and the process is repeated many times, allowing the generator to improve the content until it appears realistic and nearly flawless. Deepfake videos are created using specific approaches, some of which are:
- Lip syncing: This is the most common technique used in deepfakes. A voice recording is mapped onto the video so that the person appearing in it seems to say something they never originally said.
- Audio deepfake: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person’s voice based on their vocal patterns, refining the output until the desired result is generated.
- Deepfakes have become so serious that the technology could be used by bad actors or cyber-terrorist squads to advance their geopolitical agendas. In the past few years, the number of cases has roughly doubled, targeting children, women and popular faces.
- Greater risk: In the last few years, cases of deepfakes have risen sharply; according to a survey, by the end of 2022, 96% of such cases targeted women and children.
- Every 60 seconds, a deepfake pornographic video is created. It is now quicker and more affordable than ever: creating one can take less than 25 minutes using just one clean face image.
- The connection to deepfakes is that people can become targets of "revenge porn" without the publisher ever possessing sexually explicit photographs or films of the victim. Such material may be made using any number of random pictures collected from the internet to the same effect. This means that almost everyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
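The generator-and-refinement loop described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not any production deepfake system: a one-dimensional GAN in NumPy, where a "generator" learns to mimic a target distribution by repeatedly trying to fool a logistic "discriminator". Real deepfake tools apply the same adversarial idea with deep convolutional networks over images and audio.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only): the generator maps noise to a
# number, the discriminator scores how "real" a number looks, and each
# round of updates refines the generator's output -- the iterative
# refinement loop used (at far larger scale) by deepfake systems.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameters: fake = g_w * noise + g_b
g_w, g_b = rng.normal(size=1), 0.0
# Discriminator parameters: P(real) = sigmoid(d_w * x + d_b)
d_w, d_b = rng.normal(size=1), 0.0

real_mean = 4.0   # the "real" data distribution the generator tries to mimic
lr = 0.05

for step in range(2000):
    z = rng.normal()                      # random noise input
    fake = g_w[0] * z + g_b               # generator's current attempt
    real = rng.normal(loc=real_mean, scale=0.5)

    # Discriminator update: push real scores toward 1, fake scores toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w[0] * x + d_b)
        grad = p - label                  # gradient of cross-entropy loss
        d_w[0] -= lr * grad * x
        d_b -= lr * grad

    # Generator update: adjust output so the discriminator labels it "real".
    p = sigmoid(d_w[0] * fake + d_b)
    grad = (p - 1.0) * d_w[0]             # backprop through the discriminator
    g_w[0] -= lr * grad * z
    g_b -= lr * grad

# After many refinement rounds, generated samples drift toward the real data.
samples = [g_w[0] * rng.normal() + g_b for _ in range(500)]
print("mean of generated samples:", float(np.mean(samples)))
```

Each pass makes the fake slightly harder to distinguish from the real data, which is why detection tools must look for subtle residual artifacts rather than obvious flaws.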
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have three security implications: at the international level, strategic deepfakes have the potential to destroy a precarious peace; at the national level, deepfakes may be used to unduly influence elections and the political process, or to discredit the opposition, which is a national security concern; and at the personal level, deepfakes can be used to harass, defame and intimidate individuals. Women suffer disproportionately from exposure to sexually explicit content compared to men, and they are more frequently threatened.
Policy Consideration
Given that cases of deepfakes against women and children are on the rise, policymakers need to be aware that deepfakes are also utilised for a variety of legitimate objectives, including artistic and satirical works. Therefore, simply banning deepfakes is not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer. Deepfake technology is advanced, and its misuse is a crime.
What are the existing rules to combat deepfakes?
It is worth noting that both the IT Act of 2000 and the IT Rules of 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible. Failure to follow these guidelines can result in up to three years in jail and a Rs 1 lakh fine. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be taken down within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be removed within 36 hours of being published online. Recently, the government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes.
Conclusion
It is important to foster ethical and responsible consumption of technology. This can only be achieved by creating standards for both the creators and users, educating individuals about content limits, and providing information. Internet-based platforms should also devise techniques to deter the uploading of inappropriate information. We can reduce the negative and misleading impacts of deepfakes by collaborating and ensuring technology can be used in a better manner.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/