#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a genuine aerial shot of Mount Kailash, showcasing the natural beauty of the hallowed mountain. It circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. The advanced generative techniques used give the video a realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual cues, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we ran the video through Hive Moderation, a deepfake detection tool, to determine whether it is AI-generated or real. The tool classified it as AI-generated.
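For readers who want to automate similar checks, the sketch below shows how a video file might be submitted to a detection service programmatically. The endpoint URL, authentication header, and response field are hypothetical placeholders, not Hive Moderation's actual API; consult the vendor's documentation before integrating.

```python
# Hypothetical sketch of programmatic AI-content detection.
# NOTE: the endpoint, header, and response field below are placeholder
# assumptions and do NOT reflect Hive Moderation's real API.
import requests

API_URL = "https://example-detector.invalid/v1/analyze"  # placeholder
API_KEY = "YOUR_API_KEY"                                  # placeholder

def ai_generated_score(video_path: str) -> float:
    """Upload a video and return a hypothetical 0-1 'AI-generated' score."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()["ai_generated_score"]  # hypothetical field name

print(ai_generated_score("kailash_viral.mp4"))
```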

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
Introduction:
The Federal Bureau of Investigation (FBI) is a threat-focused, intelligence-driven agency with both law enforcement and intelligence responsibilities. It has the authority and duty to investigate specific offences entrusted to it and to offer other law enforcement agencies cooperative services, including fingerprint identification, laboratory examinations, and training. The FBI also gathers, shares, and analyzes intelligence, both to support its own investigations and those of its partners and to better understand and combat the security threats facing the United States.
The FBI’s Internet Crime Complaint Center (IC3) performs the following functions in combating cybercrime:
- Collection: Internet crime victims can report incidents and notify the relevant authorities of potential illicit Internet behavior using the IC3. Law enforcement frequently advises and directs victims to use www.ic3.gov to submit a complaint.
- Analysis: The IC3 reviews and analyzes the data that users submit via its website to identify emerging threats and trends.
- Public Awareness: The website posts public service announcements, industry alerts, and other publications outlining specific scams, raising awareness of Internet crimes and how to stay protected against them.
- Referrals: The IC3 compiles relevant complaints to create referrals, which are sent to national, international, local, and state law enforcement agencies for possible investigation. If law enforcement conducts an investigation and finds evidence of a crime, the offender may face legal repercussions.
Alarming increase in cyber crime cases:
The statistics in the recently released 2022 Internet Crime Report by the FBI's Internet Crime Complaint Center (IC3) paint a concerning picture of cybercrime in the United States. The IC3 received 39,416 extortion complaints in 2022, up slightly from 39,360 in 2021.
FBI officials emphasize the growing scope and sophistication of cyber-enabled crimes, which come from around the world. They highlight the importance of reporting incidents to IC3 and stress the role of law enforcement and private-sector partnerships.
About the Internet Crime Complaint Center (IC3):
IC3 was established in May 2000 by the FBI to receive complaints related to internet crimes.
It has received over 7.3 million complaints since its inception, averaging around 651,800 complaints per year over the last five years. IC3's mission is to provide the public with a reliable reporting mechanism for suspected cyber-enabled criminal activity and to collaborate with law enforcement and industry partners.
The FBI encourages the public to regularly review the consumer and industry alerts published by IC3. Victims of internet crime are urged to submit a complaint to IC3 and can also file a complaint on behalf of another person. These statistics underscore the ever-evolving and expanding threat of cybercrime and the importance of vigilance and reporting in combating this growing challenge.
What is sextortion?
Sextortion is the use or threatened use of a sexual image or video of another person without that person's consent, typically obtained through online encounters or social media websites or applications. The perpetrator extorts money or sexual favours from the victim by threatening to distribute the picture or video to the victim's friends, acquaintances, spouse, partner, or co-workers, or to release it into the public domain.
As an online crime, sextortion occurs when a bad actor coerces a young person into creating or sharing a sexual image or video of themselves and then uses it to extract something from them, such as other sexual images, money, or sexual favours. Reports highlight that more and more children are being blackmailed in this way, though sextortion can also happen to adults. Sextortion can also take place when perpetrators take pictures from a victim's social media account and convert them into sexually explicit content, either by morphing the images or by misusing deepfake technologies.
Sextortion in the age of AI and advanced technologies:
AI and deepfake technology make sextortion even more dangerous and pernicious. A perpetrator can now produce a high-quality deepfake that convincingly shows a victim engaged in explicit acts, even if the person has never done any such thing.
Legal Measures available in cases of sextortion:
In India, cybercrime is governed primarily by the Indian Penal Code (IPC) and the Information Technology Act, 2000 (IT Act), which together address offences such as hacking, identity theft, the publication of obscene material online, and sextortion. The IT Act also covers various aspects of electronic governance and e-commerce, defining such offences and prescribing punishments for them.
Recently, the Digital Personal Data Protection Act, 2023 was enacted by the Indian Government to protect the digital personal data of individuals. These laws collectively establish the legal framework for cybersecurity and cybercrime prevention in India. Victims are urged to report the crime to local law enforcement and its cybercrime divisions. Law enforcement will investigate reported sextortion cases and take appropriate legal action.
How to stay protected from evolving sextortion threats: Best Practices:
- Report the crime to a law enforcement agency and to the relevant social media platform or Internet service provider.
- Enable two-step verification as an extra layer of protection.
- Keep your laptop webcam covered when not in use.
- Stay protected against malware and phishing attacks.
- Protect the personal information on your social media accounts, and monitor those accounts for any suspicious activity. Also set and regularly review the privacy settings of your social media accounts.
Conclusion:
Sextortion cases have increased in recent times. Knowing the risks, being aware of the applicable rules and regulations, and following best practices will help prevent such crimes and reduce the chance of being victimized. It is important to spread awareness about these growing cybercrimes, empower people to report them, and provide support to victims. Let us unite to fight such cybercrimes and make the internet and digital space safer for everyone.
References:
- https://www.ic3.gov/Media/PDF/AnnualReport/2022_IC3ElderFraudReport.pdf
- https://octillolaw.com/insights/fbi-ic3-releases-2022-internet-crime-report/
- https://www.iafci.org/app_themes/docs/Federal%20Agency/2022_IC3Report.pdf

Introduction: Why These Amendments Have Been Suggested
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic media and deepfakes. These reforms stem from the pressing need to govern risks in the digital ecosystem, rather than from routine rule-making.
The Emergence of the Digital Menace
Generative AI tools have made it possible in recent years to produce highly realistic images, videos, audio, and text. Such synthetic media have been abused to depict people in situations they were never in or making statements they never made. The generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400 billion by 2031. Tight regulatory controls are therefore necessary to curb the prevalence of harm in the Indian digital space.
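As a quick sanity check on these figures, compounding at the quoted rate lets one back out the implied 2025 base. The snippet below assumes the 37.57% CAGR compounds annually over the six years from 2025 to 2031; the result is illustrative, not a figure from the cited report.

```python
# Back out the implied 2025 market size from the quoted projection
# (assumption: 37.57% CAGR compounding annually, 2025 -> 2031).
cagr = 0.3757
years = 6
target_2031 = 400.0  # US$ billions, per the cited Statista projection

implied_2025 = target_2031 / (1 + cagr) ** years
print(f"Implied 2025 market size: ~US${implied_2025:.0f} bn")  # ~US$59 bn
```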
The Legal and Institutional Gap
The IT Rules, 2021 did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation, and violation of privacy, intermediaries were under no explicit obligation with respect to synthetic media. This left a loophole in enforcement, particularly since AI-generated content could evade the older moderation regime. These amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India's constitutional and digital-ecosystem needs.
II. Explanation of the Amendments
The 2025 amendments introduce five distinct changes to the current IT Rules framework, addressing various areas of synthetic media regulation.
A. Definitional Clarification: Introducing “Synthetically Generated Information”
Rule 2(1)(wa) Amendment:
The amendments provide a comprehensive definition of “synthetically generated information” as information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that makes it appear reasonably authentic or genuine. This definition is intentionally broad: it covers not only deepfakes in the strict sense but any synthetic media that has undergone algorithmic manipulation so as to carry a semblance of authenticity.
Expansion of Legal Scope:
Rule 2(1A) further clarifies that any reference to information in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), must be understood to include synthetically generated information. This is a pivotal interpretative safeguard: intermediaries cannot claim that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of events that actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Rule 3(1)(b) Amendment: Safe Harbour Clarification:
The amendments add a proviso to Rule 3(1)(b) clarifying that an intermediary's removal or disabling of access to synthetically generated information (or any information falling within the specified categories), done in good faith as part of reasonable efforts or upon receipt of a complaint, shall not be treated as a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is particularly relevant because it shields intermediaries from liability when they moderate synthetic content before any court ruling or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable Synthetic Content Creation
The amendments establish a new due-diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down.
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must be:
- Visibly displayed or made audible in a prominent manner on or within the synthetically generated information.
- Cover at least 10% of the surface of the visual display or, for audio content, play during the first 10% of its duration (see the sketch after this list).
- Make it immediately identifiable that the information is synthetically generated information created, generated, modified, or altered using the intermediary's computer resource.
- The intermediary must not enable the modification, suppression, or removal of such label, permanent unique metadata, or identifier, by whatever name called.
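As a concrete illustration of the 10% coverage rule for still images, the sketch below overlays a full-width banner whose height is one tenth of the image height, which covers exactly 10% of the surface area. The label text, placement, and styling are illustrative assumptions; the Rules prescribe only prominence and minimum coverage, not a specific design, and audio or video content would need an analogous treatment over the first 10% of its duration.

```python
# Illustrative 10%-coverage label for a still image, using Pillow.
# The banner geometry and wording are assumptions for demonstration;
# the Rules do not mandate this particular design.
from PIL import Image, ImageDraw

def add_synthetic_label(path_in: str, path_out: str) -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    banner_h = max(1, h // 10)  # full-width banner = 10% of image area
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4),
              "AI-GENERATED CONTENT", fill=(255, 255, 255))
    img.save(path_out)

add_synthetic_label("frame.png", "frame_labelled.png")
```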
D. Significant Social Media Intermediaries: Pre-Publication Verification Obligations
The amendments introduce, under Rule 4(1A), a three-step verification mechanism for Significant Social Media Intermediaries (SSMIs) that enable the display, uploading, or publishing of content on their computer resource: before such display, uploading, or publication, the following three steps must be completed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically generated. This places the initial burden on users.
Step 2 - Technical Verification: To check the accuracy of the user's declaration, SSMIs must deploy reasonable technical measures, such as automated tools or other suitable mechanisms. This duty is contextual, based on the nature, format, and source of the content; it recognises that not every type of content can be verified to the same standard, without allowing intermediaries to escape the obligation altogether.
Step 3 - Prominent Labelling: Where the synthetic origin is confirmed by the user declaration or by technical verification, SSMIs must display a notice or label prominently so that users see it before publication.
The amendments also create a stronger accountability regime: an intermediary is deemed to have failed its due diligence where it is established that it knowingly permitted, promoted, or otherwise failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element, so intermediaries cannot hide behind claims of accident where non-compliance was knowingly tolerated.
An explanatory clause makes it clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations and must not publish synthetic content without an adequate declaration or label. This removes any ambiguity about the intermediary's role with respect to verifying declarations.
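A minimal sketch of how the three-step flow might be wired together is shown below. The type and function names (UserPost, detect_synthetic, attach_label) are hypothetical; a real SSMI would substitute its own classifier and labelling UI for the placeholders.

```python
# Hypothetical sketch of the Rule 4(1A) three-step flow.
from dataclasses import dataclass

@dataclass
class UserPost:
    content: bytes
    declared_synthetic: bool  # Step 1: the user's own declaration

def detect_synthetic(content: bytes) -> bool:
    """Step 2: placeholder for the platform's technical verification,
    e.g. an automated classifier. Always returns False in this sketch."""
    return False

def attach_label(post: UserPost) -> UserPost:
    """Step 3: attach a prominent synthetic-media notice before display."""
    # A real platform would render a visible label here.
    return post

def publish(post: UserPost) -> UserPost:
    # Content is labelled if the user declared it synthetic (Step 1)
    # or the platform's own check flags it (Step 2).
    is_synthetic = post.declared_synthetic or detect_synthetic(post.content)
    return attach_label(post) if is_synthetic else post
```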
III. Attributes of the Amendment Framework
- Precision in Balancing Innovation and Accountability
The amendments commendably steer between two extreme regulatory postures, neither prohibiting synthetic media nor letting it run unchecked. They recognise the legitimate uses of synthetic media in entertainment, education, research, and artistic expression, adopting transparency and traceability mandates that preserve innovation while ensuring accountability.
- Explicit Intermediary Liability with a Knowledge Standard
Rule 4(1A) introduces a highly significant deeming rule: where an intermediary knowingly permits or fails to act on synthetic content in violation of the rules, it is deemed to have failed its due-diligence obligations. This provision closes the wilful-blindness loophole, under which intermediaries might otherwise argue that they were unaware of violations. The scienter standard encourages material investment in detection tools and moderation mechanisms, while still protecting platforms that maintain sound systems even when those tools occasionally fail to catch violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of “synthetically generated information” and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the confusion of the previous regulatory framework. Instead of forcing readers through conflicting case law or regulatory direction, the amendments set out specific definitional limits. The deliberately broad formulation (artificially or algorithmically created, generated, modified, or altered) ensures that the framework cannot be evaded by semantic games over what counts as true synthetic content versus a minor algorithmic alteration.
- Liability Protection that Encourages Proactive Moderation
The safe-harbour clarification in the Rule 3(1)(b) amendment explicitly protects intermediaries that voluntarily remove synthetic content without waiting for a court order or government notification. This is an important incentive scheme that prompts platforms to implement sound self-regulation measures. Without such protection, platforms might rationally adopt a passive compliance posture, deleting content only under pressure from an external authority; the clarification instead encourages proactive action, making platforms more effective at keeping users safe from dangerous synthetic media.
IV. Conclusion
The proposed 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules offer a structured, transparent, and accountable response to the rising challenge of synthetic media and deepfakes. They address long-standing regulatory and interpretative gaps concerning what counts as synthetically generated information, the scope of intermediary liability, and mandatory labelling and metadata requirements. The safe-harbour protection encourages proactive moderation, and the scienter-based liability rule prevents intermediaries from escaping liability when they knowingly tolerate non-compliance. The pre-publication verification duty for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, improve clarity through precise definitions, promote responsible platform conduct, and position India among the emerging standards in synthetic media regulation. Together, they strengthen the trustworthiness, user protection, and transparency of India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide

Executive Summary
A video widely circulated on social media claims to show a confrontation between a Zee News journalist and Chief of Army Staff (COAS) General Upendra Dwivedi during the Indian Army’s Annual Press Briefing 2026. The video alleges that General Dwivedi made sensitive remarks regarding ‘Operation Sindoor’, including claims that the operation was still ongoing and that diplomatic intervention by US President Donald Trump had restricted India’s military response. Several social media users shared the clip while questioning the Indian Army’s operational decisions and demanding accountability over the alleged remarks. CyberPeace concludes that the viral video claiming to show a discussion between a Zee News journalist and Chief of Army Staff General Upendra Dwivedi on ‘Operation Sindoor’ is misleading and digitally manipulated. Although the visuals were sourced from the Indian Army’s Annual Press Briefing 2026, the audio was artificially created and added later to misinform viewers. The Army Chief did not make any remarks regarding diplomatic interference or limitations on military action during the briefing.
Claim:
An X (formerly Twitter) user, Abbas Chandio (@AbbasChandio__), shared the video on January 14, asserting that it showed a Zee News journalist questioning the Army Chief about the status and outcomes of ‘Operation Sindoor’ during a recent press conference. In the clip, the journalist is purportedly heard challenging the Army Chief over his earlier statement that the operation was “still ongoing,” while the COAS is allegedly heard responding that diplomatic intervention during the conflict limited the Army’s ability to pursue further military action. Here is the link and archive link to the post, along with a screenshot.
Fact Check:
A reverse image search directed us to an extended version of the footage uploaded on the official YouTube channel of India Today. The original video was identified as coverage of the Indian Army’s Annual Press Conference 2026, held on January 13 in New Delhi and addressed by COAS General Upendra Dwivedi. Upon reviewing the original press briefing footage, CyberPeace found no instance of a Zee News journalist questioning the Army Chief about ‘Operation Sindoor’, nor any mention of the statements attributed to General Dwivedi in the viral clip.
In the authentic footage, journalist Anuvesh Rath was seen raising questions related to defence procurement and modernization, not military operations or diplomatic interventions. Here is the link to the original video, along with a screenshot.

To further verify the claim, CyberPeace extracted the audio track from the viral video and analysed it using the AI-based voice detection tool Aurigin. The analysis revealed that the voice heard in the clip was artificially generated, indicating the use of synthetic or manipulated audio. This confirmed that while genuine visuals from the Army’s official press briefing were used, a fabricated audio track had been overlaid to falsely attribute controversial statements to the Army Chief and a Zee News journalist.
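For readers who want to replicate the first step of this workflow, the sketch below extracts a video's audio track with ffmpeg, a widely available command-line tool; the subsequent voice-analysis step is left as a placeholder because Aurigin's programmatic interface is not documented here.

```python
# Extract the audio track from a video for voice analysis.
# ffmpeg and its flags are real; the analysis step is a placeholder.
import subprocess

def extract_audio(video_path: str, wav_path: str) -> None:
    """Strip the video stream and save the audio as 16-bit PCM WAV."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path,
         "-vn", "-acodec", "pcm_s16le", wav_path],
        check=True,
    )

extract_audio("viral_clip.mp4", "viral_clip.wav")
# The extracted WAV would then be submitted to a voice-detection tool
# for synthetic-speech analysis.
```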

Conclusion
CyberPeace concludes that the viral video claiming to show a discussion between a Zee News journalist and Chief of Army Staff General Upendra Dwivedi on ‘Operation Sindoor’ is misleading and digitally manipulated. Although the visuals were sourced from the Indian Army’s Annual Press Briefing 2026, the audio was artificially created and added later to misinform viewers. The Army Chief did not make any remarks regarding diplomatic interference or limitations on military action during the briefing. The video is a clear case of digital manipulation and misinformation, aimed at creating confusion and casting doubt on the Indian Army’s official position.