#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, as if exposing viewers to the natural beauty of the hallowed mountain. It circulated widely on social media, with users presenting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. The advanced digital techniques used give the video a realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. The tool identified it as AI-generated.
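Detection tools of this kind typically return a confidence score per class rather than a flat yes/no verdict. As a purely hypothetical sketch (the response shape, field name, and 0.9 threshold below are illustrative assumptions, not Hive Moderation's actual API), the final decision step can be pictured like this:

```python
# Hypothetical sketch: turning a deepfake detector's confidence score
# into a verdict. The response shape and the 0.9 threshold are
# illustrative assumptions, not Hive Moderation's actual API.
def classify_media(response: dict, threshold: float = 0.9) -> str:
    """Label media as AI-generated when the detector's confidence
    for an 'ai_generated' class meets the threshold."""
    score = response.get("ai_generated", 0.0)
    return "AI-generated" if score >= threshold else "Likely real"

print(classify_media({"ai_generated": 0.98}))  # -> AI-generated
print(classify_media({"ai_generated": 0.20}))  # -> Likely real
```

The threshold choice matters: set too low, real footage gets flagged; set too high, convincing fakes slip through, which is why fact-checkers pair such tools with reverse image searches and source verification.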

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Risk Management
The Manufacturing Profile prioritises and informs cybersecurity activities based on the company's risk management processes. It supports periodic risk assessments and the validation of business drivers, helping manufacturers select areas of focus for security activities that reflect their desired outcomes. Managing cybersecurity risk requires a thorough understanding of the business drivers and security considerations specific to the manufacturing system and its environment. Because every organisation faces different risks and uses ICS and IT differently, implementations of the Profile will vary.
Companies are already adopting industry guidelines and cybersecurity standards, which the Manufacturing Profile is intended to supplement, not replace. Manufacturers can identify the operations critical to key supply chains and prioritise expenditures to maximise the impact of each dollar spent. The Profile's primary objective is to reduce and better manage cybersecurity risk. Neither the Cybersecurity Framework nor the Profile is a one-size-fits-all approach to managing security risk for critical infrastructure.
Manufacturers will continue to face unique risks given their different threats, vulnerabilities, and risk tolerances, so the ways in which they adopt security practices will also vary.
Key Cybersecurity Functions: Identify, Protect, Detect, Respond, and Recover
- Identify
Develop the organisational understanding needed to manage cybersecurity risk to systems, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organisation to focus its efforts consistently with its risk management strategy and business needs. Outcome categories within this Function include Asset Management, Business Environment, Governance, Risk Assessment, and Risk Management Strategy.
- Protect
Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services. The activities in the Protect Function support the ability to limit or contain the impact of a potential cybersecurity event. Outcome categories within this Function include Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures, Maintenance, and Protective Technology.
- Detect
Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event. The activities in the Detect Function enable timely discovery of cybersecurity events. Outcome categories within this Function include Anomalies and Events, Security Continuous Monitoring, and Detection Processes.
- Respond
Develop and implement the appropriate activities to act on a detected cybersecurity event. The activities in the Respond Function support the ability to contain the impact of a potential cybersecurity incident. Outcome categories within this Function include Response Planning, Communications, Analysis, Mitigation, and Improvements.
- Recover
Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services impaired by a cybersecurity event. The activities in the Recover Function support timely recovery to normal operations, reducing the impact of a cybersecurity event. Outcome categories within this Function include Recovery Planning, Improvements, and Communications.
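The five Functions and their outcome categories described above can be sketched as a simple lookup table. This is an illustrative mapping for discussion, not code from the NIST documents; the category names follow the descriptions in this article:

```python
# Illustrative sketch: the five Cybersecurity Framework Functions and
# example outcome categories, as described in the sections above.
CSF_FUNCTIONS = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy"],
    "Protect": ["Access Control", "Awareness and Training", "Data Security",
                "Information Protection Processes and Procedures",
                "Maintenance", "Protective Technology"],
    "Detect": ["Anomalies and Events", "Security Continuous Monitoring",
               "Detection Processes"],
    "Respond": ["Response Planning", "Communications", "Analysis",
                "Mitigation", "Improvements"],
    "Recover": ["Recovery Planning", "Improvements", "Communications"],
}

def functions_for_category(category: str) -> list:
    """Return every Function whose outcome categories include `category`."""
    return [fn for fn, cats in CSF_FUNCTIONS.items() if category in cats]

# Some categories recur across Functions, e.g. Communications matters
# both while responding to an incident and while recovering from it.
print(functions_for_category("Communications"))  # -> ['Respond', 'Recover']
```

A mapping like this is one way an organisation might tag its security activities against the Framework when prioritising them, as the Profile recommends.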
Conclusion
Viewed in the context of risk management, the Manufacturing Profile offers manufacturers a strategic way to deal with an ever-changing cybersecurity threat landscape. It guides the prioritisation of protective activities by recognising specific business drivers and aligning with corporate goals. The Profile complements established industry guidelines and cybersecurity standards by accounting for the differences in vulnerabilities and organisational nuances among manufacturers, and it emphasises a tailored strategy, acknowledging that every business has unique risks and weaknesses.
The Framework's core Functions, Identify, Protect, Detect, Respond, and Recover, serve as a thorough roadmap, supporting a proactive and adaptable approach to cybersecurity. The Profile's ultimate goal is to make risk management more effective, recognising that cybersecurity is a constantly shifting and evolving challenge for the manufacturing sector.
References
- https://csrc.nist.gov/news/2020/cybersecurity-framework-v1-1-manufacturing-profile
- https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8183r1.pdf
- https://mysecuritymarketplace.com/reports/cybersecurity-framework-version-1-1-manufacturing-profile/

Introduction
The advent of AI-driven deepfake technology has made it possible to create explicit counterfeit videos for sextortion, and the use of artificial intelligence to fabricate fake explicit images and videos for this purpose has increased alarmingly.
What are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness of the risks of AI sextortion, to develop detection and prevention tools, and to strengthen the legal frameworks addressing these emerging threats to individuals’ privacy, safety, and well-being.
Technological solutions are needed: advanced AI-based detection tools should be developed and deployed to identify and flag AI-generated deepfake content on platforms and services, in collaboration with the technology companies that will integrate them.
Collaboration with social media platforms is also needed. Platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
Legal frameworks must also be strengthened to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Raising awareness of AI sextortion helps individuals recognise the risks and take precautions.
Early Detection and Reporting: Advanced detection tools can identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Stronger legal frameworks criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
Technological Solutions: Tools and algorithms that detect and remove AI-generated explicit content make it harder for perpetrators to carry out their schemes.
International Cooperation: Collaboration among law enforcement agencies, governments, and technology companies helps combat AI sextortion globally.
Support for Victims: Comprehensive support services, including counselling and legal assistance, help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-manipulated explicit content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this exposure to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing AI sextortion requires a multi-faceted approach: technological advancements in detection and prevention, legal frameworks that hold offenders accountable, awareness of the risks, and collaboration among technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect individuals’ well-being online.
Introduction to Grooming
The term grooming is believed to have been first used by a group of investigators in the 1970s to describe an offender’s patterns of seduction towards a child. The term eventually evolved, became common among law enforcement agencies, and has now replaced ‘seduction’ for this behavioural pattern. At its core, grooming refers to an adult offender conditioning a child to further the offender’s motives. In its most common sense, it refers to the sexual victimisation of children, whereby an adult befriends a minor and builds an emotional connection in order to sexually abuse, exploit, and even traffic the victim. The advent of technology has shifted perpetrators from offline physical proximity to the internet, enabling groomers to integrate themselves completely into a victim’s life by maintaining consistent contact. Notably, while grooming can occur both online and offline, groomers often establish contact online before moving the ‘relationship’ offline to commit sexual offences.
Underreporting and Vulnerability of Teenagers
Given the elusive nature of the crime, cyber grooming remains one of the most underreported crimes, as victims are often unaware of what has happened or too embarrassed to share their experiences. Teenagers are particularly susceptible to cyber grooming since they not only have more access to the internet but also engage in more online risk-taking behaviours, such as posting sensitive and personal pictures. Studies indicate that individuals aged 18 to 23 often lack awareness of the grooming process; they frequently enter relationships with groomers without recognising the deceptive and manipulative tactics employed, mistakenly perceiving these relationships as consensual rather than abusive.
Rise of Cyber Grooming incidents after COVID-19 pandemic
There has been an uptick in cyber grooming after the COVID-19 pandemic, whereby an adult poses as a teenager or a child and befriends a minor on child-friendly websites or social media outlets and builds an emotional connection with the victim. The main goal is to obtain intimate and personal data of the minor, often in the form of sexual chats, pictures or videos, to threaten and coerce them into continuing such acts. The grooming process usually begins with seemingly harmless inquiries about the minor's age, interests, and family background. Over time, these questions gradually shift to topics concerning sexual experiences and desires. Research and data indicate that online grooming is primarily carried out by males, who frequently choose their victims based on attractiveness, ease of access, and the ability to exploit the minor's vulnerabilities.
Beyond Sexual Exploitation: Ideological and Commercial Grooming
Grooming is not confined to sexual exploitation. The rise of technology has expanded the influence of extremist ideological groups, granting them access to children who can be coerced into adopting their beliefs. This phenomenon, known as ideological grooming, presents significant personal, social, national security, and law enforcement challenges. Additionally, a new trend, termed digital commercial grooming, involves malicious actors manipulating minors into procuring and using drugs. Violent extremists are improving their online recruitment strategies, learning from each other to target and recruit supporters more effectively and are constantly leveraging children’s vulnerabilities to reinforce anti-government ideologies.
Policy Recommendations to Combat Cyber Grooming
To address the pervasive issue of cyber grooming and child recruitment by extremist groups, several policy recommendations can be implemented. Social media and online platforms should enhance their monitoring and reporting systems to swiftly detect and remove grooming behaviours. This includes investing in AI technologies for content moderation and employing dedicated teams to respond to reports promptly. Additionally, collaborative efforts with cybersecurity experts and child psychologists to develop educational campaigns and tools that teach children about online safety and identify grooming tactics should be mandated. Legislation should also be strengthened to include provisions specifically addressing cyber grooming, ensuring strict penalties for offenders and protections for victims. In this regard, international cooperation among law enforcement agencies and tech companies is essential to create a unified approach to tackling cross-border online threats to children's safety and security.
References:
- Lanning, Kenneth “The Evolution of Grooming: Concept and Term”, Journal of Interpersonal Violence, 2018, Vol. 33 (1) 5-16. https://www.nationalcac.org/wp-content/uploads/2019/05/The-evolution-of-grooming-Concept-and-term.pdf
- Jonie Chiu, Ethel Quayle, “Understanding online grooming: An interpretative phenomenological analysis of adolescents' offline meetings with adult perpetrators”, Child Abuse & Neglect, Volume 128, 2022, 105600, ISSN 0145-2134,https://doi.org/10.1016/j.chiabu.2022.105600. https://www.sciencedirect.com/science/article/pii/S014521342200120X
- “Online child sexual exploitation and abuse”, Sharing Electronic Resources and Laws on Crime (SHERLOC), United Nations Office on Drugs and Crime. https://sherloc.unodc.org/cld/en/education/tertiary/cybercrime/module-12/key-issues/online-child-sexual-exploitation-and-abuse.html
- Mehrotra, Karishma, “In the pandemic, more Indian children are falling victim to online grooming for sexual exploitation” The Scroll.in, 18 September 2021. https://scroll.in/magazine/1005389/in-the-pandemic-more-indian-children-are-falling-victim-to-online-grooming-for-sexual-exploitation
- Lorenzo-Dus, Nuria, “Digital Grooming: Discourses of Manipulation and Cyber-Crime”, 18 December 2022 https://academic.oup.com/book/45362
- Strategic orientations on a coordinated EU approach to prevention of radicalisation in 2022-2023 https://home-affairs.ec.europa.eu/system/files/2022-03/2022-2023%20Strategic%20orientations%20on%20a%20coordinated%20EU%20approach%20to%20prevention%20of%20radicalisation_en.pdf
- “Handbook on Children Recruited and Exploited by Terrorist and Violent Extremist Groups: The Role of the Justice System”, United Nations Office on Drugs and Crime, 2017. https://www.unodc.org/documents/justice-and-prison-reform/Child-Victims/Handbook_on_Children_Recruited_and_Exploited_by_Terrorist_and_Violent_Extremist_Groups_the_Role_of_the_Justice_System.E.pdf