#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A video claiming to show a breathtaking aerial view of Mount Kailash, Tibet's sacred mountain, has gone viral, with viewers treating it as a rare real-life shot. We investigated its authenticity and analyzed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to be a real aerial shot of Mount Kailash, revealing the natural beauty of the hallowed mountain. It circulated widely on social media, with users crediting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. The advanced digital techniques used give the video a realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we ran the video through Hive Moderation, a deepfake detection tool, to determine whether it was AI-generated or real; the tool classified it as AI-generated.

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
Executive Summary:
Cyber scams continue to evolve, designed to attract and lure people through social networking sites and messaging services. Recently, a spate of messages has been circulating, claiming that TRAI is offering a '3-month free recharge with free voice calls and 200 GB of free 4G/5G data'. These messages carry the TRAI logo and attractive offers to trick users into revealing their personal details. This blog explains how this fake free-recharge scheme works, the methods it uses, and guidelines on avoiding such schemes. It emphasizes the importance of vigilance and verification when receiving any links, and the need to report suspicious activity and educate others to prevent identity theft and protect personal information.
Claim:
The circulating message makes an enticing offer: three months of free mobile recharge with unlimited voice calls and 200 GB of 4G/5G data, branded with the TRAI logo. The key characteristics of the false claim are:
- Official Branding: The TRAI logo is used as a deceptive facade of credibility.
- Unrealistic Offers: A free recharge for an extended, indefinite period is classic fraudster bait.
- Urgency and Exclusivity: The offer is time-limited to create urgency, pressuring the recipient to accept without verifying it.
The Deceptive Scheme:
Organized systematically, the fraudulent campaign usually proceeds in several steps, all of which aim at extracting the victim’s personal data. Here’s a breakdown of the scheme:
1. Initial Contact: The messages or calls reach users' inboxes or phone numbers through social media applications such as WhatsApp, or through text messages. The messages imply that the user has been chosen for a special offer from TRAI, which piques the user's interest.
2. Information Request: To claim the purported offer, users are directed to a website or asked to reply with personal details, including:
- Phone number
- State of residence
- SIM provider details
The scammers harvest this information, which can be used to commit identity theft or be sold to others on the shady part of the internet known as the 'Dark Web'.
3. Fake Confirmation: After the information is provided, a congratulatory message appears on the screen stating that the phone number is eligible for the offer. The user is then pressured to forward the message to many phone numbers via WhatsApp to receive the offer.
4. Pressure Tactics: The message often conveys time constraints or fear, psychologically pressuring users into handing over their information. For example, users are warned that if they do not 'act now', they will lose their mobile service.
Analyzing the Fraudulent Campaign
The TRAI fraudulent recharge scheme illustrates how social engineering is deployed in cybercrime. Here are some key aspects that characterize this campaign:
- Sophisticated Social Engineering
Scammers exploit people's confidence in official bodies such as TRAI. By using the official TRAI logo and official-sounding language, they can deceive even cautious users.
- Viral Spread
The user is pressured to share the message with friends and groups, an effective strategy for spreading the scam. It not only propagates the fraudulent message but also harvests the details of more people.
- Technical Analysis

- Domain Name: SGOFF[.]CYOU
- Registry Domain ID: D472308342-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-07-24T18:50:48.0Z
- Creation Date: 2024-07-19T18:48:44.0Z
- Registry Expiry Date: 2025-07-19T23:59:59.0Z
- Registrar: West263 International Limited
- Registrar IANA ID: 1915
- Registrant State/Province: Anhui
- Registrant Country: CN
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
- DNSSEC: unsigned
Cloudflare Inc. is used to mask the infrastructure behind the scam. TRAI's genuine website uses a long-established domain, whereas this URL was registered only days before the campaign began, which indicates that the link is a scam.
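One practical heuristic follows directly from the WHOIS record above: a domain registered only days before a campaign begins is a red flag. A minimal sketch in Python, using the creation date from the record (the 180-day threshold is an arbitrary assumption chosen for illustration, not an official guideline):

```python
from datetime import datetime, timezone

def domain_age_days(creation_date, now=None):
    """Parse a WHOIS-style creation date (e.g. '2024-07-19T18:48:44.0Z')
    and return the domain's age in whole days."""
    created = datetime.strptime(creation_date, "%Y-%m-%dT%H:%M:%S.%fZ")
    created = created.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - created).days

def looks_recently_registered(creation_date, threshold_days=180, now=None):
    """Flag domains younger than the (arbitrary) threshold as suspicious."""
    return domain_age_days(creation_date, now=now) < threshold_days

# The SGOFF[.]CYOU record above shows a creation date of 2024-07-19,
# only days before the messages began circulating.
checked_on = datetime(2024, 8, 1, tzinfo=timezone.utc)
print(looks_recently_registered("2024-07-19T18:48:44.0Z", now=checked_on))  # True
```

A check like this is only one signal; legitimate new sites exist, so domain age should be weighed alongside branding, hosting, and content before labelling a link fraudulent.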

The graph indicates that some of the communicated files and websites are malicious.
CyberPeace Advisory and Best Practice:
In light of the growing threat posed by such scams, the Research Wing of CyberPeace recommends the following best practices to help users protect themselves:
1. Verify Communications: Always visit the organization's official website or call its official customer-care numbers to verify any offer before acting on it.
2. Do Not Share Personal Information: No genuine organization asks for personal information through unsolicited calls or messages. Be cautious and do not provide details that could lead to identity theft.
3. Report Fraudulent Activity: If you receive suspicious calls or messages, report them to the National Cyber Crime Reporting Portal at www.cybercrime.gov.in or call 1930. Reporting such scams helps the authorities track and combat this fraud.
4. Educate Others: Raise awareness among friends and family by sharing information about such scams. Education helps prevent people from falling prey to fraudulent schemes.
5. Use Reliable Resources: Always refer to official sources or websites for any offers or promotions.
Conclusion:
The 3-month free recharge scheme bearing the TRAI logo is a fraudulent scam. TRAI has published no information about any such scheme, either officially or on its website. Though the offer looks attractive, it is deceptive: the scammers are trying to collect individuals' personal details. Before clicking any link, check the authenticity of the information, and report such incidents to spread awareness. Always stay safe and vigilant.

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration have their risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models like GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI models sometimes produce outputs that are not grounded in their training data, are incorrectly decoded by the transformer, or follow no identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon as an “AI hallucination”. It can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine learning models is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect those biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
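The sensitivity behind adversarial attacks can be illustrated with a toy example (the weights and inputs below are made up for demonstration, not taken from any real model): a hand-weighted linear classifier whose decision flips under a tiny perturbation of the input.

```python
# Toy illustration of adversarial sensitivity. A real attack perturbs
# inputs to a large model, but the principle is the same: a small,
# targeted change to the input can cross a decision boundary.
WEIGHTS = [1.0, -1.0]  # hypothetical, hand-picked weights

def classify(x):
    """Linear classifier: returns 1 if the weighted sum is positive, else 0."""
    score = sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1 if score > 0 else 0

original = [0.50, 0.51]   # score just below zero -> class 0
perturbed = [0.52, 0.51]  # a 0.02 tweak pushes the score above zero -> class 1

print(classify(original), classify(perturbed))  # 0 1
```

Inputs that sit close to a model's decision boundary are exactly the ones an attacker targets, which is why robustness testing probes small perturbations rather than obvious corruptions.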
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI's capabilities and limitations; transparent communication is key to this.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated material, such as misinformation; this fosters trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must proceed responsibly. Overregulation of AI can stifle innovation; being too lax, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
By fostering ethical AI development and enabling innovation, we can create a future in which AI benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

In 2023, PIB reported that up to 22% of young women in India are affected by Polycystic Ovarian Syndrome (PCOS). However, access to reliable information regarding the condition and its treatment remains a challenge. A study by the PGIMER Chandigarh conducted in 2021 revealed that approximately 37% of affected women rely on the internet as their primary source of information for PCOS. However, it can be difficult to distinguish credible medical advice from misleading or inaccurate information online since the internet and social media are rife with misinformation. The uptake of misinformation can significantly delay the diagnosis and treatment of medical conditions, jeopardizing health outcomes for all.
The PCOS Misinformation Ecosystem Online
PCOS is one of the most common disorders diagnosed in the female endocrine system, characterized by the swelling of ovaries and the formation of small cysts on their outer edges. This may lead to irregular menstruation, weight gain, hirsutism, possible infertility, poor mental health, and other symptoms. However, there is limited research on its causes, leaving most medical practitioners in India ill-equipped to manage the issue effectively and pushing women to seek alternate remedies from various sources.
This creates space for the proliferation of rumours, unverified cures, and superstitions on social media. For example, content on YouTube, Facebook, and Instagram may promote “miracle cures” like detox teas or restrictive diets, or viral myths claiming PCOS can be “cured” through extreme weight loss or herbal remedies. Such misinformation not only creates false hope for women but also delays treatment and may worsen symptoms.
How Tech Platforms Amplify Misinformation
- Engagement vs. Accuracy: Social media algorithms are designed to reward viral content, even if it’s misleading or incendiary since it generates advertisement revenue. Further, non-medical health influencers often dominate health conversations online and offer advice with promises of curing the condition.
- Lack of Verification: Although platforms like YouTube try to provide verified health-related videos through content shelves, and label unverified content, the sheer volume of content online means that a significant chunk of content escapes the net of content moderation.
- Cultural Context: In India, discussions around women’s health, especially reproductive health, are stigmatized, making social media the go-to source for private, albeit unreliable, information.
Way Forward
a. Regulating Health Content on Tech Platforms: Social media is a significant source of health information for millions who may otherwise lack access to affordable healthcare. Rather than rolling back content moderation practices, as seen recently, platforms must dedicate more resources to identifying and debunking misinformation, particularly health misinformation.
b. Public Awareness Campaigns: Governments and NGOs should run nationwide digital literacy campaigns to educate people on women’s health issues in vernacular languages, and utilize online platforms for culturally sensitive messaging that reaches rural and semi-urban populations. This is vital for countering the stigma and lack of awareness that enable misinformation to proliferate.
c. Empowering Healthcare Communication: Several studies suggest a widespread dissatisfaction among women in many parts of the world regarding the information and care they receive for PCOS. This is what drives them to social media for answers. Training PCOS specialists and healthcare workers to provide accurate details and counter misinformation during patient consultations can improve the communication gaps between healthcare professionals and patients.
d. Strengthening the Research for PCOS: The allocation of funding for research in PCOS is vital, especially in the face of its growing prevalence amongst Indian women. Academic and healthcare institutions must collaborate to produce culturally relevant, evidence-based interventions for PCOS. Information regarding this must be made available online since the internet is most often a primary source of information. An improvement in the research will inform improved communication, which will help reduce the trust deficit between women and healthcare professionals when it comes to women’s health concerns.
Conclusion
In India, the PCOS misinformation ecosystem is shaped by a mix of local and global factors such as health communication failures, cultural stigma, and tech platform design prioritizing engagement over accuracy. With millions of women turning to the internet for guidance regarding their conditions, they are increasingly vulnerable to unverified claims and pseudoscientific remedies which can lead to delayed diagnoses, ineffective treatments, and worsened health outcomes. The rising number of PCOS cases in the country warrants the bridging of health research and communications gaps so that women can be empowered with accurate, actionable information to make the best decisions regarding their health and well-being.
Sources
- https://pib.gov.in/PressReleasePage.aspx?PRID=1893279#:~:text=It%20is%20the%20most%20prevailing%20female%20endocrine,neuroendocrine%20system%2C%20sedentary%20lifestyle%2C%20diet%2C%20and%20obesity.
- https://www.thinkglobalhealth.org/article/india-unprepared-pcos-crisis?utm_source=chatgpt.com
- https://www.bbc.com/news/articles/ckgz2p0999yo
- https://pmc.ncbi.nlm.nih.gov/articles/PMC9092874/