#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed it for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, as if exposing viewers to the natural beauty of the hallowed mountain. It was widely circulated on social media, with users crediting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques gave the video its realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. It was found to be AI-generated.
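As a rough illustration of how an automated check like this can be interpreted, the sketch below applies a confidence threshold to a detector's per-class scores. The response shape, field names, and threshold are assumptions for illustration only; they are not Hive Moderation's actual API schema.

```python
# Hypothetical sketch: interpreting an AI-content detection result.
# The dict keys ("ai_generated", "not_ai_generated") and the 0.9
# threshold are illustrative assumptions, not a real API schema.

def classify_media(detection: dict, threshold: float = 0.9) -> str:
    """Label media as 'ai_generated', 'real', or 'inconclusive'
    based on a detector's per-class confidence scores."""
    ai_score = detection.get("ai_generated", 0.0)
    real_score = detection.get("not_ai_generated", 0.0)
    if ai_score >= threshold:
        return "ai_generated"
    if real_score >= threshold:
        return "real"
    return "inconclusive"

# Example: a decisive score pattern, as reported for the viral video
print(classify_media({"ai_generated": 0.99, "not_ai_generated": 0.01}))
# ai_generated
```

The point of the threshold is that a borderline score should prompt human review rather than a confident verdict either way.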

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Introduction
As technology develops, scammers' methods and plans for deceiving people evolve with it, and AI-driven voice cloning schemes are one such issue that has recently come to light. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate someone for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one can protect oneself from them.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film, produced with artificial intelligence (AI), that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something they never did in audio or visual form; deepfake audio impersonating the American President is a well-known example. Deep voice impersonation technology can be used maliciously, such as in voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice frauds use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice, conveying a false sense of familiarity and urgency to earn the victim’s trust and raise the likelihood that they will fall for the hoax. Such frauds are increasing in frequency as deepfake technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or organisations and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. False audio evidence can also be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deepfake voice con artists is making unsolicited phone calls while pretending to be bank personnel or government officials.
- Listen closely to the voice
- Pay special attention to the voice of anyone who phones you claiming to be someone you know. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right could be a deepfake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity in order to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign that they are a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you have fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as voice fraud and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions as the technology develops and becomes harder to detect. Ongoing research is also needed to develop efficient techniques for identifying and controlling the risks this technology poses. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
Social media platforms serve as an ideal breeding ground for cybercrime. A new fraud called ‘WhatsApp Pink’ has emerged, promising new features and an improved UI. Several law enforcement and government agencies have already issued stern warnings against the app, which is used to hack mobile phones and steal personal information.
What is a pink WhatsApp Scam?
WhatsApp is on a roll with new features, but the messaging app is also experiencing a rise in a new type of scam. The WhatsApp Pink scam, as it is known, is gaining steam. Police and government organisations in several places, including Mumbai, Kerala, and Karnataka, have warned about the scam. A North Region cybercrime wing tweet warned, “WHATSAPP PINK – A Red Alert for Android Users.” The government’s cybersecurity organisations have warned about the rise in Pink WhatsApp scams.
Scammers and hackers target WhatsApp users with fake messages via the network in this scam. According to reports, the message contains a link directing users to download WhatsApp Pink, a bogus messaging program. According to sources, scammers are targeting many people with the promise that the next version will have a better interface and additional features.
The application also steals critical financial information such as OTP, bank account information, and contact information. When people open the link, harmful software is installed on their mobile phones, and scammers get access to the phones. The user may even lose access to their phone by downloading the app.
According to the advisory, “The news about ‘New Pink Look WhatsApp with extra features’ recently circulating among WhatsApp users is a hoax that can lead to hacking of your mobile through malicious software. It is not uncommon for fraudsters to devise new tricks and methods to entice naive consumers into falling into their trap and committing cyber fraud. It is the users’ responsibility to be Aware, Alert, and Attentive to these types of frauds in order to be safe and secure in the digital world.”
The link that is present in the message, according to a notice from the police, is a phishing effort. By clicking the link, the user runs the risk of having their device compromised, which might allow scammers to steal their device information or use it without their permission.
As the Mumbai Police have warned, users who click the Pink WhatsApp link run the risk of suffering serious consequences, including financial loss, identity theft, spam attacks, unauthorised access to contact information and saved images, and even total loss of control over their mobile devices.
Guidelines against the Scam
- If a user has installed the fake WhatsApp, the authorities have instructed that they uninstall it immediately by going into the mobile settings, selecting WhatsApp with the pink logo in Apps, and then uninstalling it.
- Users have been advised to exercise caution when clicking links from untrustworthy websites unless they have previously verified their legitimacy. Users are advised to only download and update software from reliable sites such as the official Google Play Store, the iOS App Store, and so on.
- Users have been told not to forward any links or messages to other people until those have been properly authenticated or verified.
- To avoid misuse, users are advised not to disclose any personal or financial information, including passwords, login information, and credit or debit card information, to anybody online. Furthermore, in order to defend themselves against fraud attempts, users are encouraged to stay up to date on the most recent news and changes in order to be informed and careful about cybercriminal activities.
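One practical way to apply the "verify before you click" advice above is to check whether a download link actually points to an official store domain. The sketch below is a minimal illustration, not an exhaustive defence: the allowlist is an assumption, and real phishing protection requires far more than domain matching.

```python
# Minimal sketch: checking whether a download link points to an
# official app-store domain. The allowlist is illustrative only;
# domain matching alone is NOT sufficient protection against phishing.
from urllib.parse import urlparse

OFFICIAL_STORES = {"play.google.com", "apps.apple.com"}

def is_official_store_link(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host exactly
    matches an allowlisted store domain (lookalike hosts such as
    'play.google.com.evil.example' are rejected)."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in OFFICIAL_STORES

print(is_official_store_link(
    "https://play.google.com/store/apps/details?id=com.whatsapp"))  # True
print(is_official_store_link(
    "http://play.google.com.evil.example/whatsapp-pink.apk"))       # False
```

Note the exact-match check on the hostname: scam links often embed an official-looking domain as a subdomain prefix, which is precisely the trick the exact comparison defeats.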
Why do Scammers target WhatsApp?
WhatsApp is the world’s most popular messaging service, so a scammer can reach considerably more prospective victims through it than through any other tool; a scammer’s targets are almost certainly using WhatsApp. And if all their victims are in one app, the criminal can easily manage their activities.
Conclusion
WhatsApp users may significantly reduce their chances of falling victim to the Pink WhatsApp scam by following the guidelines issued in the advisory. WhatsApp has become a primary target for scams because such a large share of the population uses it, making it easy for scammers to steal critical personal information and reach further victims through the platform; the Pink WhatsApp scam is exactly such a case.

Introduction
India and Bangladesh have adopted proactive approaches, focusing on advancing cyber capacity building in the region. Bangladeshi and Indian cybersecurity experts have emphasised the importance of continuous technology training to protect the digital space from growing cyber-attacks and threats. They call for greater collaboration to share knowledge and expertise in cyber resilience, network vulnerability, and cyber risk assessment. The Cyber Maitree 2023 event held in Dhaka aimed to enrich and build capacity to counter cyber-attacks and threats. The senior director of India's Computer Emergency Response Team acknowledged the growing dependence on cyberspace and the need for increased preparedness as critical infrastructures, energy systems, banks, and utilities are connected to the internet. More recently, the Bangladesh Cyber Security Summit 2024, organised by Grameenphone, was held in Dhaka on 5 March 2024. Such collaborative dialogues serve as a shining example of cooperation between the governments of Bangladesh and India, providing a platform for knowledge sharing, capacity building, and international cooperation in cyber security.
Cyber Maitree held in 2023
In 2023, India and Bangladesh held 'Cyber Maitree 2023', an initiative hosted by the ICT Division of the Bangladeshi Government, to address cybersecurity challenges in a rapidly globalising world characterised by digitisation. The event, whose name translates to "Cyber Friendship", was an interface for cybersecurity experts and aspirants from both nations, creating an avenue for extensive training, practical exercises, and a dynamic exchange of information. It underscored the importance of bolstering digital safety as both nations grapple with the rapid digitisation of the world.
India-Bangladesh joint efforts aim to fortify cyber resilience, pinpoint potential network vulnerabilities, bolster rigorous risk assessments, and illuminate the landscape of cyber threats. It encompasses various sectors, including cybersecurity, artificial intelligence, ICT, and IT-driven human resource expansion. The growing camaraderie between India and Bangladesh has been evident through strategic engagements, such as the India-Bangladesh Startup Bridge and the establishment of 12 High Tech Parks in Bangladesh.
Highlights of the India-Bangladesh MoUs for Cyber Security Cooperation
In 2017, India and Bangladesh signed a Memorandum of Understanding (MoU) focused on cyber security cooperation.
In 2022, both nations crafted a Memorandum of Understanding (MoU) highlighting collaboration in spheres such as e-governance, e-public service delivery, research, and development. A separate agreement was also inked focusing on mutual information sharing pertaining to cyber-attacks and incidents. Two further MoUs concern the railway sector: the first aims to provide a framework for training Bangladesh Railway employees at Indian Railways' training institutes, including field visits, with the Indian Railways coordinating with officials from the Ministry of Railways, Government of Bangladesh to improve training facilities in Bangladesh; the second focuses on collaboration in IT systems for the Bangladesh Railway, under which the Ministry of Railways, Government of India, will offer IT solutions for passenger ticketing, freight operations, train enquiry systems, asset management digitisation, and HR and finance infrastructure. Together, the MoUs aim to strengthen the bond of friendship between India and Bangladesh and promote cooperation across these sectors.
Way Ahead
Zunaid Ahmed Palak, State Minister for Posts, Telecom and ICT, Bangladesh, has announced that Bangladesh and India will collaborate to ensure the safety of the cyber world. The two countries are expected to sign a final agreement within the next three to six months. He stressed the importance of attracting investments in the postal, telecommunication, and IT sectors. He also highlighted the strong ties between Bangladesh and India. He also announced that 12 high-tech parks will be constructed in Bangladesh with an Indian Line of Credit, starting operation by 2025. He further referred to the Indian Cyber Emergency Response Team (CERT), and said "We are very much enthusiastic in fighting against the cyber attacks and crimes as the team is now working with us".
Bangladesh Cyber Security Summit 2024
The Bangladesh Cyber Security Summit 2024, organised by Grameenphone, was held in Dhaka on 5th March 2024, focusing on cybersecurity issues and opportunities, fostering collaboration between government, private organisations, industry experts, and sponsors investing in Bangladesh's digital future.
Conclusion
India and Bangladesh share a common vision for a secure digital future, focusing on cybersecurity collaboration to safeguard shared aspirations and empower both nations to thrive in the digital age. Both countries must continue to fortify their digital defences, leveraging expertise, innovation, and collaboration to secure their interconnected futures. Collaborative relations in Information and Communication Technology and Cyber Security will strengthen digital defence and establish cyber resilience.
References:
- https://caribbeannewsglobal.com/bangladesh-and-india-call-for-more-cyber-security-training/?amp=1
- https://www.indianewsnetwork.com/en/20231005/bangladesh-and-india-strengthen-ties-through-cyber-maitree-2023
- https://www.bssnews.net/news-flash/150763
- https://digibanglatech.news/english/bangladesh-english/125439/
- https://www.mea.gov.in/Portal/LegalTreatiesDoc/BG17B3024.pdf
- https://www.tbsnews.net/tech/ict/bangladesh-india-work-together-cyber-security-palak-712182