#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A video is circulating virally, claiming to capture a breathtaking aerial view of Mount Kailash and apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analyzed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show a real aerial shot of Mount Kailash, presenting the natural beauty of the hallowed mountain. It was circulated widely on social media, with users crediting it as actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created with Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques give the video its realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we ran the video through Hive Moderation, a deepfake detection tool, to determine whether it is AI-generated or real. The tool flagged it as AI-generated.
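For readers who want to automate a similar first-pass check, the sketch below shows how a media file might be submitted to an AI-content detection service over REST. This is a minimal sketch only: the endpoint URL, the authentication header, and the response fields are illustrative assumptions rather than Hive Moderation's documented API, so consult the provider's documentation for the real contract.

```python
# Minimal sketch: submitting a media file to an AI-content detection
# service. ENDPOINT, the auth header format, and the response fields
# are assumptions for illustration, not a documented Hive API.
import requests

API_KEY = "YOUR_API_KEY"                                 # hypothetical credential
ENDPOINT = "https://api.example-detector.com/v1/detect"  # assumed URL

def check_media(path: str) -> dict:
    """Upload a media file and return the detector's JSON verdict."""
    with open(path, "rb") as fh:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"media": fh},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A single extracted frame is often enough for a first pass on video.
    print(check_media("kailash_frame.jpg"))
    # e.g. {"ai_generated": 0.97, "not_ai_generated": 0.03}
```

In practice, a verdict like this should complement, not replace, manual checks such as reverse image searches and scrutiny of lighting and environmental details.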

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (formerly Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).

Introduction
In a distressing incident that highlights the growing threat of cyber fraud, a software engineer in Bangalore fell victim to fraudsters who posed as police officials. These miscreants, operating under the guise of a fake courier service and law enforcement, employed a sophisticated scam to dupe unsuspecting individuals out of their hard-earned money. Unfortunately, this is not an isolated incident, as several cases of similar fraud have been reported recently in Bangalore and other cities. It is crucial for everyone to be aware of these scams and adopt preventive measures to protect themselves.
Bangalore Techie Falls Victim to ₹33 Lakh Scam
The software engineer received a call from someone claiming to be from FedEx courier service, informing him that a parcel sent in his name to Taiwan had been seized by the Mumbai police for containing illegal items. The call was then transferred to an impersonator posing as a Mumbai Deputy Commissioner of Police (DCP), who alleged that a money laundering case had been registered against him. The fraudsters then coerced him into joining a Skype call for verification purposes, during which they obtained his personal details, including bank account information.
Under the guise of verifying his credentials, the fraudsters manipulated him into transferring a significant amount of money to various accounts. They assured him that the funds would be returned after the completion of the procedure. However, once the money was transferred, the fraudsters disappeared, leaving the victim devastated and financially drained.
Best Practices to Stay Safe
- Be vigilant and skeptical: Maintain a healthy level of skepticism when receiving unsolicited calls or messages, especially if they involve sensitive information or financial matters. Be cautious of callers pressuring you to disclose personal details or engage in immediate financial transactions.
- Verify the caller’s authenticity: If someone claims to represent a legitimate organisation or law enforcement agency, independently verify their credentials. Look up the official contact details of the organisation or agency and reach out to them directly to confirm the authenticity of the communication.
- Never share sensitive information: Avoid sharing personal information, such as bank account details, passwords, or Aadhaar numbers, over the phone or through unfamiliar online platforms. Legitimate organisations will not ask for such information without proper authentication protocols.
- Use secure communication channels: When communicating sensitive information, prefer secure platforms or official channels that provide end-to-end encryption. Avoid switching to alternative platforms or applications suggested by unknown callers, as fraudsters can exploit these.
- Educate yourself and others: Stay informed about the latest cyber fraud techniques and scams prevalent in your region. Share this knowledge with family, friends, and colleagues to create awareness and prevent them from falling victim to similar schemes.
- Implement robust security measures: Keep your devices and software updated with the latest security patches. Utilize robust anti-virus software, firewalls, and spam filters to safeguard against malicious activities. Regularly review your financial statements and account activity to detect any unauthorized transactions promptly.
Conclusion:
The incident involving the Bangalore techie and other victims of cyber fraud highlights the importance of remaining vigilant and adopting preventive measures to safeguard oneself from such scams. It is disheartening to see individuals falling prey to impersonators who exploit their trust and manipulate them into sharing sensitive information. By staying informed, exercising caution, and following best practices, we can collectively minimize the risk and protect ourselves from these fraudulent activities. Remember, the best defense against cyber fraud is a well-informed and alert individual.

Introduction
As technology develops, voice cloning scams are one issue that has recently come to light. Scammers are keeping pace with AI, and their methods for deceiving and defrauding people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, trick a person into giving up crucial information, or impersonate them for illegal purposes. This article looks at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is a Deepfake?
A “deepfake” is fake or altered audio, video, or film, produced with artificial intelligence (AI), that passes for the real thing. The name combines “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something they never did in audio or visual form; a widely cited example is deepfake audio of the American President created with voice impersonation technology. Such technology can be used maliciously, for instance in deep-voice fraud or to disseminate false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A scammer may pose as a bank employee, a government official, or a friend or relative by using a deepfaked voice; the aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deepfakes?
Cybercriminals use AI voice deepfake technology to impersonate people or organisations and mislead victims into handing over private information, money, or system access. With it, they can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology is also employed in phishing attacks, where fraudsters craft audio recordings that impersonate genuine messages from organisations or people the victim trusts; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal data. Additionally, fabricated audio can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for deceiving and manipulating victims, and organisations and the general public alike must be informed of the risk and adopt appropriate safety measures.
How to spot voice deepfakes and avoid them
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. How can you protect yourself? Here are some guidelines to help you spot these scams and keep away from them:
- Steer clear of unsolicited calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If someone phones you claiming to be a person you know, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that does not seem right may indicate a deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, it is crucial to verify the caller’s identity. When in doubt, ask for their name, job title, and employer, and then do some research to confirm they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities, such as your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While it can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it can also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. As the technology develops and deepfakes become harder to detect and prevent, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research and the development of efficient techniques to identify and control the risks related to this technology are also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.

Introduction
A photo circulating on social media depicting modified tractors is being misrepresented as part of the 'Delhi Chalo' farmers' protest narrative. Amid the recent swirl of misinformation surrounding the protest, the photo, ostensibly showing a phalanx of modified tractors, has been making the rounds on social media platforms, falsely tethered to the ongoing demonstrations. The image, accompanied by a headline suggesting a mechanical metamorphosis to resist police barricades, was allegedly published by a news agency. However, beneath the surface of this viral phenomenon lies a more complex and fabricated reality.
The Movement
The 'Delhi Chalo' movement, a clarion call that resonated with thousands of farmers from the fertile plains of Punjab, the verdant fields of Haryana, and the sprawling expanses of Uttar Pradesh, has been a testament to the agrarian community's demand for assured crop prices and legal guarantees for the Minimum Support Price (MSP). The protest, which has seen the fortification of borders and chaos at the Punjab-Haryana border on February 13, 2024, has become a crucible for the farmers' unyielding spirit.
Yet, amidst this backdrop of civil demonstration and discourse, a nefarious narrative of misinformation has taken root. The viral image, which has been shared with the fervour of wildfire, was accompanied by a screenshot of an article allegedly published by the news agency. This article, dated February 11, 2024, quoted an anonymous official who claimed that intelligence agencies had alerted the police to the protesters' plans to outfit tractors with hydraulic tools. The implication was clear: these machines had been transformed into battering rams against the bulwark of law enforcement.
The Pursuit of Truth
However, the India TV Fact Check team, in their relentless pursuit of truth, unearthed that the viral photo of these so-called modified tractors is nothing but a chimerical creation, a figment of artificial intelligence. Visual discrepancies betrayed its AI-generated nature.
This is not the first time that misinformation has loomed over the farmers' protest. Previous instances, including a viral video of a modified tractor, have been debunked by the same fact-checking team. These efforts are a bulwark against the tide of false narratives that seek to muddy the waters of public understanding.
The claim that the photo depicted modified tractors intended for use in the ‘Delhi Chalo’ farmers' protest rally in Delhi on February 13, 2024, was a mirage.
The Fact Check
OpIndia, in their article, clarified that the photo used was a representative image created by AI and not a real photograph. To further scrutinize this viral photo, the HIVE AI detector tool was employed, indicating a 99.4% likelihood of the image being AI-generated. Thus, the claim made in the post was misleading.
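As a companion to such tool-assisted checks, the short sketch below shows one way a detector's confidence score could be turned into a fact-check label. It is a minimal sketch under stated assumptions: the `ai_generated` field name and the 0.9 threshold are hypothetical, and the 0.994 score simply mirrors the 99.4% figure reported above.

```python
# Minimal sketch: mapping a detector's confidence score to a
# fact-check label. The "ai_generated" field and the 0.9 threshold
# are illustrative assumptions, not part of any documented tool.
def label_image(scores: dict, threshold: float = 0.9) -> str:
    ai_score = scores.get("ai_generated", 0.0)
    if ai_score >= threshold:
        return f"Likely AI-generated ({ai_score:.1%} confidence)"
    if ai_score <= 1 - threshold:
        return "Likely authentic"
    return "Inconclusive: manual review needed"

# 0.994 mirrors the 99.4% result reported for the tractor photo.
print(label_image({"ai_generated": 0.994}))
# -> Likely AI-generated (99.4% confidence)
```

A threshold alone is not a verdict; borderline scores should always route to human review and corroborating sources.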
The viral photo claiming that farmers had modified their tractors to avoid tear gas shells and remove barricades put up by the police during the rally was a digital illusion. The internet has become a fertile ground for the rapid spread of misinformation, reaching millions in an instant. Social media, with its complex algorithms, amplifies this spread, as any interaction, even those intended to debunk false information, inadvertently increases its reach. This phenomenon is exacerbated by 'echo chambers,' where users are exposed to a homogenous stream of content that reinforces their pre-existing beliefs, making it difficult to encounter and consider alternative perspectives.
Conclusion
The viral image depicting modified tractors for the ‘Delhi Chalo’ farmers' protest rally was a digital fabrication, a testament to the power of AI in creating convincing yet false narratives. As we navigate the labyrinth of information in the digital era, it is imperative to remain vigilant, to question the veracity of what we see and hear, and to rely on the diligent work of fact-checkers in discerning the truth. The mirage of modified machines serves as a stark reminder of the potency of misinformation and the importance of critical thinking in the age of artificial intelligence.
References
- https://www.indiatvnews.com/fact-check/fact-check-ai-generated-tractor-photo-misrepresented-delhi-chalo-farmers-protest-narrative-msp-police-barricades-punjab-haryana-uttar-pradesh-2024-02-15-917010
- https://factly.in/this-viral-image-depicting-modified-tractors-for-the-delhi-chalo-farmers-protest-rally-is-created-using-ai/