#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A video has gone viral claiming to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life shot of Tibet's sacred mountain. We investigated its authenticity and analysed the video for signs of digital manipulation.
CLAIMS:
The viral video claims to show real aerial footage of Mount Kailash, as if exposing viewers to the natural beauty of the hallowed mountain. It circulated widely on social media, with users asserting that it is actual footage of Mount Kailash.


FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. The advanced digital techniques they used give the video a realistic, lifelike appearance.
No media outlet or geographical source has reported or published the video as authentic footage of Mount Kailash. Moreover, several visual aspects, including the lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake detection tool, to determine whether the video is AI-generated or real. It was found to be AI-generated.

CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
What are Deepfakes?
A deepfake is essentially a video in which a person's face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who went by the same name.
Deepfakes work on a combination of AI and ML, which makes them hard to detect with Web 2.0 applications; it is almost impossible for a layperson to tell whether an image or video is real or has been created using deepfake techniques. In recent times, a wave of AI-driven tools has impacted industries and professions across the globe. Deepfakes are often created to spread misinformation. A key difference separates image morphing from deepfakes: image morphing is primarily used to evade facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as their impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult, but not impossible. The following threats/issues originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, when a deepfake of Ukraine's President Zelensky surfaced on the internet and caused mass confusion and propaganda-driven misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often circulated on online gaming platforms where children engage the most.
- Threat to Digital Privacy: Deepfakes are created from existing videos. Bad actors therefore often use photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address the aspects of deepfake. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for the victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern, as it keeps Indian netizens from understanding the technology and results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, in order to stay protected and to address this issue before it is too late. The following aspects can help differentiate between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for changes in facial expressions and irregularities; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound coming from a video is in congruence with the actions or gestures in it.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most cases it is created using virtual effects, so deepfakes tend to show irregularities and an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, fact-check any information or post before sharing it with your known ones.
- AI Tools: When in doubt, check it out. Never refrain from using deepfake detection tools such as Sentinel, Intel's real-time deepfake detector FakeCatcher, WeVerify, and Microsoft's Video Authenticator to analyse videos and combat technology with technology.
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by law in India, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a ₹2 lakh fine or three years in prison. With the DPDP Act's applicability in 2023, the creation of deepfakes will directly affect an individual's right to digital privacy and will also violate the IT Rules under the Intermediary Guidelines, as platforms are required to exercise caution regarding the dissemination and publication of misinformation through deepfakes. The indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly influencing the delivery of property, and forgery with the intent to defame, are the only other legal remedies available against deepfakes. Deepfakes must be recognised legally, given the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they stay safe in the future. At the same time, developing and developed nations need to create policies and laws to regulate deepfakes efficiently and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, to build robust resilience against them.
Executive Summary:
Packet rate attacks have emerged as one of the most complex threats in network security, challenging traditional approaches to DDoS mitigation. This year, the French company OVHcloud, one of Europe's largest cloud providers, was hit by a record, unprecedented DDoS attack reaching 840 million packets per second. Attacks of over 1 Tbps have been observed more regularly since 2023, becoming a nearly daily occurrence in 2024. The largest, on May 25, 2024, reached 2.5 Tbps, pointing towards even larger and more complex attacks of up to 5 Tbps. Many of these attacks involve compromised core-network equipment such as MikroTik devices, and detecting and containing them tests the limits of cloud security measures.
Modus Operandi of a Packet Rate Attack:
A packet rate attack, also called a packet flood or network flood attack, is a type of volumetric DDoS attack in which an attacker directs a large volume of packets at a network device in a short period of time. Unlike attacks designed to saturate bandwidth, these floods target the computation time spent processing each packet.
Key technical characteristics include:
- Packet Size: Usually compact, in many cases less than 100 bytes
- Protocol: Typically UDP, although attacks can also involve TCP SYN or other protocol floods
- Rate: Exceeding 100 million packets per second (Mpps), with recent attacks exceeding 840 Mpps
- Source IP Diversity: Usually originating from a small number of sources, each sending a large number of packets, which points to the use of amplification techniques
Attack on the Network Stack: To understand the impact, let's examine how these attacks affect different layers of the network stack:
1. Layer 3 (Network Layer):
- Each packet requires a routing table lookup, so routers and L3 switches suffer high CPU usage.
- These lookup mechanisms can be saturated, degrading network communication across the device.
2. Layer 4 (Transport Layer):
- Stateful devices (e.g., firewalls, load balancers) see their connection tables fill up.
- TCP SYN floods can consume all connection slots so that no genuine incoming connection can be made.
3. Layer 7 (Application Layer):
- Web servers and application firewalls can be overwhelmed trying to respond to a large number of requests.
- Session management systems can become saturated, degrading the performance perceived by end-users.
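The defining signal across all of these layers is raw packet arrival rate rather than bandwidth. As a rough illustration, here is a minimal, stdlib-only sketch of a sliding-window packets-per-second monitor; the window size and threshold are arbitrary assumptions for the demo, not values from any production mitigation system:

```python
from collections import deque

class PacketRateMonitor:
    """Sliding-window packets-per-second estimator (illustrative sketch)."""

    def __init__(self, window_seconds=1.0, pps_threshold=1_000_000):
        self.window = window_seconds
        self.threshold = pps_threshold
        self.timestamps = deque()

    def record(self, ts):
        """Record one packet arrival at time `ts` (seconds)."""
        self.timestamps.append(ts)
        # Evict arrivals that have fallen out of the sliding window.
        while self.timestamps and ts - self.timestamps[0] > self.window:
            self.timestamps.popleft()

    def current_pps(self):
        return len(self.timestamps) / self.window

    def is_flood(self):
        return self.current_pps() > self.threshold

# Simulated burst: 50 packets in 10 ms against a deliberately low demo threshold.
mon = PacketRateMonitor(window_seconds=1.0, pps_threshold=40)
for i in range(50):
    mon.record(i * 0.0002)
print(mon.current_pps(), mon.is_flood())
```

A real device would do this per source or per flow in hardware counters; the point here is only that the decision depends on packet count per unit time, independent of packet size.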
Technical Analysis of Attack Vectors
Recent studies have identified several key vectors exploited in high-volume packet rate attacks:
1. MikroTik RouterOS Exploitation:
- Vulnerability: CVE-2023-4967
- Impact: Allows remote attackers to generate massive packet floods
- Technical detail: Exploits a flaw in the FastTrack implementation
2. DNS Amplification:
- Amplification factor: Up to 54x
- Technique: Exploits open DNS resolvers to generate large responses to small queries
- Challenge: Difficult to distinguish from legitimate DNS traffic
3. NTP Reflection:
- Command: monlist
- Amplification factor: Up to 556.9x
- Mitigation: Requires NTP server updates and network-level filtering
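The amplification factors above translate directly into reflected bandwidth at the victim: the attacker's small spoofed requests come back multiplied. A minimal arithmetic sketch using the factors from the text (the 100 Mbit/s attacker uplink is an assumption for illustration, and the model is idealised in that every request is answered in full):

```python
def reflected_volume_bps(request_bps: float, amplification_factor: float) -> float:
    """Bandwidth arriving at the victim when spoofed requests are
    reflected off an amplifier (idealised model)."""
    return request_bps * amplification_factor

attacker_uplink = 100e6  # assumed 100 Mbit/s of spoofed request traffic

# Factors from the text: DNS amplification up to 54x, NTP monlist up to 556.9x.
print(reflected_volume_bps(attacker_uplink, 54) / 1e9)     # Gbit/s via DNS
print(reflected_volume_bps(attacker_uplink, 556.9) / 1e9)  # Gbit/s via NTP
```

This is why reflection vectors are attractive to attackers: a modest uplink of spoofed queries yields tens of gigabits per second at the target.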
Mitigation Strategies: A Technical Perspective
Combating packet rate attacks requires a multi-layered approach:
1. Hardware-based Mitigation:
- Implementation: FPGA-based packet processing
- Advantage: Can handle millions of packets per second with minimal latency
- Challenge: High cost and specialized programming requirements
2. Anycast Network Distribution:
- Technique: Distributing traffic across multiple global nodes
- Benefit: Dilutes attack traffic, preventing single-point failures
- Consideration: Requires careful BGP routing configuration
3. Stateless Packet Filtering:
- Method: Applying filtering rules without maintaining connection state
- Advantage: Lower computational overhead compared to stateful inspection
- Trade-off: Less granular control over traffic
4. Machine Learning-based Detection:
- Approach: Using ML models to identify attack patterns in real-time
- Key metrics: Packet size distribution, inter-arrival times, protocol anomalies
- Challenge: Requires continuous model training to adapt to new attack patterns
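The stateless filtering idea in point 3 can be sketched in a few lines: each rule inspects only the fields of the packet in hand, so no connection table has to be maintained under flood conditions. The field names, ports, and size thresholds below are illustrative assumptions, not any vendor's API or recommended rule set:

```python
# A "packet" here is summarised as a dict of header fields; a stateless
# filter decides forward/drop from these fields alone, with no per-flow state.
def stateless_filter(pkt: dict) -> bool:
    """Return True to forward the packet, False to drop it (sketch)."""
    rules = [
        # Drop tiny UDP packets typical of packet-rate floods.
        lambda p: not (p["proto"] == "udp" and p["size"] < 100),
        # Drop large responses arriving from common reflector ports
        # (NTP 123, DNS 53) -- likely amplification traffic.
        lambda p: p["src_port"] not in (123, 53) or p["size"] < 468,
        # Drop obviously invalid source addresses (e.g., the 0.0.0.0/8 range).
        lambda p: not p["src_ip"].startswith("0."),
    ]
    return all(rule(pkt) for rule in rules)

# Ordinary HTTPS traffic passes; a 64-byte UDP flood packet is dropped.
print(stateless_filter({"proto": "tcp", "size": 1500, "src_port": 443, "src_ip": "203.0.113.5"}))
print(stateless_filter({"proto": "udp", "size": 64, "src_port": 40000, "src_ip": "198.51.100.7"}))
```

The trade-off named in the text is visible here: the rules are cheap to evaluate per packet, but with no connection state they cannot distinguish, say, a legitimate large DNS answer from an amplified one.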
Performance Metrics and Benchmarking
When evaluating DDoS mitigation solutions for packet rate attacks, consider these key performance indicators:
- Flows per second (fps) or packets per second (pps) capacity
- Latency that the mitigation system itself introduces into traffic
- False positive rate of attack detection
- Time to mitigation from the moment an attack begins
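Two of these KPIs, the false positive rate and the time to mitigation, can be computed directly from labelled detection logs. A minimal sketch; the sample data is made up for illustration:

```python
def false_positive_rate(events):
    """Fraction of benign samples wrongly flagged as attacks.
    `events` is a list of (is_attack, flagged) pairs."""
    false_pos = sum(1 for is_attack, flagged in events if flagged and not is_attack)
    benign = sum(1 for is_attack, _ in events if not is_attack)
    return false_pos / benign if benign else 0.0

def time_to_mitigation(attack_start_s, mitigation_start_s):
    """Seconds between attack onset and the start of mitigation."""
    return mitigation_start_s - attack_start_s

# Made-up benchmark run: 2 attacks (both caught), 4 benign windows, 1 wrongly flagged.
samples = [(True, True), (False, False), (False, True),
           (False, False), (True, True), (False, False)]
print(false_positive_rate(samples))        # 1 false positive / 4 benign = 0.25
print(time_to_mitigation(10.0, 13.5))      # mitigation engaged 3.5 s after onset
```

In a real benchmark the event stream would come from replayed attack captures mixed with production traffic, and both metrics would be tracked across attack types and rates.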
Way Forward
Packet rate attacks are constantly evolving, and credible defenses cannot stay the same. The next step entails extending mitigation to edge computing and 5G networks, distributing it closer to the attack origins. Further, AI-based proactive analysis tools that predict such threats will help strengthen the protection of critical infrastructure in advance.
Staying one step ahead requires continuous research, the advancement of new technologies, and collaboration with other cybersecurity professionals. There is an ongoing need to develop secure defenses that safeguard these networks.
Reference:
https://blog.ovhcloud.com/the-rise-of-packet-rate-attacks-when-core-routers-turn-evil/
https://cybersecuritynews.com/record-breaking-ddos-attack-840-mpps/
https://www.cloudflare.com/learning/ddos/famous-ddos-attacks/

Introduction
In September 2024, the Australian government announced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (CLA Bill 2024 hereon), to provide new powers to the Australian Communications and Media Authority (ACMA), the statutory regulatory body for Australia's communications and media infrastructure, to combat online misinformation and disinformation. It proposed allowing the ACMA to hold digital platforms accountable for the “seriously harmful mis- and disinformation” being spread on their platforms and their response to it, while also balancing freedom of expression. However, the Bill was subsequently withdrawn, primarily over concerns regarding the possibility of censorship by the government. This development reflects the global contention over the balance between misinformation regulation and freedom of speech.
Background and Key Features of the Bill
According to the BBC’s Global Minds Survey of 2023, nearly 73% of Australians struggled to identify fake news and AI-generated misinformation. There has been a substantial rise in misinformation on platforms like Facebook, Twitter, and TikTok since the COVID-19 pandemic, especially during major events like the bushfires of 2020 and the 2022 federal elections. The government’s campaign against misinformation was launched against this background, with the launch of The Australian Code of Practice on Disinformation and Misinformation in 2021. The main provisions of the CLA Bill, 2024 were:
- Core Transparency Obligations of Digital Media Platforms: Publishing current media literacy plans, risk assessment reports, and policies or information on their approach to addressing mis- and disinformation. The ACMA would also be allowed to make additional rules regarding complaints and dispute-handling processes.
- Information Gathering and Record-Keeping Powers: The ACMA would form rules allowing it to gather consistent information across platforms and publish it. However, it would not have been empowered to gather and publish user information except in limited circumstances.
- Approving Codes and Making Standards: The ACMA would have powers to approve codes developed by the industry and make standards regarding reporting tools, links to authoritative information, support for fact-checking, and demonetisation of disinformation. This would make compliance mandatory for relevant sections of the industry.
- Parliamentary Oversight: The transparency obligations, codes approved and standards set by ACMA under the Bill would be subject to parliamentary scrutiny and disallowance. ACMA would be required to report to the Parliament annually.
- Freedom of Speech Protections: End-users would not be required to produce information for ACMA unless they are a person providing services to the platform, such as its employees or fact-checkers. Further, it would not be allowed to call for removing content from platforms unless it involved inauthentic behavior such as bots.
- Penalties for Non-Compliance: ACMA would be required to employ a “graduated, proportionate and risk-based approach” to non-compliance and enforcement in the form of formal warnings, remedial directions, injunctions, or significant civil penalties as decided by the courts, subject to review by the Administrative Review Tribunal (ART). No criminal penalties would be imposed.
Key Concerns
- Inadequacy of Freedom of Speech Protections: The biggest contention on this Bill has been regarding the issue of possible censorship, particularly of alternative opinions that are crucial to the health of a democratic system. To protect the freedom of speech, the Bill defined mis- and disinformation, what constitutes “serious harm” (election interference, harming public health, etc.), and what would be excluded from its scope. However, reservations among the Opposition persisted due to the lack of a clear mechanism to protect divergent opinions from the purview of this Bill.
- Efficacy of Regulatory Measures: Many argue that by allowing the digital platform industry to make its own codes, this law lets it self-police. Big Tech companies have no incentive to curb misinformation effectively, since their business models allow them to reap financial benefits from its rampant spread. Unless there are financial disincentives attached to misinformation, Big Tech is not likely to address the situation on a war footing; the law would thus run the risk of being toothless. Secondly, the Bill did not require platforms to report on the “prevalence of” false content which, along with other metrics, is crucial for researchers and legislators to track the efficacy of the misinformation-curbing practices currently employed by platforms.
- Threat of Government Overreach: The Bill sought to expand the ACMA’s compliance and enforcement powers concerning misinformation and disinformation on online communication platforms by giving it powers to form rules on information gathering, code registration, standard-making powers, and core transparency obligations. However, even though the ACMA as a regulatory authority is answerable to the Parliament, the Bill was unclear in defining limits to these powers. This raised concerns from civil society about potential government overreach in a domain filled with contextual ambiguities regarding information.
Conclusion
While the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill sought to equip the ACMA with tools to hold digital platforms accountable and mitigate the harm caused by false information, its critique highlights the complexities of regulating such content without infringing on freedom of speech. Globally, legislation and proposals addressing this issue face similar challenges, emphasizing the need for a continuous discourse at the intersection of platform accountability, regulatory restraint, and the protection of diverse viewpoints.
To regulate Big Tech effectively, governments can benefit from adopting a consultative, incremental, and cooperative approach, as exemplified by the European Union’s Digital Services Act 2023. Such a framework provides for a balanced response, fostering accountability while safeguarding democratic freedoms.
Resources
- https://www.infrastructure.gov.au/sites/default/files/documents/factsheet-misinformation-disinformation-bill.pdf
- https://www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation
- https://www.mi-3.com.au/07-02-2024/over-80-australians-feel-they-may-have-fallen-fake-news-says-bbc
- https://www.hrlc.org.au/news/misinformation-inquiry
- https://humanrights.gov.au/our-work/legal/submission/combatting-misinformation-and-disinformation-bill-2024
- https://www.sbs.com.au/news/article/what-is-the-misinformation-bill-and-why-has-it-triggered-worries-about-freedom-of-speech/4n3ijebde
- https://www.hrw.org/report/2023/06/14/no-internet-means-no-work-no-pay-no-food/internet-shutdowns-deny-access-basic
- https://www.hrlc.org.au/submissions/2024/11/8/submission-combatting-misinformation