#FactCheck - Stunning 'Mount Kailash' Video Exposed as AI-Generated Illusion!
EXECUTIVE SUMMARY:
A viral video claims to capture a breathtaking aerial view of Mount Kailash, apparently offering a rare real-life glimpse of Tibet's sacred mountain. We investigated its authenticity and analysed the footage for signs of digital manipulation.
CLAIMS:
The viral video claims to show real aerial footage of Mount Kailash, capturing the natural beauty of the hallowed mountain. It circulated widely on social media, with users presenting it as genuine footage of Mount Kailash.
FACTS:
The viral video circulating on social media is not real footage of Mount Kailash. A reverse image search revealed that it is an AI-generated video created on Midjourney by Sonam and Namgyal, two Tibet-based graphic artists. Advanced digital techniques give the video its realistic, lifelike appearance.
No media or geographical source has reported or published the video as authentic footage of Mount Kailash. Besides, several visual aspects, including lighting and environmental features, indicate that it is computer-generated.
For further verification, we used Hive Moderation, a deepfake-detection tool, to determine whether the video was AI-generated or real. It was found to be AI-generated.
CONCLUSION:
The viral video claiming to show an aerial view of Mount Kailash is an AI-manipulated creation, not authentic footage of the sacred mountain. This incident highlights the growing influence of AI and CGI in creating realistic but misleading content, emphasizing the need for viewers to verify such visuals through trusted sources before sharing.
- Claim: Digitally Morphed Video of Mt. Kailash, Showcasing Stunning White Clouds
- Claimed On: X (Formerly Known As Twitter), Instagram
- Fact Check: AI-Generated (Checked using Hive Moderation).
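Automated checks like the one above typically work by submitting the media to a detection service and reading back per-class confidence scores. The response schema, class names, and threshold below are illustrative assumptions of my own, not Hive Moderation's actual interface; the sketch shows only the general shape of such a check.

```python
# Hypothetical AI-content detection check. The response format below is an
# assumption for illustration; real services document their own schemas.

def is_ai_generated(response: dict, threshold: float = 0.9) -> bool:
    """Return True if the detector's 'ai_generated' score meets the threshold."""
    scores = {c["class"]: c["score"] for c in response["classes"]}
    return scores.get("ai_generated", 0.0) >= threshold

# Example response a detection service might return for a video like this one:
sample = {"classes": [{"class": "ai_generated", "score": 0.98},
                      {"class": "not_ai_generated", "score": 0.02}]}
print(is_ai_generated(sample))  # True for this sample
```

In practice the threshold would be chosen by the service or the fact-checker; a high score on the AI-generated class is treated as strong but not conclusive evidence, which is why the fact-check also relied on reverse image search and source attribution.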
Related Blogs
What are Deepfakes?
A deepfake is essentially a video in which a person's face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings. Cybercriminals use Artificial Intelligence to overlay a digital composite onto an existing video, picture, or audio clip. The term 'deepfake' was first coined in 2017 by an anonymous Reddit user who called himself 'deepfake'.
Deepfakes work on a combination of AI and ML, which makes the technology hard to detect with Web 2.0 applications; it is almost impossible for a layperson to tell whether an image or video has been created using deepfakes. In recent times, we have seen a wave of AI-driven tools impact industries and professions across the globe. Deepfakes are often created to spread misinformation. There is a key difference between image morphing and deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine's president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-driven misinformation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/ Harassment: Deepfakes can be used by bad actors to harass and bully people online in order to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, and it is often circulated on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created using existing videos. Hence, bad actors often use photos and videos from social media accounts to create deepfakes; this directly poses a threat to the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: In the contemporary world, the majority of nations lack a concrete policy to address the aspects of deepfake. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for the victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern for Indian netizens, as the gap keeps them from understanding the technology and results in under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
Deepfakes look like the original video at first glance, but as we progress into the digital world, it is pertinent to make identifying deepfakes part of our digital routine and netiquette, so that we stay protected in the future and address this issue before it is too late. The following aspects can be kept in mind when differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions; unnatural eye movement or a momentary twitch on the face can be signs that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also shows variations, as it is imposed on an existing video, so check whether the sound coming from the video is in sync with the actions and gestures shown.
- Pay attention to the background: The easiest way to spot a deepfake is to examine the background; in most deepfakes you can spot irregularities there, because the background is usually created with virtual effects, leaving an element of artificiality.
- Context and Content: Most instances of deepfakes have focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber-safety and digital-hygiene protocol, always fact-check every piece of information you come across on social media. As a preventive measure, fact-check any information or post before sharing it with your known ones.
- AI Tools: When in doubt, check it out; never refrain from using deepfake-detection tools such as Sentinel, Intel's real-time deepfake detector FakeCatcher, WeVerify, and Microsoft's Video Authenticator to analyse videos, combating technology with technology.
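The manual cues above can be treated as a simple checklist in which several weak signals combine into an overall suspicion score before escalating to a dedicated detection tool. The cue names and weights below are purely illustrative assumptions, not a validated detector.

```python
# Illustrative checklist: combine weak manual cues into a suspicion score.
# Cue names and weights are assumptions for demonstration, not a real model.

CUES = {
    "facial_irregularities": 0.30,  # odd eye movement, momentary twitches
    "audio_mismatch": 0.25,         # sound out of sync with gestures
    "background_artifacts": 0.25,   # warped or artificial-looking backdrop
    "dubious_context": 0.20,        # sensational claim, unknown source
}

def suspicion_score(observed: set) -> float:
    """Sum the weights of the cues observed in the video."""
    return round(sum(w for cue, w in CUES.items() if cue in observed), 2)

score = suspicion_score({"facial_irregularities", "background_artifacts"})
print(score)  # 0.55 -> worth running through a detection tool
```

A score near 1.0 would not prove a video is fake, just as a low score would not clear it; the checklist only prioritises which videos deserve a run through tools like those named above.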
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by Indian law, they are indirectly addressed by Sec. 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is ₹2 lakh in fines or three years in prison. With the DPDP Act, 2023 now applicable, the creation of deepfakes directly affects an individual's right to digital privacy and also violates the IT Intermediary Guidelines, as platforms are required to exercise caution against disseminating and publishing misinformation through deepfakes. Beyond these, the only remedies available are indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with intent to defame. Given the growing power of misinformation, deepfakes must be recognised legally. The Data Protection Board and the soon-to-be-established fact-checking body must recognise deepfake-related crimes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an aftermath of the advancements of Web 3.0 and hence just the tip of the iceberg among the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so they stay safe in the future. At the same time, developing and developed nations need to create policies and laws to regulate deepfakes efficiently and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.
Introduction
In the vast expanse of the digital cosmos, where the tendrils of the internet weave an intricate tapestry of connectivity, the channels through which information cascades have become a labyrinth of enigma and complexity. As we traverse this boundless virtual landscape, the line demarcating fact from fiction blurs, leaving the essence of truth adrift in a deluge of data. Amidst this ceaseless flow, platforms such as YouTube, Meta, and Twitter emerge as bulwarks in a pivotal struggle against the insidious spectres of fake news and disinformation—a struggle as fervent and consequential as any historical skirmish over the dominion of truth and influence.
Let us delve into a few case studies that illustrate the multifaceted nature of this digital warfare, where the stakes are nothing less than the integrity of public discourse and the sanctity of societal harmony.
Case 1: A Chief Minister's Stand Against Digital Deception
In the northeastern reaches of India, Assam's Chief Minister, Himanta Biswa Sarma, confronted disinformation head-on. With the spectre of elections looming like a storm on the horizon, he took to the microblogging site X to unveil a nefarious scheme—a doctored video intended to distort his speech and sow seeds of communal discord. 'See for yourself, as elections approach, how vested groups distort a speech with the criminal intention of spreading disinformation and communal disharmony. The long arms of the law will catch up with these elements,' declared Sarma, his words a clarion call for vigilance.
The counterfeit video, crafted to smear the Chief Minister's reputation, elicited a swift and decisive response from Assam's Director General of Police, G.P. Singh. 'Noted Sir. CID Assam would register a criminal case and investigate the people behind this,' assured Singh, signalling the readiness of the law to pursue the purveyors of falsehood.
Case 2: Waves of Deceit: Unverified Claims of Cancellations in the Maldives Tourism Controversy
The narrative shifts to the idyllic archipelago of the Maldives, where the azure waters belie a tumultuous undercurrent of diplomatic discord with India. Following disparaging remarks by Maldivian officials directed at Indian Prime Minister Narendra Modi, the social media sphere became rife with claims of Indian tourists en masse cancelling their sojourns to the island nation. Screenshots purporting to show cancelled bookings flooded platforms like X, with one user claiming to have annulled a reservation at the Palms Retreat, Fulhadhoo, to the tune of at least Rs 5 lakh, citing the officials' 'racist remarks.'
Initial reports from a few media outlets lent credence to this narrative of widespread cancellations. However, upon closer scrutiny, the veracity of these claims crumbled like a sandcastle at high tide. Concrete evidence to substantiate the alleged boycott was conspicuously absent, and neither travel agencies nor airlines corroborated the supposed trend.
The controversy was inflamed when PM Modi's visit to Lakshadweep, and subsequent social media posts praising the archipelago, spurred Indian users to champion Lakshadweep as an alternative to the Maldives. The vitriolic response from Maldivian ministers, who labelled Modi with derogatory remarks, ignited a firestorm on X, with hashtags like #BoycottMaldives and #MaldivesBoycott trending fervently.
Yet, the truth behind the cacophony of cancellation numbers remains shrouded in ambiguity, with no official acknowledgement from either government and a conspicuous absence of data from the tourism industry.
Case 3: Misinformation Highway: Unravelling Bollywood Fabrications: Lies, Thumbnails, and Digital Dalliances
Our gaze now turns to YouTube, where fabricated thumbnails and rumour-laden taglines emblazoned with tantalising text beckon viewers with the promise of scandalous revelations. 'Pregnant? Divorced?' they shout, luring millions into their web with the allure of salacious 'news.' Yet these are but mirages: baseless rumours masquerading as fact, or worse, complete fabrications.
The platform teems with counterfeit narratives and rumours, targeting the luminaries of Bollywood. Factors such as easy content uploading without strict scrutiny, a burgeoning digital footprint, and India's insatiable appetite for celebrity culture have created a fertile ground for the proliferation of such content. It is a testament to the power of the digital age, where anyone with a connection can craft a narrative and cast it into the ether, regardless of its foundation in reality.
We must arm ourselves with discernment and scepticism in this relentless onslaught of misinformation. The digital realm, for all its wonders, is also a battleground where the currency is truth, and the price of negligence is the erosion of our collective understanding. As we navigate this ever-evolving landscape, let us hold fast to the principles of verification and evidence, for they are the compass by which we can chart a course through the maelstrom of misinformation that seeks to engulf us.
Conclusion
In this era of digital enlightenment, it is incumbent upon us to discern the chaff from the wheat, to elevate the discourse beyond the mire of falsehoods. Let us endeavour to foster a digital polity that values truth, champions authenticity, and resolutely stands against the tide of disinformation that threatens to undermine the very fabric of our society.
References:
- https://www.indiatodayne.in/assam/video/assam-cm-exposes-fake-video-scheme-dgp-promises-swift-action-743097-2024-01-08
- https://www.thequint.com/news/webqoof/boycott-maldives-misinformation-on-trip-booking-cancellations
- https://www.thequint.com/news/webqoof/bollywood-fake-news-on-youtube-uses-divorce-pregnancy-and-arrests-for-misinformation
Overview:
‘Kia Connect’ is the companion application for ‘Kia’ cars that allows users to control various vehicle parameters from their smartphones. The vulnerabilities affected most Kias built after 2013, with few exceptions. Most of the risks stem from a flawed API that handles dealer relations and vehicle coordination.
Technical Breakdown of Exploitation:
- API Exploitation: The attack exploits vulnerabilities in Kia’s dealership infrastructure. The researchers found that simply impersonating a dealer and registering on the Kia dealer portal was sufficient to derive the access tokens needed for the subsequent steps.
- Accessing Vehicle Information: A license plate number allowed the attackers to obtain the Vehicle Identification Number (VIN) of a target car. The VIN can then be used to look up further information about the vehicle and is the essential identifier for the steps that follow.
- Information Retrieval: With the VIN in hand, attackers can issue a series of backend requests to pull sensitive information about the car owner, including:
- Name
- Email address
- Phone number
- Geographical address
- Modifying Account Access: With this information, attackers could change the account settings to add themselves as a second user on the car, hidden from the actual owner of the account.
- Executing Remote Commands: It was also discovered that attackers could remotely execute various commands on the vehicle, including:
- Unlocking doors
- Starting the engine
- Tracking the vehicle’s location
- Honking the horn
Technical Execution:
The researchers demonstrated that an attacker could execute a series of four requests to gain control over a Kia vehicle:
- Generate Dealer Token: The attacker sends an HTTP request to create a dealer token.
- Retrieve Owner Information: Using the generated token, they make a request to another endpoint that returns the owner’s email address and phone number.
- Modify Access Permissions: The attacker uses the leaked information (email address and VIN) to modify the owner’s account and add himself as a second user.
- Execute Commands: Finally, they can send commands to perform actions on the targeted vehicle.
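The four-request chain above can be sketched as follows. All endpoints, paths, and payload fields here are placeholders invented for illustration; the real portal URLs and payloads are documented in the researchers' write-up (samcurry.net). The sketch only builds the request sequence rather than sending anything.

```python
# Illustrative model of the reported four-step attack chain. Endpoints and
# payload fields are hypothetical placeholders, not Kia's real API.

def build_attack_chain(vin: str, attacker_email: str) -> list:
    """Return the ordered HTTP requests an attacker would need to issue."""
    return [
        # 1. Register as a fake dealer to obtain an access token.
        {"method": "POST", "path": "/dealer/register", "body": {"role": "dealer"}},
        # 2. Use the token to look up the owner's contact details by VIN.
        {"method": "GET", "path": f"/owners/lookup?vin={vin}"},
        # 3. Add the attacker as a hidden secondary user on the account.
        {"method": "POST", "path": "/account/users",
         "body": {"vin": vin, "email": attacker_email}},
        # 4. Issue a remote command (e.g. unlock) as the new user.
        {"method": "POST", "path": f"/vehicles/{vin}/commands",
         "body": {"command": "unlock"}},
    ]

chain = build_attack_chain("KNDJ23AU4N7000000", "attacker@example.com")
print([step["path"] for step in chain])
```

What made the real flaw severe is visible even in this toy shape: each step trusts the credentials minted in the previous one, so a single unauthenticated registration endpoint collapses the entire chain of authorization.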
Security Response and Precautionary Measures for Vehicle Owners
- Regular Software Updates: Car owners should make sure their vehicles receive the latest software updates provided by the manufacturer.
- Use Strong Passwords: Kia Connect account owners should create unique, complex passwords and update them periodically, avoiding easily guessed values such as birth dates or vehicle numbers.
- Enable Multi-Factor Authentication: Where available, vehicle owners should turn on secondary authentication to protect against unauthorized account access.
- Limit Personal Information Sharing: Vehicle owners should be careful about sharing details linked to their car’s account, such as the email address or phone number, on social networks.
- Monitor Account Activity: Watch the account for unauthorized changes or access attempts. If anything abnormal or suspicious is noticed while using the car, report it to Kia customer support.
- Educate Yourself on Vehicle Security: Stay aware of cyber threats connected to vehicles and learn how to safeguard against them.
- Consider Disabling Remote Features When Not Needed: If remote features are not needed, turn them off and re-enable them when required. This helps diminish the attack surface for would-be hackers.
Industry Implications:
The findings from this research underscore broader issues within automotive cybersecurity:
- Web Security Gaps: Most car manufacturers pay more attention to the equipment running in automobiles than to the security of the web services the cars rely on, thereby exposing highly connected vehicles to risk.
- Continued Risks: As vehicles become increasingly connected to internet technologies, automakers will have to build cybersecurity measures into their cars going forward.
Conclusion:
The weaknesses found in Kia’s connected-car system are a key concern for automotive security. Since cars depend on web services for core functionality, manufacturers face the same risks as any web provider and need to create effective safeguards. Kia took immediate action to tighten security after disclosure; however, new threats will emerge in this dynamic domain of connected technology. With growing awareness of these risks, it is now important for car makers not only to put proper security measures in place but also to keep customers informed about how their information and vehicles are protected from cyber dangers. Given the rapid pace of advancement in automotive technology, its safety ultimately rests on our capacity to shield it from ever-present cyber threats.
Reference:
- https://timesofindia.indiatimes.com/auto/cars/hackers-could-unlock-your-kia-car-with-just-a-license-plate-is-yours-safe/articleshow/113837543.cms
- https://www.thedrive.com/news/hackers-found-millions-of-kias-could-be-tracked-controlled-with-just-a-plate-number
- https://www.securityweek.com/millions-of-kia-cars-were-vulnerable-to-remote-hacking-researchers/
- https://news24online.com/auto/kia-vehicles-hack-connected-car-cybersecurity-threat/346248/
- https://www.malwarebytes.com/blog/news/2024/09/millions-of-kia-vehicles-were-vulnerable-to-remote-attacks-with-just-a-license-plate-number
- https://informationsecuritybuzz.com/kia-vulnerability-enables-remote-acces/
- https://samcurry.net/hacking-kia