#FactCheck - Viral Video Misleadingly Tied to Recent Taiwan Earthquake
Executive Summary:
In the wake of the recent earthquake in Taiwan, a video went viral on social media with the claim that it was captured during that earthquake. Fact-checking, however, reveals it to be an old video: it dates from September 2022, when Taiwan experienced another earthquake, of magnitude 7.2. Reverse image search and comparison with older footage establish that the viral video is from the 2022 earthquake, not the recent 2024 event. Several news outlets covered the 2022 incident, providing additional confirmation of the video's origin.

Claims:
News about the recent earthquakes in Taiwan and Japan is circulating on social media. One post on “X” states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.

Similar Posts:


Fact Check:
We began our investigation by watching the video thoroughly and dividing it into frames. We then performed a reverse image search on those frames, which led us to an X (formerly Twitter) post in which a user had shared the same viral video on 18 September 2022. Notably, the post carried the caption:
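As a rough illustration of the frame-splitting step, the sketch below computes evenly spaced timestamps at which frames could be grabbed (with a tool such as ffmpeg or OpenCV) before running each one through a reverse image search. The helper name and the midpoint-sampling strategy are our own assumptions, not the exact procedure used in the investigation.

```python
def sample_timestamps(duration_s: float, n_frames: int) -> list:
    """Return n_frames evenly spaced timestamps (in seconds), one per
    equal-length segment of the video, taken at each segment's midpoint."""
    if duration_s <= 0 or n_frames <= 0:
        raise ValueError("duration and frame count must be positive")
    step = duration_s / n_frames
    return [round(step * i + step / 2, 2) for i in range(n_frames)]

# For a 10-second clip, grab 5 frames at these points:
print(sample_timestamps(10, 5))  # [1.0, 3.0, 5.0, 7.0, 9.0]
```

Each timestamp could then be fed to a frame-extraction command such as `ffmpeg -ss <t> -i clip.mp4 -frames:v 1 frame.jpg` (illustrative invocation), and the resulting images uploaded to a reverse image search engine.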
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”

The same viral video was published by several news outlets in September 2022.

The viral video was also shared by the NDTV news channel on 18 September 2022.

Conclusion:
To conclude, the viral video claimed to depict the 2024 Taiwan earthquake actually dates from September 2022. A careful comparison of the old footage with the new posts makes clear that the video does not show the recent earthquake as stated. The viral video is therefore misleading. It is important to validate information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake and misleading; the video actually shows an incident from 2022.
Related Blogs
Introduction
The automotive industry is expanding fast, with vehicles becoming sophisticated, interconnected devices equipped with cutting-edge digital technology. This integration improves convenience, safety, and efficiency, but it also exposes automobiles to a new set of cyber risks. Electric vehicles (EVs) are equipped with sophisticated computer systems that manage functions such as acceleration, braking, and steering. If these systems are compromised, the result could be hazardous, including remote control of the vehicle or unauthorised access to sensitive data. As the ecosystem of connected-car stakeholders grows, new vulnerabilities emerge for hackers to exploit.
Why Automotive Cybersecurity is required
Cybersecurity threats to automobiles arise from weaknesses in hardware, software, and overall system design. Additional concerns include broad privacy clauses that justify collecting and transferring data to “third-party vendors” without explicitly disclosing who those third parties are or how personal data is processed. For example, infotainment platform data may reveal popular music and a user’s preferences, which the music industry could use to refine marketing strategies. Similarly, it is less well known that behavioural tracking data, such as driving patterns, is also logged by the original equipment manufacturer.
Hacking is not limited to attackers taking control of an electric vehicle; it also includes malicious actors compromising charging stations. In Russia, EV charging stations in Moscow were hacked to display pro-Ukraine and anti-Putin messages such as “Glory to Ukraine” and “Death to the enemy” against the backdrop of the Russia-Ukraine war. In another instance, on the Isle of Wight, hackers took over EV charger displays to show inappropriate content and to present high-voltage fault codes, preventing owners with empty batteries from charging their vehicles.
UN Economic Commission for Europe releases Regulation 155 for Automobiles
UN Economic Commission for Europe Regulation 155 lays down uniform provisions concerning the approval of vehicles with regard to cybersecurity and cybersecurity management systems (CSMS). It originated in the work of the Commission's World Forum for Harmonization of Vehicle Regulations (WP.29), which aims to harmonise regulations for vehicles and vehicle equipment. Regulation 155 has a two-pronged objective: first, to ensure cybersecurity at the organisational level, and second, to ensure adequate design of the vehicle architecture. A critical aspect in this context is the implementation of a certified CSMS by all companies that bring vehicles to market. Notably, this requirement changes the manufacturer's perspective: responsibilities no longer end at the start of production (SOP). Instead, manufacturers must continuously monitor and assess safety systems throughout a vehicle's entire life cycle, making improvements wherever necessary.
This Regulation reflects the highly dynamic nature of software development and assurance. Moreover, the management system is designed to ensure compliance with safety requirements across the entire supply chain. This is a significant challenge, considering that suppliers currently account for over 70 per cent of the software volume.
The Regulation, which is binding on 64 member countries, came into force in 2021. UNECE countries were required to comply by July 2022 for all new vehicles, and from July 2024 the Regulation was set to apply to all vehicles. It is expected to become a de facto global standard, since vehicles authorised in one country may not be brought into the global market, or the market of any UNECE member country, under any other authorisation. In such a scenario, OEMs from non-member countries may be required to provide a “self-declaration” of the equipment’s conformity with cybersecurity standards.
Conclusion
To compete and maintain trust, global carmakers must deliver a robust cybersecurity framework that meets evolving regulations. The UNECE regulations are driving this shift by requiring automotive original equipment manufacturers (OEMs) to integrate vehicle cybersecurity throughout the entire value chain. The ‘security by design’ approach aims to build connected cars that everyone can trust. Automotive cybersecurity encompasses the measures and technologies that protect connected vehicles and their onboard systems from growing digital threats.
References:
- “Electric vehicle cyber security risks and best practices (2023)”, Cyber Talk, 1 August 2023. https://www.cybertalk.org/2023/08/01/electric-vehicle-cyber-security-risks-and-best-practices-2023/#:~:text=EVs%20are%20equipped%20with%20complex,unauthorized%20access%20to%20sensitive%20data.
- Gordon, Aaron, “Russian Electric Vehicle Chargers Hacked, Tell Users ‘PUTIN IS A D*******D’”, Vice, 28 February 2022. https://www.vice.com/en/article/russian-electric-vehicle-chargers-hacked-tell-users-putin-is-a-dickhead/
- “Isle of Wight: Council’s electric vehicle chargers hacked to show porn site”, BBC, 6 April 2022. https://www.bbc.com/news/uk-england-hampshire-61006816
- Sandler, Manuel, “UN Regulation No. 155: What You Need to Know about UN R155”, Cyres Consulting, 1 June 2022. https://www.cyres-consulting.com/un-regulation-no-155-requirements-what-you-need-to-know/?srsltid=AfmBOopV1pH1mg6M2Nn439N1-EyiU-gPwH2L4vq5tmP0Y2vUpQR-yfP7#A_short_overview_Background_knowledge_on_UN_Regulation_No_155
- “WP.29 - Introduction”, UNECE. https://unece.org/wp29-introduction

Introduction
A message has recently circulated on WhatsApp alleging that voice and video calls made through the app will be recorded, that devices will be linked to the Ministry of Electronics and Information Technology’s systems, and that chat activity will from now on be forwarded to the government. This fake anti-government news has been widely shared on social media.
Message claims
- The fake WhatsApp message claims that an 11-point new communication guideline has been established and that voice and video calls will be recorded and saved. It goes on to say that WhatsApp devices will be linked to the Ministry’s system and that Facebook, Twitter, Instagram, and all other social media platforms will be monitored in the future.
- The fake WhatsApp message further advises individuals not to transmit ‘any nasty post or video against the government or the Prime Minister regarding politics or the current situation’. The bogus message goes on to say that it is a “crime” to write or transmit a negative message on any political or religious subject and that doing so could result in “arrest without a warrant.”
- The false message claims that any message in a WhatsApp group with three blue ticks indicates that the message has been noted by the government. It also warns group members that if a message has one blue tick and two red ticks, the government is checking their information, and that if a member has three red ticks, the government has begun proceedings against them and they will receive a court summons shortly.
WhatsApp does not record voice and video calls
A message circulating on social media claims that WhatsApp records users' voice and video calls. According to the Government, this news is fake: WhatsApp cannot record voice and video calls. Only third-party apps, which some users install for this purpose, can record calls.
Third-party apps used for recording voice and video calls
- App Call recorder
- Call recorder- Cube ACR
- Video Call Screen recorder for WhatsApp FB
- AZ Screen Recorder
- Video Call Recorder for WhatsApp
Case Study
In 2022, a fake message spread on social media suggesting that the government might monitor WhatsApp conversations and act against users. According to this fake message, a new WhatsApp policy had been released under which every message regarded as suspicious would carry three blue ticks, indicating that the government had taken note of it. The same fake news is spreading again now.
WhatsApp Privacy policies against recording voice and video chats
WhatsApp's privacy policy states that voice calls, video calls, and even chats cannot be recorded through WhatsApp because of end-to-end encryption. End-to-end encryption ensures that communication between two people is kept private and secure.
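To illustrate the principle only (WhatsApp actually uses the far more sophisticated Signal protocol), the toy sketch below uses a one-time pad shared by the two endpoints: the relaying server sees only ciphertext it cannot decrypt, while the holder of the shared key recovers the message. All names here are hypothetical.

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """One-time-pad XOR: the same call both encrypts and decrypts."""
    return bytes(k ^ b for k, b in zip(key, data))

message = b"see you at 6pm"
shared_key = secrets.token_bytes(len(message))  # known only to the endpoints

ciphertext = xor_bytes(shared_key, message)    # what the relay server sees
recovered = xor_bytes(shared_key, ciphertext)  # what the recipient reads

assert recovered == message  # endpoints can read; the relay cannot
```

The key point the toy captures is that the server in the middle never holds the key, so it has nothing useful to "record" even if it stores every message it relays.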
WhatsApp Brand New Features
- Chat lock feature: WhatsApp's Chat Lock lets you store chats in a folder that can only be opened with your device's password or biometrics, such as a fingerprint. When you lock a chat, the details of the conversation are automatically hidden from notifications. WhatsApp's aim with the chat lock feature is to find new ways of keeping your messages private and safe, protecting your most private conversations with an extra layer of security.
- Edit chats feature: WhatsApp now lets you edit your messages up to 15 minutes after they have been sent. With this feature, users can correct a message or add any extra points they want to include.
Conclusion
The spread of misinformation and fake news is a significant problem in the internet age, and it can have serious consequences for individuals, communities, and even nations. As per the government, this news is fake: neither WhatsApp nor the government can access WhatsApp chats, voice calls, or video calls, because end-to-end encryption protects users' communications. Last year, the government blocked 60 social media platforms for spreading anti-India news, and a fact-check unit identifies misleading and false online content.
Introduction
The fast-paced development of technology and the wide use of social media platforms have enabled the rapid dissemination of misinformation, which diffuses quickly, propagates widely, and has a deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, reinforcing the misinformation cycle and making its spread harder to control within vast, interconnected networks. Algorithms judge content on one dominant metric: user engagement. Serving you what you are most likely to engage with is their operating premise, so algorithms and search engines surface the items you are most likely to enjoy. This process was originally designed to cut through the clutter and deliver the best information, but the viral nature of information and user interactions means it can unknowingly spread misinformation widely.
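A minimal sketch of engagement-based ranking illustrates the point: with weights that reward shares and comments more than likes, an emotionally charged rumour can outrank a sober correction. The posts, weights, and scoring formula here are all illustrative assumptions, not any platform's actual algorithm.

```python
def engagement_score(post: dict) -> int:
    # Assumed weights: shares and comments signal stronger engagement than likes.
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

feed = [
    {"id": "sober_correction", "likes": 900, "comments": 50,  "shares": 40},
    {"id": "outrage_rumour",   "likes": 400, "comments": 300, "shares": 500},
]

# Rank the feed purely by engagement, highest first.
ranked = sorted(feed, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # ['outrage_rumour', 'sober_correction']
```

Even though the correction has more than twice the likes, the rumour's shares and comments dominate the score, so a ranker optimising only for engagement pushes the rumour to the top.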
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation due to their tendency to trigger strong emotions, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, leading to the promotion of emotionally charged misinformation. Additionally, the algorithms prioritize content that has the potential to go viral, which can lead to the spread of false or misleading content faster than corrections or factual content.
Additionally, popular content is amplified by platforms, which spreads it faster by presenting it to more users. Limited fact-checking efforts are particularly difficult since, by the time they are reported or corrected, erroneous claims may have gained widespread acceptance due to delayed responses. Social media algorithms find it difficult to distinguish between real people and organized networks of troll farms or bots that propagate false information. This creates a vicious loop where users are constantly exposed to inaccurate or misleading material, which strengthens their convictions and disseminates erroneous information through networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. Sometimes this process leads to "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms feed a loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences, making platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users become less likely to encounter fact-checks or corrective information.
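The feedback loop described above can be caricatured in a few lines: if users engage only with belief-confirming items and the recommender nudges its content mix toward whatever earns engagement, the share of confirming content drifts toward 100%. The learning rate and starting mix below are arbitrary assumptions chosen purely for illustration.

```python
def echo_chamber(p_confirming: float, rounds: int, lr: float = 0.3) -> list:
    """Track the share of belief-confirming content shown each round.
    Users engage only with confirming items, so the observed engagement
    share is 1.0, and the recommender moves its mix toward that signal."""
    history = [p_confirming]
    for _ in range(rounds):
        p_confirming = (1 - lr) * p_confirming + lr * 1.0
        history.append(round(p_confirming, 3))
    return history

print(echo_chamber(0.5, 10))  # climbs from a balanced 0.5 toward ~0.99
```

Starting from an evenly balanced feed, ten rounds of this loop leave almost no room for opposing viewpoints, which is the "entrenchment" dynamic described above in miniature.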
Moreover, social networks and their sheer size and complexity today exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it—such as by inspecting messages or URLs for false information—can be computationally challenging and inefficient. The extensive amount of content that is shared daily means that misinformation can be propagated far quicker than it can get fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create a challenging environment in which misinformation thrives, underscoring the importance of robust countermeasures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps towards regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. Intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure that platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)