#FactCheck - Old Japanese Earthquake Footage Falsely Linked to Tibet
Executive Summary:
A viral post on X (formerly Twitter) gained wide attention by presenting a video as evidence of recent damage from the earthquake in Tibet. Our findings confirm that the clip was not filmed in Tibet; it comes from an earlier earthquake in Japan. This report traces the origin of the claim and presents the analysis and verified findings that clarify the misinformation surrounding the video.

Claim:
The viral video shows collapsed infrastructure and significant destruction, with captions or claims suggesting it is evidence of a recent earthquake in Tibet. Similar claims can be found here and here.

Fact Check:
The widely circulated clip, initially claimed to depict the aftermath of the recent earthquake in Tibet, has been rigorously analyzed and proven to be misattributed. A reverse image search based on keyframes extracted from the video revealed that the footage originated from an earlier, devastating earthquake in Japan. According to an article published by a Japanese news website, the incident occurred in February 2024. The footage was authenticated by news agencies at the time, as it accurately depicted the scenes of destruction reported during that event.
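The keyframe-based reverse image search described above can be reproduced with common open-source tools. The sketch below is a minimal illustration, assuming a locally saved copy of the clip named viral_clip.mp4 and a one-frame-per-second sampling rate (both illustrative assumptions, not details from the original investigation); it uses OpenCV to save frames that can then be uploaded to a reverse image search service such as Google Lens or TinEye.

```python
# Minimal sketch: pull one frame per second from a saved copy of the clip so the
# frames can be fed to a reverse image search service. "viral_clip.mp4" and the
# sampling interval are illustrative assumptions, not details from the fact-check.
import cv2

def extract_keyframes(video_path, every_n_seconds=1.0):
    """Save one frame per `every_n_seconds` of video and return the saved file paths."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                              # end of video or read error
        if index % step == 0:
            out_path = f"keyframe_{index:06d}.jpg"
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    frames = extract_keyframes("viral_clip.mp4")
    print(f"Saved {len(frames)} keyframes for reverse image search.")
```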

Moreover, the same video had already been uploaded to a YouTube channel, which proves that the footage is not recent. The architecture, the signboards written in Japanese script, and the vehicles appearing in the video also indicate that the footage was shot in Japan, not Tibet. The clip depicts an earlier event in Japan, showing that it was shared out of context to spread false information.

The video was uploaded on February 2nd, 2024.
Snap from viral video

Snap from YouTube video

Conclusion:
The viral video attributed to the recent earthquake in Tibet is therefore misattributed: it is old footage from a previous earthquake in Japan. This underscores the need to verify information before sharing it, so that accurate information spreads and false claims do not.
- Claim: A viral video claims to show recent earthquake destruction in Tibet.
- Claimed On: X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Introduction
Children today are growing up amidst technology, and the internet has become an integral part of their lives. The internet offers children a wealth of recreational and educational opportunities and learning environments, but it also presents largely unseen risks, particularly in the form of deepfakes and misinformation. AI can perform complex tasks quickly; however, the misuse of AI technologies has led to a rise in cybercrime. The evolving nature of cyber threats can harm children's wellbeing and safety while they use the internet.
India's Digital Environment
India has one of the world's fastest-growing internet user bases, with more young netizens coming online every day. The internet has become an inseparable part of their everyday lives, whether for social media or online courses. But the speed at which the digital world is evolving has raised many privacy and safety concerns, increasing the chance of exposure to potentially dangerous content.
Misinformation: The Rising Concern
Today, the internet is filled with various types of misinformation, and youngsters are especially vulnerable to its adverse effects. Given India's linguistic and cultural diversity, the spread of misinformation can have a broad negative impact on society. In particular, misinformation in education can mislead young minds and hinder their cognitive development.
To address this issue, it is important that parents, academia, government, industry and civil society work together to promote digital literacy initiatives that teach children to critically analyse online material and navigate the digital realm more safely.
Deepfakes: The Deceptive Mirage
Deepfakes, or digitally altered videos and images created with artificial intelligence, pose a significant online threat. The possible ramifications of deepfake technology are especially concerning in India, given the high level of reliance on the media. Deepfakes can have far-reaching repercussions, from altering political narratives to disseminating misleading information.
Addressing the deepfake problem demands a multifaceted strategy. Media literacy programs should be integrated into the educational curriculum to assist youngsters in distinguishing between legitimate and distorted content. Furthermore, strict laws as well as technology developments are required to detect and limit the negative impact of deepfakes.
Safeguarding Children in Cyberspace
● Parental Guidance and Open Communication: Open communication and parental guidance are essential for protecting children's internet safety. It is necessary to have open discussions about responsible internet use and its possible consequences. Parents should actively participate in their children's online activities and understand the platforms and material their children consume.
● Educational Initiatives: Comprehensive programs for digital literacy must be implemented in educational settings. Critical thinking abilities, internet etiquette, and knowledge of the risks associated with deepfakes and misinformation should all be included in these programs. Fostering a secure online environment requires giving young netizens the tools they need to question and examine digital content.
● Policies and Rules: Acknowledging the risks posed by the misuse of advanced technologies such as AI and deepfakes, the Indian government is working on dedicated legislation to tackle the issues arising from the misuse of deepfake technology by bad actors. The government has recently issued an advisory to social media intermediaries to identify misinformation and deepfakes and to ensure compliance with the Information Technology (IT) Rules, 2021. Online platforms are legally obliged to prevent the spread of misinformation and to exercise due diligence and reasonable efforts to identify misinformation and deepfakes. Legal frameworks need to be equipped to handle the challenges posed by AI, and accountability in AI is a complex issue that requires comprehensive legal reform. In light of the various reported cases of deepfake misuse and the spread of such content on social media, strong laws need to be adopted and enforced to address the challenges posed by misinformation and deepfakes. Working with technology companies to deploy advanced content-detection tools, and ensuring that law enforcement takes swift action against those who misuse the technology, will act as a deterrent to cyber crooks.
● Digital parenting: It is important for parents to keep up with the latest trends and digital technologies. Digital parenting includes understanding privacy settings, monitoring online activity, and using parental control tools to create a safe online environment for children.
Conclusion
As India continues to move forward digitally, protecting children in cyberspace has become a shared responsibility. By promoting digital literacy, encouraging open communication and enforcing strong laws, we can create a safer online environment for younger generations. Knowledge, understanding, and active efforts to combat misinformation and deeply entrenched myths are the keys to staying safe in the online age. Social media intermediaries and platforms must ensure compliance with the IT Rules, 2021, the IT Act, 2000 and the newly enacted Digital Personal Data Protection Act, 2023. It is the shared responsibility of the government, parents and teachers, users and organisations to establish a safe online space for children.
Introduction
Big Tech has been pushing back against regulatory measures, particularly those concerning data handling practices. X Corp (formerly Twitter) has taken a prominent stance in India: the platform has filed a petition against the Central and State governments, challenging content-blocking orders and opposing the Centre's newly launched Sahyog portal. X Corp has further labelled the Sahyog portal a 'censorship portal' that enables government agencies to issue blocking orders using a standardized template.
The key regulations governing the tech space in India include the IT Act, 2000, the IT Rules, 2021 and 2023 (which stress platform accountability and content moderation), and the DPDP Act, 2023, which intersects with personal data governance. X Corp's petition raises concerns about digital freedom, platform accountability, and the evolving regulatory frameworks in India.
Elon Musk vs Indian Government: Key Issues at Stake
The 2021 IT Rules, particularly Rule 3(1)(d) of Part II, outline intermediaries' obligations regarding ‘Content Takedowns’. Intermediaries must remove or disable access to unlawful content within 36 hours of receiving a court order or government notification. Notably, the rules do not require government takedown requests to be explicitly in writing, raising concerns about potential misuse.
X’s petition also focuses on the Sahyog Portal, a government-run platform that allows various agencies and state police to request content removal directly. X contends that failure to comply with such orders can expose intermediaries' officers to prosecution. This has sparked controversy, with platforms like Elon Musk’s X arguing that such provisions grant the government excessive control, potentially undermining free speech and fostering undue censorship.
The broader implications include geopolitical tensions, potential business risks for big tech companies, and significant effects on India's digital economy, user engagement, and platform governance. Balancing regulatory compliance with digital rights remains a crucial challenge in this evolving landscape.
The Global Context: Lessons from Other Jurisdictions
The ‘EU's Digital Services Act’ establishes a baseline 'notice and takedown' system. According to the Act, hosting providers, including online platforms, must enable third parties to notify them of illegal content, which they must promptly remove to retain their hosting defence. The DSA also mandates expedited removal processes for notifications from trusted flaggers, user suspension for those with frequent violations, and enhanced protections for minors. Additionally, hosting providers have to adhere to specific content removal obligations, including the elimination of terrorist content within one hour and the deployment of technology to detect known or new child sexual abuse material (CSAM) and remove it.
In contrast to the EU, the US First Amendment protects speech from state interference but does not extend to private entities. Dominant digital platforms, however, significantly influence discourse by moderating content, shaping narratives, and controlling advertising markets. This dual role creates tension as these platforms balance free speech, platform safety, and profitability.
India has adopted a model closer to the EU's approach, emphasizing content moderation to curb misinformation, false narratives, and harmful content. Drawing from the EU's framework, India could establish third-party notification mechanisms, enforce clear content takedown guidelines, and implement detection measures for harmful content like terrorist material and CSAM within defined timelines. This would balance content regulation with platform accountability while aligning with global best practices.
Key Concerns and Policy Debates
As the issue stands, the main concerns that arise are:
- The need for transparency in government takedown orders: the reasons behind them, a clear framework for when they are warranted, and guidelines for issuing them.
- The need for balancing digital freedom with national security and the concerns that arise out of it for tech companies. Essentially, the role platforms play in safeguarding the democratic values enshrined in the Constitution of India.
- The Karnataka High Court's ruling in this case has the potential to redefine the principles on which the intermediary guidelines function under Indian law.
Potential Outcomes and the Way Forward
While we await the Hon’ble Court’s directives and orders in response to the suit, and while the decision could favour either side or lead to a negotiated resolution, the broader takeaway is the necessity of collaborative policymaking that balances governmental oversight with platform accountability. This debate underscores the pressing need for a structured and transparent regulatory framework for content moderation. The case also highlights the importance of due process in content regulation and the need for legal clarity for tech companies operating in India. Ultimately, a consultative and principles-based approach will be key to ensuring a fair and open digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/elon-musks-x-sues-union-government-over-alleged-censorship-and-it-act-violations/article69352961.ece
- https://www.hindustantimes.com/india-news/elon-musk-s-x-sues-union-government-over-alleged-censorship-and-it-act-violations-101742463516588.html
- https://www.financialexpress.com/life/technology-explainer-why-has-x-accused-govt-of-censorship-3788648/
- https://thelawreporters.com/elon-musk-s-x-sues-indian-government-over-alleged-censorship-and-it-act-violations
- https://www.linklaters.com/en/insights/blogs/digilinks/2023/february/the-eu-digital-services-act---a-new-era-for-online-harms-and-intermediary-liability

Introduction
AI has transformed the way we look at advanced technologies. As the use of AI evolves, it also raises concerns about AI-based deepfake scams, in which scammers use AI technologies to create deepfake videos, images and audio to deceive people and commit crimes. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of a person known to the victim using AI-based deepfake technology. There is a need for awareness and vigilance to safeguard ourselves from such incidents.
Unveiling the Kerala Deepfake Video Call Scam
The man in Kerala received a WhatsApp video call from a person claiming to be a former colleague of his from Andhra Pradesh; in reality, the caller was a scammer. The scammer asked the Kerala man for Rs 40,000 via Google Pay and, to gain his trust, even mentioned some friends the two had in common. He claimed to be at the Dubai airport and to urgently need the money for his sister's medical emergency.
AI can analyse and process data such as facial images, videos and audio to create a realistic deepfake that closely resembles the real thing. In the Kerala deepfake video call scam, the scammer made a video call featuring a facial appearance and voice convincingly similar to those of the victim's colleague. Believing he was genuinely communicating with his colleague, the Kerala man transferred the money without hesitation. He then called his former colleague on the number saved in his contact list, and the colleague said he had not made any such call. The Kerala man realised he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish between authentic and fabricated audio, videos and images. Deepfake technology is capable of creating entirely fictional photos and videos from scratch. Audio can be deepfaked too, to create “voice clones” of anyone.
However, there are some red flags which can indicate the authenticity of the content:
- Video quality: Deepfake videos often have poor or compromised video quality and unusual blurring, which can call their genuineness into question.
- Looping videos: Deepfake videos often loop, freeze unusually, or repeat footage, indicating that the content might be fabricated.
- Verify separately: Whenever you receive a request such as one for financial help, verify the situation by contacting the person directly through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, giving the victim no time to think and pressuring them into a quick decision. So be vigilant and cautious when you receive a sudden emergency request that demands urgent financial support.
- Report suspicious activity: If you encounter such activity on your social media accounts or through such calls, report it to the platform or the relevant authority.
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combating AI-based cyber crimes. The Kerala man's case of falling victim to an AI-based deepfake video call and losing Rs 40,000 is an alarming reminder of the need to remain extra vigilant and cautious in the digital age. In the reported incident, the Kerala man received a call from a person appearing to be his former colleague, who was in fact a scammer tricking him with AI-based deepfake technology. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from falling victim to AI-based cyber crimes and from malicious scammers who exploit these technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.