#FactCheck: Old clip of Greenland tsunami falsely shared as tsunami in Japan
Executive Summary:
A viral video depicting a powerful tsunami wave destroying coastal infrastructure is being falsely associated with the recent tsunami warning in Japan following an earthquake in Russia. Fact-checking through reverse image search reveals that the footage is from a 2017 tsunami in Greenland, triggered by a massive landslide in the Karrat Fjord.

Claim:
A viral video circulating on social media shows a massive tsunami wave crashing into the coastline, destroying boats and surrounding infrastructure. The footage is being falsely linked to the recent tsunami warning issued in Japan following an earthquake in Russia. However, initial verification suggests that the video is unrelated to the current event and may be from a previous incident.

Fact Check:
The video, which shows water forcefully inundating a coastal area, is neither recent nor related to the current tsunami event in Japan. A reverse image search conducted using keyframes extracted from the viral footage confirms that it is being misrepresented. The video actually originates from a tsunami that struck Greenland in 2017. The original footage is available on YouTube and has no connection to the recent earthquake-induced tsunami warning in Japan.

The American Geophysical Union (AGU) confirmed in a blog post on June 19, 2017, that the deadly Greenland tsunami of June 17, 2017, was caused by a massive landslide. The landslide dumped millions of cubic meters of rock into the Karrat Fjord, generating a wave more than 90 meters high that devastated the village of Nuugaatsiaq. The Guardian also reported on the event.

Conclusion:
Videos purporting to depict the effects of a recent tsunami in Japan are deceptive and repurposed from unrelated incidents. Users of social media are urged to confirm the legitimacy of such content before sharing it, particularly during natural disasters when false information can exacerbate public anxiety and confusion.
- Claim: Viral video shows a tsunami striking Japan after the recent earthquake in Russia
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024 and came into force 20 days later, on 1 August 2024, setting harmonized rules for Artificial Intelligence (AI) across all 27 EU Member States. It amends key regulations and directives to ensure a robust framework for AI technologies. The Act, which had been in development for over two years, takes a phased approach to implementation: various legal provisions start to apply at different deadlines between now and 2 August 2026, when enforcement of the majority of its provisions will commence. The law prohibits certain uses of AI tools that threaten citizens' rights, including biometric categorization, untargeted scraping of faces, and social scoring systems; systems that attempt to read emotions are banned in the workplace and schools. It also prohibits the use of predictive policing tools in some instances.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, the companies in the EU providing, distributing, importing, and using AI systems and GPAI models, are subject to fines of up to EUR 35 million or seven per cent of the total worldwide annual turnover, whichever is higher.
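The "whichever is higher" rule above can be illustrated with a short calculation. This is an informal sketch of the arithmetic only, not legal advice; the function name and example turnover figures are our own.

```python
def applicable_fine_eur(annual_turnover_eur: float) -> float:
    """Top fine tier under the EU AI Act: EUR 35 million or 7% of
    total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 200 million turnover: 7% is EUR 14 million,
# so the flat EUR 35 million applies.
print(applicable_fine_eur(200_000_000))    # 35000000.0

# A company with EUR 1 billion turnover: 7% is EUR 70 million,
# which exceeds the flat amount.
print(applicable_fine_eur(1_000_000_000))  # 70000000.0
```

Note that for any turnover below EUR 500 million, the flat EUR 35 million floor is the binding amount.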
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk uses such as social scoring systems and manipulative AI. The regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, such as chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, like AI-enabled video games and spam filters, though this may change as generative AI advances. The bulk of the obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers where the high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. It will cover but not necessarily be limited to the obligations, particularly the relevant information to include in technical documentation for authorities and downstream providers, identification of the type and nature of systemic risks and their sources, and the modalities of risk management accounting for specific challenges in addressing risks due to the way they may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers, and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner, for instance the ban on AI systems posing unacceptable risks will apply six months after the entry into force. The Codes of Practice will apply nine months after entry into force. Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force. High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force. The expected timeline for the same is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems takes effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
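The phased deadlines above are all fixed offsets from the entry-into-force date, so they can be derived mechanically. The sketch below assumes the 1 August 2024 start date stated earlier; the milestone labels are our own shorthand.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months. All offsets here land
    on the 1st of a month, so no day clamping is needed."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Months after entry into force for each phase of the Act
milestones = {
    "Prohibitions (Chapters I & II)":          6,   # Feb 2025
    "GPAI rules, governance, penalties":       12,  # Aug 2025
    "Bulk of the Act applies":                 24,  # Aug 2026
    "Article 6(1) high-risk obligations":      36,  # Aug 2027
}
for label, months in milestones.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {label}")
```

Running this reproduces the February 2025 through August 2027 dates listed above.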
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The Act also has extraterritorial application: it can apply to companies not established in the EU if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models, and adopts a risk-based approach to governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: €35 million, or 7 percent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, it addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation can be seen as strengthening the EU's position as a global leader in AI innovation and developing regulatory frameworks for emerging technologies. It sets a global benchmark for regulating AI. The companies to which the act applies will need to make sure their practices align with the same. The act may inspire other nations to develop their own legislation contributing to global AI governance. The world of AI is complex and challenging, the implementation of regulatory checks, and compliance by the concerned companies, all pose a conundrum. However, in the end, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation. Flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important that we find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide
Introduction
On the precipice of a new domain of existence, the metaverse emerges as a digital cosmos, an expanse where the horizon is not sky, but a limitless scope for innovation and imagination. It is a sophisticated fabric woven from the threads of social interaction, leisure, and an accelerated pace of technological progression. This new reality, a virtual landscape stretching beyond the mundane encumbrances of terrestrial life, heralds an evolutionary leap where the laws of physics yield to the boundless potential inherent in our creativity. Yet, the dawn of such a frontier does not escape the spectre of an age-old adversary—financial crime—the shadow that grows in tandem with newfound opportunity, seeping into the metaverse, where crypto-assets are no longer just an alternative but the currency du jour, dazzling beacons for both legitimate pioneers and shades of illicit intent.
The metaverse, by virtue of its design, is a canvas for the digital repaint of society—a three-dimensional realm where the lines between immersive experiences and entertainment blur, intertwining with surreal intimacy within this virtual microcosm. Donning headsets like armor against the banal, individuals become avatars; digital proxies that acquire the ability to move, speak, and perform an array of actions with an ease unattainable in the physical world. Within this alternative reality, users navigate digital topographies, with experiences ranging from shopping in pixelated arcades to collaborating in virtual offices; from witnessing concerts that defy sensory limitations to constructing abodes and palaces from mere codes and clicks—an act of creation no longer beholden to physicality but to the breadth of one's ingenuity.
The Crypto Assets
The lifeblood of this virtual economy pulsates through crypto-assets. These digital tokens represent value or rights held on distributed ledgers—a technology like blockchain, which serves as both a vault and a transparent tapestry, chronicling the pathways of each digital asset. To hop onto the carousel of this economy requires a digital wallet—a storeroom and a gateway for acquisition and trade of these virtual valuables. Cryptocurrencies, along with NFTs (Non-fungible Tokens), have accelerated from obscure digital curios to precious artifacts. According to blockchain analytics firm Elliptic, an astonishing figure surpassing US$100 million worth of NFTs was stolen between July 2021 and July 2022. This rampant theft underlines the captivating allure of these virtual certificates, which do not just capture art, music, and gaming, but embody their very soul.
Yet, as the metaverse burgeons, so does the complexity and diversity of financial transgressions. From phishing to sophisticated fraud schemes, criminals craft insidious simulacrums of legitimate havens, aiming to drain the crypto-assets of the unwary. In the preceding year, a daunting figure rose to prominence—the vanishing of US$14 billion worth of crypto-assets, lost to the abyss of deception and duplicity. Hence, social engineering emerges from the shadows, a sort of digital chicanery that preys not upon weaknesses of the system, but upon the psychological vulnerabilities of its users—scammers adorned in the guise of authenticity, extracting trust and assets with Machiavellian precision.
The New Wave of Fincrimes
Extending their tentacles further, perpetrators of cybercrime exploit code vulnerabilities, engage in wash trading, obscuring the trails of money laundering, meander through sanctions evasion, and even dare to fund activities that send ripples of terror across the physical and virtual divide. The intricacies of smart contracts and the decentralized nature of these worlds, designed to be bastions of innovation, morph into paths paved for misuse and exploitation. The openness of blockchain transactions, the transparency that should act as a deterrent, becomes a paradox, a double-edged sword for the law enforcement agencies tasked with delineating the networks of faceless adversaries.
Addressing financial crime in the metaverse is a Herculean labour, requiring an orchestra of efforts—harmonious, synchronised—from individual users to mammoth corporations, from astute policymakers to vigilant law enforcement bodies. Users must furnish themselves with critical awareness, fortifying their minds against the siren calls that beckon impetuous decisions, spurred by the anxiety of falling behind. Enterprises, the architects and custodians of this digital realm, are impelled to collaborate with security specialists, to probe their constructs for weak seams, and to reinforce their bulwarks against the sieges of cyber onslaughts. Policymakers venture onto the tightrope walk, balancing the impetus for innovation against the gravitas of robust safeguards—a conundrum played out on the global stage, as epitomised by the European Union's strides to forge cohesive frameworks to safeguard this new vessel of human endeavour.
The Austrian Example
Consider the case of Austria, where the tapestry of laws entwining crypto-assets spans a gamut of criminal offences, from data breaches to the complex webs of money laundering and the financing of dark enterprises. Users and corporations alike must become cartographers of local legislation, charting their ventures and vigilances within the volatile seas of the metaverse.
Upon the sands of this virtual frontier, we must not forget that the metaverse is more than a hive of bits and bandwidth. It crystallises our collective dreams, echoes our unspoken fears, and reflects the range of our ambitions and failings. It stands as a citadel where the ever-evolving quest for progress should never stray from the compass of ethical pursuit. The cross-pollination of best practices, and the solidarity of international collaboration, are not simply tactics—they are imperatives engraved with the moral codes of stewardship, guiding us to preserve the unblemished spirit of the metaverse.
Conclusion
The clarion call of the metaverse invites us to venture into its boundless expanse, to savour its gifts of connection and innovation. Yet, on this odyssey through the pixelated constellations, we harness vigilance as our star chart, mindful of the mirage of morality that can obfuscate and lead astray. In our collective pursuit to curtail financial crime, we deploy our most formidable resource—our unity—conjuring a bastion for human ingenuity and integrity. In this, we ensure that the metaverse remains a beacon of awe, safeguarded against the shadows of transgression, and celebrated as a testament to our shared aspiration to venture beyond the realm of the possible, into the extraordinary.
References
- https://www.wolftheiss.com/insights/financial-crime-in-the-metaverse-is-real/
- https://gnet-research.org/2023/08/16/meta-terror-the-threats-and-challenges-of-the-metaverse/
- https://shuftipro.com/blog/the-rising-concern-of-financial-crimes-in-the-metaverse-aml-screening-as-a-solution/

Introduction
As India moves full steam ahead towards a trillion-dollar digital economy, how user data is gathered, processed and safeguarded is under the spotlight. One of the most pervasive but least known technologies used to gather user data is the cookie. Cookies are inserted into every website and application to improve functionality, measure usage and customize content. But they also present enormous privacy threats, particularly when used without explicit user approval.
In 2023, India passed the Digital Personal Data Protection Act (DPDP) to give strong legal protection to data privacy. Though the Act does not refer to cookies by name, its language clearly covers any technology that gathers or processes personal information, placing cookie regulation at the centre of digital compliance in India. This blog covers what cookies are, how international legislation such as the GDPR has addressed them, and how India's DPDP Act will regulate their use.
What Are Cookies and Why Do They Matter?
Cookies are simply small pieces of data that a website stores in the browser. They were originally designed to help websites remember useful information about users, such as your login session or what is in your shopping cart. Netscape initially built them in 1994 to make web surfing more efficient.
Cookies exist in various types. Session cookies are volatile and are deleted when the browser is shut down, whereas persistent cookies are stored on the device to monitor users over a period of time. First-party cookies are made by the site one is visiting, while third-party cookies are from other domains, usually utilised for advertisements or analytics. Special cookies, such as secure cookies, zombie cookies and tracking cookies, differ in intent and danger. They gather information such as IP addresses, device IDs and browsing history information associated with a person, thus making it personal data per the majority of data protection regulations.
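The distinction between session and persistent cookies, and the attributes that shape a cookie's risk profile, can be seen directly in the `Set-Cookie` headers a server emits. The sketch below uses Python's standard `http.cookies` module purely for illustration; the cookie names and values are invented.

```python
from http.cookies import SimpleCookie

jar = SimpleCookie()

# A session cookie: no Expires/Max-Age, so the browser discards it
# when it closes. HttpOnly keeps it out of reach of page JavaScript.
jar["session_id"] = "abc123"
jar["session_id"]["httponly"] = True

# A persistent cookie: Max-Age keeps it on the device for 30 days.
# Secure restricts it to HTTPS; SameSite limits cross-site sending.
jar["pref"] = "dark-mode"
jar["pref"]["max-age"] = 60 * 60 * 24 * 30
jar["pref"]["secure"] = True
jar["pref"]["samesite"] = "Lax"

# Each morsel renders as the value of a Set-Cookie response header.
for morsel in jar.values():
    print(morsel.OutputString())
```

Third-party cookies differ not in these attributes but in the domain that sets them, which is why browsers can target them for blocking without affecting first-party session cookies.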
A Brief Overview of the GDPR and Cookie Policy
The GDPR regulates how personal data can be processed in general. However, if a cookie collects personal data (like IP addresses or identifiers that can track a person), then GDPR applies as well, because it sets the rules on how that personal data may be processed, what lawful bases are required, and what rights the user has.
The ePrivacy Directive (also called the “Cookie Law”) specifically regulates how cookies and similar technologies can be used. Article 5(3) of the ePrivacy Directive says that storing or accessing information (such as cookies) on a user’s device requires prior, informed consent, unless the cookie is strictly necessary for providing the service requested by the user.
In the seminal Planet49 decision, the Court of Justice of the European Union held that pre-ticked boxes do not represent valid consent. Another prominent enforcement saw Amazon fined €35 million by France's CNIL for using tracking cookies without user consent.
Cookies and India’s Digital Personal Data Protection Act (DPDP), 2023
India's Digital Personal Data Protection Act, 2023 does not refer to cookies specifically, but its provisions necessarily come into play when cookies harvest personal data like user activity, IP addresses, or device data. Under the DPDP Act, personal data is to be processed for legitimate purposes with the individual's consent. The consent has to be free, informed, clear and unambiguous, and individuals have to be informed of what data is collected and how it will be processed. The Act also forbids behavioural monitoring and targeted advertising directed at children.
The Ministry of Electronics and IT released the Business Requirements Document for Consent Management Systems (BRDCMS) in June 2025. Although it is not binding by law, it provides operational advice on cookie consent. It recommends that websites use cookie banners with "Accept," "Reject," and "Customize" choices. Users must be able to withdraw or change their consent at any moment. Multi-language handling and automatic expiry of cookie preferences are also suggested to suit accessibility and privacy requirements.
The DPDP Act and the BRDCMS together create a robust user-rights model, even in the absence of a special cookie law.
What Should Indian Websites Do?
For the purposes of staying compliant, Indian websites and online platforms need to act promptly to harmonise their use of cookies with DPDP principles. This begins with a transparent and simple cookie banner providing users with an opportunity to accept or decline non-essential cookies. Consent needs to be meaningful; coercive tactics such as cookie walls must not be employed. Websites need to classify cookies (e.g., necessary, analytics and ads) and describe each category's function in plain terms under the privacy policy. Users must be given the option to modify cookie settings anytime using a Consent Management Platform (CMP). Monitoring children or their behavioural information must be strictly off-limits.
These measures are not only about legal compliance; they are about ethical data stewardship and building user trust.
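The accept/reject/customize flow and withdrawal right described above can be sketched as a minimal consent store. This is an illustrative sketch only, not a production Consent Management Platform: the category list, 180-day validity period, and all names are our own assumptions, loosely modelled on the BRDCMS recommendations.

```python
from datetime import datetime, timedelta

# Cookie categories; only "necessary" may be used without consent.
CATEGORIES = ("necessary", "analytics", "ads")

class ConsentManager:
    """Minimal consent store mirroring the banner choices: accept
    all, reject all, or customize per category. Records expire so
    preferences are re-asked periodically."""

    def __init__(self, validity_days: int = 180):
        self.validity = timedelta(days=validity_days)
        self.records: dict[str, tuple[datetime, dict[str, bool]]] = {}

    def record(self, user_id: str, choices: dict[str, bool]) -> None:
        # Necessary cookies are always allowed; everything else
        # defaults to rejected unless explicitly accepted.
        granted = {c: choices.get(c, False) for c in CATEGORIES}
        granted["necessary"] = True
        self.records[user_id] = (datetime.now(), granted)

    def accept_all(self, user_id: str) -> None:
        self.record(user_id, {c: True for c in CATEGORIES})

    def reject_all(self, user_id: str) -> None:
        self.record(user_id, {})

    def withdraw(self, user_id: str) -> None:
        # Consent must be revocable at any moment.
        self.records.pop(user_id, None)

    def allowed(self, user_id: str, category: str) -> bool:
        entry = self.records.get(user_id)
        if entry is None:
            return category == "necessary"   # no consent on file
        when, granted = entry
        if datetime.now() - when > self.validity:
            return category == "necessary"   # consent expired
        return granted.get(category, False)

cm = ConsentManager()
cm.record("u1", {"analytics": True})   # "Customize": analytics only
print(cm.allowed("u1", "analytics"))   # True
print(cm.allowed("u1", "ads"))         # False
cm.withdraw("u1")                      # withdrawal at any time
print(cm.allowed("u1", "analytics"))   # False
```

The key design point is that absence of a record, expiry of a record, and withdrawal all collapse to the same safe default: only strictly necessary cookies run.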
What Should Users Do?
Cookies need to be understood and controlled by users to maintain online personal privacy. Begin by reading cookie notices thoroughly and declining unnecessary cookies, particularly those associated with tracking or advertising. The majority of browsers today support blocking third-party cookies altogether or deleting them periodically.
It is also recommended to check and modify privacy settings on websites and mobile applications. It is possible to minimise surveillance with the use of browser add-ons such as ad blockers or privacy extensions. Users are also recommended not to blindly accept "accept all" in cookie notices and instead choose "customise" or "reject" where not necessary for their use.
Finally, keeping abreast of data rights under Indian law, such as the right to withdraw consent or to have data deleted, will enable people to reclaim control over their online presence.
Conclusion
Cookies are a fundamental component of the modern web, but they raise significant concerns about individual privacy. India's DPDP Act, 2023, though not explicitly referring to cookies, contains an effective legal framework that regulates any data collection activity involving personal data, including those facilitated by cookies.
As India continues to make progress towards comprehensive rulemaking and regulation, companies need to implement privacy-first practices today. And so must the users, in an active role in their own digital lives. Collectively, compliance, transparency and awareness can build a more secure and ethical internet ecosystem where privacy is prioritised by design.
References
- https://prsindia.org/billtrack/digital-personal-data-protection-bill-2023
- https://gdpr-info.eu/
- https://d38ibwa0xdgwxx.cloudfront.net/create-edition/7c2e2271-6ddd-4161-a46c-c53b8609c09d.pdf
- https://oag.ca.gov/privacy/ccpa
- https://www.barandbench.com/columns/cookie-management-under-the-digital-personal-data-protection-act-2023#:~:text=The%20Business%20Requirements%20Document%20for,the%20DPDP%20Act%20and%20Rules.
- https://samistilegal.in/cookies-meaning-legal-regulations-and-implications/#
- https://secureprivacy.ai/blog/india-digital-personal-data-protection-act-dpdpa-cookie-consent-requirements
- https://law.asia/cookie-use-india/
- https://www.cookielawinfo.com/major-gdpr-fines-2020-2021/#:~:text=4.,French%20websites%20could%20refuse%20cookies.