#FactCheck- Old 2019 Video Falsely Shared as Iran Seizing US Ship in Hormuz
Executive Summary:
Amid the ongoing tensions in West Asia, a video is being widely circulated on social media with the claim that Iran has seized a US ship in the Strait of Hormuz. However, research by CyberPeace found that the claim is false. The video is from 2019 and is unrelated to the current situation. It actually shows Iran’s Islamic Revolutionary Guard Corps (IRGC) seizing a British-flagged tanker, the Stena Impero. The ongoing conflict involving the United States, Israel and Iran since late February has raised concerns over global energy supply. The Strait of Hormuz, located between Iran and Oman, is a key route for global oil and maritime trade. Rising tensions in the region have impacted this route, although Iran has stated that it has not been completely closed.
Claim:
Users on X (formerly Twitter) are sharing the video as breaking news, claiming that Iran has captured a US ship in the Strait of Hormuz. The posts suggest that the move is a direct warning to the United States.

Fact Check:
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. This led us to the same video posted on the X handle of Iran’s Press TV on July 20, 2019.
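Reverse image search services do not publish their internals, but matching near-duplicate keyframes is commonly explained through perceptual hashing: visually similar frames produce hashes that differ in only a few bits. The sketch below is a minimal, illustrative average-hash (aHash) in Python; the 8x8 "frames" are synthetic data invented for the example, not frames from the actual video.

```python
# Illustrative sketch of perceptual hashing, the idea behind matching
# re-uploaded copies of the same footage. Not any engine's real algorithm.

def average_hash(pixels):
    """Compute a 64-bit aHash from an 8x8 grayscale image (list of rows)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the mean.
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Two synthetic "keyframes": the second is the first with a slight uniform
# brightness shift, mimicking a re-encoded copy of the same video.
frame_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
frame_b = [[min(255, p + 3) for p in row] for row in frame_a]

print(hamming_distance(average_hash(frame_a), average_hash(frame_b)))
```

A near-duplicate frame yields a very small Hamming distance, while an unrelated frame yields a large one; that gap is what lets a search index surface the original 2019 upload.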
Link:
- https://x.com/PressTV/status/1152597789362262016?s=20

The caption of the post stated that the footage showed the moment when IRGC forces seized the British oil tanker Stena Impero in the Strait of Hormuz. Further, we found a July 2019 report by Al Jazeera that included visuals matching the viral video. According to the report, Iran’s IRGC had intercepted the British-flagged tanker on July 19, 2019, after which the footage was released.
https://www.aljazeera.com/news/2019/7/20/iran-releases-video-showing-capture-of-british-oil-tanker

Conclusion:
The viral claim is misleading. The video is not recent and does not show Iran capturing a US ship. It is from 2019 and depicts the seizure of the British tanker Stena Impero by Iran’s IRGC.

In an era defined by perpetual technological advancement, the hitherto uncharted territories of the human experience are progressively being illuminated by the luminous glow of innovation. The construct of privacy, once a straightforward concept involving personal secrets and solitude, has evolved into a complex web of data protection, consent, and digital rights. This notion of privacy, which often feels as though it elusively ebbs and flows like the ghost of a bygone epoch, is now confronted with a novel intruder – neurotechnology – which promises to redefine the very essence of individual sanctity.
Why Neuro Rights
At the forefront of this existential conversation lie ventures like Elon Musk's Neuralink. This company, which finds itself at the confluence of fantastical dreams and tangible reality, teases a future where the contents of our thoughts could be rendered as accessible as the words we speak. An existence where machines not only decipher our mental whispers but hold the potential to echo back, reshaping our cognitive landscapes. This startling innovation sets the stage for the emergence of 'neurorights' – a paradigm aimed at erecting a metaphorical firewall around the synapses and neurons that compose our innermost selves.
At institutions such as the University of California, Berkeley, researchers, under the aegis of cognitive scientists like Jack Gallant, are already drawing the map of once-inaccessible territories within the mind. Gallant's landmark study, which involved decoding the brain activity of volunteers as they absorbed visual stimuli, opened Pandora's box regarding the implications of mind-reading. The paper, published a decade ago, was an inchoate step toward understanding the narrative woven within the cerebral cortex. Although his work yielded only a rough sketch of the observed video content, it heralded an era where thought could be translated into observable media.
The Growth
This rapid acceleration of neuro-technological prowess has not gone unnoticed on the sociopolitical stage. In a pioneering spirit reminiscent of the robust legislative eagerness of early democracies, Chile boldly stepped into the global spotlight in 2021 by legislating neurorights. The Chilean senate's decision to constitutionalize these rights sent ripples the world over, signalling an acknowledgement that the evolution of brain-computer interfaces was advancing at a daunting pace. The initiative was spearheaded by visionaries like Guido Girardi, a former senator whose legislative foresight drew clear parallels between the disruptive advent of social media and the potential upheaval posed by emergent neurotechnology.
Pursuit of Regulation
Yet the pursuit of regulation in such an embryonic field is riddled with intellectual quandaries and ethical mazes. Advocates like Allan McCay articulate the delicate tightrope that policy-makers must traverse. The perils of premature regulation are as formidable as the risks of a delayed response – the former potentially stifling innovation, the latter risking a landscape where technological advances could outpace societal control, engendering a future fraught with unforeseen backlashes.
Such is the dichotomy embodied in the story of Ian Burkhart, whose life was irrevocably altered by the intervention of neurotechnology. Burkhart's experience, transitioning from quadriplegia to digital dexterity through sheer force of thought, epitomizes the utopic potential of neuronal interfaces. Yet, McCay issues a solemn reminder that with great power comes great potential for misuse, highlighting contentious ethical issues such as the potential for the criminal justice system to overextend its reach into the neural recesses of the human psyche.
Firmly ensconced within this brave new world, the quest for prudence is of paramount importance. McCay advocates for a dyadic approach, where privacy is vehemently protected and the workings of technology proffered with crystal-clear transparency. The clandestine machinations of AI and the danger of algorithmic bias necessitate a vigorous, ethical architecture to govern this new frontier.
As legal frameworks around the globe wrestle with the implications of neurotechnology, countries like India, with their burgeoning jurisprudence regarding privacy, offer a vantage point into the potential shape of forthcoming legislation. Jurists and technology lawyers, including Jaideep Reddy, acknowledge ongoing protections yet underscore the imperativeness of continued discourse to gauge the adequacy of current laws in this nascent arena.
Conclusion
The dialogue surrounding neurorights emerges, not merely as another thread in our social fabric, but as a tapestry unto itself – intricately woven with the threads of autonomy, liberty, and privacy. As we hover at the edge of tomorrow, these conversations crystallize into an imperative collective undertaking, promising to define the sanctity of cognitive liberty. The issue at hand is nothing less than a societal reckoning with the final frontier – the safeguarding of the privacy of our thoughts.

Introduction
" सर्वे भवन्तु सुखिनः, सर्वे सन्तु निरामयाः " May all be happy, may all be free from suffering. This timeless invocation reflects a vision of collective well-being, where progress is meaningful only when shared, and protection extends to every individual in society. This very philosophy lies at the heart of Corporate Social Responsibility, which seeks to ensure that growth is not isolated or unequal, but inclusive, ethical, and mindful of the broader social good.
At its core, Corporate Social Responsibility is not merely a statutory obligation; it is a reflection of a deeper ethical commitment, an acknowledgement that growth must carry with it a sense of duty towards society. In many ways, CSR embodies the idea that progress without responsibility is incomplete, and that corporations, as key actors shaping modern life, must help safeguard the very communities they engage with.
Reframing Digital Literacy Through Cyber Safety in CSR Frameworks
In India, this moral vision has been given a legal structure under the Companies Act, 2013, which mandates certain classes of companies to allocate a portion of their profits towards socially beneficial activities. Section 135 of the Act requires companies meeting specified financial thresholds to undertake CSR initiatives, guided by principles of inclusivity, sustainability, and social welfare. The underlying values are clear: CSR is intended not as charity, but as a strategic and accountable contribution to societal development.
Schedule VII of the Act further outlines the broad areas that qualify as CSR, including “Education and Digital Literacy”, gender equality, rural development, and measures for reducing inequalities. Within this framework, promoting “digital literacy” has increasingly been recognised as a legitimate and necessary CSR activity, especially in the context of a rapidly digitising society like India.
However, the current understanding of digital literacy within CSR remains incomplete. It often emphasises access and usage, teaching individuals how to navigate digital platforms, use devices, and engage with online services. What remains insufficiently addressed is the question of safety. In an environment where cyber fraud, data breaches, online harassment, and identity theft are becoming increasingly common, digital literacy without cyber awareness risks becoming a partial and potentially harmful intervention.
Embedding cyber awareness and capacity building within ‘digital literacy’ in explicit form is therefore not optional; it is essential. This includes equipping individuals with the ability to recognise online threats, protect personal data, understand digital consent, and respond effectively to cyber risks. It also requires recognising that vulnerable populations, including first-time internet users, women, and marginalised communities, often face disproportionate exposure to cyber harm.
It is pertinent to note that cybersecurity awareness training is relevant to CSR but is not yet consistently implemented as an explicit CSR activity. It is often included indirectly within digital literacy programs, highlighting the need for a more structured, progressive and integrated approach.
Given this reality, there is a strong case for explicitly recognising cyber awareness as a distinct and integral component of CSR activities, rather than treating it as an implicit subset of digital literacy. Doing so would not only align CSR with contemporary societal risks but also ensure that corporate interventions move beyond enabling access to actively ensuring safety.
In a digital society, empowerment without protection is incomplete. If CSR is to truly reflect its foundational values, it must evolve to address not just the opportunities of the digital age, but also its risks.
Why Cyber Safety Must Be Central to CSR
Digital ecosystems, which once operated as secondary systems, now function as essential infrastructure supporting government operations, banking, education, and social communication. This environment has vulnerabilities that create direct dangers for people in society. The elderly, first-time internet users, and rural communities face higher cyber threat risks because they often lack both knowledge of responsible use and protective resources. CSR initiatives that provide these groups with digital access, along with the skills to handle its risks, will deliver greater benefit to their safety. Organisations should embed cyber safety training in their CSR programs: doing so creates value while fulfilling their ethical obligations. In keeping with the "do no harm" principle, empowerment is complete only when people are also protected from potential dangers.
Key Components of CyberPeace-Aligned Digital Literacy
To make CSR initiatives more effective and future-ready, organisations should incorporate the following elements into their digital literacy programs:
- Cyber Awareness and Risk Recognition: Training participants to recognise common security threats, including phishing attacks, scams, deepfakes, and misinformation.
- Data Protection and Privacy Literacy: Teaching users how to protect their personal information, give informed consent, and manage their online presence.
- Responsible Digital Behaviour: Encouraging ethical, respectful, and accountable use of the internet, along with an understanding of the legal consequences of online actions.
- Incident Response and Reporting Mechanisms: Familiarising users with cyber incident response, including reporting channels and available support resources.
- Inclusion-Focused Design: Tailoring programs to the particular vulnerabilities of different demographic groups while maintaining accessibility and programmatic relevance.
Policy and Institutional Alignment
The integration of cyber safety into corporate social responsibility allows organisations to contribute to national objectives, including:
- Strengthening digital trust and resilience
- Supporting safe digital inclusion initiatives
- Complementing the efforts of institutions working on cybersecurity awareness and capacity building
A structured approach would involve three specific steps:
- Partnering with cybersecurity organisations and civil society
- Developing standardised cyber awareness modules
- Measuring impact through behavioural change indicators rather than access metrics alone
The Way Forward
Digital-era Corporate Social Responsibility must move beyond providing access to digital resources toward ensuring that users can engage with online platforms securely. Digital literacy, in turn, needs to be understood not merely as a technical ability but as a social competency encompassing safety, responsibility and resilience.
Companies must recognise that their digital transformation efforts carry risks they are obliged to address. Embedding cyber safety within corporate social responsibility frameworks will enable organisations to build a secure, trustworthy digital environment that includes all users.
Conclusion
Corporate social responsibility must fulfil its core mission of creating societal benefit through inclusive practices that address both the opportunities of the digital age and its associated security threats. Digital literacy requires a new framework that integrates cyber safety practices with its existing educational content.
Such a framework ensures that people gain the knowledge and skills to use digital resources securely when they engage with online content. Achieving shared community prosperity requires an approach that benefits every person through social advancement and the protection of their rights.
References
- https://upload.indiacode.nic.in/schedulefile?aid=AC_CEN_22_29_00008_201318_1517807327856&rid=79
- https://www.allresearchjournal.com/archives/2025/vol11issue4/PartF/11-5-60-511.pdf
- https://www.unesco.org/en/dtc-finance-toolkit-factsheets/corporate-social-responsibility-csr
- https://www.investopedia.com/terms/c/corp-social-responsibility.asp
- https://digitalmarketinginstitute.com/blog/corporate-16-brands-doing-corporate-social-responsibility-successfully
- https://www.imd.org/blog/sustainability/csr-strategy/

Introduction
In today’s digital world, where everything is related to data, the more data you own, the more control and influence you have over the market, which is why companies are looking for ways to use data to improve their business. But at the same time, they have to make sure they are protecting people’s privacy. It is very tricky to strike a balance between the two. Imagine you are trying to bake a cake where you need to use all the ingredients to make it taste great, but you also have to make sure no one can tell what’s in it. That’s kind of what companies are dealing with when it comes to data. Here, ‘pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
Its legal treatment varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects. It protects personal data by reducing the risk of identifying individuals during data processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance emphasising the importance of defining the “pseudonymisation domain”. This defines who is prevented from attributing data to specific individuals and ensures that technical and organisational measures are in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. The Act, in Section 2(t), defines personal data broadly as “any data about an individual who is identifiable by or in relation to such data,” suggesting that pseudonymised information, being reversible, would continue to require compliance with data protection obligations.
Further, the DPDP Act, 2023 also includes principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here because it is a recognised technical safeguard, which means companies can use it as one of the methods in their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk that needs to be mitigated. In practice, organisations may use pseudonymisation in combination with other safeguards to strengthen overall compliance and security.
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules; it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can utilise this data for research or other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal identifiers such as names or IDs with artificial values or codes, reducing the risk of accidental privacy breaches.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: Pseudonymised data is easier to share with partners or researchers because it protects privacy while remaining useful.
However, implementing pseudonymisation is not straightforward. Companies have to deal with tricky technical issues, such as choosing the right methods (for example, encryption or tokenisation) and managing security keys safely. They have to implement strong policies to stop anyone from figuring out who the data belongs to. This can get expensive and complicated, especially when dealing with large amounts of data, and it often requires expert help and regular upkeep.
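To make the tokenisation-plus-key-management idea concrete, here is a minimal Python sketch. It is illustrative only, not a compliance tool: HMAC-based tokenisation is one common technique among several, and the class name, the `Pseudonymiser` API, and the sample name "Asha Rao" are all invented for this example. The key and the token-to-identity mapping stand in for the "additional information" that GDPR Article 4(5) requires be kept separately under its own safeguards.

```python
# Minimal pseudonymisation sketch (illustrative, not a compliance tool):
# identifiers are replaced with HMAC-based tokens; the secret key and the
# reverse mapping are the "additional information" that must be stored
# separately from the pseudonymised dataset, under their own controls.
import hmac
import hashlib
import secrets

class Pseudonymiser:
    def __init__(self, key: bytes):
        # In practice the key would live in a key-management system
        # with restricted access, never alongside the dataset.
        self._key = key
        self._reverse = {}  # token -> original identifier (kept separately)

    def pseudonymise(self, identifier: str) -> str:
        # Keyed hashing is deterministic, so the same person always gets
        # the same token and records can still be joined for analysis.
        token = hmac.new(self._key, identifier.encode(),
                         hashlib.sha256).hexdigest()[:16]
        self._reverse[token] = identifier
        return token

    def reidentify(self, token: str) -> str:
        # Only the controller holding the mapping can reverse a token.
        return self._reverse[token]

key = secrets.token_bytes(32)
p = Pseudonymiser(key)
record = {"customer": p.pseudonymise("Asha Rao"), "purchase": "book", "amount": 499}
# The record is analysable without exposing the name, yet remains
# reversible for the key holder, so it is still personal data in law.
assert p.pseudonymise("Asha Rao") == record["customer"]
assert p.reidentify(record["customer"]) == "Asha Rao"
```

The design choice worth noting is the split: losing access to the key and mapping turns the dataset into something much closer to anonymised data, which is exactly why regulators focus on how that additional information is stored.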
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and maintaining the utility of the data. To get this right, companies need to consider several factors, such as why they are using the data, the likely sophistication of potential attackers, and the type of data involved.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and restricted data use, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like GDPR and India’s DPDP Act. For companies, it is not only a matter of regulatory adherence; it also builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915