#FactCheck: Fake Claim on Delhi Authority Culling Dogs After Supreme Court Stray Dog Ban Directive 11 Aug 2025
Executive Summary:
A viral claim alleges that following the Supreme Court of India’s August 11, 2025 order on relocating stray dogs, authorities in Delhi NCR have begun mass culling. However, verification reveals the claim to be false and misleading. A reverse image search of the viral video traced it to older posts from outside India, probably linked to Haiti or Vietnam, as indicated by the use of Haitian Creole and Vietnamese in the respective posts. While the exact location cannot be independently verified, it is confirmed that the video is not from Delhi NCR and has no connection to the Supreme Court’s directive. Therefore, the claim lacks authenticity and is misleading.
Claim:
There have been several claims circulating after the Supreme Court of India, on 11th August 2025, ordered the relocation of stray dogs to shelters. The primary claim suggests that authorities, following the order, have begun mass killing or culling of stray dogs, particularly in areas like Delhi and the National Capital Region. This narrative intensified after several videos, purporting to show dead or mistreated dogs allegedly linked to the Supreme Court’s directive, began circulating online.

Fact Check:
After conducting a reverse image search using a keyframe from the viral video, we found similar videos circulating on Facebook. The language used in one of the posts appears to be Haitian Creole (Kreyòl Ayisyen), which is primarily spoken in Haiti. Another similar video was also found on Facebook, where the language used is Vietnamese, suggesting that the post associates the incident with Vietnam.
However, it is important to note that while these posts point towards different locations, the exact origin of the video cannot be independently verified. What can be established with certainty is that the video is not from Delhi NCR, India, as is being claimed. Therefore, the viral claim is misleading and lacks authenticity.
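For readers curious about the mechanics, below is a minimal sketch of how a keyframe can be pulled from a clip so it can be uploaded to a reverse image search engine. It assumes the OpenCV library (opencv-python) is available; the file name and sampling interval are illustrative assumptions, not details of our actual workflow.

```python
# Illustrative sketch: save periodic keyframes from a video so they can be
# uploaded to a reverse image search engine. Assumes opencv-python is installed;
# the file name used below is hypothetical.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: int = 2) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the saved file names."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, index = [], 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if index % step == 0:
            name = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    capture.release()
    return saved

# Usage (hypothetical file name):
# frames = extract_keyframes("viral_dog_video.mp4")
# Each saved frame can then be uploaded to a reverse image search tool.
```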


Conclusion:
The viral claim linking the Supreme Court’s August 11, 2025 order on stray dogs to mass culling in Delhi NCR is false and misleading. A reverse image search indicates the video originated outside India, with evidence of Haitian Creole and Vietnamese captions on earlier posts. While the exact source remains unverified, it is clear the video is not from Delhi NCR and has no relation to the Court’s directive. Hence, the claim lacks credibility and authenticity.
Claim: Delhi authorities began culling dogs after the Supreme Court’s stray dog directive of 11th August 2025
Claimed On: Social Media
Fact Check: False and Misleading
Related Blogs
On 6 June 2025, the EU Council officially adopted the revised Cybersecurity Blueprint, marking a significant evolution from the 2017 guidance. This framework, formalised through Council Recommendation COM(2025) 66 final, responds to a transformed threat environment and reflects new legal milestones like the NIS2 Directive (Network and Information Security Directive) and the Cyber Solidarity Act.
From Fragmented Response to Cohesive Strategy
Between 2017 and now, EU member states have built various systems to manage cyber incidents. Still, real-world events and exercises highlighted critical gaps: uncoordinated escalation procedures, inconsistent terminology, and siloed information flows. The updated Blueprint addresses these issues by focusing on a harmonised operational architecture for the EU. It defines a clear crisis lifecycle with five stages: Detection, Analysis, Escalation, Response, and Recovery. Each stage is supported by common communication protocols, decision-making processes, and defined roles. Consistency is key: standardised terminology, along with a broad scope of application, eases cross-border collaboration and empowers coherent response efforts.
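As a purely illustrative sketch (and not an official EU data model), the five-stage lifecycle can be pictured as an ordered state machine in which an incident moves forward through defined stages; the transition logic below is an assumption made for illustration only.

```python
# Illustrative only: a toy model of the Blueprint's five-stage crisis lifecycle.
# The stage names come from the Blueprint; the forward-only transition rule is
# an assumption for illustration, not an official EU specification.
from enum import IntEnum

class CrisisStage(IntEnum):
    DETECTION = 1
    ANALYSIS = 2
    ESCALATION = 3
    RESPONSE = 4
    RECOVERY = 5

def advance(current: CrisisStage) -> CrisisStage:
    """Move an incident to the next stage, stopping at RECOVERY."""
    return CrisisStage(min(current + 1, CrisisStage.RECOVERY))

# Example: an incident detected at national level progresses stage by stage.
stage = CrisisStage.DETECTION
while stage != CrisisStage.RECOVERY:
    stage = advance(stage)
    print(stage.name)  # ANALYSIS, ESCALATION, RESPONSE, RECOVERY
```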
Legal Foundations: NIS2, ENISA & EU‑CyCLONe
Several core pillars of EU cybersecurity directly underpin the Blueprint:
- ENISA – The European Union Agency for Cybersecurity continues to play a central role. It supports the operations of the CSIRTs Network, supports the coordination of EU‑CyCLONe (the European Cyber Crisis Liaison Organisation Network), conducts simulation exercises, and provides training on incident management.
- NIS2 Directive – Particularly through Article 16, NIS2 builds on the original NIS Directive. It mandates operators of critical infrastructure and essential services to implement appropriate security measures and report incidents to the relevant authorities. Compared to NIS, NIS2 expands the EU-wide security requirements and the scope of covered organisations and sectors to improve the security of supply chains, simplify reporting obligations, and enforce more stringent measures and sanctions throughout Europe. It also formally establishes the EU‑CyCLONe network, the crisis liaison mechanism bridging technical teams from member states.
These modern tools, integrated with legal backing, ensure the Blueprint isn’t just theoretical; it’s operationally enforceable.
What’s Inside the Blueprint?
The 2025 Blueprint enhances several critical areas:
- Clear Escalation Triggers – It spells out when a national cyber incident merits EU-level attention, especially incidents affecting critical infrastructure across borders.
- Civilian–Military Exchange – The Blueprint encourages structured information sharing with defence institutions and NATO, recognising that cyber incidents often have geopolitical implications.
- Recovery & Lessons Learned – A dedicated chapter ensures systematic post-incident reviews and shared learning among member states.
Adaptive & Resilient by Design
Rather than a static document, the Blueprint is engineered to evolve:
- Regular Exercises: Built into the framework are simulation drills, known as Blueprint Operational Level Exercises, to test leadership response and cross-border coordination via EU‑CyCLONe.
- Dynamic Reviews: The system promotes continuous iteration, including revising protocols, learning from real incidents, and refining role definitions.
This iterative, learning-oriented architecture aims to ensure the Blueprint remains robust amid rapidly evolving threats, including AI-enhanced attacks and hybrid cyber campaigns.
Global Implications & Lessons for Others
The EU’s Cybersecurity Blueprint sets a global benchmark in cyber resilience and crisis governance:
- Blueprint for Global Coordination: The EU’s method of defined crisis stages, empowered liaison bodies (like EU‑CyCLONe), and continuous exercises can inspire other regional blocs or national governments to build their own crisis mechanisms.
- Public–Private Synergy: The Blueprint’s insistence on cooperation between governments and private-sector operators of essential services (e.g., energy, telecom, health) provides a model for forging robust ecosystems.
- Learning & Sharing at Scale: Its requirement for post-crisis lessons and peer exchange can fuel a worldwide knowledge network, cultivating resilience across jurisdictions.
Conclusion
The 2025 EU Cybersecurity Blueprint is more than an upgrade; it’s a strategic shift toward operational readiness, legal coherence, and collaborative resilience. Anchored in NIS2 and ENISA, and supported by EU‑CyCLONe, it replaces fragmented guidance with a well-defined, adaptive model. Its adoption signals a transformative moment in global cyber governance. For nations building crisis frameworks, the Blueprint offers a tested, comprehensive template: define clear stages, equip liaison networks, mandate drills, integrate lessons, and legislate coordination. In an era where cyber threats transcend borders, it is an important development that can offer guidance and set a precedent.
For India, the EU Cybersecurity Blueprint offers a valuable reference point as we strengthen our own frameworks through initiatives like the DPDP Act, the upcoming Digital India Act and CERT-In’s evolving mandates. It reinforces the importance of coordinated response systems, cross-sector drills, and legal clarity. As cyber threats grow more complex, such global models can complement our national efforts and enhance regional cooperation.
References
- https://industrialcyber.co/expert/the-eus-cybersecurity-blueprint-and-the-future-of-cyber-crisis-management/
- https://www.consilium.europa.eu/en/press/press-releases/2025/06/06/eu-adopts-blueprint-to-better-manage-european-cyber-crises-and-incidents/
- https://www.enisa.europa.eu/topics/eu-incident-response-and-cyber-crisis-management
- https://www.enisa.europa.eu/news/new-cyber-blueprint-to-scale-up-the-eu-cybersecurity-crisis-management
- https://www.isc2.org/Insights/2025/01/EU-Cyber-Solidarity-Act
- https://www.enisa.europa.eu/topics/eu-incident-response-and-cyber-crisis-management/eu-cyclone
- https://nis2directive.eu/what-is-nis2/

Introduction
In today’s digital world, where everything revolves around data, the more data a company holds, the more control and influence it has in the market, which is why companies are looking for ways to use data to improve their business. At the same time, they have to make sure they are protecting people’s privacy, and striking a balance between the two is tricky. Imagine you are trying to bake a cake where you need to use all the ingredients to make it taste great, but you also have to make sure no one can tell what’s in it. That’s roughly what companies are dealing with when it comes to data. Here, ‘pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
Its legal treatment varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of the GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects. It protects personal data by reducing the risk of identifying individuals during data processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance emphasising the importance of defining the “pseudonymisation domain”, which specifies who is to be prevented from attributing data to specific individuals and ensures that technical and organisational measures are in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. The Act, in Section 2(t), broadly defines personal data as “any data about an individual who is identifiable by or in relation to such data,” suggesting that pseudonymised information, being reversible, would continue to require compliance with data protection obligations.
Further, the DPDP Act, 2023 also includes the principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here because it is a recognised technical safeguard, which means companies can use it as one method in their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk that needs to be mitigated. In practice, organisations may use pseudonymisation in combination with other safeguards to strengthen overall compliance and security.
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules, but it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can utilise this data for their research or other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal details such as names or IDs with artificial values or codes, making accidental privacy breaches less likely.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: It is easier to share pseudonymised data with partners or researchers because it protects privacy while still being useful.
However, implementing pseudonymisation is not easy: companies have to deal with tricky technical issues, such as choosing the right methods (for example, tokenisation or encryption) and managing security keys safely. They have to implement strong policies to stop anyone from figuring out who the data belongs to. This can get expensive and complicated, especially when dealing with large amounts of data, and it often requires expert help and regular upkeep.
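To make the idea concrete, here is a minimal sketch of keyed tokenisation, one common pseudonymisation method. The field names, key handling, and token length are simplified assumptions; a production system would add proper key management, access controls, and governance around any re-identification mapping.

```python
# Minimal pseudonymisation sketch using keyed hashing (HMAC-based tokenisation).
# The secret key (and any token-to-identity mapping) must be stored separately
# from the pseudonymised dataset, under technical and organisational controls,
# so the data can no longer be attributed to a person without that additional
# information (GDPR Article 4(5)). Key handling here is a simplified assumption.
import hmac, hashlib, os

SECRET_KEY = os.environ.get("PSEUDONYMISATION_KEY", "demo-key-only").encode()  # assumption: key kept outside the dataset

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a stable token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "asha@example.com", "purchase_total": 4200}
pseudonymised_record = {
    "user_token": pseudonymise(record["email"]),  # artificial value replaces the identifier
    "purchase_total": record["purchase_total"],   # analytical utility is preserved
}
print(pseudonymised_record)
```

Because the same identifier always maps to the same token, patterns and relationships in the dataset are preserved, which is the data-utility benefit described above.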
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and maintaining the utility of the data. To get this right, companies need to consider several factors, such as the purpose for which the data is used, the skill level of potential attackers, and the type of data being processed.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and unrestricted data use, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like the GDPR and India’s DPDP Act. For companies, it is not only about regulatory adherence; it also builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915

Executive Summary:
A viral video claiming to show Israelis pleading with Iran to "stop the war" is not authentic. As per our research, the footage is AI-generated, created using tools like Google’s Veo, and is not evidence of a real protest. The video features unnatural visuals and errors typical of AI fabrication. It is part of a broader wave of misinformation surrounding the Israel-Iran conflict, in which AI-generated content is widely used to manipulate public opinion. This incident underscores the growing challenge of distinguishing real events from digital fabrications in global conflicts and highlights the importance of media literacy and fact-checking.
Claim:
An X verified user with the handle "Iran, stop the war, we are sorry" posted a video featuring people holding placards and the Israeli flag. The caption suggests that Israeli citizens are calling for peace and expressing remorse, stating, "Stop the war with Iran! We apologize! The people of Israel want peace." The user further claims that Israel, having allegedly initiated the conflict by attacking Iran, is now seeking reconciliation.

Fact Check:
The bottom-right corner of the video displays a "VEO" watermark, suggesting it was generated using Google's AI tool, Veo 3. The video exhibits several noticeable inconsistencies, such as robotic, unnatural speech, a lack of natural human gestures, and unclear text on the placards. Additionally, in one frame, a person wearing a blue T-shirt is seen holding nothing, while in the next frame an Israeli flag suddenly appears in their hand, indicating possible AI-generation glitches.
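As an illustration of how such frame-level glitches can be surfaced systematically, the sketch below steps through a clip and flags large pixel-level jumps between consecutive frames for manual review. It assumes OpenCV and NumPy are installed; the file name and threshold are hypothetical, and this is not the exact method used in our analysis.

```python
# Illustrative sketch: flag abrupt changes between consecutive frames, which can
# point a reviewer to moments where objects appear or vanish (a common artefact
# in AI-generated video). Assumes opencv-python and numpy; the file name and
# threshold are hypothetical.
import cv2
import numpy as np

def flag_abrupt_changes(video_path: str, threshold: float = 30.0) -> list[int]:
    capture = cv2.VideoCapture(video_path)
    flagged, previous, index = [], None, 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is not None:
            # Mean absolute pixel difference between consecutive frames.
            diff = float(np.mean(cv2.absdiff(gray, previous)))
            if diff > threshold:
                flagged.append(index)  # frame index worth inspecting manually
        previous = gray
        index += 1
    capture.release()
    return flagged

# Usage (hypothetical file name):
# print(flag_abrupt_changes("viral_protest_video.mp4"))
```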

We further analyzed the video using the AI detection tool Hive Moderation, which indicated a 99% probability that the video was generated using artificial intelligence. To validate this finding, we separately examined a keyframe from the video, which likewise showed a 99% probability of being AI-generated. These results strongly indicate that the video is not authentic and was most likely created using advanced AI tools.

Conclusion:
The video is highly likely to be AI-generated, as indicated by the VEO watermark, visual inconsistencies, and a 99% probability from HIVE Moderation. This highlights the importance of verifying content before sharing, as misleading AI-generated media can easily spread false narratives.
- Claim: A viral video shows Israelis pleading "Stop the War, Iran, We are Sorry".
- Claimed On: Social Media
- Fact Check: AI-Generated, Misleading