#FactCheck: Viral video of Unrest in Kenya is being falsely linked with J&K
Executive Summary:
A video of people throwing rocks at vehicles is being widely shared on social media with the claim that it shows an incident of unrest in Jammu and Kashmir, India. However, our research revealed that the video is not from India but from a protest in Kenya on 25 June 2025. The video is therefore misattributed and is being shared out of context to spread false information.

Claim:
The viral video shows people hurling stones at army or police vehicles and is claimed to be from Jammu and Kashmir, implying ongoing unrest and anti-government sentiment in the region.

Fact Check:
To verify the claim, we ran a reverse image search on key frames taken from the video. The results clearly showed that the footage did not originate in Jammu and Kashmir as claimed; it matched coverage of a major protest in Nairobi, Kenya, on 25 June 2025. Protesters there had gathered to express their outrage against police brutality and government action, which ultimately led to violent clashes with the police.
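Reverse image search services typically match key frames by perceptual fingerprints rather than exact bytes, which is why they can find a source even when a clip has been re-encoded or cropped. As a rough illustration of the underlying idea (not the actual tooling used in this fact-check), here is a minimal average-hash ("aHash") sketch in pure Python: visually similar frames produce hashes at a small Hamming distance, while unrelated images do not.

```python
def average_hash(pixels, size=8):
    """Compute a simple perceptual (average) hash of a grayscale image.

    `pixels` is a 2D list of grayscale values (0-255). The image is
    downsampled to `size` x `size` by block averaging, then each cell is
    compared to the overall mean to produce a bit string.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the block of pixels that maps onto this cell.
            r0, r1 = r * h // size, max((r + 1) * h // size, r * h // size + 1)
            c0, c1 = c * w // size, max((c + 1) * w // size, c * w // size + 1)
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return ''.join('1' if v > mean else '0' for v in cells)


def hamming(a, b):
    """Count differing bits; a small distance indicates near-duplicate images."""
    return sum(x != y for x, y in zip(a, b))
```

Production systems use far more robust fingerprints (and match against billions of indexed images), but the principle is the same: a brightened or re-compressed copy of a frame hashes almost identically, so the original upload can be located.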


We also came across a YouTube news report featuring similar frames. The protests were part of a broader anti-government movement and marked its one-year anniversary.

To further verify the context, we ran keyword searches for any mob violence or recent unrest in J&K across reputable Indian news sources, but found no mention of protests or similar events in the region around the relevant time. Based on this evidence, it is clear that the video has been intentionally misrepresented and is circulating with false context to mislead viewers.

Conclusion:
The assertion that the viral video shows a protest in Jammu and Kashmir is incorrect. The video was taken at a protest in Nairobi, Kenya, in June 2025. Mislabelling the video only serves to spread misinformation and stir up unwarranted political sentiment. Always verify the source of content before believing or sharing it.
- Claim: Army faces heavy resistance from Kashmiri youth — the valley is in chaos.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In September 2024, the Australian government announced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (hereafter CLA Bill 2024), to provide new powers to the Australian Communications and Media Authority (ACMA), the statutory regulatory body for Australia's communications and media infrastructure, to combat online misinformation and disinformation. It proposed allowing the ACMA to hold digital platforms accountable for the “seriously harmful mis- and disinformation” spread on their platforms and for their response to it, while also balancing freedom of expression. However, the Bill was subsequently withdrawn, primarily over concerns about the possibility of government censorship. This development reflects the global contention over the balance between misinformation regulation and freedom of speech.
Background and Key Features of the Bill
According to the BBC’s Global Minds Survey of 2023, nearly 73% of Australians struggled to identify fake news and AI-generated misinformation. There has been a substantial rise in misinformation on platforms like Facebook, Twitter, and TikTok since the COVID-19 pandemic, especially during major events like the bushfires of 2020 and the 2022 federal elections. The government’s campaign against misinformation was launched against this backdrop, beginning with the Australian Code of Practice on Disinformation and Misinformation in 2021. The main provisions of the CLA Bill 2024 were:
- Core Transparency Obligations of Digital Media Platforms: Publishing current media literacy plans, risk assessment reports, and policies or information on their approach to addressing mis- and disinformation. The ACMA would also be allowed to make additional rules regarding complaints and dispute-handling processes.
- Information Gathering and Record-Keeping Powers: The ACMA would form rules allowing it to gather consistent information across platforms and publish it. However, it would not have been empowered to gather and publish user information except in limited circumstances.
- Approving Codes and Making Standards: The ACMA would have powers to approve codes developed by the industry and make standards regarding reporting tools, links to authoritative information, support for fact-checking, and demonetisation of disinformation. This would make compliance mandatory for relevant sections of the industry.
- Parliamentary Oversight: The transparency obligations, codes approved and standards set by ACMA under the Bill would be subject to parliamentary scrutiny and disallowance. ACMA would be required to report to the Parliament annually.
- Freedom of Speech Protections: End-users would not be required to produce information for the ACMA unless they were persons providing services to the platform, such as its employees or fact-checkers. Further, the ACMA would not have been allowed to require the removal of content from platforms unless it involved inauthentic behaviour, such as bot activity.
- Penalties for Non-Compliance: ACMA would be required to employ a “graduated, proportionate and risk-based approach” to non-compliance and enforcement in the form of formal warnings, remedial directions, injunctions, or significant civil penalties as decided by the courts, subject to review by the Administrative Review Tribunal (ART). No criminal penalties would be imposed.
Key Concerns
- Inadequacy of Freedom of Speech Protections: The biggest contention on this Bill has been regarding the issue of possible censorship, particularly of alternative opinions that are crucial to the health of a democratic system. To protect the freedom of speech, the Bill defined mis- and disinformation, what constitutes “serious harm” (election interference, harming public health, etc.), and what would be excluded from its scope. However, reservations among the Opposition persisted due to the lack of a clear mechanism to protect divergent opinions from the purview of this Bill.
- Efficacy of Regulatory Measures: Many argue that by allowing the digital platform industry to draft its own codes, the law would effectively let it police itself. Big Tech companies have little incentive to curb misinformation, since their business models profit from its rampant spread; without financial disincentives, they are unlikely to address the problem on a war footing, and the law would risk being toothless. Secondly, the Bill did not require platforms to report on the prevalence of false content, a metric that, along with others, is crucial for researchers and legislators tracking the efficacy of platforms’ current misinformation-curbing practices.
- Threat of Government Overreach: The Bill sought to expand the ACMA’s compliance and enforcement powers concerning misinformation and disinformation on online communication platforms by giving it powers to form rules on information gathering, code registration, standard-making powers, and core transparency obligations. However, even though the ACMA as a regulatory authority is answerable to the Parliament, the Bill was unclear in defining limits to these powers. This raised concerns from civil society about potential government overreach in a domain filled with contextual ambiguities regarding information.
Conclusion
While the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill sought to equip the ACMA with tools to hold digital platforms accountable and mitigate the harm caused by false information, its critique highlights the complexity of regulating such content without infringing on freedom of speech. Globally, legislation and proposals addressing this issue face similar challenges, emphasising the need for continuous discourse at the intersection of platform accountability, regulatory restraint, and the protection of diverse viewpoints.
To regulate Big Tech effectively, governments can benefit from adopting a consultative, incremental, and cooperative approach, as exemplified by the European Union’s Digital Services Act 2023. Such a framework provides for a balanced response, fostering accountability while safeguarding democratic freedoms.
Resources
- https://www.infrastructure.gov.au/sites/default/files/documents/factsheet-misinformation-disinformation-bill.pdf
- https://www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation
- https://www.mi-3.com.au/07-02-2024/over-80-australians-feel-they-may-have-fallen-fake-news-says-bbc
- https://www.hrlc.org.au/news/misinformation-inquiry
- https://humanrights.gov.au/our-work/legal/submission/combatting-misinformation-and-disinformation-bill-2024
- https://www.sbs.com.au/news/article/what-is-the-misinformation-bill-and-why-has-it-triggered-worries-about-freedom-of-speech/4n3ijebde
- https://www.hrw.org/report/2023/06/14/no-internet-means-no-work-no-pay-no-food/internet-shutdowns-deny-access-basic
- https://www.hrlc.org.au/submissions/2024/11/8/submission-combatting-misinformation

Introduction
A policy, no matter how artfully conceived, is like a timeless idiom: its truth self-evident, its purpose undeniable, standing in silent witness before those it vows to protect, yet trapped in the stillness of inaction, where every moment of delay erodes the very justice it was meant to serve. This is the case of the Digital Personal Data Protection Act, 2023 (DPDP Act), which promises to resolve the issues surrounding data protection through a framework on par with the GDPR and global best practices. While debates over its substantive efficacy are inevitable, its execution has emerged as a site of acute contention. The roll-out and the decision-making around it have been making headlines since late July on several fronts, with industry stakeholders, the media, and independent analysts questioning the government over “slow policy execution”, “centralisation of power” and “arbitrary amendments”. The Act is now entrenched in a dilemma of competing interests.
The amendment to the Right to Information (RTI) Act, 2005, effected by Section 44(3) of the DPDP Act, has become a focal point of debate. Some view it as weakening the hard-won transparency architecture of Indian democracy by substituting an absolute exemption for personal information for the “public interest override” in Section 8(1)(j) of the RTI Act.
The Lag Ledger: Tracking the Delays in DPDP Enforcement
As per a news report of July 28, 2025, the Parliamentary Standing Committee on Information and Communications Technology has expressed concern over the delayed implementation and urged the Ministry of Electronics and Information Technology (MeitY) to ensure that data privacy is adequately protected in the nation. In its report submitted to the Lok Sabha on July 24, the committee reviewed the government’s response to its previous recommendations and noted that MeitY had held only nine consultations and twenty awareness workshops on the Draft DPDP Rules, 2025, along with four brainstorming sessions with academic specialists to examine research and development needs. The ministry acknowledged that this is a specialised field that urgently needs industry involvement. Another news report, dated July 30, 2025, covered a day-long consultation in which representatives of civil society groups, campaigns, and social movements, senior lawyers, retired judges, journalists, and lawmakers discussed the contentious provisions and chilling effects of the Draft Rules notified in January this year. In a press statement, the organisers warned that the DPDP Act may harm the freedom of the press and the people’s right to information, affecting the activists, journalists, attorneys, political parties, groups and organisations “who collect, analyse, and disseminate critical information as they become ‘data fiduciaries’ under the law.”
The DPDP Act is thus caught in an uncomfortable paradox: praised as a significant legislative achievement for India’s digital future, yet stuck in a transitional phase between enactment and enforcement, where every day of delay not only postpones protection but also feeds worries about the shrinking space for accountability and transparency.
The Muzzling Effect: Diluting Whistleblower Protections
The DPDP framework raises a number of subtle but significant issues, one of which is the possibility that it will weaken safeguards for whistleblowers. Critics argue that because the Act expands the definition of “personal data” and places strict compliance requirements on “data fiduciaries”, it risks ensnaring journalists, activists, and public-interest actors who handle sensitive material while exposing wrongdoing. In the absence of clear exclusions or robust public-interest protections, one of the most important checks on state overreach may be silenced if those who speak truth to power face legal retaliation.
Noted lawyer Prashant Bhushan has criticised the law for failing to protect whistleblowers, warning that “If someone exposes corruption and names officials, they could now be prosecuted for violating the DPDP Act.”
Consent Management under the DPDP Act
In June 2025, the National e-Governance Division (NeGD) under MeitY released a Business Requirement Document (BRD) for developing consent management systems under the DPDP Act, 2023. The document sets out the idea of a “Consent Manager”, which acts as a single point of contact between Data Principals and Data Fiduciaries. This idea is fundamental to the Act and is now being operationalised through MeitY’s “Code for Consent: The DPDP Innovation Challenge.” By selecting six entities, including Jio Platforms, IDfy, and Zoop, the government has established a collaborative ecosystem to build consent management systems (CMS) that can serve as a single, standardised interface between Data Principals and Data Fiduciaries. If implemented precisely and transparently, such a framework could give people meaningful control over their personal data, lessen consent fatigue, and move India’s consent architecture closer to international standards.
There is no debating the importance of this development; however, several associated concerns must be considered. A centralised consent management system, although efficient, may become a single point of failure, exposed both to political overreach and to technical cybersecurity flaws. Concerns have also been raised over the concentration of power over the framing, seeking, and recording of consent when big corporate entities like Jio are chosen as key innovators; critics contend that organisations that generate revenue from user data should not be tasked with designing the gatekeeping systems. Furthermore, in the absence of strong safeguards, transparency mechanisms and independent oversight, a CMS can create opaque channels for data access, compromising user autonomy and whistleblower protections.
Conclusion
Despite being hailed as a turning point in India’s digital governance, the DPDP Act remains stuck in a delayed and uneven transition from promise to reality. Its goals are indisputable, but so are the conundrums it poses for accountability, openness, and civil liberties. Every delay deepens public mistrust, and every unresolved safeguard compounds it. The true test of a policy intended to safeguard the digital rights of millions lies not in how it was drafted but in the integrity, pace, and transparency with which it is implemented. In the digital age, the true cost of delay is measured not in time, but in trust. CyberPeace calls for transparent, inclusive, and timely execution that balances innovation with the protection of digital rights.
References
- https://www.storyboard18.com/how-it-works/parliamentary-committee-raises-concern-with-meity-over-dpdp-act-implementation-lag-77105.htm
- https://thewire.in/law/excessive-centralisation-of-power-lawyers-activists-journalists-mps-express-fear-on-dpdp-act
- https://www.medianama.com/2025/08/223-jio-idfy-meity-consent-management-systems-dpdpa/
- https://www.downtoearth.org.in/governance/centre-refuses-to-amend-dpdp-act-to-protect-journalists-whistleblowers-and-rti-activists

Executive Summary:
A viral image circulating on social media is claimed to show a natural optical illusion in Epirus, Greece. However, fact-checking found that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team established this through a reverse image search and analysis with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon in Epirus, Greece, is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media depicts a supposed natural optical illusion in Epirus, Greece. Users are sharing it on X (formerly Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.

Fact Check:
Upon receiving the posts, the CyberPeace Research Team first checked the image for signs of synthetic media: the Hive AI detection tool assessed it as 100% likely to be AI-generated. We then performed a reverse image search to trace the source of the image. The results led us to similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator had posted visuals of the same type.

We searched his account for the viral image and confirmed that he had created it.

The photo was posted on 10 December 2023, and the artist mentioned that the image was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows an optical illusion in Epirus, Greece, is misleading.
Conclusion:
The image claiming to show a natural optical illusion in Epirus, Greece, is not genuine. It is an AI-generated artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion tool. Hence, the claim is false.