#FactCheck: Viral video of unrest in Kenya is being falsely linked to J&K
Executive Summary:
A video of people throwing rocks at vehicles is being widely shared on social media with the claim that it shows unrest in Jammu and Kashmir, India. However, our research has revealed that the video is not from India but from a protest in Kenya on 25 June 2025. The video is therefore misattributed and shared out of context to spread false information.

Claim:
The viral video shows people hurling stones at army or police vehicles and is claimed to be from Jammu and Kashmir, implying ongoing unrest and anti-government sentiment in the region.

Fact Check:
To verify the viral claim, we ran a reverse image search using key frames taken from the video. The results clearly showed that the video was not from Jammu and Kashmir as claimed, but was consistent with footage from Nairobi, Kenya, where a significant protest took place on 25 June 2025. Protesters in Kenya had congregated to express their outrage against police brutality and government action, which ultimately led to violent clashes with police.
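The key-frame step described above can be illustrated with a minimal sketch. The "video" here is a sequence of small synthetic arrays and the difference threshold is an illustrative assumption; a real fact-checking workflow would extract frames from the actual clip with a tool such as ffmpeg or OpenCV before feeding them to a reverse image search.

```python
import numpy as np

# Toy key-frame selection: keep a frame whenever it differs enough
# from the last frame we kept (i.e., the scene has visibly changed).
def select_key_frames(frames, threshold=0.2):
    keys = [0]                      # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[keys[-1]]).mean()
        if diff > threshold:        # scene changed enough: keep it
            keys.append(i)
    return keys

# Synthetic "video": three near-identical frames, then a scene change.
rng = np.random.default_rng(0)
scene_a = rng.random((8, 8))
scene_b = rng.random((8, 8))
frames = [scene_a, scene_a + 0.01, scene_a - 0.01, scene_b, scene_b + 0.01]

print(select_key_frames(frames))    # indices of the distinct scenes
```

Each selected frame can then be uploaded to a reverse image search engine to locate matching footage and its original context.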


We also came across a YouTube video carrying similar news and frames. The protests were part of a broader anti-government movement marking its one-year anniversary.

To support the context, we did a keyword search for any mob violence or recent unrest in J&K in reputable Indian news sources, but our search did not turn up any mention of protests or similar events in J&K around the relevant time. Based on this evidence, it is clear that the video has been intentionally misrepresented and is being circulated with false context to mislead viewers.

Conclusion:
The assertion that the viral video shows a protest in Jammu and Kashmir is incorrect. The video is from a protest in Nairobi, Kenya, in June 2025. Mislabeling the video only serves to spread misinformation and stir up unwarranted political tensions. Always verify where content comes from before believing or sharing it.
- Claim: Army faces heavy resistance from Kashmiri youth — the valley is in chaos.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In today’s digital era, warfare is being redefined. Defence Minister Rajnath Singh recently stated that “we are in the age of Grey Zone and hybrid warfare where cyber-attacks, disinformation campaigns and economic warfare have become tools to achieve politico-military aims without a single shot being fired.” The crippling cyberattacks on Estonia in 2007, Russia’s interference in the 2016 US elections, and the ransomware strike on the Colonial Pipeline in the United States in 2021 all demonstrate how states are now using cyberspace to achieve strategic goals while carefully circumventing the threshold of open war.
Legal Complexities: Attribution, Response, and Accountability
Grey zone warfare challenges the traditional notions of security and international conventions on peace due to inherent challenges such as:
- Attribution
The first challenge in cyber warfare is determining who is responsible. Threat actors hide behind rented botnets, spoofed IP addresses, and servers scattered across the globe. Investigators can follow digital trails, but those trails often point to machines, not people. That makes attribution more of an educated guess than a certainty, and a wrong guess could lead to misattribution of blame, inviting a diplomatic crisis, or worse, a military one.
- Proportional Response
Even if attribution is clear, designing a response can be a challenge. International law does give room for countermeasures if they are both ‘necessary’ and ‘proportionate’, but defining these qualifiers can be a long-drawn, contested process. In practice, governments employ softer measures such as protests or sanctions, tighten their cyber defences or, in extreme cases, strike back digitally.
- Accountability
States can be held responsible for waging cyber attacks under the UN’s Draft Articles on State Responsibility. But these are non-binding and enforcement depends on collective pressure, which can be slow and inconsistent. In cyberspace, accountability often ends up being more symbolic than real, leaving plenty of room for repeat offences.
International and Indian Legal Frameworks
Cyber law is a step behind cyber warfare since existing international frameworks are often inadequate. For example, the Tallinn Manual 2.0, the closest thing we have to a rulebook for cyber conflict, is just a set of guidelines. It says that if a cyber operation can be tied to a state, even through hired hackers or proxies, then that state can be held responsible. But attribution is a major challenge. Similarly, the United Nations has tried to build order through its Group of Governmental Experts (GGE), which promotes voluntary norms of responsible state behaviour, such as refraining from attacking critical infrastructure in peacetime. However, these norms are not binding, effectively leaving practice to diplomacy and trust.
India is susceptible to routine attacks from hostile actors, but does not yet have a dedicated cyber warfare law. While Section 66F of the IT Act, 2000, addresses cyber terrorism, and Section 75 lets Indian courts examine crimes committed abroad if they impact India, grey-zone tactics like fake news campaigns, election meddling, and influence operations fall into a legal vacuum.
Way Forward
- Strengthen International Cooperation
Frameworks like the Tallinn Manual 2.0 can form the basis for future treaties. Bilateral and multilateral agreements between countries are essential to ensure accountability and cooperation in tackling grey zone activities.
- Develop Grey Zone Legislation
India currently relies on the IT Act, 2000, but this law needs expansion to specifically cover grey zone tactics such as election interference, propaganda, and large-scale disinformation campaigns.
- Establish Active Monitoring Systems
India must create robust early-detection systems to identify grey zone operations in cyberspace. Agencies can coordinate with social media platforms like Instagram, Facebook, X (Twitter), and YouTube, which are often exploited for propaganda and disinformation, to improve monitoring frameworks.
- Dedicated Theatre Commands for Cyber Operations
Along with the existing Defence Cyber Agency, India should consider specialised theatre commands for grey zone and cyber warfare. This would optimise resources, enhance coordination, and ensure unified command in dealing with hybrid threats.
Conclusion
Grey zone warfare in cyberspace is no longer an occasional tactic used by threat actors but a routine activity. India currently lacks the early-detection systems, robust infrastructure, and strong cyber laws needed to counter it. India therefore needs sharper attribution tools for early detection and must actively push for stronger international rules in this global landscape. More importantly, instead of merely assigning blame without clear plans, India should prepare credible retaliation strategies. By doing so, India can also learn to use cyberspace strategically to achieve politico-military aims without firing a single shot.
References
- Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Michael N. Schmitt)
- UN Document on International Law in Cyberspace (UN Digital Library)
- NATO Cyber Defence Policy
- Texas Law Review: State Responsibility and Attribution of Cyber Intrusions
- Deccan Herald: Defence Minister on Grey Zone Warfare
- VisionIAS: Grey Zone Warfare
- Sachin Tiwari, The Reality of Cyber Operations in the Grey Zone

AI has grown manifold in the past decade, and so has our reliance on it. A MarketsandMarkets study estimates the AI market will reach $1,339 billion by 2030. Further, Statista reports that ChatGPT amassed more than a million users within the first five days of its release, showcasing its rapid integration into our lives. This development and integration carry risks. Consider this response from Google’s AI chatbot, Gemini, to a student’s homework inquiry: “You are not special, you are not important, and you are not needed…Please die.” In other instances, AI has suggested eating rocks for minerals or adding glue to pizza sauce. Such nonsensical outputs are not just absurd; they are dangerous. They underscore the urgent need to address the risks of unrestrained AI reliance.
AI’s Rise and Its Limitations
The swiftness of AI’s rise, fueled by OpenAI's GPT series, has revolutionised fields like natural language processing, computer vision, and robotics. Generative AI models such as GPT-3, GPT-4, and GPT-4o, with their advanced language understanding, learn from data, recognise patterns, predict outcomes, and improve through trial and error. However, despite their efficiency, these models are not infallible. Seemingly harmless outputs can spread toxic misinformation or cause harm in critical areas like healthcare or legal advice. These instances underscore the dangers of blindly trusting AI-generated content and highlight the need to understand its limitations.
Defining the Problem: What Constitutes “Nonsensical Answers”?
Nonsensical AI responses range from harmless errors, such as a wrong answer to a trivia question, to critical failures as damaging as incorrect legal advice.
AI algorithms sometimes produce outputs that are not grounded in the training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. Such a response is known as a nonsensical answer, and the phenomenon is known as an “AI hallucination”. Hallucinations can take the form of factual inaccuracies, irrelevant information, or contextually inappropriate responses.
A significant source of hallucination in machine learning models is bias in the input they receive. If an AI model is trained on biased or unrepresentative datasets, it may hallucinate and produce results that reflect these biases. These models are also vulnerable to adversarial attacks, wherein bad actors manipulate the output of an AI model by tweaking the input data in a subtle manner.
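The idea behind such adversarial tweaks can be shown with a minimal sketch on a hypothetical linear classifier. The weights, input, and perturbation size below are illustrative assumptions, not from any real deployed model; the perturbation follows the sign of the model's gradient with respect to the input, the same principle used by gradient-based attacks on neural networks.

```python
import numpy as np

# Hypothetical linear classifier: predict class 1 when w . x > 0.
w = np.array([1.0, -2.0, 0.5])      # assumed model weights
x = np.array([0.4, -0.3, 0.2])      # input correctly classified as class 1

def predict(x):
    return int(w @ x > 0)

# Adversarial perturbation: nudge each feature a small amount (eps)
# in the direction that lowers the class-1 score. For this linear
# model, the gradient of the score w.r.t. the input is simply w.
eps = 0.35
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the small tweak flips the prediction
```

The perturbed input stays close to the original, yet the model's decision changes, which is why subtle input manipulation is a serious attack surface for AI systems.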
The Need for Policy Intervention
Nonsensical AI responses risk eroding user trust and causing harm, highlighting the need for accountability despite AI’s opaque and probabilistic nature. Different jurisdictions address these challenges in varied ways. The EU’s AI Act enforces stringent reliability standards with a risk-based and transparent approach. The U.S. emphasises creating ethical guidelines and industry-driven standards. India’s DPDP Act indirectly tackles AI safety through data protection, focusing on the principles of accountability and consent. While the EU prioritises compliance, the U.S. and India balance innovation with safeguards. This reflects the diverse approaches nations take to AI regulation.
Where Do We Draw the Line?
The critical question is whether AI policies should demand perfection or accept a reasonable margin for error. Striving for flawless AI responses may be impractical, but a well-defined framework can balance innovation and accountability. Adopting these simple measures can lead to the creation of an ecosystem where AI develops responsibly while minimising the societal risks it can pose. Key measures to achieve this include:
- Ensure that users are informed about AI’s capabilities and limitations; transparent communication is key.
- Implement regular audits and rigorous quality checks to maintain high standards and prevent lapses.
- Establish robust liability mechanisms to address harms caused by AI-generated misinformation, fostering trust and accountability.
CyberPeace Key Takeaways: Balancing Innovation with Responsibility
The rapid growth in AI development offers immense opportunities, but it must be pursued responsibly. Overregulation of AI can stifle innovation; lax oversight, on the other hand, could lead to unintended societal harm or disruption.
Maintaining a balanced approach to development is essential. Collaboration between stakeholders such as governments, academia, and the private sector is important. They can ensure the establishment of guidelines, promote transparency, and create liability mechanisms. Regular audits and promoting user education can build trust in AI systems. Furthermore, policymakers need to prioritise user safety and trust without hindering creativity while making regulatory policies.
By fostering ethical AI development and enabling innovation, we can create a future that benefits us all. Striking this balance will ensure AI remains a tool for progress, underpinned by safety, reliability, and human values.
References
- https://timesofindia.indiatimes.com/technology/tech-news/googles-ai-chatbot-tells-student-you-are-not-needed-please-die/articleshow/115343886.cms
- https://www.forbes.com/advisor/business/ai-statistics/#2
- https://www.reuters.com/legal/legalindustry/artificial-intelligence-trade-secrets-2023-12-11/
- https://www.indiatoday.in/technology/news/story/chatgpt-has-gone-mad-today-openai-says-it-is-investigating-reports-of-unexpected-responses-2505070-2024-02-21

Introduction
Human trafficking has long been a significant concern and threat to society. The modus operandi traffickers have adopted and deployed over the years has shaped how we think about physical safety: we remain cautious about younger children whenever we visit crowded or unfamiliar places. This threat has now migrated to cyberspace, where it poses new and different dangers, with crimes committed using technology and further enabled by various cybercrimes.
What is Cyber-Enabled Human Trafficking?
Cyber-enabled human trafficking is the evolution of human trafficking in the digital age. Bad actors lure victims via the internet, using social engineering to exploit their vulnerabilities and draw them into traps. Today, the crime often takes the form of fake job offers promising a better lifestyle in major metropolitan cities. It has also moved beyond the geographical boundaries of our nation, and victims often end up in remote locations in the Middle East or Southeast Asia.
Cybercrime Hubs in Myanmar
Reports indicate that many trafficked victims are taken to cybercrime hubs in Myanmar. Victims are often lured with offers of well-paying jobs overseas. Once they arrive in the foreign country, they are cornered by the bad actors, segregated, and taken to different hubs. The victims are often school graduates seeking basic jobs, and the hubs are allegedly run by Chinese criminal syndicates. Victims are kept in harsh conditions, beaten, and held captive in remote jungles. Once a victim has lost hope, the criminals train them to commit cyber frauds such as phishing, handing them scripts and mobile numbers and setting targets they must meet to ensure their survival. Under such dark and threatening conditions, victims give in to the demands just to remain alive. Some victims do make their way back home, but often only after 6-7 years of constant torture and abuse. The majority of survivors struggle to obtain legal assistance, as the criminals are almost impossible to track, making redressal for the crimes and rehabilitation for survivors difficult.
How to stay safe?
The criminals behind such acts often target vulnerable sections of the population, generally people from tier-3 towns and rural areas. These victims aspire to a better life and earning opportunities, and due to limited education and minimal awareness, they fail to see the traps set by the criminals. The population at large can adopt the following measures and safe practices to avoid such threats:
- Avoid Stranger interaction: Avoid interacting with strangers on any online platform or portal. Social media sites are the most used platforms by bad actors to make contact with potential victims.
- Do not Share: Avoid sharing any personal information with anyone online, and avoid filling out third-party surveys/forms seeking personal information.
- Check, Check and Recheck: Always be on alert for threats and always check and cross-check any link or platform you use or access.
- Too good to be true: If something feels too good to be true, it probably is; avoid falling for attractive job offers and work-from-home opportunities on social media platforms.
- Know your helplines: One should know the helpline numbers to make sure to exercise the reporting duty and also encourage your family members to report in case of any threat or issue.
- Raise Awareness: It is the duty of all netizens to raise awareness in society to arm more people against cybercrimes and fraud.
Conclusion
Cybercrime is spreading across ecosystems, and technology is now being deployed by bad actors to enable even physical crimes. We need to stay alert and remain aware of such crimes and the modus operandi of cybercriminals. Awareness and education are our best weapons against cyber-enabled human trafficking; the criminals feed on our vulnerabilities, so let us work towards eradicating these threats and creating a safe cyber ecosystem for all.
References
- https://www.scmp.com/week-asia/politics/article/3228543/inside-chinese-run-crime-hubs-myanmar-are-conning-world-we-can-kill-you-here