#FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army"
Executive Summary:
A viral video has circulated on social media, claiming to show lawbreakers surrendering to the Indian Army. However, our verification shows that the video depicts a group surrendering to the Bangladesh Army and is not related to India. The claim that it involves the Indian Army is false and misleading.

Claims:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed the keyframes of the video through Google Lens search. The search directed us to credible news sources in Bangladesh, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.
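The keyframe step described above can be sketched in a few lines of code. The snippet below is a minimal illustration, not CyberPeace's actual tooling: it keeps a frame whenever it differs noticeably from the last kept frame. In practice, the frames would come from a video reader such as OpenCV's `cv2.VideoCapture`, and the kept frames would be saved as images before being run through a reverse image search like Google Lens.

```python
import numpy as np

def select_keyframes(frames, threshold=30.0):
    """Pick frames that differ noticeably from the last kept frame.

    frames: iterable of greyscale frames as 2-D numpy arrays.
    threshold: mean absolute pixel difference needed to keep a frame.
    Returns the indices of the kept frames; frame 0 is always kept.
    """
    kept = []
    last = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.mean(np.abs(f - last)) > threshold:
            kept.append(i)
            last = f
    return kept
```

A simple difference threshold like this is enough for fact-checking work, where the goal is only a handful of representative frames to search, rather than an exhaustive scene segmentation.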

We further verified the video by cross-referencing it with official military and news reports from India. None of the sources supported the claim that the video involved the Indian Army. Instead, the video was traced to Bangladeshi media outlets covering the surrender event.

No credible Indian news outlet covered the event shown in the video. The viral video was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is actually footage from Bangladesh. The CyberPeace Research Team confirms that the video is falsely attributed to India, making the claim misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading

Introduction
Appeals under the Digital Personal Data Protection framework are to be heard by the TDSAT (Telecom Disputes Settlement and Appellate Tribunal). The bill has seen several changes: the removal of deemed consent, a change in the appellate mechanism, no change in delegated legislation, and new provisions on data breaches. Among other changes, the Digital Personal Data Protection Bill, 2023 will now provide a negative list of countries to which personal data cannot be transferred.
New Version of the DPDP Bill
The Digital Personal Data Protection Bill has a new version, with three major changes from the 2022 draft. First, the new version removes deemed consent: personal data may now be processed without explicit consent only for a limited set of purposes. Under the deemed-consent model, consent was assumed for the processing of data for any purpose, which is why it has been removed. The limited purposes are:
- In the interest of the sovereignty
- The integrity of India and the National Security
- For the issue of subsidies, benefits, services, certificates, licenses, permits, etc
- To comply with any judgment or order under the law
- To protect, assist, or provide service in a medical or health emergency, a disaster situation, or to maintain public order
- In relation to an employee and his/her rights
The 2023 version now includes an appeals mechanism
It states that the Board will have the authority to issue directives for data breach remediation or mitigation, investigate data breaches and complaints, and levy financial penalties. It would be authorised to submit complaints to alternative dispute resolution, accept voluntary undertakings from data fiduciaries, and advise the government to prohibit a data fiduciary’s website, app, or other online presence if the terms of the law were regularly violated. The Telecom Disputes Settlement and Appellate Tribunal will hear any appeals.
The other change concerns delegated legislation: one criticism of the 2022 version was that it gave the government extensive rule-making powers, a concern the committee also raised with the ministry. The committee wants provisions that cannot be fully defined within the scope of the bill to be addressed through delegated legislation.
The other major change in the new version concerns data breaches: there will be no compensation for a data breach. This raises a significant concern for victims. If a victim suffers a data breach and approaches the relevant court or authority, they will not be awarded compensation for the loss suffered as a result of the breach.
Need for Changes under the DPDP Bill
There is a need for changes in digital personal data protection, particularly around deemed consent. Simply put, by 'deeming' consent for subsequent uses, your data may be used for purposes other than those for which it was provided, and since there is no provision for mandatory notice, you may never even come to know about it.
Conclusion
The bill requires changes to meet the needs of the evolving digital landscape reflected in the 2022 draft. The removal of deemed consent will ultimately protect the data principal's data, which will be used or processed only for the purpose for which consent is given. The change in the appellate mechanism is also crucial, as it meets the requirement of addressing appeals. However, the absence of compensation for a data breach is detrimental to the interests of the victim who has suffered one.

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. Thanks to rapid advances in artificial intelligence, the internet as we know it is changing into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, primarily in human lives. It turns out that releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is reportedly the reality for the 75% of children and teens who have chatted with chatbot-generated fictional characters. AI chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: AI chatbots were allowed to create content involving romantic roleplay with children, racially discriminatory reasoning, and spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores a global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety and the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version of Gemini AI Kids through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. According to Section 9, before processing the data of children, who are defined as people under the age of 18, Data Fiduciaries, entities that decide the goals and methods of processing personal data, must get verified consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural surveillance and child-targeted advertising. According to court interpretations, a child's well-being includes not just medical care but also their moral, ethical, and emotional growth.
While the DPDP Act is a big step in the right direction, there are still important lacunae in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which largely concentrates on consent and damage prevention in data protection. Furthermore, it ignores the threats to children's emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse content, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. These interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour, even if they are not explicitly sexual. Having illicit or sexual conversations with kids in cyberspace is unacceptable, according to child protection experts, and permitting even "flirtatious" conversation could normalise risky boundaries.
- International Standards and Best Practices - The concept of "safety by design" is highly valued in child online safety guidelines from around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Bill. Mandating platforms and developers to proactively remove risks, rather than reactively respond to harms, is the bare minimum standard; any AI guidelines that provide loopholes for child-directed roleplay fail to meet it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create fictional narratives with disclaimers. For example, chatbots were able to write articles promulgating false health claims or smears against public officials, as long as they were labelled as "untrue." While disclaimers might give thin legal cover, they do little to stem the proliferation of misleading information. Indeed, misinformation tends to spread extensively because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even when requested. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation has the potential to normalise damaging stereotypes. Researchers warn that such practice shifts platforms from being passive hosts of offensive speech to being active generators of discriminatory content, a difference that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training, policy decisions, and system engineering. This fact requires a greater level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical regulation.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear guidelines for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, establishing their own limits until challenged.
A proactive way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to generate knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory accounts, not merely tag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
The contentious guidelines are more than the internal folly of one firm; they point to a deeper systemic issue in AI regulation. The stakes rise as generative AI becomes more and more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require quick resolution. Corporate regulation is only one aspect of the solution; others include multi-stakeholder participation, stronger global systems, and ethical standards. In the end, trust in artificial intelligence systems will rest on their ability to preserve the truth, protect the vulnerable, and represent universal human values, rather than merely corporate interests.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Executive Summary
A video circulating on social media shows Uttar Pradesh Chief Minister Yogi Adityanath and Gorakhpur MP Ravi Kishan walking with a group of people. Users are claiming that the two leaders were participating in a protest against the University Grants Commission (UGC). Research by CyberPeace has found the viral claim to be misleading. Our research revealed that the video is from September 2025 and is being shared out of context with recent events. The video was recorded when Chief Minister Yogi Adityanath undertook a foot march in Gorakhpur on a Monday. Ravi Kishan, MP from Gorakhpur, was also present. During the march, the Chief Minister visited local markets, malls, and shops, interacting with traders and gathering information on the implementation of GST rate cuts.
Claim Details:
On Instagram, a user shared the viral video on 27 January 2026. The video shows the Chief Minister and the MP walking with a group of people. The text “UGC protest” appears on the video, suggesting that it is connected to a protest against the University Grants Commission.

Fact Check:
To verify the claim, we searched Google using relevant keywords, but found no credible media reports confirming it. Next, we extracted key frames from the video and searched them using Google Lens. The video was traced to NBT Uttar Pradesh's X (formerly Twitter) account, posted on 22 September 2025.

According to NBT Uttar Pradesh, CM Yogi Adityanath undertook a foot march in Gorakhpur, visiting malls and shops to interact with traders and check the implementation of GST rate cuts.
Conclusion:
The viral video is not related to any recent UGC guidelines. It dates back to September 2025, showing CM Yogi Adityanath and MP Ravi Kishan on a foot march in Gorakhpur, interacting with traders about GST rate cuts. The claim that the video depicts a protest against the University Grants Commission is therefore false and misleading.