# FactCheck - "Viral Video Misleadingly Claims Surrender to Indian Army, Actually Shows Bangladesh Army"
Executive Summary:
A viral video circulating on social media claims to show lawbreakers surrendering to the Indian Army. However, our verification shows that the video depicts a group surrendering to the Bangladesh Army and is unrelated to India. The claim that it involves the Indian Army is false and misleading.

Claim:
A viral video falsely claims that a group of lawbreakers is surrendering to the Indian Army, linking the footage to recent events in India.



Fact Check:
Upon receiving the viral posts, we analysed keyframes from the video using a Google Lens reverse-image search. The search directed us to credible news sources in Bangladesh, which confirmed that the video was filmed during a surrender event involving criminals in Bangladesh, not India.

We further verified the video by cross-referencing it with official military and news reports from India. None of these sources supported the claim that the video involved the Indian Army. Instead, the video was traced to Bangladeshi media outlets covering the same surrender event.

No credible Indian news media outlet was found to have covered the event shown in the video. The viral video was clearly taken out of context and misrepresented to mislead viewers.
Conclusion:
The viral video claiming to show lawbreakers surrendering to the Indian Army is actually footage from Bangladesh. The CyberPeace Research Team confirms that the video is falsely attributed to India, making the claim misleading.
- Claim: The video shows miscreants surrendering to the Indian Army.
- Claimed on: Facebook, X, YouTube
- Fact Check: False & Misleading

Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the aim is to help the mind in the present develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation only after it has spread extensively. From the perspective of total harm done, this reactionary method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Debunking may also need repeated exposure to fact-checks to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation circulating at any particular moment. This constraint may cause certain misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread and go viral faster than Debunking pieces or articles, leading to a situation in which misinformation spreads like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across all platforms, empowering users to recognise manipulative messaging through Prebunking and to learn the accuracy of circulating claims through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. This can be effective in immunising recipients against subsequent exposure to manipulation, empowering people to build the competencies to detect misinformation through gamified interventions.
- Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms may ensure that algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to combat the spread of erroneous information and deliver proper corrections. Together, these mechanisms can help both Prebunking and Debunking efforts reach a larger or better-targeted audience.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking/expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
References
- https://mark-hurlstone.github.io/THKE.22.BJP.pdf
- https://futurefreespeech.org/wp-content/uploads/2024/01/Empowering-Audiences-Through-%E2%80%98Prebunking-Michael-Bang-Petersen-Background-Report_formatted.pdf
- https://newsreel.pte.hu/news/unprecedented_challenges_Debunking_disinformation
- https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/

Introduction
Words come easily, but not necessarily the consequences that follow. Imagine a 15-year-old child on the internet hoping that the world will be nice to him and help him gain confidence, but instead, someone chooses to be mean, or the child becomes the victim of online trolling, a pervasive form of cyberbullying. The consequences of trolling can be serious, including eating disorders, substance abuse, conduct issues, body dysmorphia, negative self-esteem, and, in tragic cases, self-harm and suicide attempts in vulnerable individuals. The effects of online trolling can also include anxiety, depression, and social isolation. This is just one example; hate speech and online abuse can touch anyone, regardless of age, background, or status. The damage may take different forms, but its impact is far-reaching. In today’s digital age, hate speech spreads rapidly through online platforms, often amplified by AI algorithms.
As we observe the International Day for Countering Hate Speech today, 18th June, let us pledge: if we have ever been mean to someone on the internet, never to repeat that kind of behaviour; and if we have been the victim, to stand against the perpetrator and report it.
This year, the theme for the International Day for Countering Hate Speech is “Hate Speech and Artificial Intelligence Nexus: Building coalitions to reclaim inclusive and secure environments free of hatred.” UN Secretary-General Antonio Guterres, in his statement, said, “Today, as this year’s theme reminds us, hate speech travels faster and farther than ever, amplified by Artificial Intelligence. Biased algorithms and digital platforms are spreading toxic content and creating new spaces for harassment and abuse.”
Coded Convictions: How AI Reflects and Reinforces Ideologies
Algorithms have swiftly taken the place of feelings; they tamper with your tastes, and they do so with a light touch, invisibly. They have become an important component of social media user interaction and content distribution. While these tools are designed to improve user experience, they frequently, if inadvertently, spread divisive ideologies and push extremist propaganda. This amplification can strengthen the power of extremist organisations, spread misinformation, and deepen societal tensions. This phenomenon, known as “algorithmic radicalisation,” demonstrates how social media companies may utilise a discriminating content selection approach to entice people down ideological rabbit holes and shape their ideas. AI-driven algorithms often prioritise engagement over ethics, enabling divisive and toxic content to trend and placing vulnerable groups, especially youth and minorities, at risk. The UN’s Strategy and Plan of Action on Hate Speech, launched on June 18, 2019, recognises that while AI holds promise for early detection and prevention of harmful speech, it also demands stringent human rights safeguards. Without regulation, these tools can themselves become purveyors of bias and exclusion.
India’s Constitutional Resolve and Civilizational Ethos against Hate
India has always taken pride in being inclusive and united rather than divided. As far as hate speech is concerned, India's stand is no different: like the United Nations, India believes in the same values as its international counterpart. Although India has won many battles against hate speech, the war is not over, and the challenge is now more prominent than ever due to advancements in communication technologies. In India, while the right to freedom of speech and expression is protected under Article 19(1)(a), its exercise is subject to reasonable restrictions under Article 19(2). Landmark rulings such as Ramji Lal Modi v. State of U.P. and Amish Devgan v. UOI have clarified that speech can be curbed if it incites violence or undermines public order. Section 69A of the IT Act, 2000, empowers the government to block content, and these principles are also reflected in Section 196 of the BNS, 2023 (Section 153A IPC) and Section 299 of the BNS, 2023 (Section 295A IPC). Platforms are also required to track down the creators of harmful content, remove such content within a reasonable time, and fulfil their due diligence requirements under the IT Rules.
There is no denying that India needs to be well-equipped and normatively prepared to tackle hate propaganda and divisive forces. At the same time, India’s rich culture and history, rooted in philosophies of Vasudhaiva Kutumbakam (the world is one family) and pluralistic traditions, have long stood as a beacon of tolerance and coexistence. By revisiting these civilizational values, we can resist divisive forces and renew our collective journey toward harmony and peaceful living.
CyberPeace Message
The ultimate goal is to create internet and social media platforms that are better, safer and more harmonious for each individual, irrespective of his/her/their social and cultural background. CyberPeace stands resolute in promoting digital media literacy and cyber resilience, and in consistently pushing for greater accountability from social media platforms.
References
- https://www.un.org/en/observances/countering-hate-speech
- https://www.artemishospitals.com/blog/the-impact-of-trolling-on-teen-mental-health
- https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- https://www.techpolicy.press/indias-courts-must-hold-social-media-platforms-accountable-for-hate-speech/

Introduction:
Apple is known for its unique innovations and designs. With the introduction of the iPhone 15 series, Apple has now adopted USB-C, complying with European Union (EU) regulations, which set a common standard for all mobile devices. The new iPhone will thus come with USB-C, though with a small caveat concerning which cables will perform best, discussed below. The EU has approved new rules making it compulsory for tech companies to provide a universal charging port on electronic gadgets like mobile phones, tablets, cameras, e-readers, earbuds and other devices by the end of next year.
However, Apple being Apple, the company will limit third-party USB-C cables: Apple's own MFi-certified cables will offer optimised charging speeds and faster data transfer. MFi stands for 'Made for iPhone/iPad' and is a quality mark and testing programme from Apple for Lightning cables and other products; an MFi-certified product ensures safety and improved performance.
European Union's regulations on common charging port:
The new iPhone will have a USB-C port, switching over from the Lightning port, in line with the EU's mandate that all phones and laptops adopt a common USB-C charging port. The EU has set a deadline of the end of 2024 for all new phones to use USB-C for wired charging, and manufacturers must comply by then. The rules apply to all devices that offer wired charging, including tablets, digital cameras, headphones, handheld video game consoles and similar products. The rules are intended to save consumers money and cut waste: the EU states they will spare consumers unnecessary charger purchases and eliminate tonnes of charger waste per year. Once implemented, customers will be able to use a single charger for their different devices. The switch will also strengthen data transfer speeds on new iPhone models and make the iPhone compatible with the USB-C chargers used by non-Apple devices.
Indian Standards on USB-C Type Charging Ports in India
The Bureau of Indian Standards (BIS) has also issued standards for USB-C-type chargers. The standards aim to provide a common-charger solution for different charging devices. Consumers will not need to purchase multiple chargers for their devices, ultimately reducing the number of chargers per consumer. This would contribute to the Government of India's goal of reducing e-waste and moving toward sustainable development.
Conclusion:
New EU rules require all mobile phone devices, including iPhones, to have a USB-C connector for their charging ports; notably, the USB-C port can now be seen on the iPhone 15. These rules will enable customers to use a single charger for their different Apple devices, such as iPads, Macs and iPhones. As for applicability, the EU common-charger rule will cover small and medium-sized portable electronics, including mobile phones, tablets, e-readers, mice and keyboards, digital cameras, handheld videogame consoles, portable speakers, etc. Such devices are mandated to have USB-C charging ports if they offer wired charging. Laptops will also be covered under these rules but are given more time to adopt the changes. Overall, this step will help in reducing e-waste and moving toward sustainable development.
References:
- https://www.bbc.com/news/technology-66708571