#FactCheck - Bangladeshi Migrant’s Arrest Misrepresented as Indian in Viral Video!
Executive Summary:
An old video from 2023 showing the arrest of a Bangladeshi migrant for the murder of a Polish woman has been circulating widely on social media with the claim that he is an Indian national. The viral video has been fact-checked and debunked.
Claim:
The video circulating on social media alleges that an Indian migrant was arrested in Greece for assaulting a young Christian girl, and it has been shared with narratives maligning Indian migrants. The post was first shared on Facebook by an account known as “Voices of hope” and is reproduced in this report.

Facts:
The CyberPeace Research team used Google Image Search to trace the original source of the claim. The search led to the original news report, published by Greek City Times in June 2023.


The person arrested in the video clip is a Bangladeshi migrant, not an Indian national. The CyberPeace Research Team assessed the available police reports and other verifiable sources to confirm that the arrested person is Bangladeshi.
The video dates to 2023 and relates to a case that occurred in Poland; it has nothing to do with Indian migrants.
Neither the Polish government nor any authorised news outlet has reported the involvement of an Indian citizen in the case in question.

Conclusion:
The viral video falsely implicating an Indian migrant in a Polish woman’s murder is misleading. The accused is a Bangladeshi migrant, and the incident has been misrepresented to spread misinformation. This highlights the importance of verifying such claims to prevent the spread of xenophobia and false narratives.
- Claim: Video shows an Indian immigrant being arrested in Greece for allegedly assaulting a young Christian girl.
- Claimed On: X (Formerly Known As Twitter) and Facebook.
- Fact Check: Misleading.
Related Blogs

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While skirting the line of legality, this practice prompts concerns due to its potential appeal to adults and the associated inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurrect, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough actions against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
The financial dynamics of Meta's ecosystem expose concerns about the exploitation of videos that are eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company

Introduction
The rise of start-up culture, increasing investment, and technological breakthroughs are being encouraged alongside innovation and the incorporation of generative Artificial Intelligence. With the growing focus on human-centred AI, its potential to transform industries such as education is undeniable, enhancing experiences and opening up new ways of learning. Recently, a Delhi-based non-profit called Rocket Learning, in collaboration with Google.org, launched Appu, a personalised AI educational tool that provides a multilingual, conversational learning experience for children aged 3 to 6.
AI Appu
Developed in six months with the help of dedicated Google.org fellows, the interactive Appu has resonated with those the founders call “super-users,” i.e. parents and caregivers. Instead of redirecting students to standard content and instructional videos, it operates on the idea of conversational learning, which is especially important for children in the targeted age bracket. Designed in the form of an elephant, Appu acts as a personalised tutor, helping both children and parents understand concepts through dialogue. AI enables the generation of alternative explanations when a child has a doubt, aiding understanding. If a child answers in mixed languages rather than one complete sentence in a single language (e.g., Hindi and English), the AI still accepts it as a response. The AI lessons are two minutes long and draw on real-world examples. The emphasis on interactive, fun learning of concepts through innovation enhances the learning experience. Currently available only in Hindi, Appu is being extended to 20 other languages, such as Punjabi and Marathi.
UNESCO, AI, and Education
It is important to note that such innovations also find encouragement in UNESCO’s mandate, as AI in education contributes to achieving the 2030 Agenda for Sustainable Development (specifically SDG 4, which focuses on quality education). Under the Beijing Consensus of 2019, UNESCO encourages a human-centred approach to AI and has developed “Artificial Intelligence and Education: Guidance for Policymakers”, which aims to explain AI's potential and opportunities in education as well as the core competencies it requires. Another publication was launched during UNESCO's flagship Digital Learning Week 2024: AI competency frameworks for both students and teachers. These provide a roadmap for assessing the potential and risks of AI, covering common aspects such as AI ethics and a human-centred mindset, as well as distinct topics such as AI system design for students and AI pedagogy for teachers.
Potential Challenges
While AI holds immense promise in education, innovation in learning is contentious, and several risks must be carefully managed. Depending on the innovation, AI's struggle with tasks beyond the classroom, such as administrative duties and tedious grading, which require highly detailed role descriptions, could prove to be a challenge. This can become exhausting for developers managing innovative AI systems, as they must account for a wide range of responses, since AI needs to be trained to produce appropriate output. Security is another major concern, as data breaches could compromise sensitive student information. Implementation costs also present challenges, as access to AI-driven tools depends on financial resources. Furthermore, AI-driven personalised learning, while beneficial, may inadvertently reduce student motivation and compromise soft skills such as teamwork and communication, which are crucial for real-world success. These risks highlight the need for a balanced approach to AI integration in education.
Conclusion
Innovations in education, especially those that take a human-centred AI approach, have immense potential not only to enhance learning experiences but also to reshape how knowledge is accessed, understood, and applied. There is also untapped potential for similar services in this sector. However, maintaining a balance between fostering curiosity and ensuring ethical, secure AI remains imperative.
References
- https://www.unesco.org/en/articles/what-you-need-know-about-unescos-new-ai-competency-frameworks-students-and-teachers?hub=32618
- https://www.unesco.org/en/digital-education/artificial-intelligence
- https://www.deccanherald.com/technology/google-backed-rocket-learning-launches-appu-an-ai-powered-tutor-for-kids-3455078
- https://indianexpress.com/article/technology/artificial-intelligence/how-this-google-backed-ai-tool-is-reshaping-education-appu-9896391/
- https://www.thehindu.com/business/ai-appu-to-tutor-children-in-india/article69354145.ece
- https://www.velvetech.com/blog/ai-in-education-risks-and-concerns/
Introduction:
The Federal Bureau of Investigation (FBI) is a threat-focused, intelligence-driven agency with both law enforcement and intelligence responsibilities. The FBI has the power and duty to investigate specific offences entrusted to it and to offer other law enforcement agencies cooperative services, including fingerprint identification, laboratory tests, and training. To support its own investigations and those of its partners, and to better understand and address the security threats facing the United States, the FBI also gathers, analyzes, and disseminates intelligence.
Functions of the FBI’s Internet Crime Complaint Center (IC3) in combating cybercrime:
- Collection: Internet crime victims can report incidents and alert the relevant authorities to potential illicit internet activity through the IC3. Law enforcement frequently advises and directs victims to submit a complaint at www.ic3.gov.
- Analysis: The IC3 analyzes the data users submit via its website to identify emerging threats and trends.
- Public Awareness: The website posts public service announcements, industry alerts, and other publications outlining specific frauds, helping to raise awareness of internet crimes and how to stay protected.
- Referrals: The IC3 compiles relevant complaints to create referrals, which are sent to national, international, local, and state law enforcement agencies for possible investigation. If law enforcement conducts an investigation and finds evidence of a crime, the offender may face legal repercussions.
Alarming increase in cyber crime cases:
In the recently released 2022 Internet Crime Report by the FBI's Internet Crime Complaint Center (IC3), the statistics paint a concerning picture of cybercrime in the United States. The IC3 received 39,416 extortion complaints in 2022, up slightly from 39,360 in 2021.
FBI officials emphasize the growing scope and sophistication of cyber-enabled crimes, which come from around the world. They highlight the importance of reporting incidents to IC3 and stress the role of law enforcement and private-sector partnerships.
About the Internet Crime Complaint Center (IC3):
IC3 was established in May 2000 by the FBI to receive complaints related to internet crimes.
It has received over 7.3 million complaints since its inception, averaging around 651,800 complaints per year over the last five years. IC3's mission is to provide the public with a reliable reporting mechanism for suspected cyber-enabled criminal activity and to collaborate with law enforcement and industry partners.
The FBI encourages the public to regularly review consumer and industry alerts published by IC3. Victims of internet crime are urged to submit a complaint to IC3, and a complaint may also be filed on behalf of another person. These statistics underscore the ever-evolving and expanding threat of cybercrime and the importance of vigilance and reporting to combat this growing challenge.
What is sextortion?
Sextortion is the use or threatened use of a sexual image or video of another person without that person's consent, derived from online encounters or social media websites or applications, primarily to extort money or sexual favours from that person, accompanied by the threat of distributing the picture or video to the person's friends, acquaintances, spouse, partner, or co-workers, or in the public domain.
In practice, sextortion occurs when a bad actor coerces a young person into creating or sharing a sexual image or video of themselves and then uses it to extract something from them, such as further sexual images, money, or sexual favours. Reports highlight that more and more children are being blackmailed in this way, and sextortion can also happen to adults. It can also take place when pictures taken from a social media account are converted into sexually explicit content by morphing the images or creating deepfakes through the misuse of deepfake technologies.
Sextortion in the age of AI and advanced technologies:
AI and deep fake technology make sextortion even more dangerous and pernicious. A perpetrator can now produce a high-quality deep fake that convincingly shows a victim engaged in explicit acts — even if the person has not done any such thing.
Legal Measures available in cases of sextortion:
In India, cybersecurity is governed primarily by the Indian Penal Code (IPC) and the Information Technology Act, 2000 (IT Act), which address cyber crimes such as hacking, identity theft, the publication of obscene material online, and sextortion. The IT Act covers various aspects of electronic governance and e-commerce, defining such offences and prescribing punishments for them.
Recently, the Digital Personal Data Protection Act, 2023 was enacted by the Indian Government to protect individuals' digital personal data. These laws collectively establish the legal framework for cybersecurity and cybercrime prevention in India. Victims are urged to report the crime to local law enforcement and its cybercrime divisions. Law enforcement will investigate reported sextortion cases and take appropriate legal action.
How to stay protected from evolving cases of sextortion: Best Practices:
- Report the crime to the law enforcement agency and to the social media platform or internet service provider.
- Enable two-step verification as an extra layer of protection.
- Keep your laptop webcam covered when not in use.
- Stay protected against malware and phishing attacks.
- Protect the personal information on your social media accounts, and monitor them to identify any suspicious activity. You can also set and review the privacy settings of your social media accounts.
Conclusion:
Sextortion cases have increased in recent times. Knowing the risks, being aware of the rules and regulations, and following best practices will help prevent such crimes, keep you safe, and reduce the chance of being victimised. It is important to spread awareness about such growing cyber crimes, empower people to report them, and provide support to victims. Let us all unite to fight against such cyber crimes and make the internet and digital space a safer place.
References:
- https://www.ic3.gov/Media/PDF/AnnualReport/2022_IC3ElderFraudReport.pdf
- https://octillolaw.com/insights/fbi-ic3-releases-2022-internet-crime-report/
- https://www.iafci.org/app_themes/docs/Federal%20Agency/2022_IC3Report.pdf