Survivors Unveil the Dark Reality of Cyber Slavery
Mr. Neeraj Soni
Sr. Researcher - Policy & Advocacy, CyberPeace
PUBLISHED ON
Dec 11, 2024
Introduction
Cyber slavery has emerged as a serious menace. Offenders target innocent individuals, luring them with false promises of employment, only to capture them and subject them to horrific torture and forced labour. According to reports, hundreds of Indians have been imprisoned in 'cyber slavery' in certain Southeast Asian countries. Indians who travelled to Southeast Asian nations such as Cambodia in the hope of finding work and establishing themselves have instead been trapped in cyber slavery. Reports indicate that 30,000 Indians who travelled to this region on tourist visas between 2022 and 2024 did not return. India Today's coverage showed survivors of cyber slavery who managed to escape and return to India describing the terrifying experiences they endured while being coerced into cyber fraud operations.
Tricked by a Job Offer, Trapped in Cyber Slavery
India Today aired testimonials of cyber slavery victims who described how they were trapped. One individual shared that he had applied for a well-paying job as an electrician in Cambodia through an agent in Delhi. However, upon arriving in Cambodia, he was offered a job with a Chinese company where he was forced to participate in cyber scam operations and online fraudulent activities.
He revealed that he was provided with a computer and a mobile phone and compelled to use these devices to cheat Indian individuals and commit cyber fraud, working 12-hour shifts. After working there for several months, he repeatedly asked his agent to help him escape. In response, the Chinese group violently loaded him into a truck, assaulted him, and left him for dead on the side of the road. Despite this, he survived. He contacted locals, eventually got in touch with his brother in India, and managed to return home.
This case highlights how cyber-criminal groups deceive innocent individuals with the false promise of employment and then coerce them into committing cyber fraud against their own country. According to the Ministry of Home Affairs' Indian Cyber Crime Coordination Center (I4C), there has been a significant rise in cybercrimes targeting Indians, with approximately 45% of these cases originating from Southeast Asia.
CyberPeace Recommendations
Cyber slavery has developed into a serious problem, beginning with digital deception and escalating to physical torture and violence intended to force victims to commit fraudulent online acts. It is a grave issue that also violates human rights. The government has taken note of the situation, and the Indian Cyber Crime Coordination Centre (I4C) is taking proactive steps to address it. Netizens must exercise due care and caution, as awareness is the first line of defence. By remaining vigilant, they can detect and resist the digital deceit of phony job offers in foreign countries and the manipulative techniques of scammers. Staying watchful and double-checking information against reliable sources can protect them from threats that could endanger their lives.
The advent of AI-driven deepfake technology has facilitated the creation of counterfeit explicit images and videos, and there has been an alarming increase in their use for sextortion.
What are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.
The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law enforcement agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to strengthen efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and reinforce the legal frameworks that address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
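As an illustration only, here is a minimal Python sketch of how such a detection tool might flag uploads for human review. The ResNet-18 backbone, the checkpoint file "deepfake_detector.pt", and the review threshold are assumptions made for this example, not a description of any deployed system.

# Minimal sketch: flag images with a binary "real vs AI-generated" classifier.
# The checkpoint path below is hypothetical; a real deployment needs a properly
# trained model and human review of anything that gets flagged.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(checkpoint_path: str) -> torch.nn.Module:
    # Standard ResNet-18 backbone with a 2-class head: [real, ai_generated].
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    return model

def flag_image(model: torch.nn.Module, image_path: str, threshold: float = 0.8) -> bool:
    # Return True when the "ai_generated" probability crosses the review threshold.
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return probs[1].item() >= threshold

# Example usage: queue an upload for moderator review if it is flagged.
# detector = load_detector("deepfake_detector.pt")
# if flag_image(detector, "upload.jpg"):
#     print("Flag for human moderation")

Classifiers of this kind are imperfect, so flagged content should be routed to trained moderators rather than removed automatically.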
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose penalties on offenders.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated manipulated content. This violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: The targeting of teenagers and the extortion demands made of them are a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this online presence to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
Today, let us talk about one of the key features of our children's digital lives: security. The safer their online habits are, the safer their data and devices will be. A branded security suite will make their devices and Internet connections secure, but carelessness or ignorance can still make them targets for cybercrime. They can also unwittingly get involved in dubious activities online. With children being very smart about passwords and clearing browsing history, parents are often left in the dark about their digital lives.
Fret not, parental controls are at your service. These are digital tools, often included with your OS or security software package, which help you remotely monitor and control your child's online activities.
Where can I find them?
Many devices come with pre-installed parental control tools that you have to set up and run. Go to Settings -> Parental Controls or Screen Time and proceed from there. As mentioned, these controls are also offered as part of your comprehensive security software package.
Why and How to Use Parental Controls
Parental controls help monitor and limit your children's smartphone usage, ensuring they access only age-appropriate content. If your child is a minor, use of this tool is recommended, with the full knowledge of your child/ren. Let them know that just as you supervise them in public places for their safety, and guide them on rights and wrongs, you will use the tool to monitor and mentor them online, for their safety. Emphasize that you love them and trust them but are concerned about the various dubious and fake characters online as well as unsafe websites and only intend to supervise them. As they grow older and display greater responsibility and maturity levels, you may slowly reduce the levels of monitoring. This will help build a relationship of mutual trust and respect.
STEP 1: Enable Parental Controls
iOS: If your child has an iPhone, to set up the controls, go to Settings, select Screen Time, then select Content & Privacy Restrictions.
Android: If the child has an Android phone, you can use the Google Family Link to manage apps, set screen time limits, and track device usage.
Third-party apps: Consider security tools like McAfee, Kaspersky, Bark, Qustodio, or Norton Family for advanced features.
Check out what some of the security software suites, such as Norton, McAfee, and Quick Heal, have on offer: each details the features of its parental controls package, and McAfee also outlines why parental controls matter.
STEP 2: Set up Admin Login
Needless to say, a parent should hold the admin login, and it is wise to set a strong and unique password. You do not want your kids to outsmart you and change their accessibility settings, do you? Be sure to pick a password you can recall without writing it down, for children are clever and will soon discover where you have jotted it down.
STEP 3: Create Individual accounts for all users of the device
Let us say two minor kids, a grandparent, and you will be using the device. You will have to create a separate account for each user. You can allow the children to choose their own passwords; it will give them a sense of privacy. You or the children may (or may not) need to help any seniors set up their accounts.
Done? Good. Now let us proceed to the next step.
STEP 4: Set up access permissions by age
Let us first get grandparents and other seniors out of the way by giving them full access. When you enter their ages, your device will identify them as adults and guide you accordingly.
Now, for each child, follow the instructions to set up filters and blocks. These will again vary with age: more filters for the younger ones, with controls removed gradually as they grow older and hence more mature and responsible. Set up screen time (daily and weekend limits), game filtering and playtime, and content filtering and blocking by words (e.g. block websites that contain violence, sex, or abuse); a simple illustration of keyword-based blocking follows after this step. Ask for activity reports on your device so that you can monitor the children remotely. This will help you receive alerts if they connect with strangers or get involved in abusive actions.
Save the settings and you are done! Simple, wasn't it?
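For the curious, below is a minimal Python sketch of the keyword-based blocking idea from Step 4. The keyword list and the example page are invented for illustration; real parental control products combine such keyword checks with category databases, age ratings, and image analysis.

# Minimal sketch of keyword-based content blocking (illustrative keywords only).
BLOCKED_KEYWORDS = {"violence", "gore", "gambling"}

def is_blocked(url: str, page_text: str) -> bool:
    # Flag a page if its address or visible text contains any blocked keyword.
    haystack = (url + " " + page_text).lower()
    return any(keyword in haystack for keyword in BLOCKED_KEYWORDS)

if is_blocked("http://example.com/games", "This game contains graphic violence."):
    print("Page blocked: add an entry to the parent's activity report.")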
Additional Security
For further security, you may want to set up parental controls on the Home Wi-Fi Router, Gaming devices, and online streaming services you subscribe to.
Follow the same steps: go to Settings, sign in as admin, and find out what controls or screen-time protection they offer. Choose the ones you wish to activate, especially for times when adults are not at home.
Conclusion
Congratulations! You have successfully secured and sanitised your child's digital space. Discuss unsafe practices as a family, and turn any digital rule breaches, irresponsible actions, or concerns into learning points for them. Let their takeaway be that parents will monitor and mentor them, but they too have to take ownership of their actions.
A viral image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man; however, this claim is false. A thorough investigation by the CyberPeace Research team found that the image has been digitally manipulated. The original photo, posted by Balmukund Acharya, a BJP MLA from Jaipur, on his official Facebook account in December 2023, shows him posing with a Muslim man in his election office. The man wearing the Muslim skullcap is featured in several other photos on Acharya's Instagram account, where Acharya expressed gratitude for the support of the Muslim community. Thus, the claimed image of a marriage between a Hindu Sadhvi and a Muslim man is digitally altered.
Claims:
An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
Upon receiving the posts, we reverse-searched the image to find any credible sources. We found a photo posted by Balmukund Acharya Hathoj Dham on his Facebook page on 6 December 2023.
That photo was digitally altered and posted on social media to mislead. We also found several other photos featuring the man in the skullcap.
We also checked the viral image for AI fabrication, using a detection tool named "content@scale" AI Image Detection, which found the image to be 95% AI-manipulated.
For further validation, we checked with another detection tool, the "isitai" image detector, which found the image to contain 38.50% AI content. Taken together, these results indicate that the image is manipulated and does not support the claim made. Hence, the viral image is fake and misleading.
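For readers curious about the mechanics behind such checks, the sketch below shows one common technique: comparing a suspect image against the original with a perceptual hash. It assumes local copies named "original.jpg" and "viral.jpg" and uses the open-source ImageHash library; it illustrates the general approach rather than the specific tools named above.

# Minimal sketch: compare two images with a perceptual hash (pip install imagehash pillow).
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
suspect = imagehash.phash(Image.open("viral.jpg"))

# The hash difference is a Hamming distance: 0 means near-identical, while a small
# non-zero value suggests the suspect image was derived from the original but edited.
distance = original - suspect
print(f"Hamming distance between hashes: {distance}")
if 0 < distance <= 16:  # illustrative threshold, not a calibrated cut-off
    print("Likely an edited copy of the original photo.")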
Conclusion:
The lack of a credible source and the detection of AI manipulation in the image show that the viral image claiming to depict a Hindu Sadhvi marrying a Muslim man is false. It has been digitally altered. The original image features BJP MLA Balmukund Acharya posing with a Muslim man, and there is no evidence of the claimed marriage.
Claim: An image circulating on social media claims to show a Hindu Sadhvi marrying a Muslim man.
Claimed on: X (Formerly known as Twitter)
Fact Check: Fake & Misleading