Launch of Central Suspect Registry to Combat Cyber Crimes
Introduction
The Indian government has introduced initiatives to enhance data sharing between law enforcement and stakeholders to combat cybercrime. Union Home Minister Amit Shah launched the Central Suspect Registry, Cyber Fraud Mitigation Centre, Samanvay Platform and Cyber Commandos programme at the Indian Cyber Crime Coordination Centre (I4C) Foundation Day celebration, held on 10th September 2024 at Vigyan Bhawan, New Delhi. The ‘Central Suspect Registry’ will serve as a central-level database with consolidated data on cybercrime suspects nationwide. The Indian Cyber Crime Coordination Centre will share a list of all repeat offenders on its servers. Shri Shah added that establishing the Suspect Registry at the central level and connecting the states to it will help prevent cybercrime.
Key Highlights of Central Suspect Registry
The Indian Cyber Crime Coordination Centre (I4C) has established the suspect registry in collaboration with banks and financial intermediaries to enhance fraud risk management in the financial ecosystem. The registry will serve as a central-level database with consolidated data on cybercrime suspects. Using data from the National Cybercrime Reporting Portal (NCRP), the registry makes it possible to identify cybercriminals as potential threats.
Central Suspect Registry: Need of the Hour
The Union Home Minister of India, Shri Shah, has emphasized the need for a national Cyber Suspect Registry to combat cybercrime. He argued that having separate registries for each state would not be effective, as cybercriminals have no boundaries. He emphasized the importance of connecting states to this platform, stating it would significantly help prevent future cyber crimes.
CyberPeace Outlook
There has been an alarming uptick in cybercrimes in the country, highlighting the need for proactive approaches to counter emerging threats. The recently launched initiatives under the umbrella of the Indian Cyber Crime Coordination Centre are significant steps by the Centre to improve coordination among law enforcement agencies, strengthen user awareness, and build technical capabilities to target cybercriminals, with the overall aim of combating the growing rate of cybercrime in the country.

Modern international trade heavily relies on data transfers for the exchange of digital goods and services. User data travels across multiple jurisdictions and legal regimes, each with different rules for processing it. Since international treaties and standards for data protection are inadequate, states, in an effort to protect their citizens' data, have begun extending their domestic privacy laws beyond their borders. However, this opens a Pandora's box of legal and administrative complexities for both data protection authorities and data processors. The former must balance the harmonization of domestic data protection laws with their extraterritorial enforcement, without overreaching into the sovereignty of other states. The latter must comply with the data privacy laws of every state where they collect, store, and process data. While the international legal community continues to grapple with these challenges, India can draw valuable lessons to refine the Digital Personal Data Protection Act, 2023 (DPDP) in a way that effectively addresses these complexities.
Why Extraterritorial Application?
Since data moves freely across borders and entities collecting such data from users in multiple states can misuse it or use it to gain an unfair competitive advantage in local markets, data privacy laws carry a clause on their extraterritorial application. Thus, this principle is utilized by states to frame laws that can ensure comprehensive data protection for their citizens, irrespective of the data’s location. The foremost example of this is the European Union’s (EU) General Data Protection Regulation (GDPR), 2016, which applies to any entity that processes the personal data of its citizens, regardless of its location. Recently, India has enacted the DPDP Act of 2023, which includes a clause on extraterritorial application.
The Extraterritorial Approach: GDPR and DPDP Act
The GDPR is considered the toughest data privacy law in the world and sets a global standard in data protection. According to Article 3, its provisions apply not only to data processors within the EU but also to those established outside its territory, if they offer goods and services to and conduct behavioural monitoring of data subjects within the EU. The enforcement of this regulation relies on heavy penalties for non-compliance in the form of fines up to €20 million or 4% of the company’s global turnover, whichever is higher, in case of severe violations. As a result, corporations based in the USA, like Meta and Clearview AI, have been fined over €1.5 billion and €5.5 million respectively, under the GDPR.
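The “whichever is higher” cap cited above can be illustrated with a small sketch. This is purely illustrative arithmetic based on the Article 83 maximums for severe violations; the function name and turnover figures are assumptions for the example, not part of any official tooling.

```python
def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Upper limit for a severe GDPR violation (Article 83(5)):
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# For a multinational with EUR 10 billion turnover, the 4% prong dominates:
print(gdpr_max_fine(10_000_000_000))  # 400000000.0 (EUR 400 million)

# For a firm with EUR 100 million turnover, the flat EUR 20 million cap applies:
print(gdpr_max_fine(100_000_000))     # 20000000.0
```

The turnover-linked prong is what makes the ceiling bite for Big Tech: a flat cap alone would be negligible relative to the revenues of companies like Meta.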
Like the GDPR, the DPDP Act extends its jurisdiction to foreign companies dealing with the personal data of data principals within Indian territory under Section 3(b). It has a similar extraterritorial reach and prescribes a penalty of up to Rs 250 crore in case of breaches. However, neither the Act nor the DPDP Rules, 2025, which are currently under deliberation, elaborates on an enforcement mechanism through which foreign companies can be held accountable.
Lessons for India’s DPDP on Managing Extraterritorial Application
- Clarity in Definitions: The GDPR clearly defines ‘personal data’, covering direct information such as name and identification number, indirect identifiers like location data, and online identifiers that can be used to identify the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person. It also prohibits revealing special categories of personal data, like religious beliefs and biometric data, to protect the fundamental rights and freedoms of data subjects. On the other hand, the DPDP Act/Rules define ‘personal data’ vaguely, leaving broad scope for Big Tech and ad-tech firms to bypass obligations.
- International Cooperation: Compliance is complex for companies due to varying data protection laws in different countries. The success of regulatory measures in such a scenario depends on international cooperation for governing cross-border data flows and enforcement. For DPDP to be effective, India will have to foster cooperation frameworks with other nations.
- Adequate Safeguards for Data Transfers: The GDPR regulates data transfers outside the EU via pre-approved legal mechanisms such as standard contractual clauses or binding corporate rules to ensure that the same level of protection applies to EU citizens’ data even when it is processed outside the EU. The DPDP should adopt similar safeguards to ensure that Indian citizens’ data is protected when processed abroad.
- Revised Penalty Structure: The GDPR mandates a penalty structure that must be effective, proportionate, and dissuasive. The supervisory authority in each member state has the power to impose administrative fines as per these principles, up to an upper limit set by the GDPR. On the other hand, the DPDP’s penalty structure is simplistic and will disproportionately impact smaller businesses. It must take into account factors such as the nature, gravity, and duration of the infringement, its consequences, compliance measures taken, etc.
- Governance Structure: The GDPR envisages a multi-tiered governance structure comprising:
- National-level Data Protection Authorities (DPAs) for enforcing national data protection laws and the GDPR,
- the European Data Protection Supervisor (EDPS) for monitoring the processing of personal data by EU institutions and bodies,
- the European Commission (EC) for developing GDPR legislation, and
- the European Data Protection Board (EDPB) for enabling coordination between the EC, EDPS, and DPAs.
In contrast, the Data Protection Board (DPB) under DPDP will be a single, centralized body overseeing compliance and enforcement. Since its members are to be appointed by the Central Government, it raises questions about the Board’s autonomy and ability to apply regulations consistently. Further, its investigative and enforcement capabilities are not well defined.
Conclusion
The protection of the human right to privacy (under the International Covenant on Civil and Political Rights and the Universal Declaration of Human Rights) in today’s increasingly interconnected digital economy warrants international standard-setting on cross-border data protection. In the meantime, reliance by States on the extraterritorial application of domestic laws is unavoidable. While India’s DPDP takes steps in this direction, these must be refined to ensure clarity regarding implementation mechanisms. They should push for alignment with the data protection laws of other States and account for the complexity of enforcement in cases involving extraterritorial jurisdiction. As India sets out to position itself as a global digital leader, a well-crafted extraterritorial framework under the DPDP Act will be essential to promote international trust in India’s data governance regime.
Sources
- https://gdpr-info.eu/art-83-gdpr/
- https://gdpr-info.eu/recitals/no-150/
- https://gdpr-info.eu/recitals/no-51/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://www.eqs.com/compliance-blog/biggest-gdpr-fines/#:~:text=ease%20the%20burden.-,At%20a%20glance,In%20summary
- https://gdpr-info.eu/art-3-gdpr/
- https://www.legal500.com/developments/thought-leadership/gdpr-v-indias-dpdpa-key-differences-and-compliance-implications/#:~:text=Both%20laws%20cover%20'personal%20data,of%20personal%20data%20as%20sensitive.

Introduction
In the digital landscape, there is a rapid advancement of technologies such as generative AI (Artificial Intelligence), deepfakes, machine learning, etc. Such technologies offer convenience to users in performing several tasks and are capable of assisting individuals and business entities. Certain regulatory mechanisms have also been established for the ethical and reasonable use of such advanced technologies. However, because these technologies are easily accessible, cybercriminals leverage AI tools for malicious activities and for committing various cyber frauds. Such misuse of advanced technologies has given rise to new cyber threats.
Deepfake Scams
Deepfake is an AI-based technology capable of creating realistic images or videos that are in fact generated by machine algorithms. Because the technology is easily accessible, fraudsters misuse it to commit various cybercrimes or to deceive and scam people through fake images or videos that look realistic. Using deepfake technology, cybercriminals manipulate audio and video content that appears genuine but is, in reality, fabricated.
Voice cloning
Audio can be deepfaked too: voice cloning produces a voice that closely resembles a real person's but is, in actuality, synthetic. Recently, in Kerala, a man fell victim to an AI-based video call on WhatsApp. He received a video call from a person claiming to be his former colleague. Using AI deepfake technology, the scammer impersonated the face of the former colleague and asked for financial help of Rs 40,000.
Uttarakhand Police issues warning on the rising trend of AI-based scams
Recently, Uttarakhand Police’s Special Task Force (STF) issued a warning acknowledging the widespread use of AI-based scams such as deepfake or voice-cloning scams targeting innocent people. Police expressed concern that several incidents have been reported in which innocent people are lured by cybercriminals. Cybercriminals exploit advanced technologies to make victims believe they are talking to their close ones or friends, when in reality they are interacting with fake voice clones or deepfake video calls. In this way, cybercriminals ask for immediate financial help, which ultimately leads to financial losses for victims of such scams.
Tamil Nadu Police Issues advisory on deepfake scams
To deceive people, cybercriminals misuse deepfake technologies and target victims for financial gain. Recently, the Tamil Nadu Police Cyber Wing issued an advisory on rising deepfake scams. Fraudsters are creating highly convincing images, videos, or voice clones to defraud innocent people and make them victims of financial fraud. The advisory urges users to limit the personal data they share online and to adjust privacy settings, and to promptly report any suspicious activity or cybercrime to the 1930 helpline or the National Cyber Crime Reporting Portal.
Best practices
- Pay attention to compromised video quality: deepfake videos often have poor quality or unusual blur, which calls their genuineness into question. Deepfake videos also often loop or freeze unusually, which indicates that the content might be fabricated.
- Whenever you receive a request for immediate financial help, do not act in haste; verify the situation by directly contacting the person on their primary contact number.
- Be vigilant and cautious: scammers often create a sense of urgency, leaving the victim no time to think and pressuring them into a quick decision. Scammers pose sudden emergencies and demand financial support on an urgent basis.
- Be aware of the recent scams and follow the best practices to stay protected from rising cyber frauds.
- Verify the identity of unknown callers.
- Utilise privacy settings on your social media.
- Pay attention to anything suspicious, and avoid sharing voice notes with unknown users, because scammers might use them as voice samples to create a clone of your voice.
- If you fall victim to such fraud, powerful resources are the National Cyber Crime Reporting Portal (www.cybercrime.gov.in) and the 1930 toll-free helpline number, where you can report cyber fraud, including financial crimes.
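The first tip above, watching for frozen or looping footage, can be sketched in code. The fragment below is a deliberately minimal illustration, not a real detector: it treats each frame as a flat list of pixel intensities, compares consecutive frames, and flags long runs of near-identical frames as a crude freeze signal. The threshold and the toy clip are assumptions chosen for the example; production deepfake-detection tools use far more sophisticated analysis.

```python
def mean_abs_diff(a, b):
    """Average absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def longest_frozen_run(frames, threshold=1.0):
    """Length of the longest run of consecutive near-identical frame pairs.

    A long run of frames that barely change is one crude hint that
    footage may be looped or artificially frozen.
    """
    longest = run = 0
    for prev, cur in zip(frames, frames[1:]):
        run = run + 1 if mean_abs_diff(prev, cur) < threshold else 0
        longest = max(longest, run)
    return longest

# Toy clip of six 3-pixel "frames"; frames 3-5 are identical (a suspicious freeze).
clip = [[10, 20, 30], [11, 22, 29], [50, 50, 50],
        [50, 50, 50], [50, 50, 50], [12, 40, 33]]
print(longest_frozen_run(clip))  # 2 (two consecutive identical-frame pairs)
```

In a natural video, consecutive frames almost always differ slightly due to sensor noise and motion, so an unusually long run of zero-difference frames is worth a second look, though it is only a heuristic, never proof of fabrication.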
Conclusion
AI-powered technologies are leveraged by cybercriminals to commit cybercrimes such as deepfake scams and voice-clone scams, in which innocent people are lured by scammers. Hence, there is a need for awareness and caution among the public. We should remain vigilant about the growing incidents of AI-based cyber scams and follow best practices to stay protected.
References:
- https://www.the420.in/ai-voice-cloning-cyber-crime-alert-uttarakhand-police/
- https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/exploiting-ai-how-cybercriminals-misuse-abuse-ai-and-ml#:~:text=AI%20and%20ML%20Misuses%20and%20Abuses%20in%20the%20Future&text=Through%20the%20use%20of%20AI,and%20business%20processes%20are%20compromised.
- https://www.ndtv.com/india-news/kerala-man-loses-rs-40-000-to-ai-based-deepfake-scam-heres-what-it-is-4217841
- https://news.bharattimes.co.in/t-n-cybercrime-police-issue-advisory-on-deepfake-scams/

In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are extending their expertise, investors are injecting money, and payment services ranging from small financial companies to tech giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of technology have made it easier and more profitable to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious if the perpetrators had decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake Report estimates that at least 98 percent of all deepfakes are porn and 99 percent of their victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews of victims of deepfake porn last year, the study said 63 percent of participants talked about experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are largely two types of deepfakes: one featuring faces of humans and another featuring computer-generated hyper-realistic faces of non-existing people. The first category is particularly concerning and is created by superimposing faces of real people on existing pornographic images and videos—a task made simple and easy by AI tools.
During the investigation, platforms were encountered hosting deepfake porn of stars ranging from Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna to TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain. It takes a few minutes and as little as Rs 40 for a user to create a high-quality fake porn video of 15 seconds on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business association and hide behind frivolous declarations such as: the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. However, the irony of these disclaimers is not lost on anyone, especially when they host thousands of non-consensual deepfake pornography.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 percent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whomever a user wants, even taking requests for celebrities. To encourage creators further, they enable them to monetize their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and creators are not targeting only celebrities; common people are equally susceptible.
Access to premium fake porn, like any other content, requires payment. But how can a gateway process payment for sexual content that lacks consent? It seems financial institutes and banks are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register/partner with these fintech giants have found a way out. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees/stakeholders, which potentially violates the platform's terms of use that ban the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently on its way to designing dedicated legislation to address issues arising out of deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecution and conviction of offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that sometimes involves several countries.
A victim can register a police complaint under Sections 66E and 66D of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect the digital personal data of users. Recently, the Union Government issued an advisory to social media intermediaries to identify misinformation and deepfakes. A comprehensive law promised by Union IT Minister Ashwini Vaishnaw is expected to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the dark web of deepfake pornography. It's a dance that is as disturbing as it is fascinating, a dance that raises questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It's a dance that we must all be aware of, for it is a dance that affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/