DPDP Bill 2023: A Comparative Analysis
Introduction
The Digital Personal Data Protection Bill, 2022, released for public consultation on November 18, 2022:
- Personal data may be processed only for a lawful purpose for which an individual has given consent, and consent may be deemed in certain cases.
- A Data Protection Board is provided for to deal with non-compliance with the Act.
- Certain rights are granted to individuals, such as the right to obtain information, seek correction and erasure, and grievance redressal.
- The transfer of personal data is allowed to locations notified by the government.

The Digital Personal Data Protection Bill, 2023, tabled in the Lok Sabha on August 3, 2023:
- The bill imposes reasonable obligations on data fiduciaries and data processors to safeguard digital personal data, balancing fundamental privacy rights with reasonable limitations on those rights.
- A new Data Protection Board is established to ensure compliance, remedies and penalties. The Board is entrusted with the powers of a civil court, such as taking cognisance of personal data breaches, investigating complaints and imposing penalties, and it can issue directions to ensure compliance with the Act. It will carefully examine instances of non-compliance and impose penalties on non-compliers.
- The bill grants more rights to individuals and establishes a balance between user protection and growing innovation, creating a transparent and accountable data governance framework. It incorporates business-friendly provisions by removing criminal penalties for non-compliance and facilitating international data transfers.
- Deemed consent is retained in a new form as “legitimate uses”.
- The bill does not provide any express clarity regarding compensation to be granted to the Data Principal in case of a data breach.
- A negative list is introduced, which restricts cross-border data transfer.
Introduction
In this ever-evolving world of technology, cybercriminals continue to explore new and innovative methods to exploit and intimidate their victims. One recent shocking incident has been reported from the city of Bharatpur, Rajasthan, where cyber crooks organised a mock court session to frighten their target. This complex operation, meant to induce fear and force obedience, exemplifies the daring and cunning of modern fraudsters. In this blog article, we delve deeper into this concerning occurrence to shed light on the strategies used and the ramifications for cybersecurity.
The Setup
The case was reported from Gopalgarh village in Bharatpur, Rajasthan, and has unfolded with a shocking twist: a father-son duo, Tahir Khan and his son Talim Khano, had been duping people for monetary gain by staging a mock court setting and recording the proceedings to intimidate their victims into paying hefty sums. In the recent case, they gained Rs 2.69 crore through sextortion. The duo would trace their targets on social media platforms, blackmail them, and extort large amounts of money.

An official complaint was filed by a 69-year-old victim who was singled out through his social media accounts, his friends, and his posts. Initially, the accused contacted the victim with a pre-recorded video featuring a nude woman, coaxing him into a compromising situation. Posing as officials from the Delhi Crime Branch and the CBI, they threatened the victim, claiming that a girl had approached them intending to file a complaint against him. Later, masquerading as YouTubers, they threatened to release the incriminating video online. Adding to the charade, they impersonated a local MLA and presented the victim with a forged stamp paper alleging molestation charges. Eventually, posing as Delhi Crime Branch officials again, they demanded money to settle the case after falsely stating that they had apprehended the girl. To further manipulate the victim, the accused staged a court proceeding, recorded it, and sent the recording to him, creating the illusion that the matter was concluded. This case of sextortion stands out as the only known instance in which the culprits went to such lengths, staging and recording a mock court to extort money. It was also discovered that the accused had fabricated a letter from the Delhi High Court, adding another layer of deception to their scheme.
The Investigation
The complaint was lodged with the cyber cell, and the ensuing investigation found that this case stands as one of the most significant sextortion incidents in the country. The father-son pair skilfully assumed five different roles, meticulously executing their plan, which included creating a simulated court environment. According to the investigators, “We have also managed to recover Rs 25 lakh from the accused duo, some from their residence in Gopalgarh and the rest from the bank account where it was deposited.”
The Tricks used by the duo
The father-son duo’s fake court scene was a meticulously built web of deception, designed to instil fear and helplessness in the victim. Let’s look at the tricks the two used to fool people.
- Social Engineering Strategies: Cybercriminals are skilled at using social engineering strategies to acquire the trust of their victims. In this situation, they may have employed phishing emails or phone calls to gather personal information about the victim. By appearing as respectable persons or organisations, the crooks tricked the victim into disclosing vital information, giving them the ammunition they needed to come across as trustworthy.
- Making a False Narrative: To make the fictitious court scenario more credible, the cyber hackers concocted a captivating story based on the victim’s purported legal problems. They might have created plausible papers to give their plan authority, such as forged court summonses, legal notifications, or warrants. They attempted to create a sense of impending danger and an urgent necessity for the victim to comply with their demands by deploying persuasive language and legal jargon.
- Psychological Manipulation: The perpetrators of the fictitious court scenario were well aware of the power of psychological manipulation in coercing their victims. They hoped to emotionally overwhelm the victim by using fear, uncertainty, and the possible implications of legal action. The offenders probably used threats of incarceration, fines, or public exposure to increase the victim’s fear and hinder their capacity to think critically. The idea was to use desperation and anxiety to force the victim to comply.
- Use of Technology to Strengthen Deception: Technological advancements have given cyber thieves tremendous tools to strengthen their misleading methods. The simulated court scenario might have included speech modulation software or deep fake technology to impersonate the voices or appearances of legal experts, judges, or law enforcement personnel. This technology made the deception even more believable, blurring the border between fact and fiction for the victim.
The use of technology in cybercriminals’ misleading techniques has considerably increased their capacity to fool and influence victims. Cybercriminals may develop incredibly realistic and persuasive simulations of judicial processes using speech modulation software, deep fake technology, digital evidence alteration, and real-time communication tools. Individuals must be attentive, gain digital literacy skills, and practice critical thinking when confronting potentially misleading circumstances online as technology advances. Individuals can better protect themselves against the expanding risks posed by cyber thieves by comprehending these technological breakthroughs.
What to do?
Seeking Help and Reporting Incidents: If you or anyone you know falls victim to cybercrime or is targeted by cybercrooks, such as in the mock court scene staged by this duo, it is crucial to seek help and act quickly by reporting the incident. Prompt reporting serves various purposes, including raising awareness, assisting investigations, and preventing similar crimes from occurring again. Victims should take the following steps:
- Contact your local law enforcement: Inform local law enforcement about the cybercrime incident. Provide them with pertinent facts and evidence, since they have the experience and resources to investigate cybercrime and catch the offenders involved.
- Seek Assistance from a Cybersecurity specialist: Consult a cybersecurity specialist or respected cybersecurity business to analyse the degree of the breach, safeguard your digital assets, and obtain advice on minimising future risks. Their knowledge and forensic analysis can assist in gathering evidence and mitigating the consequences of the occurrence.
- Preserve Evidence: Keep any evidence relating to the event, including emails, texts, and suspicious actions. Avoid erasing digital evidence, and consider capturing screenshots or creating copies of pertinent exchanges. Evidence preservation is critical for investigations and possible legal procedures.
Conclusion
The fake court scene incident shows how far cybercriminals will go to deceive and abuse their victims. These criminals sought to exploit fear and vulnerability in the victim through social engineering methods, the fabrication of a false narrative, the manipulation of personal information, psychological manipulation, and the use of technology. Individuals can better defend themselves against cybercrooks by remaining watchful and sceptical.
Introduction: The Internet’s Foundational Ideal of Openness
The Internet was built as a decentralised network to foster open communication and global collaboration. Unlike traditional media or state infrastructure, no single government, company, or institution controls the Internet. Instead, it has historically been governed by a consensus of the multiple communities, like universities, independent researchers, and engineers, who were involved in building it. This bottom-up, cooperative approach was the foundation of Internet governance and ensured that the Internet remained open, interoperable, and accessible to all. As the Internet began to influence every aspect of life, including commerce, culture, education, and politics, it required a more organised governance model. This compelled the rise of the multi-stakeholder internet governance model in the early 2000s.
The Rise of Multistakeholder Internet Governance
Representatives from governments, civil society, technical experts, and the private sector congregated at the United Nations World Summit on Information Society (WSIS), and adopted the Tunis Agenda for the Information Society. Per this Agenda, internet governance was defined as “… the development and application by governments, the private sector, and civil society in their respective roles of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” Internet issues are cross-cutting across technical, political, economic, and social domains, and no one actor can manage them alone. Thus, stakeholders with varying interests are meant to come together to give direction to issues in the digital environment, like data privacy, child safety, cybersecurity, freedom of expression, and more, while upholding human rights.
Internet Governance in Practice: A History of Power Shifts
While the idea of democratizing Internet governance is a noble one, the Tunis Agenda has been criticised for reflecting geopolitical asymmetries and relegating the roles of technical communities and civil society to the sidelines. Throughout the history of the internet, certain players have wielded more power in shaping how it is managed. Accordingly, internet governance can be said to have undergone three broad phases.
In the first phase, the Internet was managed primarily by technical experts in universities and private companies, which contributed to building and scaling it up. The standards and protocols set during this phase are in use today and make the Internet function the way it does. This was the time when the Internet was a transformative invention and optimistically hailed as the harbinger of a utopian society, especially in the USA, where it was invented.
In the second phase, the ideal of multistakeholderism was promoted, in which all those who benefit from the Internet work together to create processes that will govern it democratically. This model also aims to reduce the Internet’s vulnerability to unilateral decision-making, an ideal that has been under threat because this phase has seen the growth of Big Tech. What started as platforms enabling access to information, free speech, and creativity has turned into a breeding ground for misinformation, hate speech, cybercrime, Child Sexual Abuse Material (CSAM), and privacy concerns. The rise of generative AI only compounds these challenges. Tech giants like Google, Meta, X (formerly Twitter), OpenAI, Microsoft, Apple, etc. have amassed vast financial capital, technological monopoly, and user datasets. This gives them unprecedented influence not only over communications but also culture, society, and technology governance.
The anxieties surrounding Big Tech have fed into the third phase, with increasing calls for government regulation and digital nationalism. Governments worldwide are scrambling to regulate AI, data privacy, and cybersecurity, often through processes that lack transparency. An example is India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which was passed without parliamentary debate. Governments are also pressuring platforms to take down content through opaque takedown orders. Laws like the UK’s Investigatory Powers Act, 2016, are criticised for giving the government the power to indirectly mandate encryption backdoors, compromising the strength of end-to-end encryption systems. Further, the internet itself is fragmenting into the “splinternet” amid rising geopolitical tensions, in the form of Russia’s “sovereign internet” or through China’s Great Firewall.
Conclusion
While multistakeholderism is an ideal, Internet governance is a playground of contesting power relations in practice. As governments assert digital sovereignty and Big Tech consolidates influence, the space for meaningful participation of other stakeholders has been negligible. Consultation processes have often been symbolic. The principles of openness, inclusivity, and networked decision-making are once again at risk of being sidelined in favour of nationalism or profit. The promise of a decentralised, rights-respecting, and interoperable internet will only be fulfilled if we recommit to the spirit of Multi-Stakeholder Internet Governance, not just its structure. Efficient internet governance requires that the multiple stakeholders be empowered to carry out their roles, not just talk about them.
References
- https://www.newyorker.com/magazine/2024/02/05/can-the-internet-be-governed
- https://www.internetsociety.org/wp-content/uploads/2017/09/ISOC-PolicyBrief-InternetGovernance-20151030-nb.pdf
- https://itp.cdn.icann.org/en/files/government-engagement-ge/multistakeholder-model-internet-governance-fact-sheet-05-09-2024-en.pdf
- https://nrs.help/post/internet-governance-and-its-importance/
- https://daidac.thecjid.org/how-data-power-is-skewing-internet-governance-to-big-tech-companies-and-ai-tech-guys/
Introduction
A Pew Research Center survey conducted in September 2023 found that, among the 1,453 US teens aged 13-17 surveyed, a majority use TikTok (63%), Snapchat (60%) and Instagram (59%). Further, 13-19 year-olds make up 31% of social media users in India, according to a 2021 report by Statista. This widespread use has been a leading cause of young users inadvertently or deliberately accessing adult content on social media platforms.
Brief Analysis of Meta’s Proposed AI Age Classifier
It can be seen as a step towards safer and better-moderated content for teen users. Placing age restrictions on teen social media users is reasonable because they sometimes lack the cognitive maturity to judge what content is appropriate to share and consume on these platforms at their age. Moreover, teens need to understand platform policies, including the fact that nothing can ever be completely erased from the internet.

Unrestricted access to social media exposes teens to potentially harmful or inappropriate online content, raising concerns about their safety and mental well-being. Meta's recent measures aim to address this; however, striking a balance between engagement, protection, and privacy remains essential.
The AI-based Age Classifier proposed by Meta classifies users based on their age and places them in the ‘Teen Account’ category, which has built-in limits on who can contact them and the content they see, along with more ways to connect and explore their interests. According to Meta, teens under 16 years of age will need parental permission to change these settings.
Meta's Proposed Solution: AI-Powered Age Classifier
This tool uses Artificial Intelligence (AI) to analyse users’ online behaviours and other profile information to estimate their age. It analyses different factors such as who follows the user, what kind of content they interact with, and even comments like birthday posts from friends. If the classifier detects that a user is likely under 18 years old, it will automatically switch them to a “Teen Account.” These accounts have more restrictive privacy settings, such as limiting who can message the user and filtering the type of content they can see.

The adult classifier is anticipated to be deployed by next year and will start scanning for users who may have lied about their age. All users found to be under 18 years old will be placed in the teen account category, but 16-17 year olds will be able to adjust these settings if they want more flexibility, while younger teens will need parental permission. The effort is part of a broader strategy to protect teens from potentially harmful content on social media. This is especially important today, when invasions of privacy can be penalised under legal instruments like the GDPR, the DPDP Act, COPPA and many more.
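To make these rules concrete, below is a minimal, hypothetical sketch of how an age estimate could be mapped to an account tier. It is not Meta's implementation: the ProfileSignals fields, the choice to trust the lower of the stated and estimated ages, and the tier names are all assumptions made for illustration; Meta's actual classifier is a proprietary machine-learning model trained on far richer signals.

```python
from dataclasses import dataclass


@dataclass
class ProfileSignals:
    stated_age: int        # age the user claimed at sign-up
    estimated_age: float   # age inferred from behavioural signals (followers,
                           # content interactions, birthday comments, etc.)


def account_tier(signals: ProfileSignals) -> str:
    """Map an age estimate to an account tier, mirroring the rules above:
    under-18s get Teen Accounts, 16-17 year olds may relax settings themselves,
    and younger teens need parental permission."""
    # Assumption: when stated and estimated ages disagree, trust the lower one.
    age = min(signals.stated_age, int(signals.estimated_age))
    if age >= 18:
        return "standard_account"
    if age >= 16:
        return "teen_account_self_adjustable"
    return "teen_account_parental_permission_required"


# Example: a user who claims to be 19 but whose behavioural signals suggest ~15
print(account_tier(ProfileSignals(stated_age=19, estimated_age=15.2)))
# -> teen_account_parental_permission_required
```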
Policy Implications and Compliances
Meta's AI Age Classifier addresses growing concerns over teen safety on social media by categorising users based on age, restricting minors' access to adult content, and enforcing parental controls. However, reliance on behavioural tracking might impact the online privacy of teen users. Hence, Meta's approach needs to be aligned with applicable jurisdictional laws. In India, the recently enacted DPDP Act, 2023 prohibits behavioural tracking of, and targeted advertising to, children. Accuracy and privacy are the two main concerns that Meta should anticipate when it rolls out the classifier.

Meta emphasises transparency to build user trust, and customisable parental controls empower families to manage teens' online experiences. While this initiative reflects Meta's commitment to creating a safer, regulated digital space for young users worldwide, the company must also align its policies with regional policy and legal standards. Meta's proposed AI Age Classifier aims to protect teens from adult content, reassure parents by allowing them to curate acceptable content, and enhance platform integrity by ensuring a safer environment for teen users on Instagram.
Conclusion
Meta’s AI Age Classifier, while promising to enhance teen safety by placing certain restrictions and parental controls on accounts categorised as ‘teen accounts’, must also align properly with global regulations like the GDPR and, in India's case, the DPDP Act. This tool offers reassurance to parents and aims to foster a safer social media environment for teens. To support accurate age estimation and transparency, policy should focus on refining AI methods to minimise errors and on ensuring clear disclosures about data handling. Collaborative international standards are essential as privacy laws evolve. Meta’s initiative is intended to prioritise youth protection and build public trust in AI-driven moderation across social platforms, but it must also safeguard the online privacy of users while deploying these advanced technical measures on its platforms.
References
- https://familycenter.meta.com/in/our-products/instagram/
- https://www.indiatoday.in/technology/news/story/instagram-will-now-take-help-of-ai-to-check-if-kids-are-lying-about-their-age-on-app-2628464-2024-11-05
- https://www.bloomberg.com/news/articles/2024-11-04/instagram-plans-to-use-ai-to-catch-teens-lying-about-age
- https://tech.facebook.com/artificial-intelligence/2022/6/adult-classifier/
- https://indianexpress.com/article/technology/artificial-intelligence/too-young-to-use-instagram-metas-ai-classifier-could-help-catch-teens-lying-about-their-age-9658555/