CCI Imposes ₹213 Crore Penalty on Meta for Abusing Dominance via 2021 WhatsApp Privacy Policy
Mr. Neeraj Soni
Sr. Researcher - Policy & Advocacy, CyberPeace
PUBLISHED ON
Nov 20, 2024
Introduction
On 18 November 2024, the Competition Commission of India (CCI) imposed a ₹213 crore penalty on Meta for abusing its dominant position in internet-based messaging, through WhatsApp, and in online display advertising. The order concerns WhatsApp's 2021 Privacy Policy update, which undermined users' ability to opt out of having their data shared with the group's social media platform, Facebook. The CCI also directed WhatsApp not to share user data collected on its platform with other Meta companies or products for advertising purposes for five years.
CCI Contentions
The regulator held that, for purposes other than advertising, WhatsApp's policy should include a detailed explanation of the user data shared with other Meta group companies or products, specifying the purpose of each type of sharing. It also held that sharing user data collected on WhatsApp with other Meta companies or products, for purposes other than providing WhatsApp services, should not be a condition for users to access WhatsApp in India. The order is significant because it upholds user consent as a key principle in the functioning of social media giants, in line with measures taken in some other markets.
Meta’s Stance
Meta, WhatsApp's parent company, has expressed its disagreement with the CCI's decision to impose the ₹213 crore penalty over users' privacy concerns. Meta clarified that the 2021 update did not change the privacy of people's personal messages and was offered as a choice to users at the time, and that no one would have their account deleted or lose WhatsApp functionality because of the update.
Meta stated that the update was about introducing optional business features on WhatsApp and providing further transparency about how the company collects data. It added that WhatsApp has been incredibly valuable to people and businesses, enabling organisations and government institutions to deliver citizen services through the COVID-19 pandemic and beyond and supporting small businesses, all of which furthers the Indian economy. Meta plans to find a path forward that allows it to continue providing the experiences that "people and businesses have come to expect" from it. The CCI issued cease-and-desist directions and ordered Meta and WhatsApp to implement certain behavioural remedies within a defined timeline.
The competition watchdog noted that WhatsApp's 2021 policy update made it mandatory for users to accept the new terms, including data sharing with Meta, and removed the earlier opt-out option, which the CCI categorised as an "unfair condition" under the Competition Act. It further noted that WhatsApp's sharing of users' business transaction information with Meta gave the group's entities an unfair advantage over competing platforms.
CyberPeace Outlook
The 2021 policy update mandated data sharing with other Meta group companies, removing the opt-out option and compelling users to accept the terms to continue using the platform. As the CCI noted, this undermined user autonomy and amounted to an abuse of Meta's dominant market position, violating Section 4(2)(a)(i) of the Competition Act.
The CCI’s ruling requires WhatsApp to offer all users in India, including those who had accepted the 2021 update, the ability to manage their data-sharing preferences through a clear and prominent opt-out option within the app. This decision underscores the importance of user choice, informed consent, and transparency in digital data policies.
By addressing the coercive nature of the policy, the CCI ruling establishes a significant legal precedent for safeguarding user privacy and promoting fair competition. It highlights the growing acknowledgement of privacy as a fundamental right and reinforces the accountability of tech giants to respect user autonomy and market fairness. The directive mandates that data sharing within the Meta ecosystem must be based on user consent, with the option to decline such sharing without losing access to essential services.
MSMEs, being the cornerstone of the Indian economy, are among the most vulnerable targets in cyberspace, and no enterprise is too small to be a target for malicious actors. MSMEs rarely perform cyber-risk assessments, and when they do, they often uncover a range of internal problems, such as exposure to attacks enabled by inadequate network security, online fraud, and ransomware. Tackling cyber threats in MSMEs is critical, mainly because of their heavy dependence on digital technologies and the growing sophistication of cyberattacks. Protecting them is essential, as a security breach can have devastating consequences, including financial loss, reputational damage, and operational disruption.
Key Cyber Threats that MSMEs are facing
MSMEs are most vulnerable to phishing attacks, ransomware, malware and viruses, insider threats, social engineering attacks, supply chain attacks, credential stuffing and brute-force attacks, and Distributed Denial of Service (DDoS) attacks. Some of these attacks are described below:
Insider threats arise from employees or contractors who intentionally or unintentionally compromise security. They involve data theft, misuse of access privileges, or accidental data exposure.
Social engineering attacks manipulate individuals into divulging confidential information or performing actions that compromise security, through pretexting, baiting, and impersonation.
Supply chain attacks exploit trusted relationships between businesses and their suppliers to introduce malware, compromise data integrity, and disrupt operations.
Credential stuffing and brute-force attacks give attackers unauthorized access to accounts and systems, leading to data breaches and financial losses (a minimal mitigation sketch follows below).
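To make the credential-stuffing and brute-force risk concrete, below is a minimal illustrative sketch of a common mitigation: counting failed login attempts per account and enforcing a temporary lockout. The names and thresholds (MAX_FAILED_ATTEMPTS, LOCKOUT_SECONDS) are hypothetical assumptions, and a real deployment would persist this state and pair it with MFA rather than rely on an in-memory dictionary.

```python
import time

# Hypothetical thresholds, for illustration only.
MAX_FAILED_ATTEMPTS = 5        # lock the account after 5 consecutive failures
LOCKOUT_SECONDS = 15 * 60      # 15-minute lockout window

_failed = {}  # username -> (failure_count, timestamp_of_first_failure)

def is_locked_out(username: str) -> bool:
    """Return True while the account is inside its lockout window."""
    record = _failed.get(username)
    if record is None:
        return False
    count, first_failure = record
    if count < MAX_FAILED_ATTEMPTS:
        return False
    if time.time() - first_failure > LOCKOUT_SECONDS:
        del _failed[username]  # lockout expired; reset the counter
        return False
    return True

def record_attempt(username: str, success: bool) -> None:
    """Update the failure counter after every authentication attempt."""
    if success:
        _failed.pop(username, None)  # reset on a successful login
        return
    count, first_failure = _failed.get(username, (0, time.time()))
    _failed[username] = (count + 1, first_failure)
```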
Challenges Faced by MSMEs in Cybersecurity
The challenges MSMEs face in cybersecurity stem mainly from limited resources and budget constraints, which in turn cause other problems, such as a lack of specialised expertise, since MSMEs often lack dedicated IT support or cybersecurity experts. Awareness and training are needed to address a poor understanding of cyber threats and their growing complexity. Supply chain vulnerabilities arise because MSMEs frequently rely on third-party vendors and partners. Regulatory compliance is often complex and tends to be taken seriously only when an issue crops up, but it needs dedicated attention, especially with the DPDP Act coming into force. Finally, the absence of an incident response plan leads to delayed and inadequate responses to cyber incidents, increasing the impact of breaches.
Best Practices for Tackling Cyber Threats for MSMEs
To effectively tackle cyber threats, MSMEs should adopt a comprehensive approach such as:
Implement and enforce strong access controls by using multi-factor or two-factor authentication (MFA/2FA) and password policies. Limit employee access on a role basis and update permissions as roles change (a minimal MFA sketch appears after this list).
Regularly apply security patches and use automated patch management solutions to prevent exploitation of known vulnerabilities.
Conduct employee training and awareness programs, promote a security-first mindset, and assess employee readiness to identify areas for improvement.
Implement network security measures using firewalls and intrusion detection systems. Secure Wi-Fi networks with strong encryption and change default router credentials; segment networks to limit lateral movement in case of a breach.
Back up data regularly so that, in case of an attack, lost data can be recovered, and store backups in secure offsite locations to protect them from unauthorized access (a simple backup sketch also appears after this list).
Develop an incident response plan that outlines roles, responsibilities, and procedures for responding to cyber incidents, with regular drills to ensure readiness and clear communication protocols for reporting incidents to regulators, stakeholders, and customers.
Implement endpoint security solutions using antivirus and anti-malware software. Secure devices against unauthorized access, and use mobile device management solutions to enforce security policies on employee-owned devices used for work.
Obtain cyber insurance coverage to transfer financial risk in case of cyber incidents. Policies should provide comprehensive coverage, including business interruption, data restoration, legal liabilities, and incident response costs.
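As promised above, here is a minimal sketch of the MFA/2FA recommendation using the open-source pyotp library, which implements time-based one-time passwords (TOTP) compatible with common authenticator apps. The function names and issuer string are illustrative assumptions; a production system would store each user's secret encrypted and handle rate limiting and clock skew.

```python
import pyotp

def enrol_user(account_name: str, issuer: str = "ExampleMSME"):
    """Generate a TOTP secret and a provisioning URI for an authenticator app."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name=account_name, issuer_name=issuer)
    return secret, uri  # the secret must be stored securely server-side

def verify_code(secret: str, code: str) -> bool:
    """Check a user-submitted one-time code against the stored secret."""
    return pyotp.TOTP(secret).verify(code)

secret, uri = enrol_user("alice@example.com")
print("Scan this URI into an authenticator app:", uri)
# Later, at login time:
# verify_code(secret, user_submitted_code)
```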
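And here is the simple backup sketch referenced earlier: it creates a timestamped compressed archive of a directory and writes a SHA-256 checksum beside it so restores can be integrity-checked. The paths are hypothetical, and offsite replication (cloud storage, or removable media rotated offsite) would be layered on top of a local script like this.

```python
import hashlib
import tarfile
from datetime import datetime
from pathlib import Path

def backup_directory(source: str, backup_dir: str) -> Path:
    """Create a timestamped .tar.gz archive of `source` plus a checksum file."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = Path(backup_dir) / f"backup-{stamp}.tar.gz"

    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)

    # Record a SHA-256 checksum so a restore can verify the archive is intact.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    Path(str(archive) + ".sha256").write_text(f"{digest}  {archive.name}\n")
    return archive

# Hypothetical paths; schedule via cron or Windows Task Scheduler.
# backup_directory("/srv/accounts-data", "/mnt/offsite-staging")
```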
Recommended Cybersecurity Solutions Tailored for MSMEs
A Managed Security Service Provider (MSSP) offers outsourced cybersecurity services, including threat monitoring, incident response, and vulnerability management, capabilities that many MSMEs lack in-house.
Cloud-based security solutions, such as Firewall-as-a-Service and Security Information and Event Management (SIEM), provide scalable and cost-effective protection for MSMEs.
Endpoint Detection and Response (EDR) Tools detect and respond to threats on endpoints, providing real-time visibility into potential threats and automating incident response actions.
Security Awareness Training Platforms deliver interactive training sessions and simulations to educate employees about cybersecurity threats and best practices.
Conclusion
Addressing cyber threats in MSMEs requires a proactive and multi-layered approach that encompasses technical solutions, employee training, and strategic planning. By implementing best practices and leveraging cybersecurity solutions tailored to their specific needs, MSMEs can significantly enhance their resilience against cyber threats. As cyber threats continue to evolve, staying informed about the latest trends and adopting a culture of security awareness will be essential for MSMEs to protect their assets, reputation, and bottom line.
A video is going viral on social media claiming to show family members mourning the death of Iddo Netanyahu, brother of Israeli Prime Minister Benjamin Netanyahu. However, research by CyberPeace found that the claim being shared with the video is false. The video has been available on the internet since 2024; according to available information, it shows the funeral of an Israeli soldier who was killed in an attack in the Jabalia area of northern Gaza. Moreover, no credible news reports were found confirming the death of Iddo Netanyahu.
Claim:
An Instagram user shared the viral video with an English caption stating, “Family members are crying after the death of Iddo Netanyahu was confirmed.”
During the investigation, we found the original video on an X (formerly Twitter) account named Warfare Analysis. The video was posted on October 12, 2024, confirming that it predates the recent Iran-Israel conflict. Notably, the “Warfare Analysis” logo is also visible in the viral video. According to the caption, the footage shows the funeral of Israeli soldier Netanel Hershkovit, who was killed on October 11, 2024, in an attack by Al-Qassam in Jabalia, northern Gaza.
Our research found that the claim shared with the video is false. The video has been online since 2024 and shows the funeral of an Israeli soldier killed in northern Gaza. Additionally, no credible reports confirm the death of Iddo Netanyahu.
The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI lifecycle, from design and development through to deployment.
Recent regulatory frameworks, such as the European Union's AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.
The OECD conceptualises this development process as the AI system lifecycle. Each stage entails distinct technical and administrative procedures, and choices made at these stages dictate the goals and limits of an AI system. Further, the quality and representativeness of training data strongly affect how models behave after implementation.
Since this is an iterative rather than a linear procedure, risks can be introduced at each stage of the AI lifecycle. Models can be retrained on new data, and systems are regularly updated after deployment to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
Where AI Risks Emerge
AI risks usually emerge early in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, absence of transparency in automated decision-making, and risks to fundamental rights.
[Figure: AI Governance Risk Landscape — core risk categories jointly identified by the EU AI Act and the UNESCO Recommendation on the Ethics of Artificial Intelligence]
Outlining the risks across the AI lifecycle helps identify where governance interventions are most needed. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after the development process, when generative AI systems are deployed at scale on digital platforms.
[Figure: AI System Lifecycle — key risks at each stage, as identified in the EU AI Act and the UNESCO Recommendation on the Ethics of AI]
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to mitigate AI risks across the lifecycle. These tools are intended to ensure that AI systems meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union's Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks and introduces a risk-based regulatory strategy. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints across the lifecycle of AI systems.
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.
[Figure: Governance Overlay — regulatory tools mapped to each stage of AI development, per the EU AI Act and the UNESCO Recommendation on the Ethics of AI]
Several policy tools target the risks that occur in the pre-development stages. For example, algorithmic impact assessments have been applied in various jurisdictions to evaluate the possible societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, including dataset transparency requirements and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulation. Many governance mechanisms are activated only after AI systems are classified as "high risk" or after they are deployed in the real world, yet the most serious sources of harm have their roots in earlier stages of the development process.
For example, biased or unbalanced training data is almost inevitably a source of discriminatory results in automated decision systems. When such models are applied in areas like hiring, credit scoring, or public service delivery, these biases can quickly spread to large populations and undermine democratic rights. Similarly, a lack of transparency in model design can leave regulators and affected individuals unable to scrutinise how decisions are made. This reflects a broader timing gap in AI governance: risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched within AI systems even before they are used in practice, owing to biased datasets, incomplete documentation of training data, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most instances of algorithmic discrimination are associated with training data that under-represents certain population groups or encodes historical bias. Since machine learning models optimise over the patterns present in their datasets, these biases can be carried through the whole lifecycle and reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: While many regulatory instruments target deployment and monitoring, fewer systematically tackle the risks posed by the earlier design and development phases.
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory frameworks should require systematic risk evaluation from the beginning of AI development, especially at the problem design and dataset selection phases. This would help detect potentially harmful applications in advance, before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors discover potential sources of bias or privacy risk (an illustrative sketch follows after this list).
3. Expand independent algorithmic auditing: Regular third-party audits can assess AI systems for fairness, robustness, and security weaknesses. Auditing mechanisms are especially relevant for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored continuously after deployment to identify model drift, unforeseen consequences, or abuse. Reporting mechanisms can help regulators track emerging risks and adapt governance frameworks accordingly (a minimal drift-detection sketch also follows below).
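As flagged in recommendation 2, below is an illustrative sketch of what standardised, machine-readable dataset documentation could look like, loosely in the spirit of "datasheets for datasets" and model cards. The field names are assumptions for illustration, not a mandated schema; the point is that provenance, composition, and known limitations travel with the dataset, where auditors and regulators can inspect them.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetCard:
    """Minimal dataset documentation; field names are illustrative only."""
    name: str
    version: str
    provenance: str              # where and how the data was collected
    collection_period: str
    composition: dict            # e.g. class or demographic distribution
    known_limitations: list = field(default_factory=list)
    personal_data: bool = False  # flags privacy-review requirements

card = DatasetCard(
    name="loan-applications-sample",
    version="1.0",
    provenance="Operational records from a single regional lender",
    collection_period="2018-2022",
    composition={"approved": 0.71, "rejected": 0.29},
    known_limitations=[
        "Single region only; not nationally representative",
        "Historical approval decisions may encode past bias",
    ],
    personal_data=True,
)

print(json.dumps(asdict(card), indent=2))  # publish alongside the dataset
```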
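Recommendation 4 can be made concrete in a similar way. The minimal sketch below uses SciPy's two-sample Kolmogorov–Smirnov test to compare a live window of one model input against its training-time distribution and flag possible drift; the threshold is an illustrative assumption, and production monitoring would track many features with purpose-built tooling and feed alerts into the reporting mechanisms described above.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference: np.ndarray, live_window: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Flag drift when the live distribution differs significantly
    from the training-time reference (illustrative threshold)."""
    statistic, p_value = ks_2samp(reference, live_window)
    return p_value < p_threshold

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production data

if check_drift(reference, live):
    print("Possible input drift detected; trigger review and reporting.")
```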
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.