Digital Disparities and Constitutional Mandates: Supreme Court’s Stand on Inclusion
Muskan Sharma
Research Analyst, Policy & Advocacy, CyberPeace
Published on May 15, 2025
Introduction
On April 30, 2025, the Supreme Court of India delivered a landmark judgment that cast a sharp light on one of the most overlooked yet pressing issues in modern governance: digital inequity. In a country with roughly 900 million internet users, the ruling highlights a disheartening paradox that brings the “digital divide” to centre stage. While India may be the world’s second-largest online market, a significant segment of its population remains digitally disenfranchised. The judgment, delivered in response to two interconnected petitions, underscored that access to the internet is no longer a luxury but a lifeline integral to exercising fundamental rights. The court stated in clear terms that the government must build a digital ecosystem that is inclusive and accessible to all, and recognised the right to digital access as an intrinsic part of the right to life and liberty under Article 21 of the Indian Constitution.
Understanding the Context: What Prompted the Petitions?
The judgment arose from two writ petitions, which sought guidelines enabling people with blindness or low vision, and acid attack survivors, respectively, to complete the digital Know Your Customer (KYC)/e-KYC/video-KYC processes mandated by the RBI’s KYC Master Directions, 2016. The petitions were reserved for judgment on January 28, and the court delivered its decision on April 30, 2025, emphasising that true inclusion in the digital era is grounded in inclusive digital infrastructure, which must provide reasonable accommodation to those who face impediments due to disability or disfigurement.
In line with this view, it laid down guidelines to ensure that persons with disabilities and acid attack survivors receive equal treatment even when digital processes are involved, in accordance with the provisions of the Rights of Persons with Disabilities Act, 2016 (hereinafter referred to as the “RPwD Act”).
The court also observed that government services are now largely delivered through digital platforms (e-governance), and that access to these welfare schemes is the right of every citizen, irrespective of disability. Failing to make such e-governance facilities accessible to these individuals amounts to a gross failure of the schemes’ objectives.
Key Observations and Directives
The court directed the government to issue fresh guidelines establishing alternative methods of conducting digital KYC/e-KYC for all persons with impairments, low vision, or disfigurement, with greater sensitivity towards acid attack survivors in particular. It made its intention clear that the right to digital access is intrinsic to the right to life and liberty. Tasks within the ambit of digital KYC, such as pen-on-paper signatures, screen signatures, and the brief window for OTP entry, create an inaccessible and exclusionary framework, violating not only the dignity of affected persons but also their legal rights protected under the RPwD Act, 2016. The ruling directs a fundamental reimagining of digital governance through the lens of inclusion, equality, and dignity.
Conclusion
The court does not mince its words in declaring digital accessibility a constitutional imperative; it has made clear that bridging the digital divide is no longer optional but a legal duty. The decision marks a new beginning for digital transformation, bringing digital access and the rights of persons together. The effect of this judgment will not be restricted to one class of people; it will extend to all individuals who face such obstacles daily because of the exclusionary design of digital platforms.
The development of high-speed broadband internet in the 1990s triggered a boom in online gaming, particularly in East Asian countries like South Korea and China. This led to the proliferation of competitive video game genres, which had otherwise existed mostly as high-score and face-to-face competitions at arcades. The online competitive gaming market has only grown over the years, with a separate domain for professional competition called esports. This industry is projected to reach US$4.3 billion by 2029, driven by advancements in gaming technology, increased viewership, multi-million-dollar tournaments, professional leagues, sponsorships, and advertising revenues. However, the industry is still in its infancy and struggles with fairness and integrity issues. It can draw lessons in regulation from the traditional sports market to address these challenges and support consistent global growth.
The Growth of Esports
The appeal of online gaming lies in its design innovations, social connectivity, and accessibility. Its rising popularity has turned online gaming competitions into an industry, formally organised into leagues and tournaments with prize pools reaching millions of dollars. Professional teams now have coaches, analysts, and psychologists supporting their players. For scale, the 2024 Esports World Cup (EWC) held in Saudi Arabia had the largest combined prize pool, at over US$60 million. Such tournaments can be watched in arenas and streamed online, and by 2025, around 322.7 million people are forecast to be occasional viewers of esports events.
According to Statista, esports revenue is expected to grow at a compound annual rate (CAGR 2024-2029) of 6.59%, resulting in a projected market volume of US$5.9 billion by 2029. Esports has even been recognised in traditional sporting events, debuting as a medal sport at the Asian Games 2022. In 2024, the International Olympic Committee (IOC) announced the Olympic Esports Games, with the inaugural event set to take place in 2025 in Saudi Arabia. Hosting esports events such as the EWC is expected to boost tourism and the host country’s local economy.
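As a quick sanity check of these figures, the compounding works out as follows (a minimal sketch; the 6.59% CAGR and the US$5.9 billion 2029 projection are the Statista numbers cited above, while the implied 2024 base is back-calculated for illustration, not a published figure):

```python
# Back-of-the-envelope check of the compounding behind the figures above:
# a 6.59% CAGR over 2024-2029 reaching US$5.9 billion in 2029.

cagr = 0.0659        # compound annual growth rate, 2024-2029
target_2029 = 5.9    # projected market volume, US$ billions
years = 5            # 2024 -> 2029

# V_2024 = V_2029 / (1 + r)^n  (back-calculated, for illustration only)
implied_2024 = target_2029 / (1 + cagr) ** years
print(f"Implied 2024 market volume: ~US${implied_2024:.2f} billion")
# Output: Implied 2024 market volume: ~US$4.29 billion
```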
The Challenges of Esports Regulation
While the esports ecosystem provides numerous opportunities for growth and partnerships, its under-regulation presents challenges. Lacking a single governing body, like the IOC for the Olympics or FIFA for football, to lay down centralised rules, the industry faces challenges such as:
Integrity issues: Esports is not immune to cheating. Match-fixing, advanced software hacks, doping (e.g., Adderall use), and other illegal aids are common. Dota 2, Counter-Strike, and Overwatch tournaments have been particularly susceptible to cheating scandals.
Players’ Rights: The teams that contractually own professional players provide remuneration and exercise significant control over athletes, who face issues like overwork, short-lived careers, stress, instability, and the absence of collective bargaining forums.
Fragmented National Regulations: While multiple countries have recognised esports as a sport, policies on esports governance and allied regulation vary within and across borders. For example, age restrictions and laws on gambling, taxation, labour, and advertising differ by country. This can create confusion, risks and extra costs, impacting the growth of the ecosystem.
Cybersecurity Concerns: The esports industry carries substantial prize pools and growing viewer engagement, which makes it increasingly vulnerable to Distributed Denial of Service (DDoS) attacks, malware, ransomware, data breaches, phishing, and account hijacking. Tournament organisers must invest in secure network infrastructure, perform regular security audits, encrypt sensitive data, implement network monitoring, use API penetration testing tools, deploy intrusion detection systems, and establish comprehensive incident response and mitigation plans (one small building block, request rate limiting, is sketched after this list).
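DDoS mitigation in particular is a layered problem, but one of its simplest building blocks is per-client request rate limiting. The sketch below is a minimal token-bucket limiter in Python, offered as an illustration of the concept rather than a production control; the class name and parameters are hypothetical.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: one basic layer of DDoS mitigation.

    Each client gets `capacity` tokens; tokens refill at `rate` per second.
    Requests that find the bucket empty are rejected (in a real deployment
    they might instead be queued or challenged).
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow ~10 requests/second per client, with bursts up to 20.
bucket = TokenBucket(rate=10, capacity=20)
print(bucket.allow())  # True until the burst budget is exhausted
```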
Proposals for Esports Regulation: Lessons from Traditional Sports
To address the most urgent challenges to the esports industry as outlined above, the following interventions, drawing on the governance and regulatory frameworks of traditional sports, can be made:
Need for a Centralised Esports Governing Body: Unlike traditional sports, the esports landscape lacks a Global Sports Organisation (GSO) to oversee its governance. Instead, it is handled de facto by game publishers with industry interests different from those of traditional GSOs. Publishers’ primary source of revenue is not esports, which means they can adopt policies unsuitable for its growth but good for their core business. Appointing a centralised governing body with the power to balance the interests of multiple stakeholders and manage issues like unregulated gambling, athlete health, and integrity challenges is a logical next step for this industry.
Gambling/Betting Regulations: While national laws on gambling/betting vary, GSOs establish uniform codes of conduct that bind participants contractually, ensuring consistent ethical standards across jurisdictions. In esports, similar rules are managed by individual publishers and tournament organisers, leading to inconsistencies and legal grey areas. The ecosystem needs standardised regulation to preserve fair-play codes and competitive integrity.
Anti-Doping Policies: With rising monetary stakes in esports, Adderall abuse among young players seeking to enhance performance is increasing. The industry needs a global framework similar to the World Anti-Doping Code, which, in conjunction with eight international standards, harmonises anti-doping policies across traditional sports and countries worldwide. The esports industry should either adopt this code or develop its own policy to curb stimulant abuse.
Norms for Participant Health: Professional players start around age 16 or 17 and tend to retire around 24. They may be subjected to rigorous practice hours and stringent contracts by the teams that own them. There is a need for international norm-setting by a federation overseeing the protection of underage players. Enforcement of these norms can be delegated to a decentralised system of country- and state-level bodies, which would also support fair-play governance.
Respect and Diversity: While esports is technologically accessible, it still has room for better representation of diverse gender identities, age groups, abilities, races, ethnicities, religions, and sexual orientations. Embracing greater diversity and inclusivity would benefit the industry's growth and enhance its potential to foster social connectivity through healthy competition.
Conclusion
The development of the world’s first esports island in Abu Dhabi gives impetus to the rapidly growing esports industry with millions of fans across the globe. To sustain this momentum, stakeholders must collaborate to build a strong governance framework that protects players, supports fans, and strengthens the ecosystem. By learning from traditional sports, esports can establish centralised governance, enforce standardised anti-doping measures, safeguard athlete rights, and promote inclusivity, especially for young and diverse communities. Embracing regulation and inclusivity will not only enhance esports' credibility but also position it as a powerful platform for unity, creativity, and social connection in the digital age.
The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI lifecycle, whether at the design and coding level or in deployment.
Recent regulatory frameworks, such as the European Union’s AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.
The OECD conceptualises this development process as the AI system lifecycle. Each stage entails technical and administrative decisions, and the choices made at each stage shape the goals and limits of the resulting system. In particular, the quality and representativeness of training data strongly affect model behaviour after deployment.
Because this is an iterative rather than a linear process, risks can be introduced at each stage of the AI lifecycle. Models can be retrained on new data, and systems are regularly updated after deployment to address performance degradation, model errors, or unintended outputs. This iterative character means governance must address risks across the entire lifecycle, not just at deployment.
Where AI Risks Emerge
AI risks often emerge early in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, lack of transparency in automated decision-making, and risks to fundamental rights.
[Figure: AI Governance Risk Landscape. Core risk categories jointly identified by the EU AI Act and the UNESCO Recommendation on the Ethics of Artificial Intelligence.]
Outlining risks throughout the AI lifecycle helps identify where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after development, when generative AI systems are deployed at scale on digital platforms.
[Figure: AI System Lifecycle. Key risks at each stage, per the EU AI Act and the UNESCO Recommendation on the Ethics of AI.]
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
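To make this mapping concrete, the illustrative sketch below encodes lifecycle stages and example risks as a simple lookup table. The pairings follow this article’s discussion of the OECD lifecycle and the EU AI Act and UNESCO risk categories; they are not an official taxonomy.

```python
# Illustrative mapping of lifecycle stages to example risks, following the
# discussion above (not an official EU AI Act or UNESCO taxonomy).
LIFECYCLE_RISKS: dict[str, list[str]] = {
    "problem definition":     ["poorly specified objectives", "harmful use cases"],
    "data collection & prep": ["bias and discrimination", "privacy and data security violations"],
    "model development":      ["opaque decision-making (lack of transparency)"],
    "testing & validation":   ["safety failures from inadequate testing"],
    "deployment":             ["misinformation at scale", "risks to fundamental rights"],
    "monitoring":             ["model drift", "unintended outputs"],
}

for stage, risks in LIFECYCLE_RISKS.items():
    print(f"{stage:24s} -> {', '.join(risks)}")
```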
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to mitigate AI risks across the lifecycle. These tools aim to ensure that AI systems meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union’s Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks and introduces a risk-based regulatory approach. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints into the lifecycle of AI systems.
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.
[Figure: Governance Overlay. Regulatory tools mapped to each stage of AI development, per the EU AI Act and the UNESCO Recommendation on the Ethics of AI.]
Several policy tools are directed at risks that occur in the pre-deployment stages. For example, algorithmic impact assessments have been applied in various jurisdictions to gauge the potential societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, including dataset transparency requirements and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
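Dataset documentation need not be heavyweight. The sketch below shows a minimal, hypothetical "datasheet" record in the spirit of the model-card and dataset-documentation practices mentioned above; the field names and example values are illustrative and not drawn from any official standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetSheet:
    """Minimal dataset documentation record (illustrative fields only)."""
    name: str
    provenance: str                  # where the data came from
    collection_period: str
    known_limitations: list[str] = field(default_factory=list)
    sensitive_attributes: list[str] = field(default_factory=list)  # for bias review

# Hypothetical example: a documented training set for a credit model.
sheet = DatasetSheet(
    name="loan-applications-v2",
    provenance="internal CRM export, customer consent on file",
    collection_period="2018-2023",
    known_limitations=["underrepresents applicants under 25"],
    sensitive_attributes=["gender", "age", "postcode"],
)
print(sheet)
```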
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulation. Many governance processes are activated only after AI systems are classified as “high risk” or deployed in the real world, yet the most serious sources of harm have their roots in earlier stages of the development process.
For example, biased or unbalanced training data is almost inevitably a source of discriminatory outcomes in automated decision systems. When such models are applied in areas like hiring, credit scoring, or public service delivery, these biases can quickly propagate to large populations and undermine democratic rights. Likewise, a lack of transparency in model design can leave regulators and affected individuals unable to understand how decisions are made. This reflects a broader timing gap in AI governance, where risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As depicted in the lifecycle mapping, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched in AI systems even before they are deployed, through biased datasets, incomplete documentation of training data, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most instances of algorithmic discrimination discussed above are associated with training data that underrepresents certain population groups or encodes historical bias. Since machine learning models optimise for the patterns present in their datasets, these biases can carry through the whole lifecycle and be reproduced after deployment (a minimal bias-check sketch follows this list).
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: While many regulatory instruments target deployment and monitoring, fewer systematically address the risks posed by the earlier design and development phases.
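To illustrate point 2, the sketch below computes a demographic parity difference, the gap in selection rates between two groups, which is one of the simplest signals auditors look for in automated decision systems. The data and the decision to flag the result are purely illustrative.

```python
# Minimal bias check: demographic parity difference between two groups.
# Purely illustrative: real audits use richer metrics and statistical tests.

def selection_rate(decisions: list[int]) -> float:
    """Share of positive outcomes (1 = approved/selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs, split by a protected attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [0, 1, 0, 0, 0, 1, 0, 0]   # selection rate 0.25

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375 -> flag for review
```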
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory frameworks should require systematic risk evaluation at the beginning of AI development, especially at the problem design and dataset selection phases. This would help detect potentially harmful applications before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors discover potential sources of bias or privacy risk.
3. Expand independent algorithmic auditing: AI systems should be assessed through regular third-party audits for fairness, robustness, and security vulnerabilities. Such auditing is especially relevant for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored after deployment to detect model drift, unforeseen consequences, or abuse. Reporting mechanisms can help regulators spot emerging risks and adjust governance frameworks accordingly (a minimal drift-monitoring sketch follows).
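One common drift signal is the Population Stability Index (PSI), which compares a model’s score distribution at deployment with its current distribution. The sketch below is a minimal illustration; the distributions and the 0.2 rule of thumb are illustrative, not regulatory thresholds.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over binned score distributions.

    `expected` and `actual` are proportions per bin (each sums to 1).
    A common rule of thumb: PSI > 0.2 suggests significant drift.
    """
    eps = 1e-6  # avoid division by zero / log(0)
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Hypothetical score distributions: at deployment vs. three months later.
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.05, 0.15, 0.35, 0.25, 0.20]
print(f"PSI = {psi(baseline, current):.3f}")  # flag for retraining review if > 0.2
```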
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.
A video showing people running amid smoke and chaos during an attack is being widely shared on social media with the claim that it depicts an Iranian strike on Israel. The clip, around 29 seconds long, shows thick black smoke rising as people flee the scene, with voices heard calling for help. However, research by CyberPeace found that the claim is misleading. The video is actually from the September 2001 attacks on the World Trade Center in the United States.
Claim:
The video has been shared on Facebook with a caption claiming, “Iran has launched its most powerful attack on Israel. Thousands of soldiers have reportedly been killed. Massive protests have erupted within the country, and Israel appears completely helpless.”
Fact Check:
To verify the claim, we conducted a reverse image search using keyframes from the viral video. This led us to a longer version of the same footage, uploaded to YouTube on September 11, 2016.
The relevant portion appears around the 2-minute-9-second mark. The video description identifies the footage as part of the September 2001 attacks on the World Trade Center in New York. We also found the same video in an archive folder on a website associated with the US Department of Commerce, which contains multiple images and videos related to the 9/11 attacks. This further confirms the origin of the footage.
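For readers curious about the method, keyframe extraction of this kind can be done with open-source tools. The sketch below uses OpenCV to save a frame every couple of seconds from a locally saved copy of the clip (the filename is hypothetical); the saved frames can then be run through a reverse image search engine.

```python
# Minimal keyframe extraction with OpenCV, assuming the clip is saved locally
# as "viral_clip.mp4" (a hypothetical filename).
import cv2

cap = cv2.VideoCapture("viral_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
interval = int(fps * 2)                # grab one frame every ~2 seconds

frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % interval == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} keyframes for reverse image search")
```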
Conclusion:
The viral claim is false. The video does not show an Iranian attack on Israel. It is from September 2001 and depicts the aftermath of the World Trade Center attacks in New York, USA.