TRAI’s Consultation Paper on OTT Platforms
Introduction
The Telecom Regulatory Authority of India (TRAI) recently published a Consultation Paper on Regulatory Mechanisms for Over-The-Top (OTT) Communication Services. The paper explores several challenges related to OTT regulation and solicits stakeholder input on a suggested regulatory framework. This blog summarises the paper's main points.
Structure of the Paper
The Telecom Regulatory Authority of India’s Consultation Paper on Regulatory Mechanism for Over-The-Top (OTT) Communication Services and Selective Banning of OTT Services seeks comments and recommendations from stakeholders on the regulation of OTT services in India. The paper is divided into five chapters covering the introduction and background, issues related to regulatory mechanisms for OTT communication services, issues related to the selective banning of OTT services, an overview of international practices on the topic, and a summary of the issues for consultation. Written comments from interested parties may be sent electronically to the Advisor (Networks, Spectrum and Licensing) at TRAI; these comments will also be posted on the TRAI website.
Overview of the Paper
- Chapter 1: Introduction and Background
- The first chapter of the paper introduces the subject of OTT communication services and explains why a regulatory framework is necessary. It also outlines the paper’s organisation and the topics covered in the following chapters.
- Chapter 2: Examination of the Issues Related to Regulatory Mechanism for Over-The-Top Communication Services
- The second chapter of the paper examines the issues surrounding the regulation of OTT communication services. It discusses the various kinds of OTT services and their impact on the conventional telecom sector. The chapter also looks at the regulatory challenges raised by OTT services and the different approaches countries have taken to address them.
- Chapter 3: Examination of the Issues Related to Selective Banning of OTT Services
- The third chapter of the paper examines the issues surrounding the selective banning of OTT services. It analyses the justifications offered for government restrictions on OTT services, as well as the potential effects of such restrictions on consumers and the telecom sector. The chapter also looks at the legal and regulatory frameworks that govern the banning of OTT services in different countries.
- Chapter 4: International Practices
- The fourth chapter gives an overview of international practices on OTT communication services. It discusses the various regulatory approaches adopted by countries around the world and their effects on consumers and the telecom sector. The chapter also examines the difficulties regulators face in designing effective regulatory frameworks for OTT services.
- Chapter 5: Issues for Consultation
- This chapter is the heart of the consultation paper, as it sets out the points and questions for consultation. It is divided into two sub-sections: Issues Related to Regulatory Mechanisms for OTT Communication Services and Issues Related to the Selective Banning of OTT Services. Stakeholder inputs are to focus on these sub-headers, and the scope, extent, and ambit of the consultation paper rest on these questions and the inputs they elicit.
Conclusion
The Consultation Paper on Regulatory Mechanisms for Over-The-Top Communication Services is an important publication that aims to address the regulatory issues raised by OTT services. The paper offers a thorough analysis of the problems involved in regulating OTT services and requests input from stakeholders on the suggested regulatory framework. To ensure that the resulting framework is effective and beneficial for all, it is crucial that every stakeholder offer their views on the document.
Related Blogs
Introduction
The pervasive issue of misinformation in India is a multifaceted challenge with profound implications for democratic processes, public awareness, and social harmony. The Election Commission of India (ECI) has taken measures to counter misinformation during the 2024 elections, launching campaigns that educate people and urge them to verify election-related content and share responsibly on social media. In response to the proliferation of fake news and misinformation online, the ECI has introduced initiatives such as 'Myth vs. Reality' and 'VerifyBeforeYouAmplify' to clear the air around fake news being spread on social media. These measures aim to curb the spread of misinformation, especially during election time, when voters consume a great deal of information from social media. It is of the utmost importance that voters take in facts and reliable information and avoid manipulative or fake information that can negatively impact the election process.
EC Collaboration with Tech Platforms
In this new age of technology, the Internet and social media continue to witness a surge in the spread of misinformation, disinformation, synthetic media content, and deepfake videos. This has rightly raised serious concerns. The responsible use of social media is instrumental in maintaining the accuracy of information and curbing misinformation incidents.
The ECI has collaborated with Google to empower the citizenry by making it easy to find critical voting information on Google Search and YouTube. In this way, Google supports the 2024 Indian General Election by providing high-quality information to voters, safeguarding its platforms from abuse, and helping people navigate AI-generated content. The company connects voters to helpful information through product features that surface data from trusted organisations across its portfolio. YouTube showcases election information panels, including how to register to vote, how to vote, and candidate information. YouTube's recommendation system prominently features content from authoritative sources on the homepage, in search results, and in the "Up Next" panel. YouTube also highlights high-quality content from authoritative news sources during key moments through its Top News and Breaking News shelves, as well as the news watch page.
Google has also implemented strict policies and restrictions on who can run election-related advertising campaigns on its platforms. It requires all advertisers who wish to run election ads to undergo an identity verification process, to provide a pre-certificate issued by the ECI (or anyone authorised by the ECI) for each election ad where required, and to include in-ad disclosures that clearly show who paid for the ad. Additionally, Google has long-standing ad policies that prohibit ads promoting demonstrably false claims that could undermine trust or participation in elections.
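To make the checklist above concrete, here is a minimal Python sketch of the kind of pre-flight validation the policy describes. This is our own illustration: the function and field names (identity_verified, eci_precertificate, paid_for_by) are hypothetical and are not Google Ads API identifiers.

```python
# Hypothetical pre-flight check mirroring the three requirements described above.
# All field names are illustrative, not actual Google Ads API identifiers.
def can_run_election_ad(advertiser: dict) -> bool:
    checks = [
        advertiser.get("identity_verified", False),   # advertiser identity verification
        advertiser.get("eci_precertificate", False),  # ECI pre-certificate, where required
        bool(advertiser.get("paid_for_by")),          # in-ad "paid for by" disclosure
    ]
    return all(checks)

# Example: an advertiser meeting all three requirements clears the check.
print(can_run_election_ad({
    "identity_verified": True,
    "eci_precertificate": True,
    "paid_for_by": "Example Advertiser",
}))  # True
```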
CyberPeace Countering Misinformation
CyberPeace Foundation, a leading organisation in the field of cybersecurity, works to promote digital peace for all. CyberPeace works across the wider ecosystem to counter misinformation and develop a safer, more responsible Internet. It has collaborated with Google.org to run a pan-India awareness-building programme and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower over 40 million netizens to build resilience against misinformation and practise responsible online behaviour. This step is crucial in creating a strong foundation for a trustworthy Internet and a secure digital landscape.
Myth vs Reality Register by ECI
The Election Commission of India (ECI) has launched the 'Myth vs Reality Register' to combat misinformation and ensure the integrity of the electoral process during the general elections 2024. The 'Myth vs Reality Register' can be accessed through the Election Commission's official website (https://mythvsreality.eci.gov.in/). All stakeholders are urged to verify and corroborate any dubious information they receive through any channel against the information provided in the register. The register provides a one-stop platform for credible and authenticated election-related information, with the factual matrix regularly updated to include the latest busted fakes and fresh FAQs. The ECI has identified misinformation as one of the key challenges to electoral integrity, alongside money, muscle power, and Model Code of Conduct violations. The platform can be used to verify information, prevent the spread of misinformation, debunk myths, and stay informed about key issues during the General Elections 2024.
The ECI has taken proactive steps to combat the challenge of misinformation, which could cripple the democratic process. It has issued directives urging vigilance and responsibility from all stakeholders, including political parties, to verify information before amplifying it. The ECI has also urged responsible behaviour on social media platforms and discourse that inspires unity rather than division. The commission has stated that originators of false information will face severe consequences, and nodal officers across states will remove unlawful content. Parties are encouraged to engage in issue-based campaigning and refrain from disseminating unverified or misleading advertisements.
Conclusion
The steps taken by the ECI have been designed to empower citizens and help them affirm the accuracy and authenticity of content before amplifying it. All citizens must be well-educated about the entire election process in India. This includes information on how the electoral rolls are made, how candidates are monitored, a complete database of candidates and candidate backgrounds, party manifestos, etc. For informed decision-making, active reading and seeking information from authentic sources are imperative. The partnership between government agencies, tech platforms, and civil society helps develop strategies to counter widespread misinformation and promote online safety in general, and electoral integrity in particular.
References
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2016941
- https://www.business-standard.com/elections/lok-sabha-election/ls-elections-2024-ec-uses-social-media-to-nudge-electors-to-vote-124040700429_1.html
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
In the vast, interconnected cosmos of the internet, where knowledge and connectivity are celebrated as the twin suns of enlightenment, there lurk shadows of a more sinister nature. Here, in these darker corners, the innocence of childhood is not only exploited but also scarred, indelibly and forever. The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have surged to alarming levels globally, casting a long, ominous shadow over the digital landscape.
In response to this pressing issue, the National Human Rights Commission (NHRC) has unfurled a comprehensive four-part advisory, a beacon of hope aimed at combating CSAM and safeguarding the rights of children in this digital age. This advisory, dated 27 October 2023, is not merely a reaction to the rising tide of CSAM but a testament to the imperative need for constant vigilance in the realm of cyber peace.
The statistics paint a sobering picture. In 2021, more than 1,500 instances of publishing, storing, and transmitting CSAM were reported, shedding a harsh light on the scale of the problem. Even more alarming is the upward trend in cases reported in subsequent years. By 2023, a staggering 450,207 cases of CSAM had already been reported, marking a significant increase from the 204,056 and 163,633 cases reported in 2022 and 2021, respectively.
Key Aspects of the Advisory
The NHRC's advisory commences with a fundamental recommendation: a redefinition of terminology. It suggests replacing the term 'Child Pornography' with 'Child Sexual Abuse Material' (CSAM). This shift in language is not merely semantic; it underscores the gravity of the issue, emphasizing that this is not about pornography but child abuse.
Moreover, the advisory calls for the term 'sexually explicit' to be defined under Section 67B of the IT Act, 2000. This step is crucial for ensuring the prompt identification and removal of online CSAM. With a clear definition in place, law enforcement can act swiftly to remove such content from the internet.
The digital world knows no borders, and CSAM can easily cross jurisdictional lines. NHRC recognizes this challenge and proposes that laws be harmonized across jurisdictions through bilateral agreements. Moreover, it recommends pushing for the adoption of a UN draft Convention on 'Countering the Use of Information and Communications Technologies for Criminal Purposes' at the General Assembly.
One of the critical aspects of the advisory is the strengthening of law enforcement. NHRC advocates for the creation of Specialized State Police Units in every state and union territory to handle CSAM-related cases. The central government is expected to provide support, including grants, to set up and equip these units.
The NHRC further recommends establishing a Specialized Central Police Unit under the government of India's jurisdiction. This unit will focus on identifying and apprehending CSAM offenders and maintaining a repository of such content. Its role is not limited to law enforcement; it is expected to cooperate with investigative agencies, analyze patterns, and initiate the process for content takedown. This coordinated approach is designed to combat the problem effectively, both on the dark web and open web.
The role of internet intermediaries and social media platforms in controlling CSAM is undeniable. The NHRC advisory emphasizes that intermediaries must deploy technology, such as content moderation algorithms, to proactively detect and remove CSAM from their platforms. This places the onus on the platforms to be proactive in policing their content and ensuring the safety of their users.
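As a rough illustration of what such proactive detection can look like, here is a minimal, self-contained Python sketch of hash-based matching against a list of known-bad content. This is a simplification of ours: production systems rely on perceptual hashing tools such as Microsoft's PhotoDNA, which also catch near-duplicates, whereas the exact SHA-256 match below only catches byte-identical files, and the hash set is a placeholder (it contains the SHA-256 of an empty file so the example actually fires).

```python
import hashlib

# Placeholder hash set; in practice, lists of known CSAM hashes are maintained
# by dedicated bodies and shared with platforms under strict controls.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # SHA-256 of b""
}

def should_block_upload(file_bytes: bytes) -> bool:
    """Flag an upload whose SHA-256 digest matches the known-bad hash list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A match would trigger blocking, takedown, and reporting per legal obligations.
if should_block_upload(b""):
    print("Match found: block the upload and escalate for reporting.")
```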
New Developments
Platforms using end-to-end encryption services may be required to create additional protocols for monitoring the circulation of CSAM. Failure to do so may invite the withdrawal of the 'safe harbor' clause under Section 79 of the IT Act, 2000. This measure ensures that platforms using encryption technology are not inadvertently providing safe havens for those engaged in illegal activities.
NHRC's advisory extends beyond legal and law enforcement measures; it emphasizes the importance of awareness and sensitization at various levels. Schools, colleges, and institutions are called upon to educate students, parents, and teachers about the modus operandi of online child sexual abusers, the vulnerabilities of children on the internet, and the early signs of online child abuse.
To further enhance awareness, a cyber curriculum is proposed to be integrated into the education system. This curriculum will not only boost digital literacy but also educate students about relevant child care legislation, policies, and the legal consequences of violating them.
NHRC recognizes that survivors of CSAM need more than legal measures and prevention strategies. Survivors are recommended to receive support services and opportunities for rehabilitation through various means. Partnerships with civil society and other stakeholders play a vital role in this aspect. Moreover, psycho-social care centers are proposed to be established in every district to facilitate need-based support services and organization of stigma eradication programs.
NHRC's advisory is a resounding call to action, acknowledging the critical importance of protecting children from the perils of CSAM. By addressing legal gaps, strengthening law enforcement, regulating online platforms, and promoting awareness and support, the NHRC aims to create a safer digital environment for children.
Conclusion
In a world where the internet plays an increasingly central role in our lives, these recommendations are not just proactive but imperative. They underscore the collective responsibility of governments, law enforcement agencies, intermediaries, and society as a whole in safeguarding the rights and well-being of children in the digital age.
NHRC's advisory is a pivotal guide to a more secure and child-friendly digital world. By addressing the rising tide of CSAM and emphasizing the need for constant vigilance, NHRC reaffirms the critical role of organizations, governments, and individuals in ensuring cyber peace and child protection in the digital age. The active contribution of premier cyber resilience organizations like CyberPeace Foundation amplifies this collective action, forging a secure digital space and highlighting the important role played by think tanks in ensuring cyber peace and resilience.
References:
- https://www.hindustantimes.com/india-news/nhrc-issues-advisory-regarding-child-sexual-abuse-material-on-internet-101698473197792.html
- https://ssrana.in/articles/nhrcs-advisory-proliferation-of-child-sexual-abuse-material-csam/
- https://theprint.in/india/specialised-central-police-unit-use-of-technology-to-proactively-detect-csam-nhrc-advisory/1822223/
Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) comes into force 20 days after publication, setting harmonized rules across the EU, and amends key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI, has been in development for two years; it enters into force across all 27 EU Member States on 1 August 2024, with certain future deadlines attached, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, such as biometric categorization and untargeted scraping of facial images; systems that try to read emotions are banned in workplaces and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning there are various deadlines between now and 2026 as different legal provisions start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
Companies providing, distributing, importing, or using AI systems and GPAI models in the EU that fail to comply with the Act are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
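As a quick arithmetic illustration of the "whichever is higher" ceiling, consider the following Python sketch; the function name and turnover figures are ours, for illustration only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(max_fine_eur(100_000_000))    # 35000000.0 -> the flat EUR 35M floor applies
print(max_fine_eur(1_000_000_000))  # 70000000.0 -> 7% of turnover exceeds EUR 35M
```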
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications, such as social scoring systems and manipulative AI, and mostly addresses high-risk AI systems (see the sketch after this list).
- Limited-risk AI systems are subject to lighter transparency obligations: under the Act, developers and deployers must ensure that end-users are aware that they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which includes the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country; the same applies to third-country providers where the high-risk AI system’s output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-license GPAI models only need to comply with the Copyright Directive and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, whether open or closed, must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges in addressing risks that may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
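To summarize the tiered structure described in the bullets above, here is a small Python sketch. The enum descriptions and the example mapping are our illustrative reading of the Act's categories, not text from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g., social scoring, manipulative AI)"
    HIGH = "allowed under strict obligations (e.g., AI in employment or education)"
    LIMITED = "lighter transparency duties (e.g., chatbots, deepfake labelling)"
    MINIMAL = "largely unregulated (e.g., spam filters, AI in video games)"

# Illustrative mapping of example use cases to tiers, based on the examples above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name.lower()} risk, {tier.value}")
```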
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will be applicable sooner: for instance, the ban on AI systems posing unacceptable risks will apply six months after entry into force. The Codes of Practice will apply nine months after entry into force, and the rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after entry into force. High-risk systems will have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline (reproduced with simple date arithmetic in the sketch after the list) is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply; prohibition of certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems).
- August 2027: Article 6(1) & corresponding obligations apply.
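Since every milestone above is defined as a fixed offset from entry into force (1 August 2024), the timeline can be reproduced with simple date arithmetic. Below is a standard-library-only Python sketch; the milestone labels are our shorthand for the provisions listed above.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here since the day is the 1st)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

# Month offsets from entry into force, per the phased application described above.
MILESTONES = {
    "Prohibitions on unacceptable-risk AI": 6,        # February 2025
    "Codes of Practice in place": 9,
    "Transparency rules for general-purpose AI": 12,  # August 2025
    "AI Act generally applicable": 24,                # August 2026
    "Article 6(1) high-risk obligations": 36,         # August 2027
}

for label, months in MILESTONES.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months):%B %Y}")
```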
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
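The scope rules in the two paragraphs above reduce to a simple disjunction, sketched below. This is our simplified reading for illustration only; the Act's actual scope provisions (Article 2) contain further nuances and exclusions.

```python
def eu_ai_act_in_scope(
    placed_on_eu_market: bool,
    put_into_service_in_eu: bool,
    output_used_in_eu: bool,
) -> bool:
    """Simplified scope test: the Act can reach an operator that markets or
    deploys an AI system in the EU, or whose system's output is used in the EU,
    regardless of where the operator itself is established."""
    return placed_on_eu_market or put_into_service_in_eu or output_used_in_eu

# A provider established in Switzerland whose system's output is used in the EU:
print(eu_ai_act_in_scope(False, False, True))  # True -> still in scope
```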
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models, and it creates a tiered risk-categorization system with various regulations and stiff penalties for noncompliance, adopting a risk-based approach to AI governance that categorizes potential risks into four tiers: unacceptable, high, limited, and low. Violations involving banned systems carry the highest fine: €35 million, or 7 percent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems and provides specific rules for general-purpose AI (GPAI) models, laying down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards for AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies to which the Act applies will need to make sure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose a conundrum. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; hence, flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important that we find the right balance between safety and innovation while also taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide