World Environment Day 2025: The Hidden Cost of Our Digital Lives
On June 5th, the world comes together to reflect on how the way we live impacts the environment. We discuss conserving water, cutting back on plastic, and planting trees, but how often do we think about the environmental impact of our digital lives?
The internet is ubiquitous but invisible in a world that is becoming more interconnected by the day. It drives our communications, our meetings, and our memories. However, this digital convenience comes at a price: carbon emissions.
A Digital Carbon Footprint: What Is It?
Electricity is necessary for every video we stream, email we send, and file we store on the cloud. But almost 60% of the electricity produced today is generated from burning fossil fuels. The digital world uses an incredible amount of energy, from the energy-hungry data centres that house our information to the networks that send it. Thus, the greenhouse gas emissions produced by our use of digital tools and services are referred to as our "digital carbon footprint."
To put it in perspective:
- Up to 150–200 grams of CO₂ can be produced by streaming an hour-long HD video on your phone.
- A typical email sent can release about 4 grams of CO₂, and more if it contains attachments.
- The internet as a whole accounts for 1.5% to 4% of global greenhouse gas emissions, a share comparable to the airline industry.
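The ballpark figures above can be turned into a rough back-of-the-envelope estimate of a personal digital footprint. The sketch below uses the streaming and email numbers quoted in this post as illustrative emission factors; they are coarse assumptions, not measured values, and real figures vary widely by device, network, and grid.

```python
# Rough digital-carbon-footprint estimator.
# Emission factors are the ballpark figures quoted above
# (illustrative assumptions, not measured values).
STREAMING_G_PER_HOUR = 175  # midpoint of the 150-200 g CO2 range for an hour of HD video
EMAIL_G_EACH = 4            # ~4 g CO2 per typical email sent (more with attachments)

def yearly_footprint_kg(stream_hours_per_day: float, emails_per_day: float) -> float:
    """Estimated yearly CO2 emissions, in kilograms, from streaming and email."""
    daily_grams = (stream_hours_per_day * STREAMING_G_PER_HOUR
                   + emails_per_day * EMAIL_G_EACH)
    return daily_grams * 365 / 1000  # grams/day -> kg/year

# Example: two hours of HD streaming and ten emails a day.
print(yearly_footprint_kg(2, 10), "kg CO2 per year")
```

Even under these conservative assumptions, a modest daily habit adds up to well over a hundred kilograms of CO₂ a year, which is why the small adjustments suggested below are worth making.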
Why It Matters
Ironically, although digital life often feels "clean" and weightless, it is backed by enormous, power-hungry infrastructure. Our online activity is growing rapidly as digital penetration increases, and with the advent of AI and big data, the demand for energy will only rise. The harms of air, water, and soil degradation and biodiversity loss are already upon us. World Environment Day is a fitting moment to reconsider how we use technology.
What Can You Do?
The good news is that even minor adjustments to our online habits can make a difference.
🗑️ Clear out your digital clutter by getting rid of unnecessary emails, apps, and files.
📥 Unsubscribe from mailing lists that you no longer use.
📉 When HD is not required, stream videos with lower quality.
⚡ Make use of energy-saving gadgets and disconnect them when not in use.
🌐 Make the move to renewable energy-powered, environmentally friendly cloud providers.
🗳️ Support informed policy by engaging with your elected representatives and advocating for greener tech policies. Knowing your digital rights and responsibilities can help shape smarter policies and a healthier planet.
We at the CyberPeace Foundation think that cyberspace needs to be sustainable. An eco-friendly digital world is also a safer one, where all communities can thrive in harmony. We must promote digital responsibility, including its environmental component, as we work towards digital equity and resilience.
On this World Environment Day, let's go one step further and work towards a greener internet as well as a greener planet.
Related Blogs
In the past decade, India’s gaming sector has seen a surprising but swift advancement, drawing in millions of players and billions in investment, with the industry now estimated at $23 billion. Whether it is fantasy cricket, Ludo apps, high-stakes poker, or rummy platforms, staking real money on online games has become a beloved hobby for many. The sector has not only boosted the economy but also contributed to creative innovation and the generation of employment.
The real concern lies behind the glossy numbers: tales of addiction, financial detriment, and a never-ending game of cat and mouse with legal loopholes. The sector’s meteoric rise has raised concerns about national financial integrity, regulatory clarity, and consumer safety.
In light of this, the Promotion and Regulation of Online Gaming Act, 2025, which was passed by Parliament and signed into law on August 22, stands out as a significant development. The Act, which is positioned as a consumer protection and sector-defining law, aims to distinguish between innovation and exploitation by acknowledging e-sport as a legitimate activity and establishing unambiguous boundaries around the larger gaming industry.
Key Highlights of the Act
- Complete Ban on All Real-Money Games: All e-games, whether based on skill or luck, that involve monetary stakes are banned.
- Prohibition of Ads: Promotion of such e-games has also been disallowed across all platforms.
- Legal Ramifications: Operating such games may lead to up to 3 years in prison and a ₹1 crore fine; advertising them may lead to up to 2 years in prison and a ₹50 lakh fine. For repeat offences, penalties may rise to 3-5 years in prison and fines of up to ₹2 crore.
- Creation of an Online Gaming Authority: The Act creates a national-level regulatory body to classify and monitor games, register platforms, and enforce the dedicated rules.
- Support for eSports and Social & Educational Games: Non-monetary games that promote social and educational growth will not only be recognised but encouraged, while eSports gains official recognition under the Ministry of Sports.
Positive Impacts
- Tackling Addiction and Financial Ruin: The major reason behind the ban is to protect vulnerable users, mainly youth, from slipping into gambling and losing large sums to betting apps and games.
- Boost to eSports & Regulatory Clarity: The law not only legitimises the eSports sector but also opens opportunities for scholarships and other financial benefits, along with professional tournaments and platforms on global stages. It also aims to bring order to the distinction between e-games of skill and games of chance.
- Fraud Monitoring & Control: The law blocks avenues for money laundering, gambling, and illegal betting networks.
- Promotion of a Safe Digital Ecosystem: The Act encourages social, developmental, and educational games that focus on skill, learning, and fun.
Challenges
It must be recognised that the Promotion and Regulation of Online Gaming Act, 2025 is still in its early stages. In the end, its effectiveness will rely not only on the letter of the law but on the strength of its enforcement and the wisdom of its application. Applied carefully and clearly, the Act has the potential to safeguard at-risk youth from the dangers of gambling and addiction while keeping the digital ecosystem a place of innovation, equity, and trust. That said, some challenges stand out:
- Blanket Ban: By imposing a blanket ban on games long defended as skill-based, such as rummy or fantasy cricket, the Act risks suppressing legitimate enterprises and centres of innovation. Many startups once hailed as being at the forefront of India’s digital innovation may now struggle to survive in an unpredictable regulatory environment.
- Rise of Illegal Platforms: History offers a sobering lesson: prohibition does not eliminate demand; it simply drives it underground. Banning money games may fuel the growth of unregulated, offshore sites, where players are more vulnerable to fraud, data theft, and abuse, with no avenue for consumer protection.
Conclusion
The Act is a tough and bold stand to regulate India’s digital gaming industry, but it is also a double-edged sword. It puts much-needed consumer protection regulations in place and legitimises eSports. However, it also casts a long shadow over a thriving industry and risks fostering a black market more harmful than the problem it was intended to address.
Therefore, striking a balance between innovation and protection, between law and liberty, will matter more in the coming years than the letter of the regulations alone. India’s legitimacy as a digital economy ready for global leadership, as well as the future of its gaming industry, will depend on how it handles this delicate balance.
References:
- https://economictimes.indiatimes.com/tech/technology/gaming-bodies-write-to-amit-shah-urge-to-block-blanket-ban-warn-of-rs-20000-crore-tax-loss/articleshow/123392342.cms
- https://m.economictimes.com/news/india/govt-estimates-45-cr-people-lose-about-rs-20000-cr-annually-from-real-money-gaming/articleshow/123408237.cms
- https://www.cyberpeace.org/resources/blogs/promotion-and-regulation-of-online-gaming-bill-2025-gets-green-flag-from-both-houses-of-parliament
- https://www.thehindu.com/business/Industry/real-money-gaming-firms-wind-down-operations/article69965196.ece

Misinformation is a scourge in the digital world, making even the most mundane experiences fraught with risk. The threat is considerably heightened in conflict settings, especially in the modern era, where geographical borders blur and civilians and conflict actors alike can take to the online realm to discuss, and influence, conflict events. Propaganda can complicate the narrative and distract from the humanitarian crises affecting civilians, while also posing a serious threat to security operations and law and order efforts. Sensationalised reports of casualties and manipulated portrayals of military actions contribute to a cycle of violence and suffering.
A study conducted by MIT found that the mere thought of sharing news on social media reduced the ability to judge whether a story was true or false; the urge to share outweighed the consideration of accuracy (2023). Cross-border misinformation has become a critical issue in today's interconnected world, driven by the rise of digital communication platforms. Combating it effectively requires coordinated international policy frameworks and cooperation between governments, platforms, and global institutions.
The Global Nature of Misinformation
Cross-border misinformation is false or misleading information that spreads across countries. Creators outside a country's borders, amplified by social media and digital platforms, are a key source of such misinformation. It can interfere with elections, create serious misconceptions about health concerns, as witnessed during the COVID-19 pandemic, or even inflame military conflicts.
The primary challenge in countering cross-border misinformation is the difference in national policies, legal frameworks, and platform governance rules across jurisdictions. Examining existing international frameworks, such as cybersecurity treaties and the data-sharing agreements used for financial crimes, might help in addressing it effectively. By adapting these approaches to the digital information ecosystem, nations could strengthen their collective response to the spread of misinformation across borders. Global institutions like the United Nations, or regional bodies like the EU and ASEAN, can work together to set a unified response and uniform international standards for regulation dealing specifically with misinformation.
Current National and Regional Efforts
Many countries have taken action to deal with misinformation within their borders. Some examples include:
- The EU’s Digital Services Act has been instrumental in regulating online intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, etc. The legislation aims to prevent illegal and harmful activities online and the spread of disinformation.
- The primary legislation governing cyberspace in India is the IT Act of 2000 and its corresponding rules (IT Rules, 2023), which impose strict requirements on social media platforms to counter misinformation and enable traceability of the originator of misinformation. Platforms must conduct due diligence, failing which they risk losing their safe harbour protection. The recently enacted DPDP Act of 2023 indirectly addresses misuse of personal data that can contribute to the creation and spread of misinformation, and the proposed Digital India Act is expected to focus on “user harms” specific to the online world.
- In the U.S., the right to editorial discretion and Section 230 of the Communications Decency Act place the responsibility for regulating misinformation on private actors like social media platforms. The US government has not created a specific framework addressing misinformation, instead encouraging platforms to adopt voluntary, independent policies to regulate misinformation on their services.
The common gap across these policies is the absence of a standardised global framework for addressing cross-border misinformation, which results in uneven enforcement and dependence on national regulations.
Key Challenges in Achieving International Cooperation
Some of the key challenges identified in achieving international cooperation to address cross-border misinformation are as follows:
- Geopolitical tensions arising from differences in political systems, priorities, and trust between countries hinder attempts to cooperate on a universal regulation.
- The diversity in approaches to internet governance and freedom of speech across countries complicates the matters further.
- Technical and legal obstacles around sovereignty, jurisdiction, and enforcement further complicate the monitoring and removal of cross-border misinformation.
CyberPeace Recommendations
- The UN Global Principles for Information Integrity: Recommendations for Multi-stakeholder Action, unveiled on 24 June 2024, are a welcome step towards addressing cross-border misinformation. They can act as a stepping stone for developing a framework for international cooperation, drawing inspiration from successful models such as climate change agreements and the international criminal law framework.
- Public-private partnerships between governments, tech companies, and civil society can enhance transparency, data sharing, and accountability in tackling cross-border misinformation.
- Engaging in capacity building and technology transfers in less developed countries would help to create a global front against misinformation.
Conclusion
We are in an era where misinformation knows no borders, and the need for international cooperation has never been more urgent. Global democracies are exploring regulatory and legislative solutions to limit the spread of misinformation; however, these fragmented efforts fall short of addressing the global scale of the problem. Establishing a standardised international framework, backed by multilateral bodies like the UN and regional alliances, can foster accountability and facilitate shared resources in this fight. Through collaborative action, transparent regulations, and support for developing nations, the world can create a united front to curb misinformation and protect democratic values, ensuring information integrity across borders.
References
- https://economics.mit.edu/sites/default/files/2023-10/A%20Model%20of%20Online%20Misinformation.pdf
- https://www.indiatoday.in/global/story/in-the-crosshairs-manufacturing-consent-and-the-erosion-of-public-trust-2620734-2024-10-21
- https://laweconcenter.org/resources/knowledge-and-decisions-in-the-information-age-the-law-economics-of-regulating-misinformation-on-social-media-platforms/
- https://www.article19.org/resources/un-article-19-global-principles-for-information-integrity/

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, especially in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have gained huge consumer bases by hosting user-generated content, which includes anything a visitor posts on a website or social media page.
The legal and ethical landscape surrounding misinformation depends on striking a fine balance between freedom of speech and expression and protecting public interests such as truthfulness and social stability. This blog examines the legal risks of misinformation, specifically in user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The intent to spread false information is closely interwoven with the analysis of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news have benefited from social networks to reach more people, and from technology that enables faster distribution and makes it harder to distinguish fake news from hard news.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, influencing public opinion, electoral outcomes, and market behaviour, underscore the urgent need for effective regulation; the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow a free, open, and borderless internet to function. They are embodied in Section 230 of the US Communications Decency Act and Section 79 of India's Information Technology Act, and play a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in its nascent stages. Section 230 of the CDA protects platforms from legal liability for harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulation has been observed in recent times; one example is the EU's Digital Services Act of 2022. The Act requires companies with at least 45 million monthly users to create systems to control the spread of misinformation, hate speech, and terrorist propaganda, among other things. Non-compliance risks penalties of up to 6% of global annual revenue, or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer quantity of data and the speed at which it is generated. It also exposes platforms to legal liabilities, operational costs, and reputational risks.
- Platforms face potential backlash for both over-moderation and under-moderation: the former can be seen as censorship and is often burdensome, while the latter can be seen as insufficient governance that fails to protect users' rights.
- Another challenge is technical: the limitations of AI and algorithmic moderation in detecting nuanced misinformation. This points to the need for human oversight, particularly to sift through misinformation produced by AI-generated content.
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
Navigating the legal risks posed by user-generated misinformation requires a balance between protecting free speech and safeguarding the public interest. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm