Misinformation on the Internet
Introduction
We consume news from a variety of sources, such as news channels, social media platforms, and websites. In the age of the Internet and social media, misinformation has become a pressing concern, with fake news spreading widely across online platforms.
Misinformation on social media platforms
The wide availability of user-provided content on online social media platforms facilitates the spread of misinformation. With the vast population on these platforms, information goes viral and spreads across the internet. This has become a serious concern, as misinformation, including rumours, morphed images, unverified information, fake news, and planted stories, spreads easily online, leading to severe consequences such as public riots, lynching, communal tensions, misconceptions about facts, and defamation.
Platform-centric measures to mitigate the spread of misinformation
- Google introduced the 'About this result' feature, which helps users better understand search results and websites at a glance.
- During the Covid-19 pandemic, misinformation was shared on a massive scale. In April 2020, Google invested $6.5 million in funding for fact-checkers and non-profits fighting misinformation around the world, including misinformation about the coronavirus and about the treatment, prevention, and transmission of Covid-19.
- YouTube also has a Medical Misinformation Policy, which prevents the spread of content that contradicts guidance from the World Health Organization (WHO) or local health authorities.
- During the Covid-19 pandemic, major social media platforms such as Facebook and Instagram displayed awareness pop-ups that connected people directly to information from the WHO and regional health authorities.
- WhatsApp limits the number of times a message can be forwarded in order to curb the spread of fake news, and labels messages that have been forwarded many times. WhatsApp has also partnered with fact-checking organisations so that users have access to accurate information.
- On Instagram, when content has been rated false or partly false, the platform either removes it or reduces its distribution by lowering its visibility in Feeds.
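WhatsApp's forwarding limit, mentioned in the list above, can be pictured as a hop counter carried with each forwarded copy of a message. The sketch below is illustrative only: the limit values and names are assumptions for explanation, not WhatsApp's actual implementation or numbers.

```python
FORWARD_LIMIT = 5        # assumed cap on how many chats one forward action may target
MANY_FORWARDS_HOPS = 5   # assumed hop count at which the "forwarded many times" label appears


class Message:
    """A message plus the number of forwarding hops it has travelled."""

    def __init__(self, text: str, forward_count: int = 0):
        self.text = text
        self.forward_count = forward_count


def forward(message: Message, recipients: list) -> dict:
    """Forward a message to at most FORWARD_LIMIT chats at once.

    Each forward produces copies whose hop counter is one higher than
    the original's, so the label can track how far a message has spread.
    """
    if len(recipients) > FORWARD_LIMIT:
        raise ValueError(f"can forward to at most {FORWARD_LIMIT} chats at once")
    copy = Message(message.text, message.forward_count + 1)
    return {recipient: copy for recipient in recipients}


def label(message: Message) -> str:
    """Return the UI label shown above a message based on its hop count."""
    if message.forward_count >= MANY_FORWARDS_HOPS:
        return "Forwarded many times"
    if message.forward_count > 0:
        return "Forwarded"
    return ""
```

The design point is that the counter travels with the message, so the "forwarded many times" warning can surface even when no single sender exceeded the per-action limit.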
Fight Against Misinformation
Misinformation is rampant across the world and needs to be addressed at the earliest. Multiple developed nations have partnered with tech companies to tackle this issue, and with the increasing penetration of social media and the internet, it remains a global challenge. Big tech companies such as Meta and Google have undertaken various initiatives globally to address it. In India, Google has taken up this initiative in collaboration with Civil Society Organisations, piloting multiple avenues for mass-scale awareness and upskilling campaigns to make an impact on the ground.
How to prevent the spread of misinformation?
Conclusion
In the digital media space, misinformative content is widespread. Platforms like Google and other social media companies have taken proactive steps to prevent the spread of misinformation. Users should also act responsibly while sharing any information, together creating a safe digital environment for everyone.

Introduction
Rajeev Chandrasekhar, the Union Minister of State for Information Technology (IT), announced on 13th December 2023 that the Global Partnership on Artificial Intelligence (GPAI) Summit, which brings together 29 member governments including the European Union, had adopted the New Delhi Declaration. The declaration commits members to jointly developing AI applications for healthcare and agriculture, and to taking the needs of the Global South into account when developing AI.
In addition, signatory countries committed to leveraging the GPAI infrastructure to establish a worldwide structure for AI safety and trust, and to make AI's advantages and approaches accessible to all. India also submitted a proposal to host the GPAI Global Governance Summit in order to complete the recommended structure within six months.
“The New Delhi Declaration, which aims to place GPAI at the forefront of defining the future of AI in terms of both development and building cooperative AI across the partner states, has been unanimously endorsed by 29 GPAI member countries. Nations have come to an agreement to develop AI applications in healthcare, agriculture, and numerous other fields that affect all of our nations and citizens,” Chandrasekhar stated.
The statement highlights GPAI's critical role in tackling modern AI difficulties, such as generative AI, through submitted AI projects meant to maximize benefits and minimize related risks while solving community problems and worldwide difficulties.
GPAI
The Global Partnership on Artificial Intelligence (GPAI) is an organisation of 29 countries from the Americas (North and South), Europe, and Asia. Its members include major players such as the US, France, Japan, and India, though China is not among them. The previous meeting took place in Japan. In 2024, India will preside over GPAI.
This forum was established in 2020 to promote and steer the responsible implementation of artificial intelligence based on human rights, diversity, gender equality, innovation, economic growth, the environment, and social impact. Its goal is to bring together elected officials and experts to make tangible contributions to the 2030 Agenda and the UN Sustainable Development Goals (SDGs).
Given the quick and significant advancements in artificial intelligence over the previous year, the meeting in New Delhi attracted particular attention. They have sparked worries about its misuse as well as enthusiasm about its possible advantages.
The Summit
The G20 summit, which India hosted in September 2023, set the stage for the discussions at the GPAI summit. There, participants in that esteemed worldwide economic conference came to an agreement on how to safely use AI for "Good and for All."
In order to safeguard people's freedoms and security, member governments pledged to address AI-related issues "in a responsible, inclusive, and human-centric manner."
The key tactic devised is to distribute AI's advantages fairly while reducing its hazards. Promoting international collaboration and discourse on global management for AI is the first step toward accomplishing this goal.
A major milestone in that approach was the GPAI summit.
The conversation on AI was started by India's Prime Minister Narendra Modi, who is undoubtedly one of the most tech-aware and tech-conscious international authorities.
He noted that every system needs to be revolutionary, honest, and trustworthy in order to be sustained.
"There is no doubt that AI is transformative, but it is up to us to make it more and more transparent." He continued by saying that when associated social, ethical, and financial concerns are appropriately addressed, trust will increase.
After extensive discussions, the summit attendees decided on a strategy to establish global collaboration on a number of AI-related issues. The proclamation pledged to place GPAI at the leading edge of defining AI in terms of creativity and cooperation while expanding possibilities for AI in healthcare, agriculture, and other areas of interest, according to Union Minister Rajeev Chandrasekhar.
There was an open discussion of a number of issues, including disinformation, joblessness and bias, protection of sensitive information, and violations of human rights. The participants reaffirmed their dedication to fostering dependable, safe, and secure AI within their respective domains.
Concerns raised by AI
- The issue of legislation comes first. Several regulatory approaches are now in use. In order to best promote inventiveness, the UK government takes a "less is more" approach to regulation. Conversely, the European Union (EU) is taking a strong stance, planning to propose a new Artificial Intelligence Act that might categorize AI 'in accordance with use-case situations based essentially on the degree of interference and vulnerability'.
- Second, analysts say that India has the potential to lead the world in discussions about AI. For example, India has an advantage when it comes to AI discussions because of its personnel, educational system, technological stack, and populace, according to Markham Erickson of Google's Centers for Excellence. However, he voiced the hope that Indian regulations will be “interoperable” with those of other countries in order to maximize the benefits for small and medium-sized enterprises in the nation.
- Third, there is a general fear about how AI will affect jobs, just as there was in the early years of the Internet's development. Most people appear to agree that while many jobs won't be impacted, certain jobs might be lost as artificial intelligence develops and gets smarter. According to Erickson, the solution to the new circumstances is to create "a more AI-skilled workforce."
- Finally, a major concern relates to deepfakes defined as 'digital media, video, audio and images, edited and manipulated, using Artificial Intelligence (AI).'
Need for AI Strategy in Commercial Businesses
Firstly, astute corporate executives such as Shailendra Singh, managing director of Peak XV Partners, feel that all organisations must now have 'an AI strategy'.
Second, it is now impossible to isolate the influence of digital technology and artificial intelligence from the study of international relations (IR), foreign policy, and diplomacy. Academics have been contemplating and penning works on "the geopolitics of AI."
Combat Strategies
"We will talk about how to combine OECD capabilities to maximize our capacity to develop the finest approaches to the application and management of AI for the benefit of our people," the French Minister of Digital Transition and Telecommunications, Jean-Noël Barrot, informed reporters.
Vice-Minister of International Affairs for Japan's Ministry of Internal Affairs and Communications Hiroshi Yoshida stated, "We particularly think GPAI should be more inclusive so that we encourage more developing countries to join." Mr Chandrasekhar stated, "Inclusion of lower and middle-income countries is absolutely core to the GPAI mission," and added that Senegal has become a member of the steering group.
India's role in bringing agriculture onto the AI agenda is reflected in the declaration, which states, "We embrace the use of AI innovation in supporting sustainable agriculture as a new thematic priority for GPAI."
Conclusion
The New Delhi Declaration, adopted at the GPAI Summit, highlights the cooperative determination of 29 member nations to use AI for the benefit of all people. GPAI, which will be led by India in 2024, intends to influence AI research with an emphasis on healthcare, agriculture, and resolving ethical issues. Prime Minister Narendra Modi stressed the need to use AI responsibly and to build transparency and trust. Legislative concerns, India's potential for leadership, employment effects, and the challenge of deepfakes were noted. The conference emphasized the importance of enterprises having an AI strategy and covered combat strategies, with a focus on GPAI's objective of including developing nations. Taken as a whole, the summit presents GPAI as an essential tool for navigating the rapidly changing AI field.
References
- https://www.thehindu.com/news/national/ai-summit-adopts-new-delhi-declaration-on-inclusiveness-collaboration/article67635398.ece
- https://www.livemint.com/news/india/gpai-meet-adopts-new-delhi-ai-declaration-11702487342900.html
- https://startup.outlookindia.com/sector/policy/global-partnership-on-ai-member-nations-unanimously-adopt-new-delhi-declaration-news-10065
- https://gpai.ai/

Introduction
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, has emphasised the need for an open internet. He stated that no platform can deny content creators access to distribute and monetise content and that large technology companies have begun to play a significant role in the digital evolution. Chandrasekhar emphasised that the government does not want the internet or monetisation to be in the purview of just one or two companies and does not want 120 crore Indians on the internet in 2025 to be catered to by big islands on the internet.
The Voice for Open Internet
India's Minister of State for IT, Rajeev Chandrasekhar, has stated that no technology company or social media platform can deny content creators access to distribute and monetise their content. Speaking at the Digital News Publishers Association Conference in Delhi, Chandrasekhar emphasized that the government does not want the internet or monetization of the internet to be in the hands of just one or two companies. He argued that the government does not like monopoly or duopoly and does not want 120 crore Indians on the Internet in 2025 to be catered to by big islands on the internet.
Chandrasekhar highlighted that large technology companies have begun to exert influence when it comes to the dissemination of content, which has become an area of concern for publishers and content creators. He stated that if any platform finds it necessary to block any content, they need to give reasons or grounds to the creators, stating that the content is violating norms.
As India tries to establish itself as an innovator in the technology sector, the government announced a corpus of Rs 1 lakh crore in the interim Budget of 2024-25. As big companies continue to tighten their hold on the sector, content moderation has become crucial. Under the IT Rules, 11 categories of content are unlawful under the IT Act and criminal law. Platforms must ensure no user posts content that falls under these categories, take down any such content, and, where violations persist, resort to de-platforming or prosecution of users. Chandrasekhar believes that the government has to protect the fundamental rights of people and emphasises legislative guardrails to ensure platforms are accountable for the correctness of the content.
Monetizing Content on the Platform
'No platform can deny a content creator access to the platform to distribute and monetise it,' Chandrasekhar declared, boldly laying down a gauntlet that defies the prevailing norms. This tenet signals a nascent dawn where creators may envision reaping the rewards borne of their creative endeavours unfettered by platform restrictions.
An increasingly contentious issue that shadows this debate is the moderation of content within the digital realm. In this vast uncharted expanse, the powers that be within these monolithic platforms assume the mantle of vigilance—policing the digital avenues for transgressions against a prescribed code of conduct. Under the stipulations of India's IT Rules, for example, platforms are duty-bound to interdict user content that strays into territories encompassing a spectrum of 11 delineated unlawful categories. Violations span the gamut from the infringement of intellectual property rights to the propagation of misinformation—each category necessitating swift and decisive intervention. Chandrasekhar raised the alarm against misinformation—a malignant growth fed by the fertile soils of innovation—a phenomenon wherein media reports chillingly suggest that up to half of the information circulating on the internet might be a mere fabrication, a misleading simulacrum of authenticity.
The government's stance, as expounded by Chandrasekhar, pivots on an axis of safeguarding citizens' fundamental rights, compelling digital platforms to shoulder the responsibility of arbiters of truth. 'We are a nation of over 90 crores today, a nation progressing with vigour, yet we find ourselves beset by those who wish us ill,' he said.
Upcoming Digital India Act
Looming on the horizon, India's proposed Digital India Act (DIA), still in its embryonic stage of pre-consultation deliberation, seeks to sculpt these asymmetries into a more balanced form. Chandrasekhar hinted at the potential inclusion within the DIA of regulatory measures that would shape the interactions between platforms and the mosaic of content creators who inhabit them. Although specifics await the crucible of public discourse and the formalities of consultation, indications of a maturing framework are palpable.
Conclusion
It is essential that the fable of digital transformation reverberates with the voices of individual creators, the very lifeblood propelling the vibrant heartbeat of the internet's culture. These are the voices that must echo at the centre stage of policy deliberations and legislative assembly halls; these are the visions that must guide us, and these are the rights that we must uphold. As we stand upon the precipice of a nascent digital age, the decisions we forge at this moment will cascade into the morrow and define the internet of our future. This internet must eternally stand as a bastion of freedom, of ceaseless innovation and as a realm of boundless opportunity for every soul that ventures into its infinite expanse with responsible use.
References
- https://www.financialexpress.com/business/brandwagon-no-platform-can-deny-a-content-creator-access-to-distribute-and-monetise-content-says-mos-it-rajeev-chandrasekhar-3386388/
- https://indianexpress.com/article/india/meta-content-monetisation-social-media-it-rules-rajeev-chandrasekhar-9147334/
- https://www.medianama.com/2024/02/223-rajeev-chandrasekhar-content-creators-publishers/

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to the "Nano Banana" AI tool in Google Gemini, uploading their ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding risks of privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
This AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the specified description. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participation in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory had further warned that sharing personal information through trending apps can lead to many scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put the user at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that selfies uploaded can lead to data misuse, deepfake creation, and the generation of fake profiles, which are punishable under Sections 66C and 66D of the IT Act 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on private users and society as a whole. Identity fraud and theft are the main issues, as uploaded biometric information can be used by hackers to generate imitated identities, evading security measures or committing financial fraud. The facial recognition information shared by means of these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another tremendous threat because personal images shared on AI platforms can be utilised to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. The images uploaded can be leveraged to produce embarrassing or harmful content that can damage personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A Mozilla Foundation study in 2023 discovered that 80% of popular AI apps had either non-transparent data policies or obscured the ability of users to opt out of data gathering. This opens up opportunities for personal photographs to be shared with anonymous entities for commercial use. Exploitation of training data includes the use of personal photos uploaded to enhance AI models without notifying or compensating users. Although Google provides users with options to turn off data sharing within privacy settings, most users are ignorant of these capabilities. Integration of cross-platform data increases privacy threats when AI applications use data from interlinked social media profiles, providing detailed user profiles that can be taken advantage of for purposeful manipulation or fraud. Inadequacy of informed consent continues to be a major problem, with users engaging in trends unaware of the entire context of sharing information. Studies show that 68% of individuals show concern regarding the misuse of AI app data, but 42% use these apps without going through the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures to the extent that this satisfies the program's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app so that a person can limit access to their data and content.
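A practical complement to limiting what you upload is removing embedded metadata before a photo leaves your device: JPEG files commonly carry EXIF tags (which can include GPS coordinates and device details) in an APP1 segment, and IPTC data in APP13. The stdlib-only sketch below strips those segments from a well-formed baseline JPEG byte stream; it is an illustrative example of the idea, not a substitute for a vetted metadata-scrubbing tool.

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Remove APP1 (EXIF) and APP13 (IPTC) segments from a JPEG byte stream.

    Assumes a well-formed baseline JPEG where all metadata segments
    appear before the Start-of-Scan (SOS) marker.
    """
    if data[:2] != b"\xff\xd8":  # SOI marker opens every JPEG
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data) - 1:
        if data[i] != 0xFF:
            out += data[i:]  # unexpected bytes; copy remainder as-is
            break
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: image data follows, copy the rest verbatim
            out += data[i:]
            break
        # segment length (big-endian, includes the two length bytes)
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        # 0xE1 = APP1 (EXIF/GPS), 0xED = APP13 (IPTC); drop those, keep others
        if marker not in (0xE1, 0xED):
            out += data[i:i + 2 + seg_len]
        i += 2 + seg_len
    return bytes(out)
```

The key design point is that EXIF lives in self-describing segments with explicit lengths, so they can be skipped without decoding the image itself; the pixel data after SOS is copied untouched.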
2. Organisational Safeguards
AI governance frameworks within organisations should enumerate policies regarding the usage of AI tools by employees, particularly those concerning the upload of personal data. Companies should appropriately carry out due diligence before the adoption of an AI product made commercially available for their own use in order to ensure that such a product has its privacy and security levels as suitable as intended by the company. Training should instruct employees regarding deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. These tools, which include Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI, allow real-time detection with an accuracy higher than 95%. Use blockchain-based concepts to verify content to create tamper-proof records of original digital assets so that the method of proposing deepfake content as original remains very difficult.
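The "blockchain-based concepts" mentioned above reduce, at their core, to hash-chaining: each registered content fingerprint commits to the record before it, so silently altering any earlier entry breaks every later link. The following is a minimal illustrative sketch of that idea (class and method names are our own, and a real system would distribute the ledger rather than keep it in one process):

```python
import hashlib


class ContentLedger:
    """Toy hash-chained ledger for tamper-evident records of original media.

    Each entry stores (content_digest, link_digest), where link_digest
    commits to both the content and the previous link, forming a chain.
    """

    GENESIS = "0" * 64  # placeholder "previous link" for the first entry

    def __init__(self):
        self.chain = []
        self._prev_link = self.GENESIS

    def register(self, content: bytes) -> str:
        """Record a fingerprint of original content; returns its digest."""
        digest = hashlib.sha256(content).hexdigest()
        link = hashlib.sha256((self._prev_link + digest).encode()).hexdigest()
        self.chain.append((digest, link))
        self._prev_link = link
        return digest

    def verify(self, content: bytes) -> bool:
        """Check whether this exact content was ever registered."""
        digest = hashlib.sha256(content).hexdigest()
        return any(d == digest for d, _ in self.chain)

    def audit(self) -> bool:
        """Recompute every link; returns False if any record was altered."""
        prev = self.GENESIS
        for digest, link in self.chain:
            if hashlib.sha256((prev + digest).encode()).hexdigest() != link:
                return False
            prev = link
        return True
```

Verification then becomes trivial: recompute the hash of a piece of media and compare it against the registered fingerprint, while `audit()` shows why rewriting history is detectable.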
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparency in the data policies of AI applications and give users rights and choices regarding biometric and other data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring that AI models are built in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI.

The viral spread of AI phenomena such as the saree editing trend illustrates both the potential and the hazards of the current generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers must weigh carefully. By setting up all-around protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/