Google Play: Enhancing Trust and Transparency
Introduction
Google Play has announced a new policy to strengthen trust and transparency on the platform by introducing a new framework for developer verification and app details. Under the policy, any organisation creating a new Play Console developer account must provide a D-U-N-S number, a unique nine-digit identifier that will be used to verify the business. The policy aims to enhance user trust: developers will provide detailed information on their app’s listing page, so users know who is behind the app they are installing.
Verifying Developer Identity with D-U-N-S Numbers
To boost security, Google Play’s new policy requires developers to provide a D-U-N-S number when creating a new Play Console developer account. The D-U-N-S number, assigned by Dun & Bradstreet, will be used to verify the business. Once a developer creates a new Play Console developer account with a D-U-N-S number, Google Play will verify the developer’s details, after which the developer can start publishing apps. Through this step, Google Play aims to validate business information in a more authentic way.
If your organisation does not have a D-U-N-S number, you can look one up or request one for free at https://www.dnb.com/duns-number/lookup.html. The request process can take up to 30 days. Developers are also required to keep this information up to date.
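For developers who want to catch typos before submitting, here is a minimal sketch in Python of a purely syntactic pre-check, assuming only the publicly documented format (nine digits). The helper name `looks_like_duns` is hypothetical, and real verification is performed by Dun & Bradstreet and Google Play, not by this check:

```python
import re

def looks_like_duns(value: str) -> bool:
    """Syntactic check only: a D-U-N-S number is nine digits.
    This does NOT verify the number with Dun & Bradstreet."""
    digits = re.sub(r"[\s-]", "", value)  # tolerate separators like 15-048-3782
    return bool(re.fullmatch(r"\d{9}", digits))

print(looks_like_duns("150483782"))    # True
print(looks_like_duns("15-048-3782"))  # True: separators stripped first
print(looks_like_duns("12345"))        # False: too short
```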
Building User Trust with Enhanced App Details
In addition to verifying developer identities more rigorously, Google Play also requires developers to provide sufficient app details to users. An “App support” section on the app’s store listing page will display the app’s support email address, and developers can also include a website and phone number for support.
A new “About the developer” section will also be introduced to provide users with verified identity information, including the developer’s name, address, and contact details, making users better informed about who builds the apps they use.
Key Highlights of the Google Play Policy
- Google Play introduced the policy to keep the platform safe: verifying developers’ identities helps reduce the spread of malware apps and helps users make confident, informed decisions about the apps they download. By expanding its developer verification requirements, Google Play aims to strengthen the platform and build user trust. When you create a new Play Console developer account and choose “organisation” as your account type, you will now need to provide a D-U-N-S number.
- Users will get detailed information about the developers’ identities and contact information, building more transparency and encouraging responsible app development practices.
- This policy will enable the users to make informed choices about the apps they download.
- The new “App support” section will enhance communication between users and developers by displaying support email addresses, websites, and support phone numbers, streamlining the support process and improving user satisfaction.
Timeline and Implementation
The new D-U-N-S number requirement will start rolling out on 31 August 2023 for all new Play Console developer accounts. The “About the developer” section will be visible to users as soon as a new app is published. From October 2023, existing developers will also be required to update and verify their accounts to comply with the new verification policy.
Conclusion
Google Play’s new policy aims to foster a more transparent app ecosystem by providing users with more information about developers. Google Play seeks to establish a platform where users can confidently discover and download apps, making Google Play a more reliable and trustworthy platform and improving the overall user experience.
Related Blogs

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to factors such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gains. Given the exponential reach of these accounts, identifying, tracing and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation about different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation are sent by a small percentage of a given user base. The researchers collected 10 months of data, 2,397,388 tweets posted on Twitter (now X) by 448,103 users, and reviewed it for tweets flagged as containing low-credibility information. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for posting approximately 70% of such tweets.[2] In other words, it does not take many influencers to sway the beliefs and opinions of large numbers of people; this is the impact of what the researchers describe as superspreaders.
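To make the “a few accounts drive most of it” finding concrete, here is a minimal sketch, in Python with pandas, of how one might measure that concentration on a flagged-tweet log. The tiny DataFrame and its column name are illustrative assumptions, not the study’s actual data or schema:

```python
import pandas as pd

# Hypothetical log: one row per tweet flagged as low-credibility,
# recording only which account posted it.
tweets = pd.DataFrame(
    {"account_id": ["a", "a", "a", "b", "b", "c", "a", "b", "d", "e"]}
)

counts = tweets["account_id"].value_counts()  # tweets per account, most active first
total = counts.sum()

def share_of_top(n: int) -> float:
    """Fraction of all flagged tweets posted by the n most active accounts."""
    return counts.head(n).sum() / total

# On real data, this is how one would check claims such as
# "10 accounts produced ~1/3 of flagged tweets" or "1,000 accounts produced ~70%".
print(f"top 1 account:  {share_of_top(1):.0%}")
print(f"top 2 accounts: {share_of_top(2):.0%}")
```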
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation by experts surveyed for the World Economic Forum’s Global Risks Report 2024. In today's digital age, misinformation, deepfakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During COVID-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media. Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "disinformation dozen", a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that is not shared by the original source but propagated by amplifiers using other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify such messages on their own pages, which is where the content takes off. Such users do not have to be the ones creating or deliberately popularising the misinformation, but their broad reach means they expose many more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels, leading investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to acquire traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow variety of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There are reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
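As a concrete illustration of the kind of heuristic platforms and researchers use, here is a minimal sketch in Python that flags accounts whose posting is both high-volume and dominated by links to low-credibility domains. The domain list, thresholds, and account names are all invented for the example; production bot detection is far more sophisticated:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative assumptions: a tiny post log and a toy low-credibility domain list.
LOW_CRED_DOMAINS = {"fakenews.example", "rumormill.example"}

posts = [
    ("bot_1", "https://fakenews.example/story1"),
    ("bot_1", "https://fakenews.example/story2"),
    ("bot_1", "https://rumormill.example/claim"),
    ("user_2", "https://bbc.com/news/article"),
    ("user_2", "https://fakenews.example/story1"),
]

total_posts = Counter(account for account, _ in posts)
low_cred_posts = Counter(
    account for account, url in posts
    if urlparse(url).hostname in LOW_CRED_DOMAINS
)

for account, total in total_posts.items():
    ratio = low_cred_posts[account] / total
    # Flag only accounts that post a lot AND mostly low-credibility links.
    if total >= 3 and ratio >= 0.8:
        print(f"flag for review: {account} ({ratio:.0%} low-credibility links)")
```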
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mould public opinion. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to share only accurate information and to fact-check claims to debunk misinformation. They can rely on reputable fact-checking experts and entities who regularly produce prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites and resources, and to verify information before sharing it.
- Recommendations for tech/social media platforms:
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, fake or deliberately deceptive. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithms to detect suspicious accounts and recognise repetitive posting of misinformation, using them to identify such patterns and flag misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and working on detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content, and must implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
References:
- [1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201
- [2] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [3] https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html
- [4] https://counterhate.com/research/the-disinformation-dozen/
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. The prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With Google alone handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithm bias, refers to biased results that occur when human biases skew the original training data or the AI algorithm itself, distorting outputs and potentially creating harmful outcomes. It can take the form of algorithmic bias, data bias, or interpretation bias, emerging from user history, geographical data, and broader societal biases embedded in training data.
Such bias can exclude certain groups of people from opportunities. In healthcare, underrepresenting data of women or minority groups can skew predictive AI algorithms. And while AI helps streamline resume scanning to identify ideal candidates, the information requested and the answers screened out can produce biased outcomes if the underlying dataset or other input data is biased.
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
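To see how a filter bubble can emerge mechanically, here is a toy re-ranking sketch in Python. The topic tags, click history, and Jaccard-overlap scoring are invented for illustration; real search ranking is vastly more complex, but the feedback loop this demonstrates is the one described above:

```python
# Candidate results, each tagged with invented topic labels.
results = [
    ("article_a", {"politics", "left"}),
    ("article_b", {"politics", "right"}),
    ("article_c", {"science"}),
]

# The user's past clicks, as tag sets. Each new click appends here.
click_history = [{"politics", "left"}, {"politics", "left"}, {"science"}]

def affinity(tags):
    """Average Jaccard overlap between a result's tags and past clicks."""
    return sum(len(tags & past) / len(tags | past) for past in click_history) / len(click_history)

# Personalised ranking pushes familiar viewpoints to the top...
for name, tags in sorted(results, key=lambda r: affinity(r[1]), reverse=True):
    print(f"{name}: affinity {affinity(tags):.2f}")
# ...and each click on a top result feeds back into click_history,
# so the next ranking is even more skewed: the bubble reinforces itself.
```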
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act establishes a regulatory framework that categorises AI systems based on risk and enforces strict standards for transparency, accountability, and fairness, especially for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias

Introduction
So it's that time of year when you feel bright and excited to start the year with new resolutions; your goals could be anything from going to the gym to learning new skills and being productive, but with cybercrime on the rise, you must also be smart and take your New Year cyber resolutions seriously. Yes, you heard it right: it's a new year and a new you, but the same hackers, with more advanced threats. It's time to make a cyber resolution to be secure and smart, and to follow the best cyber safety tips for 2K25 and beyond.
Best Cyber Security Tips For You
While making your cyber resolutions for 2K25, remember that hackers have resolutions too, so you have to make yours better! CyberPeace has curated a list of great tips and cyber hygiene practices to follow in 2025:
- Be Aware Of Your Digital Rights: Netizens should be aware of their rights in the digital space. It's important to know where to report issues, how to raise concerns with platforms, and what rights are available to you under applicable IT and Data Protection laws. And as we often say, sharing is caring, so make sure to discuss and share your knowledge of digital rights with your family, peers, and circle. Not only will this help raise awareness, but you’ll also learn from their experiences, collectively empowering yourselves. After all, a well-informed online community is a happy one.
- Awareness Is Your First Line Of Defence: Awareness serves as the first line of defence, especially in light of the lessons of 2024, when new forms of cybercrime emerged with serious consequences. Scams like digital arrests, romance frauds, lottery scams, and investment scams have become more prevalent. As we move into 2025, remember that sophisticated cyber scams require equally advanced strategies to stay protected. As cybercrimes evolve and become more complex, stay updated on specific strategies and hygiene tips to defend yourself. Build your first line of defence by staying aware of these growing scams, and say goodbye to the manipulative tactics used by cyber crooks.
- Customise Social Media Profile And Privacy Settings: With the rising misuse of advanced technologies such as deepfakes, it’s crucial to share access to your profile only with people you trust and know. Customise your social media profile settings to your convenience, such as who can add you, who can see your uploaded pictures and stories, and who can comment on your posts. Tailor these settings to suit your needs and preferences, ensuring a safer digital environment for yourself.
- Be Cautious: Choose wisely: just because an online deal seems exciting doesn’t mean it’s legitimate. A single click could have devastating consequences. Not every link leads to a secure website; it could be a malware or phishing attempt. Be cautious and follow basic cyber hygiene tips, such as only visiting websites with a padlock symbol, a secure connection, and 'HTTPS' in the URL (see the short sketch after this list).
- Don’t Let Fake News Fake You Out: Online misinformation and disinformation have sparked serious concern due to their widespread proliferation. That’s why it’s crucial to 'Spot The Lies Before They Spot You.' Exercise due care and caution when consuming, sharing, or forwarding any online information. Always verify it from trusted sources, recognize the red flags of misleading claims, and contribute to creating a truthful online information landscape.
- Turn the Tables on Cybercriminals: It is crucial to know the proper reporting channels for cybercrimes, including specific reporting methods based on the type of issue. For example, ‘unsolicited commercial communications’ can be reported on the government’s Chakshu portal. Unauthorised electronic transactions can be reported to the RBI toll-free number at 14440, while women can report incidents to the National Commission for Women. If you encounter issues on a platform, you can reach out to the platform's grievance officer. All types of cybercrimes can be reported through the National Cyber Crime Reporting Portal (cybercrime.gov.in) and the helpline at 1930. It’s essential to be aware of the right authorities and reporting mechanisms, so if something goes wrong in your digital experience, you can take action, turn the tables on cybercrooks, and stay informed about official grievance and reporting channels.
- Log Out, Chill Out: The increased use of technology can have far-reaching consequences that are often overlooked, such as procrastination, stress, anxiety, and eye strain (also known as digital eye strain or computer vision syndrome). Sometimes, it’s essential to switch off the digital curtains. This is where a ‘Digital Detox’ comes in, offering a chance to recharge and reset. We’re all aware of how our devices and phones influence our daily lives, shaping our behaviours, decisions, and lifestyles from morning until night, even impacting our sleep. Taking time to unplug can provide a much-needed psychological and physical boost. Practicing a digital detox at regular suitable intervals, such as twice a month, can help restore balance, reduce stress, and improve overall well-being.
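As promised under the 'Be Cautious' tip above, here is a minimal sketch in Python of the HTTPS part of that check. Note the deliberate limitation spelled out in the comment: HTTPS only means the connection is encrypted, not that the site is legitimate, so treat it as one hygiene signal among many. The example URLs are illustrative:

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """True if the URL's scheme is HTTPS. This checks transport encryption
    only; a phishing site can also serve HTTPS, so combine this with other
    checks (known domain, no typos in the address, official app/portal)."""
    return urlparse(url).scheme == "https"

print(uses_https("https://cybercrime.gov.in/"))  # True: encrypted connection
print(uses_https("http://deal-4-you.example/"))  # False: avoid entering data here
```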
Final Words & the Idea of ‘Tech for Good’
Remember that we are in a technological era, and these technologies were created for our ease and convenience. Bad actors pose certain challenges, but countering them starts with you. Technology, while carrying risks, also brings tremendous benefits to society, so we encourage you to champion its responsible and ethical use. The vision of ‘Tech for Good’ has to be expanded to the larger picture. Do not engage in behaviour online that you would not ordinarily engage in offline; the online environment is no different and has far-reaching effects. Use technology for good, and follow and encourage ethical and responsible behaviour in online communities. The emphasis should be on making technology a safer environment for everyone and combatting dishonest practices.
Effective strategies for preventing cybercrime and dishonest practices require cooperation and effort from citizens, government agencies, and technology businesses. We intend to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while promoting innovation and connectedness. In 2025, together we can build a cyber-safe, resilient society.