The Indian government has developed the National Cybersecurity Reference Framework (NCRF) to provide implementable cybersecurity measures grounded in existing legislation, policies, and guidelines. The National Critical Information Infrastructure Protection Centre (NCIIPC) is responsible for the framework. The government is expected to recommend that enterprises, particularly those in critical sectors such as banking, telecom, and energy, use only security products and services developed in India. The NCRF aims to strengthen cybersecurity while encouraging the use of made-in-India products to safeguard the country's cyber infrastructure, and the Centre is expected to emphasise the significant progress made in developing indigenous cybersecurity products and solutions.
National Cybersecurity Reference Framework (NCRF)
The Indian government has developed the National Cybersecurity Reference Framework (NCRF), a guideline that sets the standard for cybersecurity in India. The framework focuses on critical sectors and provides guidelines to help organisations develop strong cybersecurity systems. It can serve as a template for critical sector entities to develop their own governance and management systems. The government has identified telecom, power, transportation, finance, strategic entities, government entities, and health as critical sectors.
The NCRF is non-binding in nature; its recommendations are advisory rather than mandatory. It recommends that enterprises allocate at least 10% of their total IT budget to cybersecurity, with monitoring by top-level management or the board of directors. The framework may also suggest that national nodal agencies evolve platforms and processes for machine-processing data from different sources to ensure proper audits, and that auditors be rated on their performance.
Regulators overseeing critical sectors may be given greater powers to set rules for information security and define information security requirements to ensure proper audits. They will also need an effective Information Security Management System (ISMS) to assess sensitive data and operational deficiencies in their critical sector. The policy is based on a Common but Differentiated Responsibility (CBDR) approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
India faces a barrage of cybersecurity-related incidents, such as the high-profile attack on AIIMS Delhi in 2022. Many ministries feel hamstrung by the lack of an overarching framework on cybersecurity when formulating sector-specific legislation. In recent years, threat actors backed by nation-states and organised cyber-criminal groups have attempted to target the critical information infrastructure (CII) of the government and enterprises. The current guiding framework on cybersecurity for critical infrastructure in India comes from the National Cybersecurity Policy of 2013. Between 2013 and 2023, the threat landscape evolved significantly, and the emergence of new threats necessitates new strategies.
Significance in the realm of Critical Infrastructure
India faces numerous cybersecurity incidents, partly due to the lack of a comprehensive framework. Critical information infrastructure in sectors such as banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises is the most targeted by threat actors, including nation-states and cybercriminals. By their very nature, these sectors hold sensitive data, which makes them prime targets for cyber threats and attacks. Cyber-attacks can compromise patient privacy, disrupt services, compromise control systems, pose safety risks, and interrupt critical services. It is therefore of paramount importance to have the NCRF, which can address these emerging issues by providing sector-specific guidelines.
The Indian government is considering promoting the use of made-in-India products to enhance Cyber Infrastructure
India is preparing to recommend the use of domestically developed cybersecurity products and services, particularly in critical sectors like banking, telecom, and energy. The initiative aims to enhance national security in response to escalating cybersecurity threats.
Conclusion
Promoting locally made cybersecurity products and services in important industries shows India's commitment to strengthening national security. The National Cybersecurity Reference Framework (NCRF), which outlines duties, responsibilities, and recommendations for organisations and regulators, is a critical step towards the comprehensive cybersecurity policy framework the country needs. The government's emphasis on made-in-India solutions and on allocating resources to cybersecurity underlines its determination to protect the country's cyber infrastructure in the face of increasing cyber threats and attacks. The NCRF is also expected to help in drafting sector-specific guidelines on cybersecurity.
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all led to an increase in misinformation in India. False stories spread quickly and can cause significant harm, as seen with political propaganda and health-related mis- and disinformation. Programs that teach people how to use social media responsibly and how to check facts are essential, but they do not always engage people deeply. Reading articles, attending lectures, and using fact-checking tools are standard passive learning methods used in traditional media literacy programs.
Adding game-like features to non-game settings, known as "gamification," could be a new and engaging way to address this problem. Gamification involves engaging people by making them active players instead of just passive consumers of information. Research shows that interactive learning improves interest, thinking skills, and memory. People can learn to recognise fake news safely by turning fact-checking into a game before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
Vernacular Languages
There should be games in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread in regional languages and across diverse cultural contexts. AI-based voice interaction and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
Games Based on WhatsApp
Interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently, since WhatsApp is a significant hub for false information. A game with a WhatsApp-like interface, in which players face realistic decisions about whether to ignore, fact-check, or forward messages that are going viral, could be particularly effective in India.
Detecting False Information
As part of a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify stories that are going viral, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive fake-news detection tasks make people more aware of misinformation over time (Lewandowsky et al., 2017).
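To make this concrete, here is a minimal sketch of the kind of quiz loop such a fact-checking game could be built around. The headlines, scoring rules, and feedback messages below are illustrative inventions, not taken from any existing game.

```python
import random

# Illustrative (made-up) items: each pairs a headline with whether it is genuine.
HEADLINES = [
    ("Government announces new scholarship portal for state universities", True),
    ("Drinking hot water every hour cures viral infections, doctors confirm", False),
    ("Railway ministry publishes revised timetable for festival season", True),
    ("Forwarded message: banks will delete accounts not verified by tonight", False),
]

def play_round(headline: str, is_real: bool) -> int:
    """Ask the player to verify one headline and return the points earned."""
    answer = input(f'\n"{headline}"\nReal or fake? [r/f]: ').strip().lower()
    guessed_real = answer.startswith("r")
    if guessed_real == is_real:
        print("Correct! +10 points")
        return 10
    print("Not quite. Try a reverse image search or a fact-checking site next time.")
    return 0

def main() -> None:
    score = 0
    items = random.sample(HEADLINES, k=len(HEADLINES))
    for headline, is_real in items:
        score += play_round(headline, is_real)
    print(f"\nFinal score: {score}/{len(items) * 10}")

if __name__ == "__main__":
    main()
```

A production version would replace the hard-coded items with a larger, localised question bank and track scores across sessions.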
Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile-data incentives. Partnerships with telecom companies could make this easier to implement. Reward-based learning has been shown to increase interest and motivation in digital literacy programs (Deterding et al., 2011).
Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities, using simulated scenarios to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation-detection bots could guide participants through scenarios suited to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
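As a rough illustration of how such adaptation might work, the sketch below adjusts question difficulty from a player's recent accuracy. The window size and thresholds are arbitrary assumptions; a real system would likely use a richer learner model than a sliding window.

```python
from collections import deque

class AdaptiveDifficulty:
    """Adjust question difficulty from the player's recent accuracy (illustrative thresholds)."""

    LEVELS = ["easy", "medium", "hard"]

    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)  # last N answers, True = correct
        self.level = 0                      # start at "easy"

    def record(self, correct: bool) -> str:
        self.recent.append(correct)
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy > 0.8 and self.level < len(self.LEVELS) - 1:
            self.level += 1                 # player is cruising: serve harder items
        elif accuracy < 0.4 and self.level > 0:
            self.level -= 1                 # player is struggling: serve easier items
        return self.LEVELS[self.level]

# Example: a player who answers mostly correctly is moved up to harder questions.
tuner = AdaptiveDifficulty()
for outcome in [True, True, True, True, False, True]:
    print(tuner.record(outcome))
```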
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of games that help fight misinformation. India can help millions, especially young people, think critically and combat the spread of false information by making media literacy fun and interesting. Using Artificial Intelligence (AI) in gamified treatments for misinformation could be a fascinating area of study in the future. AI-powered bots could mimic real-time cases of misinformation and give quick feedback, which would help students learn more.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with challenges that must be considered:
Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
Scalability: Although worldwide misinformation initiatives exist, developing and scaling localised versions for India's varied linguistic and cultural contexts poses significant challenges.
Assessing Impact: Rigorous research methods are needed to evaluate the efficacy of gamified interventions in changing misinformation-related behaviours, while accounting for cultural and socio-economic contexts.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.
Image: Recent cyberattack on Eindhoven University
Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threats Faced by Educational Institutions:
Based on insights from CyberPeace's report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this concise blog provides a comprehensive overview of the cybersecurity threats and risks faced by educational institutions, along with essential details for addressing these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruptions for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like personal records. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database. A short sketch contrasting a vulnerable query with a parameterized one appears after this list.
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
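To illustrate the SQL injection risk mentioned above, the sketch below contrasts a query built by string concatenation with a parameterized query, using Python's built-in sqlite3 module. The table, columns, and payload are invented for demonstration and are not taken from any real institution's system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO students (name, email) VALUES ('Asha', 'asha@example.edu')")

user_input = "' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload is concatenated into the SQL text,
# so the WHERE clause becomes always-true and returns every row.
unsafe = f"SELECT * FROM students WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())

# SAFER: a parameterized query treats the input purely as data,
# so the payload matches nothing instead of rewriting the query.
safe = "SELECT * FROM students WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())
```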
Insights from Project eKawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
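The kind of username and password tallies reported above can be produced with a simple aggregation over honeypot logs. The sketch below assumes a hypothetical CSV export named ssh_honeypot_attempts.csv with username and password columns; the actual e-Kawach pipeline and log format are not described in the report.

```python
import csv
from collections import Counter

# Hypothetical honeypot log: one row per login attempt,
# e.g. columns timestamp, src_ip, username, password.
username_counts = Counter()
password_counts = Counter()

with open("ssh_honeypot_attempts.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        username_counts[row["username"]] += 1
        password_counts[row["password"]] += 1

print("Top targeted usernames:", username_counts.most_common(10))
print("Top targeted passwords:", password_counts.most_common(10))
print("Unique usernames:", len(username_counts), "| unique passwords:", len(password_counts))
```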
Insights from Threat Landscape Analysis
Research done by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame, and is based on data available at the time, which may evolve as new information emerges. Institutions were assigned risk ratings ranging from A (strongest) to F (weakest), with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches compared to those rated A or B. Immediate action is recommended to address the identified risks.
Risk Findings :
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher compared to dark web leakages and phishing/domain squatting. The findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances of dark web exposure, and 6,711 instances of phishing and domain squatting.
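For reference, that breakdown can be reproduced as a simple pie chart directly from the reported counts. The matplotlib sketch below uses our own labels and styling rather than the report's original figure.

```python
import matplotlib.pyplot as plt

# Instance counts as reported in the threat landscape analysis above.
categories = ["Data breaches", "Botnet activity", "Dark web", "Phishing & domain squatting"]
counts = [393_518, 339_442, 7_926, 6_711]

plt.figure(figsize=(6, 6))
plt.pie(counts, labels=categories, autopct="%1.1f%%", startangle=90)
plt.title("Risk findings across monitored institutions")
plt.show()
```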
Key Indicators:
Multiple instances of data breaches containing credentials (emails/passwords) in plain text.
Botnet activity indicating network hosts compromised by malware.
Credentials from third-party government and non-governmental websites linked to official institutional emails.
Details of software applications and drivers installed on compromised hosts.
Sensitive cookie data exfiltrated from various browsers.
IP addresses of compromised systems.
Login credentials for different Android applications.
Below is a sample profile of one of the top educational institutions, illustrating its high rates of data breaches, botnet activity, dark web exposure, and phishing & domain squatting.
Risk Detection:
Risk detection covers data breaches, network hygiene, dark web activity, botnet activity, cloud security, phishing & domain squatting, media monitoring, and miscellaneous risks. In the example below, data breaches and botnet activity account for the largest share of detections for the sampled domain.
Risk Changes:
Risk by Categories:
Risks are categorised as high, medium, or low; in this sample, the risk level is high for data breaches and botnet activity.
Challenges Faced by Educational Institutions
Educational institutions face significant cyberattack risks. The challenges that lead to such incidents are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal and shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times taking between two to nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses like hardware replacement and cybersecurity upgrades, with recovery taking between two and nine months.
🔒Data Security Breaches: Cyberattacks on schools expose sensitive personal information, including grades, social security numbers, and bullying reports, causing emotional, physical, and financial harm. These breaches can be accidental or intentional: a US study found staff responsible for most accidental breaches (21 out of 25 cases), while intentional breaches by students (27 out of 52 cases) most often involved tampering with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches (a minimal role-based sketch appears after this list).
Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
Adopt multi-factor authentication (MFA): Enforce multi-factor authentication (MFA) for access to sensitive systems or information to strengthen security. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code (a minimal one-time-code sketch also appears after this list).
Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
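As a minimal illustration of the role-based permissions recommendation above, the sketch below hard-codes a deny-by-default role map; the roles and permission names are illustrative only.

```python
# Minimal role-based access control sketch; roles and permissions are illustrative.
ROLE_PERMISSIONS = {
    "student": {"view_own_grades", "submit_assignment"},
    "faculty": {"view_own_grades", "submit_assignment", "view_class_grades", "edit_grades"},
    "registrar": {"view_class_grades", "edit_grades", "view_student_records"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("faculty", "edit_grades")
assert not is_allowed("student", "view_student_records")  # unauthorized access is denied
```

And for the one-time-code form of MFA mentioned above, here is a minimal sketch using the third-party pyotp library (installable with pip install pyotp); the account and issuer names are placeholders.

```python
import pyotp

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (usually as a QR code built from the provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="staff@university.example", issuer_name="University ERP"))

# Login: after the password check, require the current 6-digit code.
code = totp.now()              # in practice the user types this from their app
print("second factor ok:", totp.verify(code))
```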
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.
As technological advancements continue to shape the future, the rise of artificial intelligence brings significant potential benefits but also raises concerns about the spread of misinformation. Recognising the need for accountability on both ends, on 5 May, during the three-day World News Media Congress 2025 in Kraków, Poland, the European Broadcasting Union (EBU) and the World Association of News Publishers (WAN-IFRA) announced the five core principles of their joint initiative, News Integrity in the Age of AI. The initiative aims to foster dialogue and cooperation between media organisations and technology platforms, and the principles are intended as a code of practice for all participants. With thousands of public and private media outlets around the world joining the effort, the initiative highlights the shared responsibility of AI developers to ensure that AI systems are trustworthy, safe, and supportive of a reliable news ecosystem. It represents a global call to action to uphold the integrity of news amid this major influx and to curb the growing challenge of misinformation.
The five core principles released focus on:
1. Authorisation of content by the originators is a must prior to its usage in Generative AI tools and models
2. High-quality and up-to-date news content must be recognised by third parties that are benefiting from it
3. There must be a focus on accuracy and attribution, making the original sources of news apparent to the public, promoting transparency
4. Harnessing the plurality of news perspectives, which will help AI-driven tools perform better, and
5. An invitation to tech companies for an open dialogue with news outlets, facilitating conversation to collaborate and develop standards of transparency, accuracy, and safety.
While this initiative provides a unified platform to deliberate on issues affecting the integrity of news, there are also technical ways in which AI-driven misinformation in news can be curbed:
1. Encourage the use of smaller generative AI models: Large Language Models (LLMs) are trained on a vast range of topics, but most businesses need only the subset of information relevant to them. A narrower pool of source information allows better content navigation and reduces the chance of mix-ups.
2. Fighting AI hallucination: This is a phenomenon in which generative AI (such as chatbots and computer vision tools) presents nonsensical or inaccurate outputs because the system perceives objects or patterns that are imperceptible or non-existent to human observers. It occurs because the system tries to balance language fluency with stitching together information from different sources. To deal with this, one can deploy retrieval-augmented generation (RAG), which connects the model to external data sources, such as academic journals or a company's organisational data, helping it produce more accurate, domain-specific content.
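As a toy illustration of the retrieval step in RAG, the sketch below ranks a small local document store against a query using TF-IDF and cosine similarity (scikit-learn), then assembles a grounded prompt. A real deployment would use a proper vector index and an actual generation model; the documents and query here are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy document store; in practice this would be newsroom archives, style guides, etc.
documents = [
    "The city council approved the new metro line extension on 12 March.",
    "The finance ministry revised its GDP growth estimate to 6.5 percent.",
    "The health department issued updated vaccination guidelines for schools.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query (TF-IDF + cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]

query = "What did the ministry say about GDP growth?"
context = retrieve(query)
# The retrieved passages are prepended to the prompt so the model answers
# from sourced text instead of relying only on what it memorised in training.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```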
Conclusion
This global call to action marks an important step toward fostering unified efforts to combat misinformation. The set of principles introduced is designed to be adaptable, providing a flexible framework that can evolve to address emerging challenges (through dialogue and discussion), including issues like copyright infringement. While AI offers powerful tools to support the news industry, it is essential to emphasise that human oversight remains crucial. These technological advancements are meant to enhance and augment the work of journalists, not replace it, ensuring that the core values of journalism, such as accuracy and integrity, are preserved in the age of AI.