The National Transport Repository: Legal Fault Lines in India’s Transport Data Policy
Muskan Sharma
Research Analyst- Policy & Advocacy, CyberPeace
PUBLISHED ON
Sep 5, 2025
Introduction
India’s new Policy for Data Sharing from the National Transport Repository (NTR), released by the Ministry of Road Transport and Highways (MoRTH) in August 2025, can be seen as both a constitutional turning point and a milestone in administrative efficiency. The state has established an unprecedentedly large unified infrastructure by combining the records of 390 million vehicles and 220 million driver’s licenses with the data streams from the e-challan, e-DAR, and FASTag systems. Its supporters hail its promise of private-sector innovation, data-driven research, and smooth governance. Beneath this facade of advancement, however, lies a troubling paradox: the very structures intended to improve citizen mobility may simultaneously entrench widespread surveillance. Without strict protections, the NTR risks violating the constitutional trifecta of legality, necessity, and proportionality laid down in Puttaswamy v. UOI, bringing to light important issues at the nexus of liberty, law, and data.
A second question becomes more pressing as India unifies one of its most comprehensive datasets on citizen mobility: while motorised citizens are now in the spotlight for accountability, what about the millions of other records that remain dispersed, unregulated, and inconsistently shared across health, education, telecom, and welfare?
The Legal Backdrop
MoRTH grounds its new policy in Sections 25A and 62B of the Motor Vehicles Act, 1988, while Section 136A, which requires states to electronically monitor road safety, supplies the rationale for consolidating data into a single repository. According to the policy, it complies with the Digital Personal Data Protection Act, 2023.
This is where the constitutional issue lies: the DPDP Act itself is rife with state exclusions, particularly Sections 7 and 17, which give government organisations access to personal information for “any function under any law” or for law enforcement purposes. No prior judicial supervision, warrants, or independent checks are required. With legislative approval, MoRTH is essentially creating a national vehicle database without meaningful constitutional safeguards.
Data, Domination and the New Privacy Paradigm
As an efficiency and governance reform, VAHAN, SARATHI, e-challan, e-DAR, and FASTag are being consolidated into a single National Transport Repository (NTR). However, centralising extensive mobility and identity-linked records at this scale is more than a technical advancement; it changes how the state and private life interact. The NTR must therefore be read through a more comprehensive privacy paradigm, one that acknowledges that data aggregation, while enhancing administrative capacity, can harden into a long-lasting tool of social control and surveillance unless technological and constitutional restraints are imposed at the same time.
Two recent doctrinal developments sharpen this concern. First, the Supreme Court’s foundational ruling that privacy is a fundamental right remains the constitutional lodestar: any state interference must satisfy legality, necessity, and proportionality (K.S. Puttaswamy & Anr. v. UOI). Second, the Court has refused to normalise ongoing, warrantless location monitoring, as in its ruling overturning a bail condition that required the accused to share a Google Maps pin, signalling that movement tracking demands closer scrutiny (Frank Vitus v. Narcotics Control Bureau & Ors.). Taken together, these authorities establish that unrestricted, ongoing access to mobility and toll-transaction records is a constitutional issue and cannot be treated as a mere administrative convenience.
Structural Fault Lines in the NTR Framework
Fundamentally, the NTR policy creates structural vulnerabilities by providing nearly unrestricted access, through APIs and even bulk transfers on physical media, to a broad range of parties, including insurance companies, law enforcement, and intelligence services. This design undermines constitutional protections in three ways. First, by exposing rich mobility trails such as FASTag logs and vehicle-linked identities, it makes it possible to infer patterns of private life that the Supreme Court has identified as among the most sensitive data categories. Second, it allows bulk datasets to circulate outside the ministry’s custodial boundary, creating risks of function creep, secondary use, and monetisation reminiscent of the bulk-sharing regime the government itself once abandoned. Third, it introduces coercive exclusion by tying private-sector access to Aadhaar-based OTP consent.
Artificial Intelligence (AI) has transcended its role as a futuristic tool; worldwide, it is already an integral part of decision-making in sectors including governance, medicine, education, security, and the economy. There are concerns about the nature of AI, its advantages and disadvantages, and the risks it may pose to the world. There are also doubts about the technology’s capacity to provide effective solutions, especially as threats such as misinformation, cybercrime, and deepfakes become more common.
Recently, global leaders have reiterated that the use of AI should remain human-centric, transparent, and responsibly governed. Offering unbridled access to innovators while also preventing harm is a dilemma that must be resolved.
AI as a Global Public Good
In earlier times, only the most influential states and large corporations controlled the supply and use of advanced technologies, guarding them as national strategic assets. In contrast, AI has emerged as a digital innovation that exists and evolves within a deeply interconnected environment, which makes access far more distributed than before. The use of AI in one country brings its pros and cons not only to that place but to the rest of the world as well. For instance, deepfake scams and biased algorithms affect not only people in the country where they are created but also people in every other country who do business or communicate with them.
The Growing Threat of AI Misuse
Deepfakes, Crime, and Digital Terrorism The misuse of artificial intelligence is quickly becoming one of the main security problems. Deepfake technology is being used to spread electoral misinformation, communicate lies, and create false narratives. Cybercriminals now use AI to make phishing attacks faster and more efficient, break into security systems, and devise elaborate social engineering tactics. In the hands of extremist groups, AI can sharpen propaganda, recruitment, and coordination.
Solution - Human Oversight and Safety-by-Design To overcome these dangers, a global AI system must be developed on the principle of safety-by-design: incorporating moral safeguards from the development phase rather than reacting after the damage is done. Human control is just as vital. AI systems that influence public confidence, security, or human rights should always remain under the control of human decision-makers. Automated decision-making without transparency or the possibility of auditing leads to black-box systems in which the assignment of responsibility is unclear.
Three Pillars of a Responsible AI Framework
Equitable Access to AI Technologies One of the major hindrances to global AI development is unequal access. High-end computing capability, data infrastructure, and AI research resources remain highly concentrated in a few regions. A sustainable framework is needed so that smaller countries, rural areas, and speakers of different languages can also share in the benefits of AI. Distributing access fairly will be a gradual process, but it will spur new ideas and improvements in local markets along the way. This would prevent a digital divide and ensure that the AI future is not determined exclusively by wealthy economies.
Population-Level Skilling and Talent Readiness AI will reshape workplaces worldwide, so societies must equip their people not only with existing job skills but also with future technology-based skills. Massive AI literacy programs, enhanced digital competencies, and cross-disciplinary education are essential. Preparing human resources for roles in AI governance, data ethics, cybersecurity, and emerging technologies will help prevent large-scale displacement while promoting genuinely inclusive growth.
Responsible and Human-Centric Deployment Adopting responsible AI ensures that technology is used for social good and not just for profit. Human-centred AI directs applications towards sectors such as healthcare, agriculture, education, disaster management, and public services, especially in the underserved regions of the world most in need of these innovations. This strategy ensures that technological progress improves human life rather than worsening the situation of the poor or stripping responsibility from humans.
Need for a Global AI Governance Framework
Why International Cooperation Matters AI governance cannot be fragmented. Divergent national regulations create loopholes that allow bad actors to operate across borders. Hence, global coordination and harmonisation of safety frameworks are of utmost importance. A unified AI governance framework should stipulate:
Clear prohibitions on AI misuse for terrorism, deepfakes, and cybercrime.
Mandatory transparency and algorithm audits.
Independent global oversight bodies.
Ethical codes of conduct aligned with humanitarian law.
A framework like this ensures that AI is shaped by common values rather than by the influence of competing interest groups.
Talent Mobility and Open Innovation If AI is to be universally accepted, then global mobility of talent must be made easier. The flow of innovation takes place when the interaction between researchers, engineers, and policymakers is not limited by borders.
AI, Equity, and Global Development The rapid concentration of technology in a few hands risks widening inequality among countries. Most developing countries face poor infrastructure and a lack of education and digital resources. Treating them only as technology markets, and not as partners in innovation, isolates them even further from the mainstream of development. A human-centred, technology-driven model of AI development must recognise that global progress requires the participation of the whole world. The COVID-19 pandemic, for example, has already demonstrated how technology can be a major factor in building healthcare and crisis resilience. When used fairly, AI has a significant role to play in realising the Sustainable Development Goals.
Conclusion
AI stands at a crucial junction. It can either enhance human progress or amplify digital risks. Ensuring that AI is a global good goes beyond sophisticated technology; it requires moral leadership, inclusive governance, and collaboration between countries. Preventing misuse through openness, human supervision, and responsible policies will be vital to keeping public trust. Properly guided, AI can make society more resilient, speed up development, and empower future generations. The future we choose is determined by how responsibly we act today.
As PM Modi stated, ‘AI should serve as a global good, and at the same time nations must stay vigilant against its misuse’. CyberPeace reinforces this vision by advocating responsible innovation and a secure digital future for all.
The rapid digitization of educational institutions in India has created both opportunities and challenges. While technology has improved access to education and administrative efficiency, it has also exposed institutions to significant cyber threats. This report, published by CyberPeace, examines the types, causes, impacts, and preventive measures related to cyber risks in Indian educational institutions. It highlights global best practices, national strategies, and actionable recommendations to mitigate these threats.
Image: Recent cyberattack on Eindhoven University
Significance of the Study:
The pandemic-induced shift to online learning, combined with limited cybersecurity budgets, has made educational institutions prime targets for cyberattacks. These threats compromise sensitive student, faculty, and institutional data, leading to operational disruptions, financial losses, and reputational damage. Globally, educational institutions face similar challenges, emphasizing the need for universal and localized responses.
Threats Faced by Educational Institutions:
Based on insights from CyberPeace’s report titled 'Exploring Cyber Threats and Digital Risks in Indian Educational Institutions', this blog provides a comprehensive overview of the cybersecurity threats and risks faced by educational institutions, along with essential details for addressing these challenges.
🎣 Phishing: Phishing is a social engineering tactic where cyber criminals impersonate trusted sources to steal sensitive information, such as login credentials and financial details. It often involves deceptive emails or messages that lead to counterfeit websites, pressuring victims to provide information quickly. Variants include spear phishing, smishing, and vishing.
💰 Ransomware: Ransomware is malware that locks users out of their systems or data until a ransom is paid. It spreads through phishing emails, malvertising, and exploiting vulnerabilities, causing downtime, data leaks, and theft. Ransom demands can range from hundreds to hundreds of thousands of dollars.
🌐 Distributed Denial of Service (DDoS): DDoS attacks overwhelm servers, denying users access to websites and disrupting daily operations, which can hinder students and teachers from accessing learning resources or submitting assignments. These attacks are relatively easy to execute, especially against poorly protected networks, and can be carried out by amateur cybercriminals, including students or staff, seeking to cause disruptions for various reasons.
🕵️ Cyber Espionage: Higher education institutions, particularly research-focused universities, are vulnerable to spyware, insider threats, and cyber espionage. Spyware is unauthorized software that collects sensitive information or damages devices. Insider threats arise from negligent or malicious individuals, such as staff or vendors, who misuse their access to steal intellectual property or cause data leaks.
🔒 Data Theft: Data theft is a major threat to educational institutions, which store valuable personal and research information. Cybercriminals may sell this data or use it for extortion, while stealing university research can provide unfair competitive advantages. These attacks can go undetected for long periods, as seen in the University of California, Berkeley breach, where hackers allegedly stole 160,000 medical records over several months.
🛠️ SQL Injection: SQL injection (SQLI) is an attack that uses malicious code to manipulate backend databases, granting unauthorized access to sensitive information like customer details. Successful SQLI attacks can result in data deletion, unauthorized viewing of user lists, or administrative access to the database.
🔍Eavesdropping attack: An eavesdropping breach, or sniffing, is a network attack where cybercriminals steal information from unsecured transmissions between devices. These attacks are hard to detect since they don't cause abnormal data activity. Attackers often use network monitors, like sniffers, to intercept data during transmission.
🤖 AI-Powered Attacks: AI enhances cyber attacks like identity theft, password cracking, and denial-of-service attacks, making them more powerful, efficient, and automated. It can be used to inflict harm, steal information, cause emotional distress, disrupt organizations, and even threaten national security by shutting down services or cutting power to entire regions.
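The mechanics of SQL injection described above, and the standard parameterised-query defence, can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite table; the table, rows, and payload are hypothetical, not drawn from any real incident:

```python
import sqlite3

# Illustrative student-records table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, grade TEXT)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("alice", "A"), ("bob", "B")])

def lookup_unsafe(name: str):
    # VULNERABLE: user input is concatenated directly into the SQL string,
    # so a crafted value can rewrite the query's logic.
    query = f"SELECT grade FROM students WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # SAFE: the driver binds the value as data via a placeholder,
    # so it is never interpreted as SQL.
    return conn.execute(
        "SELECT grade FROM students WHERE name = ?", (name,)).fetchall()

# Classic injection payload: closes the string and appends a tautology.
payload = "' OR '1'='1"
```

With the concatenated query, the payload turns the WHERE clause into a tautology and dumps every row; with the placeholder, the same input is just an unmatched literal name.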
Insights from Project eKawach
The CyberPeace Research Wing, in collaboration with SAKEC CyberPeace Center of Excellence (CCoE) and Autobot Infosec Private Limited, conducted a study simulating educational institutions' networks to gather intelligence on cyber threats. As part of the e-Kawach project, a nationwide initiative to strengthen cybersecurity, threat intelligence sensors were deployed to monitor internet traffic and analyze real-time cyber attacks from July 2023 to April 2024, revealing critical insights into the evolving cyber threat landscape.
Cyber Attack Trends
Between July 2023 and April 2024, the e-Kawach network recorded 217,886 cyberattacks from IP addresses worldwide, with a significant portion originating from countries including the United States, China, Germany, South Korea, Brazil, Netherlands, Russia, France, Vietnam, India, Singapore, and Hong Kong. However, attributing these attacks to specific nations or actors is complex, as threat actors often use techniques like exploiting resources from other countries, or employing VPNs and proxies to obscure their true locations, making it difficult to pinpoint the real origin of the attacks.
Brute Force Attack:
The analysis uncovered an extensive use of automated tools in brute force attacks, with 8,337 unique usernames and 54,784 unique passwords identified. Among these, the most frequently targeted username was “root,” which accounted for over 200,000 attempts. Other commonly targeted usernames included: "admin", "test", "user", "oracle", "ubuntu", "guest", "ftpuser", "pi", "support"
Similarly, the study identified several weak passwords commonly targeted by attackers. “123456” was attempted over 3,500 times, followed by “password” with over 2,500 attempts. Other frequently targeted passwords included: "1234", "12345", "12345678", "admin", "123", "root", "test", "raspberry", "admin123", "123456789"
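The frequency analysis reported above can be reproduced with a few lines of Python. The sample records below are hypothetical stand-ins for honeypot login attempts, not the actual e-Kawach data:

```python
from collections import Counter

# Hypothetical (username, password) pairs from failed login attempts,
# standing in for records a threat-intelligence sensor might capture.
attempts = [
    ("root", "123456"), ("admin", "password"), ("root", "root"),
    ("root", "123456"), ("test", "1234"), ("admin", "admin123"),
]

def top_targets(records, field=0, n=3):
    """Most frequently attempted values; field 0 = username, 1 = password."""
    return Counter(r[field] for r in records).most_common(n)
```

On a real dataset the same tally over hundreds of thousands of records surfaces exactly the pattern the study found: a handful of default usernames and weak passwords dominating the attempts.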
Insights from Threat Landscape Analysis
Research done by the USI - CyberPeace Centre of Excellence (CCoE) and Resecurity has uncovered several breached databases belonging to public, private, and government universities in India, highlighting significant cybersecurity threats in the education sector. The research aims to identify and mitigate cybersecurity risks without harming individuals or assigning blame, based on data available at the time, which may evolve with new information. Institutions were assigned risk ratings on a descending scale from A to F, with most falling under a D rating, indicating numerous security vulnerabilities. Institutions rated D or F are 5.4 times more likely to experience data breaches than those rated A or B. Immediate action is recommended to address the identified risks.
Risk Findings :
The risk findings for the institutions are summarized through a pie chart, highlighting factors such as data breaches, dark web activity, botnet activity, and phishing/domain squatting. Data breaches and botnet activity are significantly higher than dark web leakages and phishing/domain squatting. The findings show 393,518 instances of data breaches, 339,442 instances of botnet activity, 7,926 instances related to the dark web, and 6,711 instances of phishing and domain squatting.
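From the counts above, the share of each category in the total findings can be computed directly (the category keys are just labels for this sketch):

```python
# Instance counts for each risk category, as reported in the findings.
counts = {
    "data_breaches": 393_518,
    "botnet_activity": 339_442,
    "dark_web": 7_926,
    "phishing_domain_squatting": 6_711,
}

total = sum(counts.values())  # 747,597 instances overall

def share(category: str) -> float:
    """Percentage share of one category, rounded to one decimal place."""
    return round(100 * counts[category] / total, 1)
```

This makes the skew explicit: data breaches and botnet activity together account for about 98% of all recorded instances.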
Key Indicators:
Multiple instances of data breaches containing credentials (email/passwords) in plain text.
Botnet activity indicating network hosts compromised by malware.
Credentials from third-party government and non-governmental websites linked to official institutional emails.
Details of software applications and drivers installed on compromised hosts.
Sensitive cookie data exfiltrated from various browsers.
IP addresses of compromised systems.
Login credentials for different Android applications.
Below is a sample detail from one of the top educational institutions, providing insights into its elevated rates of data breaches, botnet activity, dark web activity, and phishing & domain squatting.
Risk Detection:
It indicates the number of data breaches, network hygiene issues, dark web activities, botnet activities, cloud security issues, phishing & domain squatting, media monitoring flags, and miscellaneous risks. In the example below, data breaches and botnet activity are the most frequent risks for the sampled domain.
Risk Changes:
Risk by Categories:
Risk is categorized as high, medium, or low; data breaches and botnet activity fall in the high-risk category.
Challenges Faced by Educational Institutions
Educational institutions face persistent cyberattack risks. The challenges that lead to cyberattack incidents in these institutions are as follows:
🔒 Lack of a Security Framework: A key challenge in cybersecurity for educational institutions is the lack of a dedicated framework for higher education. Existing frameworks like ISO 27001, NIST, COBIT, and ITIL are designed for commercial organizations and are often difficult and costly to implement. Consequently, many educational institutions in India do not have a clearly defined cybersecurity framework.
🔑 Diverse User Accounts: Educational institutions manage numerous accounts for staff, students, alumni, and third-party contractors, with high user turnover. The continuous influx of new users makes maintaining account security a challenge, requiring effective systems and comprehensive security training for all users.
📚 Limited Awareness: Cybersecurity awareness among students, parents, teachers, and staff in educational institutions is limited due to the recent and rapid integration of technology. The surge in tech use, accelerated by the pandemic, has outpaced stakeholders' ability to address cybersecurity issues, leaving them unprepared to manage or train others on these challenges.
📱 Increased Use of Personal/Shared Devices: The growing reliance on unvetted personal and shared devices for academic and administrative activities amplifies security risks.
💬 Lack of Incident Reporting: Educational institutions often neglect reporting cyber incidents, increasing vulnerability to future attacks. It is essential to report all cases, from minor to severe, to strengthen cybersecurity and institutional resilience.
Impact of Cybersecurity Attacks on Educational Institutions
Cybersecurity attacks on educational institutions lead to learning disruptions, financial losses, and data breaches. They also harm the institution's reputation and pose security risks to students. The following are the impacts of cybersecurity attacks on educational institutions:
📚Impact on the Learning Process: A report by the US Government Accountability Office (GAO) found that cyberattacks on school districts resulted in learning losses ranging from three days to three weeks, with recovery times of between two and nine months.
💸Financial Loss: US schools reported financial losses ranging from $50,000 to $1 million due to expenses like hardware replacement and cybersecurity upgrades, with recovery taking between 2 and 9 months.
🔒Data Security Breaches: Cyberattacks exposed sensitive data, including grades, social security numbers, and bullying reports. Accidental breaches were often caused by staff, accounting for 21 out of 25 cases, while intentional breaches by students, comprising 27 out of 52 cases, frequently involved tampering with grades.
🏫Impact on Institutional Reputation: Cyberattacks damaged the reputation of educational institutions, eroding trust among students, staff, and families. Negative media coverage and scrutiny impacted staff retention, student admissions, and overall credibility.
🛡️ Impact on Student Safety: Cyberattacks compromised student safety and privacy. For example, breaches like live-streaming school CCTV footage caused severe distress, negatively impacting students' sense of security and mental well-being.
CyberPeace Advisory:
CyberPeace emphasizes the importance of vigilance and proactive measures to address cybersecurity risks:
Develop effective incident response plans: Establish a clear and structured plan to quickly identify, respond to, and recover from cyber threats. Ensure that staff are well-trained and know their roles during an attack to minimize disruption and prevent further damage.
Implement access controls with role-based permissions: Restrict access to sensitive information based on individual roles within the institution. This ensures that only authorized personnel can access certain data, reducing the risk of unauthorized access or data breaches.
Regularly update software and conduct cybersecurity training: Keep all software and systems up-to-date with the latest security patches to close vulnerabilities. Provide ongoing cybersecurity awareness training for students and staff to equip them with the knowledge to prevent attacks, such as phishing.
Ensure regular and secure backups of critical data: Perform regular backups of essential data and store them securely in case of cyber incidents like ransomware. This ensures that, if data is compromised, it can be restored quickly, minimizing downtime.
Adopt multi-factor authentication (MFA): Enforce MFA for accessing sensitive systems or information to strengthen security. MFA adds an extra layer of protection by requiring users to verify their identity through more than one method, such as a password and a one-time code.
Deploy anti-malware tools: Use advanced anti-malware software to detect, block, and remove malicious programs. This helps protect institutional systems from viruses, ransomware, and other forms of malware that can compromise data security.
Monitor networks using intrusion detection systems (IDS): Implement IDS to monitor network traffic and detect suspicious activity. By identifying threats in real time, institutions can respond quickly to prevent breaches and minimize potential damage.
Conduct penetration testing: Regularly conduct penetration testing to simulate cyberattacks and assess the security of institutional networks. This proactive approach helps identify vulnerabilities before they can be exploited by actual attackers.
Collaborate with cybersecurity firms: Partner with cybersecurity experts to benefit from specialized knowledge and advanced security solutions. Collaboration provides access to the latest technologies, threat intelligence, and best practices to enhance the institution's overall cybersecurity posture.
Share best practices across institutions: Create forums for collaboration among educational institutions to exchange knowledge and strategies for cybersecurity. Sharing successful practices helps build a collective defense against common threats and improves security across the education sector.
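As one concrete illustration of the MFA recommendation above, the one-time codes generated by most authenticator apps follow the HOTP (RFC 4226) and TOTP (RFC 6238) standards and fit in a few lines. This is a sketch for understanding, not a production implementation; the key used in the notes below is the RFC test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(key, int(time.time()) // step, digits)
```

Because the code changes every 30 seconds and depends on a shared secret, a phished password alone is not enough to log in, which is the extra layer of protection MFA provides.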
Conclusion:
The increasing cyber threats to Indian educational institutions demand immediate attention and action. With vulnerabilities like data breaches, botnet activities, and outdated infrastructure, institutions must prioritize effective cybersecurity measures. By adopting proactive strategies such as regular software updates, multi-factor authentication, and incident response plans, educational institutions can mitigate risks and safeguard sensitive data. Collaborative efforts, awareness, and investment in cybersecurity will be essential to creating a secure digital environment for academia.
Key points: Data Collection, Protecting Children, and Awareness
Introduction
The evolution of technology has drastically changed over time, impacting mankind and its lifestyle. For even the smallest tasks, humans are now reliant on the computers they have manufactured. Overreliance on AI carries risks: children today are less inclined to work and write thoughtfully on their own, and more drawn to television, video games, and mobile games. Schoolchildren use AI simply to complete their homework. Is that a good sign for the country’s future? Studies suggest that tools like ChatGPT threaten a child’s capacity to be creative and produce original content requiring a human writer’s insight; such tools can erode students’ artistic voices rather than nurture their unique writing styles.
Do these browsers and search engines use your search history against you? How do even non-users end up losing private information to them?
Are governments taking any safety measures to protect their people’s rights?
Some of us might wonder how these two worlds merge, and whether they are a boon or a curse.
Here is the news that recently flooded the internet worldwide:
“Italian agency imposes strict measures on OpenAI’s ChatGPT”
Italy has become the first Western European country to take serious measures against OpenAI’s ChatGPT. The Italian data protection agency, Garante, has set mandates on ChatGPT, raising concerns about privacy violations and the inability to verify the age of users. Garante has also claimed that the AI chatbot violates the EU’s General Data Protection Regulation (GDPR). In a press release, Garante demanded that OpenAI take the necessary actions.
To begin with, Garante has demanded that OpenAI increase ChatGPT’s transparency and publish a comprehensive statement about its data processing practices. OpenAI must specify whether it relies on user consent or another legitimate legal basis for processing users’ data to train its AI model, and it must maintain the privacy of that data.
In addition, OpenAI should take measures to prevent minors from accessing the technology at such an early stage of life, which could hinder their development. ChatGPT should add an age verification system to keep minors away from explicit content. Moreover, Garante suggests that OpenAI spread awareness among its users that their data is being processed to train its AI model. Garante has set a deadline of April 30 for OpenAI to complete these tasks; until then, the service is to remain banned in the country.
Child Safety While Using ChatGPT
The Italian agency demands an age limit for use: an age verification method to exclude users under the age of 13, and parental authorisation for users between the ages of 13 and 18. This is a matter of security, as children might be exposed to explicit or illegitimate content inappropriate for their age. The AI chatbot cannot reliably determine which content is suitable for an underage audience. Tools like chatbots also put mature material within easy reach of young students, endangering them regardless of their future. ChatGPT can hinder young minds’ potential to create original, creative content and sap the human motivation to write. Moreover, when students skip the time needed to think and analyse, becoming lethargic through tools like ChatGPT, the practice they need fades away.
Collection of Users’ Data
According to the company’s privacy policy, OpenAI’s ChatGPT collects an assortment of user data. When a session starts, it asks the user to log in or sign up, for instance through a Gmail account. It collects the user’s IP address, browser type, and whatever the user provides as input, i.e. data on the user’s interaction with the website. It also collects data such as session time and cookies through third parties, which may be sold on to unspecified third parties.
This snapshot shows that OpenAI added a few items after Garante’s draft.
Conclusion
The AI chatbot ChatGPT is an advanced tool that makes work a little easier, but anyone using such tools must stay aware of the information being asked of them. These bots are trained to understand people and to lend a helping hand, not to deceive. Even so, some people unknowingly provide sensitive information, and young minds can be exposed to explicit content, which is why such bots need age limitations. Innovation will keep advancing, but it remains each individual’s responsibility to decide what may access their connected devices. The Italian agency, by contrast, has taken preventive measures to keep its citizens’ data safe while recognising the adverse effects such chatbots can have on young minds.