India plans to draft its first AI regulatory framework, which will be discussed and debated in June-July this year, as stated by Union Minister of Skill Development and Entrepreneurship Rajeev Chandrasekhar. The government aims to fully harness AI for economic growth, with a particular focus on healthcare, drug discovery, agriculture, and farmer productivity.
Government Approach to Regulating AI
Chandrasekhar stated that the government's approach to AI regulation involves establishing principles and a comprehensive list of harms and criminalities. Rather than regulating AI at specific stages of its development, the government prefers clear platform standards that address bias and misuse during model training. The minister also highlighted the importance of legal compliance and the risks faced by entrepreneurs who disregard regulations in the digital economy, warning of "severe consequences" for non-compliance.
Addressing the opening session of the two-day Nasscom leadership summit in Mumbai, the Union minister added that the intention is to harness AI for economic growth while addressing potential risks and harms. Mr. Chandrasekhar stated that the government is committed to developing AI-skilled talent, and he highlighted the importance of a global governance framework that addresses safety and trust in AI.
Union Minister Chandrasekhar also said that with 900 million Indians online today and 1.3 billion people expected to be connected to the global internet soon, India has both an opportunity and a responsibility to collaborate on regulations and establish legal safeguards that protect consumers and citizens. He added that the existing framework is being retrofitted to address the complexity and impact of AI on safety infrastructure. The goal is to ensure legal guardrails for AI, a kinetic enabler of the digital economy; safety and trust; and accountability for those operating AI platforms.
Prioritizing Safety and Trust in AI Development
Union minister Chandrasekhar announced that the framework will be discussed at the upcoming Global Partnership on Artificial Intelligence (GPAI) event, a multi-stakeholder initiative with 29 member countries that aims to bridge the gap between theory and practice on AI by supporting research on AI-related priorities. He emphasised the importance of safety and trust in generative AI development, saying that every platform must be legally accountable for any harm it causes or enables and should not enable criminality, and he advocated for safe and trustworthy AI.
Conclusion
India is drafting its first AI regulation framework, as highlighted by Union Minister Rajeev Chandrasekhar. This framework aims to harness the potential of AI while ensuring safety, trust, and accountability. It will focus on principles, comprehensive standards, and legal compliance to navigate the complexities of AI's impact on sectors like healthcare, agriculture, and the digital economy. India recognises the need for robust legal safeguards that protect citizens and support innovation and economic growth while fostering a culture of trustworthy AI development.
In an alarming event, one of India's premier healthcare institutes, AIIMS Delhi, has fallen victim to a malicious cyberattack for the second time in a year. The incident serves as a stark reminder of the escalating threat landscape faced by healthcare organisations in the digital age. The attack not only exploited vulnerabilities present in the healthcare sector but also raised concerns about the security of patient data and the uninterrupted delivery of critical healthcare services. In this blog post, we will explore what happened in the incident and what safety measures can be taken.
Backdrop
The cyber-security systems deployed at AIIMS, New Delhi, recently detected a malware attack that was both sophisticated and targeted. This second attack acts as a wake-up call for healthcare organisations nationwide. As the healthcare sector increasingly depends on digital technology to improve patient care and operational efficiency, cybersecurity must be prioritised to protect sensitive data. To minimise cyber-attack dangers, healthcare organisations must invest in robust defences such as multi-factor authentication, network security, frequent system upgrades, and employee training.
The attempt was successfully prevented, and the deployed cyber-security systems neutralised the threat. The e-Hospital services remain fully secure and are functioning normally.
Impact on AIIMS
Healthcare services worldwide have been on hackers' radar, and the healthcare sector has been hit badly. The effects of the attack on AIIMS Delhi have been both immediate and far-reaching. The organisation, which is recognised for delivering excellent healthcare services and performing breakthrough medical research, faced significant interruptions in its everyday operations. Patient care and treatment processes were considerably impeded, resulting in delays, cancellations, and the inability to access essential medical documents. The stolen data raises serious concerns about patient privacy and confidentiality, casting doubt on the institution's capacity to protect sensitive information. Furthermore, the financial ramifications of the attack, such as the cost of recovery, deploying more robust cybersecurity measures, and potential legal penalties and forensic analyses, add to the scale of the effect. The event has also generated public concern about the institution's ability to preserve personal information, undermining confidence and damaging AIIMS Delhi's image.
Impact on Patients: The attack not only affects the institution but also has serious implications for patients. Here are some key highlights:
Healthcare Service Disruption: The hack has affected the seamless delivery of healthcare services at AIIMS Delhi. Appointments, surgeries, and other medical treatments may be delayed, cancelled, or rescheduled. This disturbance can result in longer wait times, longer treatment periods, and potential problems from delayed or interrupted therapy.
Patient Privacy and Confidentiality: The breach of sensitive patient data jeopardises privacy and confidentiality. Medical records, test findings, and treatment plans may have been compromised. This may diminish patients' faith in the institution's capacity to safeguard their personal information, discouraging them from seeking care or sharing sensitive information in the future.
Mental Distress: As a result of the cyberattack, patients may endure mental anguish and worry. Fear of possible exploitation of personal health information, confusion about the scope of the breach, and concerns about the security of their healthcare data can all negatively affect their mental health. This stress might aggravate pre-existing medical issues and impede overall recovery.
Trust at Stake: A data breach may harm patients' faith and confidence in AIIMS Delhi and the healthcare system. Patients rely on healthcare facilities to keep their information secure and confidential while providing safe, high-quality care. A hack can cast doubt on the institution's ability to safeguard patient data, affecting patients' overall faith in the organisation and potentially leading them to seek care elsewhere.
Cybersecurity Measures
To avoid future hacks and protect patient data, AIIMS Delhi must prioritise enhancing its cybersecurity procedures. The institution can strengthen its resistance to changing threats by establishing strong security practices. The following steps can be considered.
Using Multi-factor Authentication: By requiring users to submit multiple forms of identification before accessing systems and data, multi-factor authentication adds an extra layer of protection. AIIMS Delhi can considerably lower the danger of unauthorised access by applying this measure, even in the case of leaked passwords or credentials. Biometrics and one-time passwords, for example, should be integrated into the institution's authentication systems (see the sketch after this list).
Improving Network Security and Firewalls: AIIMS Delhi should improve network security by implementing strong firewalls, intrusion detection and prevention systems, and network segmentation. These techniques serve to construct barriers between internal systems and external threats, reducing attackers’ lateral movement within the network. Regular network traffic monitoring and analysis can assist in recognising and mitigating any security breaches.
Risk Assessment: Regular penetration testing and vulnerability assessments are required to uncover possible flaws and vulnerabilities in AIIMS Delhi’s systems and infrastructure. Security professionals can detect vulnerabilities and offer remedial solutions by carrying out controlled simulated assaults. This proactive strategy assists in identifying and addressing any security flaws before attackers exploit them.
Educating and Training Healthcare Professionals: Education and training play a crucial role in enhancing cybersecurity practices in healthcare facilities. Healthcare workers, including physicians, nurses, administrators, and support staff, must be well-informed about the importance of cybersecurity and trained in risk-mitigation best practices. This will empower healthcare professionals to actively contribute to protecting patient data and maintaining the trust and confidence of patients.
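To make the multi-factor authentication measure concrete, here is a minimal Python sketch of a time-based one-time password (TOTP) check, assuming the open-source pyotp library; the names and flow are illustrative only, not drawn from any AIIMS system.

import pyotp

# Generated once per user at enrolment and stored securely server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user adds this URI to an authenticator app (e.g. via a QR code).
print(totp.provisioning_uri(name="staff@example.org", issuer_name="Hospital-IT"))

def verify_login(password_ok: bool, submitted_code: str) -> bool:
    # Grant access only when BOTH factors succeed: the password check
    # and a valid, time-limited one-time code from the user's device.
    return password_ok and totp.verify(submitted_code)

Because the one-time code rotates every 30 seconds by default, a leaked password alone is no longer enough to gain access.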
Learnings from Incidents
AIIMS Delhi should embrace cyber-attacks as learning opportunities to strengthen its security posture. Following each event, a detailed post-incident study should be performed to identify areas for improvement, update security policies and procedures, and improve employee training programs. This iterative strategy contributes to the institution’s overall resilience and preparation for future cyber-attacks. AIIMS Delhi can effectively respond to cyber incidents, minimise the impact on operations, and protect patient data by establishing an effective incident response and recovery plan, implementing data backup and recovery mechanisms, conducting forensic analysis, and promoting open communication. Proactive measures, constant review, and regular revisions to incident response plans are critical for staying ahead of developing cyber threats and ensuring the institution’s resilience in the face of potential future assaults.
Conclusion
To summarise, developing robust healthcare systems in the digital era is a key challenge that healthcare organisations must prioritise. Healthcare organisations can secure patient data, assure the continuation of key services, and maintain patients’ trust and confidence by adopting comprehensive cybersecurity measures, building incident response plans, training healthcare personnel, and cultivating a security culture. Adopting a proactive and holistic strategy for cybersecurity is critical to developing a healthcare system capable of withstanding and successfully responding to digital-age problems.
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies can create manipulated audio and video content, propagate political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing false information.
AI's Role in Misinformation Creation
AI's content-generation capabilities have grown exponentially in recent years. Legitimate uses of AI often take a backseat, giving way to the exploitation of content that already exists on the internet. A prime example of misinformation flooding the internet is AI-powered bots inundating social media platforms with fake news at a scale and speed that makes it impossible for humans to track, let alone verify, what is true or false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly damaging consequences here. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation relate to elections, public health, and communal issues. These issues share one common factor: they evoke strong emotions and can therefore go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability, and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language creation, which is closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users increasingly confused about their relationship with media platforms and content, and it highlights the need to rethink traditional journalistic principles.
We have seen a number of examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats in the U.S. not to vote. The consequences of such content, and the impact it could have on life as we know it, are almost too vast to comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, ensuring the safe and ethical use of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive volume of data produced, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the need to protect freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies must therefore cater to this multilingual population.
Potential Solutions
To effectively combat AI-generated misinformation in India, a multi-faceted, multi-dimensional approach is essential. Some potential solutions are as follows:
Developing a regulatory framework specifically designed to address AI-generated content. It should include stricter penalties for the originators and spreaders of fake content, in proportion to its consequences. The framework should establish clear and concise guidelines for social media platforms, ensuring proactive measures are taken to detect and remove AI-generated misinformation.
Investing in AI-driven tools for the customised detection and flagging of misinformation in real time. This can help identify deepfakes, manipulated images, and other forms of AI-generated content; a rough illustration follows this list.
The primary aim should be to encourage collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
Rolling out digital literacy programs that empower individuals by training them to critically evaluate online content. Educational programs in schools and communities can teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
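As a rough illustration of the AI-driven flagging tools suggested above, the Python sketch below scores posts with a text classifier via the Hugging Face transformers library; the model identifier and label name are hypothetical placeholders, and any real deployment would route flagged items to human fact-checkers rather than removing them automatically.

from transformers import pipeline

# Hypothetical fine-tuned classifier; the model id is a placeholder, not a real checkpoint.
classifier = pipeline("text-classification", model="example-org/misinfo-detector")

def triage(post: str, threshold: float = 0.9) -> str:
    # Score the post and flag high-confidence hits for human review.
    result = classifier(post)[0]
    if result["label"] == "MISINFORMATION" and result["score"] >= threshold:
        return "flag_for_review"
    return "allow"

print(triage("Miracle cure claims to eliminate all diseases overnight!"))

The design choice matters: flagging for human review preserves legitimate discourse better than automated removal, which speaks to the over-regulation concern raised earlier.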
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks scale with the rapid rate at which the nation is developing technologically. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that focus not only on regulation and technological innovation but also on public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defence frameworks, and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
The constantly changing technological world has brought about an age of unprecedented problems, and the misuse of deepfake technology has become a cause for concern that the Indian judiciary has also taken note of. The Supreme Court has expressed concerns about the consequences of this quickly developing technology, citing issues ranging from security hazards to privacy violations to the spread of disinformation. Misused deepfakes are particularly dangerous because they are almost identical to the real thing and may fool even the sharpest eye.
SC Judge Expresses Concerns: A Complex Issue
During a recent speech, Supreme Court Justice Hima Kohli emphasised the various issues that deepfakes present. She conveyed grave concerns about the possibility of invasions of privacy, the dissemination of false information, and the emergence of security threats. Especially concerning is the ability of deepfakes to be created so convincingly that they seem to come from reliable sources, as this increases the potential harm of misleading information.
Gender-Based Harassment Amplified
Justice Kohli noted that in this internet era there is a concerning chance that gender-based harassment will become more severe. She pointed out that internet platforms may develop into epicentres for the rapid spread of false information by anonymous offenders who act freely and with worrying impunity. The invisibility of virtual harassment may make it difficult to lessen the negative effects of toxic online postings. In response, she advocated developing a comprehensive policy framework that modifies current legal frameworks, such as laws prohibiting online sexual harassment, to adequately handle the issues brought on by technological breakthroughs.
Judicial Stance on Regulating Deepfake Content
In a separate matter, the Delhi High Court voiced concerns about the misuse of deepfakes when asked to intervene judicially to limit the use of artificial intelligence (AI)-generated deepfake content. A division bench highlighted the intricacy of the matter and proposed that the government, with its wider outlook, could be better qualified to handle the situation and arrive at a fair resolution. This position reflects the court's acknowledgement of the technology's global and borderless character and highlights the necessity for an all-encompassing strategy.
PIL on Deepfake
In light of these worries, an advocate from Delhi has taken it upon himself to address the unchecked use of AI, with a particular emphasis on deepfake material. His Public Interest Litigation (PIL), filed in the Delhi High Court, emphasises the necessity of strict limits on AI, or an outright prohibition in the event that regulatory measures are not taken. The need to discern between real and fake information is at the centre of this case. The advocate suggests using distinguishable indicators, such as watermarks, to identify AI-generated work, reiterating the demand for openness and responsibility in the digital sphere. A rough illustration of visible watermarking follows.
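To show what such a distinguishable indicator could look like in practice, here is a minimal Python sketch that stamps a visible "AI-GENERATED" label onto an image using the Pillow library; the file paths and label text are illustrative assumptions.

from PIL import Image, ImageDraw

def stamp_ai_label(path_in: str, path_out: str) -> None:
    # Overlay a visible provenance label in the bottom-left corner.
    img = Image.open(path_in).convert("RGBA")
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-GENERATED", fill=(255, 255, 255, 200))
    img.save(path_out)

stamp_ai_label("generated.png", "generated_labelled.png")

A visible label like this is trivial to crop out, which is why the broader policy conversation also covers robust, machine-readable watermarks embedded in the content itself.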
The Way Ahead:
Finding a Balance:
The authorities must strike a careful balance between protecting privacy, promoting innovation, and safeguarding individual rights as they negotiate the complex world of deepfakes. The Delhi High Court's cautious stance and Justice Kohli's concerns highlight the necessity for a nuanced response that takes into account the complexity of deepfake technology.
Because information can be manipulated with increasing sophistication in this digital era, the courts play a critical role in preserving the integrity of the truth and shielding people from the potential dangers of misleading technology. These legal actions will surely influence how the Indian judiciary and legislature respond to deepfakes and establish guidelines for the regulation of AI in the nation. The legal environment needs to evolve alongside technology so that innovation and accountability can coexist.
Collaborative Frameworks:
Misuse of deepfake technology poses an international problem that cuts across national boundaries. International collaborative frameworks could make it easier to share technical innovations, legal insights, and best practices. Starting a worldwide conversation on deepfake regulation may help ensure a coordinated response to this digital threat.
Legislative Flexibility:
Given the speed at which technology is advancing, the legislative system must remain adaptable. New legislation expressly addressing emerging technology will be required, along with regular evaluation and updating of current laws. This ensures that the legal system can respond to the evolving challenges brought on by the misuse of deepfakes.
AI Development Ethics:
Promoting ethical behaviour in AI development is crucial. Tech businesses should abide by ethical standards that place a premium on user privacy, responsibility, and openness. As a preventive strategy, ethical AI practices can lessen the possibility that AI technology will be misused for malevolent purposes.
Government-Industry Cooperation:
It is essential that the public and private sectors work closely together. Governments and IT corporations should collaborate to develop and implement legislation. Establishing regulatory organisations with representation from both sectors may ensure a thorough and equitable approach to the regulation of deepfakes.
Conclusion
A comprehensive strategy integrating technical, legal, and social interventions is necessary to navigate the path ahead. Governments, IT corporations, the courts, and the general public must all actively participate in the collective effort to combat the misuse of deepfakes, which goes beyond legal measures alone. By encouraging a shared commitment to tackling the issues raised by deepfakes, we can create a future where the digital ecosystem is both safe and innovative. The government, following its recently issued advisory on misinformation and deepfakes, is on its way to introducing dedicated legislation to tackle the issue.