The Ministry of Electronics and Information Technology (MeitY) issued an advisory on 1 March 2024 urging intermediaries and platforms to ensure that their use of Artificial Intelligence models, generative AI, LLMs, software, or algorithms does not enable bias or discrimination or threaten the integrity of the electoral process. The advisory requires that AI models deemed unreliable or under-tested in India obtain explicit government permission before deployment. Intermediaries are already bound by the due diligence obligations set out in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended up to 06.04.2023, and the advisory urges them to comply with those rules and the obligations thereunder.
Key Highlights of the Advisory
Intermediaries and platforms must ensure that their Artificial Intelligence models, LLMs, generative AI, software, or algorithms do not allow users to host, display, upload, modify, publish, transmit, store, update, or share unlawful content, as per Rule 3(1)(b) of the IT Rules.
The government directs intermediaries and platforms to prevent bias or discrimination in their use of Artificial Intelligence models, LLMs, generative AI, software, or algorithms, and to ensure these do not threaten the integrity of the electoral process.
The government requires explicit permission before under-tested or unreliable AI models, LLMs, or algorithms are made available on the Indian internet. Such models must be deployed with proper labelling of their potential fallibility or unreliability, and users may additionally be informed through a consent popup mechanism.
The advisory specifies that users must be clearly informed, through terms of service and user agreements, about the consequences of dealing with unlawful information on platforms, including disabling of access, removal of non-compliant information, suspension or termination of the user's account, and punishment under applicable law.
The advisory also sets out measures to combat deepfakes and misinformation. It requires that synthetically created content be identifiable across formats, advising platforms to employ labels, unique identifiers, or metadata to ensure transparency. It further mandates the disclosure of software details and the ability to trace the first originator of such synthetically created content.
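The labelling and metadata measure for synthetic content could, for instance, take the form of a simple provenance record attached to each generated item. The schema below is purely illustrative; the advisory does not prescribe any particular format, and the field and generator names are hypothetical:

```python
import hashlib
import json

def label_synthetic_content(payload: bytes, generator: str) -> dict:
    """Attach a provenance record to AI-generated content.

    The field names here are illustrative, not a mandated schema:
    the advisory asks for labels, unique identifiers, or metadata,
    but leaves the format to the platform.
    """
    return {
        "synthetic": True,                                   # explicit AI-generated flag
        "generator": generator,                              # software that produced it
        "content_id": hashlib.sha256(payload).hexdigest(),   # stable unique identifier
    }

record = label_synthetic_content(b"example generated image bytes", "hypothetical-model-v1")
print(json.dumps(record, indent=2))
```

A content hash doubles as a unique identifier that survives re-uploads of the identical file, which is one way a platform could support the traceability requirement.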
Rajeev Chandrasekhar, Union Minister of State for IT, specified that
“Advisory is aimed at the Significant platforms, and permission seeking from Meity is only for large platforms and will not apply to startups. Advisory is aimed at untested AI platforms from deploying on the Indian Internet. Process of seeking permission, labelling & consent based disclosure to user about untested platforms is insurance policy to platforms who can otherwise be sued by consumers. Safety & Trust of India's Internet is a shared and common goal for Govt, users and Platforms.”
Conclusion
MeitY's advisory sets the stage for a more regulated AI landscape. The Indian government requires explicit permission for the deployment of under-tested or unreliable Artificial Intelligence models on the Indian Internet. Alongside intermediaries, the advisory also applies to digital platforms that incorporate AI elements. The advisory is aimed at significant platforms and will not apply to startups. This move safeguards users and fosters innovation by promoting responsible AI practices, paving the way for a more secure and inclusive digital environment.
In the 21st century, wars are no longer confined to land, sea, and air. They are increasingly playing out across the digital domain, where dominance over networks, data, and communications determines who holds the upper hand. Among these, 5G networks are becoming a defining factor on modern battlefields. Ultra-low latency, massive bandwidth, and the ability to connect vast numbers of devices simultaneously are transforming the scale and tempo of military operations, intelligence, and logistics. This unprecedented connectivity, however, brings a host of cybersecurity vulnerabilities that governments and militaries must address.
As India faces a challenging security environment, the emergence of 5G presents both an opportunity and a dilemma. On one hand, it can enhance command, control, surveillance, and battlefield coordination. On the other, it exposes the military and security establishments to risks of espionage and supply chain vulnerabilities. Striking a balance between innovation and security will therefore be essential to turning 5G into a strength rather than a liability.
How can 5G networks be a military asset?
Compared to its predecessors, 5G is not just about faster downloads. It is a complete overhaul of network architecture, designed to support modern services and technologies. In military applications, 5G networks can deliver a series of game-changing capabilities, such as:
Enhanced Command and Control through real-time data sharing between troops, UAVs, radar systems, and command cells, ensuring faster and better coordinated decision-making.
Tactical Situational Awareness, with 5G-enabled devices giving soldiers instant updates on terrain, friendly troop positions, and enemy movements.
Advanced Intelligence, Surveillance and Reconnaissance (ISR), with high-resolution sensors, radars, and UAVs operating at their full potential by transmitting vast data streams with minimal latency.
More broadly, 5G can become a key component of military command communications, allowing machines, sensors, and human operators to function as a single, integrated force.
Understanding 5G Networks as the Double-Edged Sword of Connectivity-
The potential of 5G is undeniable, but its vulnerabilities cannot be ignored, because 5G networks are software-driven and reliant on dense networks of small cells. For the military, this means adversaries could exploit these weaknesses to disrupt communications, jam signals, and intercept sensitive data. Key risks include:
Cybersecurity threats, as software-based architectures make 5G networks prone to malware and data breaches.
Supply chain risks arising from reliance on foreign hardware and software components, raising fears of embedded backdoors or compromised systems.
Signal jamming and interference, since millimetre-wave 5G signals are vulnerable to disruption in contested environments.
Insider threats and physical sabotage, where compromised personnel or unsecured installations could undermine network integrity.
Securing the Backbone: Cyber Defence Imperatives
To safeguard 5G networks as the backbone of future warfare, defence establishments need to adopt a layered, proactive cybersecurity strategy. Several measures can be considered, such as:
Ensuring robust encryption and authentication to protect sensitive data, through advanced mechanisms like Subscription Concealed Identifiers (SUCI) and zero-trust frameworks that eliminate implicit trust.
Investing in domestic R&D for 5G components to reduce dependency on foreign suppliers. India's adoption of the 5Gi standard is a step in this direction, but hardening it to military grade remains vital.
Collaborating across sectors, with defence forces working alongside civilian agencies and private telecom providers to create unified standards and best practices.
Thus, by embedding cybersecurity into every layer of the 5G architecture, India can reduce risks and maintain operational resilience.
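The zero-trust principle mentioned above means authenticating every message, regardless of where on the network it originates. A minimal Python sketch of that idea, using an HMAC tag as a stand-in for the full protocol machinery (the shared key and message format are hypothetical, not drawn from any 5G specification):

```python
import hashlib
import hmac

# In practice, per-device credentials would come from a key-management
# system; a single hard-coded key is for illustration only.
SHARED_KEY = b"demo-key"

def sign(message: bytes) -> str:
    """Compute an authentication tag for a message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Zero trust: every message is checked, even from 'inside' the network."""
    return hmac.compare_digest(sign(message), tag)

msg = b"sensor-42:position-update"
tag = sign(msg)
assert verify(msg, tag)           # authentic message accepted
assert not verify(b"tampered", tag)  # altered message rejected
```

The point of the sketch is the verification step on every message: nothing is trusted merely because it arrived over an internal link.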
The Geopolitical Dimensions of 5G Networks as a Tool of Warfare-
The introduction of 5G networks is a major technological advancement in communications, but it also carries a geopolitical dimension. The strategic competition between the US and China to dominate 5G infrastructure has global security implications. For India, aligning closely with either bloc would risk its strategic autonomy, while pursuing non-alignment gives it leverage to develop capabilities of its own.
In this context, partnerships within the Quad with the US, Japan, and Australia can open avenues for cooperation on shared standards, cost sharing, and interoperability in 5G-enabled military systems. India can learn from countries like the US and Israel, which are developing secure defence communication and network infrastructure for 5G, and revisit existing frameworks such as COMCASA and BECA with the US as platforms to explore joint protocols for 5G networks.
Conclusion: Opportunities and the way forward-
The 5G network is becoming part of the central nervous system of future battlefields. It offers immense opportunities for India to modernise its defence capabilities and enhance situational awareness by integrating AI-driven systems. The way forward lies in a balanced strategy: developing indigenous capabilities, forging trusted partnerships, embedding cybersecurity into every layer of the network architecture, and preparing a skilled workforce to analyse and counter evolving threats. With such foresighted preparedness, India can turn the double-edged sword of 5G into a decisive advantage, ensuring that it not only adapts to the digital battlefield but leads it.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse, and avoid manipulative and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the aim is to help the mind develop resistance in the present to influence it may encounter in the future. Just as medical vaccines help the body build resistance to future infections by administering weakened doses of the harmful agent, inoculation theory seeks to teach people to distinguish fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, and case studies that draw parallels between truths and distortions. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It may be difficult to scale up Prebunking efforts and ensure they reach a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required for individuals to retain the skills and information they gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so Prebunking interventions must also be flexible and agile, responding promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means that not everyone can be protected at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation only after it has spread extensively, and this reactionary method may be less successful than proactive strategies such as Prebunking in limiting total harm. Misinformation producers operate swiftly and unexpectedly, making it difficult for fact-checkers to keep up with the rapid dissemination of erroneous or misleading information. Repeated exposure to fact-checks may be needed to prevent false beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment; this constraint may allow some misinformation to go unchecked, with unexpected effects. Misinformation on social media can spread and go viral faster than Debunking pieces or articles, leading to a situation in which misinformation spreads like a virus while the antidote struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to assess the accuracy of information through Debunking interventions.
Gamified Inoculation: Tech/social media companies can run gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Gamified interventions can immunise users against subsequent exposure to manipulation and build their competence to detect misinformation.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platforms should also prioritise the visibility of Debunking content to counter the spread of erroneous information and deliver proper corrections; this helps both Prebunking and Debunking methods reach a larger or more targeted audience.
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to improve their critical thinking abilities. They can also incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at scale and ultimately fighting misinformation through joint initiatives.
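As a rough illustration of the algorithmic-prioritisation recommendation above, a feed-ranking function could add a fixed boost to items flagged as Prebunking or Debunking content. The field names, flags, and boost value below are hypothetical, not any platform's actual ranking signals:

```python
def rank_feed(items: list[dict]) -> list[dict]:
    """Sort feed items by score, boosting flagged educational content.

    `base_score` and the `prebunking`/`debunking` flags are hypothetical
    fields for illustration; real platforms combine many more signals.
    """
    BOOST = 0.25  # fixed visibility boost for Prebunking/Debunking items

    def score(item: dict) -> float:
        s = item["base_score"]
        if item.get("prebunking") or item.get("debunking"):
            s += BOOST
        return s

    return sorted(items, key=score, reverse=True)

feed = [
    {"id": "a", "base_score": 0.9},
    {"id": "b", "base_score": 0.8, "prebunking": True},
    {"id": "c", "base_score": 0.7, "debunking": True},
]
print([x["id"] for x in rank_feed(feed)])  # "b" now outranks "a" thanks to the boost
```

The design choice here is a simple additive boost; a real ranking system would weigh the boost against engagement and relevance signals rather than applying a flat constant.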
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
Former Egyptian MP Ahmed Eltantawy was targeted with Cytrox's Predator spyware in a campaign believed to be state-sponsored cyber espionage. The targeting took place between May and September 2023, after Eltantawy announced his intention to run for president in the 2024 elections. The spyware was delivered via links sent over SMS and WhatsApp, network injection, and Eltantawy's visits to certain websites. The Citizen Lab examined the attacks with the help of Google's Threat Analysis Group (TAG) and was able to acquire an iPhone zero-day exploit chain designed to install spyware on iOS versions up to 16.6.1.
Investigation: The Ahmed Eltantawy Incident
Eltantawy's device was forensically examined by The Citizen Lab, which uncovered several attempts to target him with Cytrox's Predator spyware. During the investigation, The Citizen Lab and TAG discovered an iOS exploit chain used in the attacks against Eltantawy. They initiated a responsible disclosure process with Apple, which resulted in the release of updates patching the vulnerabilities the exploit chain relied on. Mobile zero-day exploit chains can be extremely expensive, with black-market values exceeding millions of dollars. The Citizen Lab also identified several domain names and IP addresses associated with Cytrox's Predator spyware. Additionally, the investigation found that a network injection method was used to deliver the malware: when Eltantawy visited certain websites that were not served over HTTPS, he was discreetly redirected to a malicious website.
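The network-injection vector worked because some of the sites Eltantawy visited were served over plain HTTP, which an on-path attacker can silently rewrite; HTTPS resists such tampering. A minimal Python sketch (illustrative only, not from the Citizen Lab report) of flagging injection-prone URLs:

```python
from urllib.parse import urlparse

def is_injection_prone(url: str) -> bool:
    """Return True if a URL lacks HTTPS and could therefore be
    transparently rewritten by an on-path attacker (network injection)."""
    return urlparse(url).scheme != "https"

# Plain HTTP: an attacker between the user and the server can inject a redirect.
assert is_injection_prone("http://example.com/news")
# HTTPS: tampering breaks the TLS session instead of going unnoticed.
assert not is_injection_prone("https://example.com/news")
```

This is also why mitigations like HSTS and browser warnings for non-HTTPS pages matter: they shrink the window in which this class of injection is possible.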
What is Cyber Espionage?
Cyber espionage, also referred to as cyber spying, is a sort of cyberattack in which an unauthorised user tries to obtain confidential or sensitive information or intellectual property (IP) for financial gain, business benefit, or political objectives.
Apple's Response: A Look at iOS Vulnerability Patching
Users are advised to keep their devices up to date and enable Lockdown Mode on their iPhones. Following The Citizen Lab's disclosure, Apple released emergency updates for iOS, iPadOS, and macOS Ventura to address the flaws, and users of these devices should update promptly.
The updates address the following vulnerabilities:
CVE-2023-41991,
CVE-2023-41992,
CVE-2023-41993.
Apple customers are advised to install these emergency security updates immediately to protect themselves against potential targeted spyware attacks. Prompt updating protects devices against attacks that exploit these particular zero-day vulnerabilities. It is therefore advisable to maintain up-to-date software and enable the security features available on Apple devices.
Conclusion:
Ahmed Eltantawy, a former Egyptian MP and presidential candidate, was targeted with Cytrox's Predator spyware after announcing his bid for the presidency, in what is believed to be state-sponsored cyber espionage. The incident raises questions about the loss of privacy and points to the mala fide intentions of political opponents. The investigation's findings reveal that Eltantawy was the victim of a sophisticated cyber espionage campaign leveraging Cytrox's Predator spyware. Apple has urged all users to update their devices. The case raises alarming concerns about the lack of controls on the export of spyware technologies and underscores the importance of security updates and Lockdown Mode on Apple devices.