Bharat National Cyber Security Exercise, 2024 - Harmonising Efforts in the Indian Cybersecurity Space
Introduction:
The National Security Council Secretariat, in strategic partnership with Rashtriya Raksha University, Gujarat, conducted the 12-day Bharat National Cyber Security Exercise 2024 from 18th November to 29th November. The exercise included landmark events such as a CISO (Chief Information Security Officer) Conclave and a Cyber Security Start-up Exhibition, which were inaugurated on 27 November 2024. Other key features included cyber defence training, live-fire simulations, and strategic decision-making simulations. The aim of the exercise was to equip senior government officials and personnel in critical sector organisations with the skills to deal with cybersecurity issues. The event also featured speeches, panel discussions, and initiatives such as the release of the National Cyber Reference Framework (NCRF), which provides a structured approach to cyber governance, and the launch of the National Cyber Range (NCR) 1.0, a cutting-edge facility for cybersecurity research and training.
In his speech, the Deputy National Security Advisor, Shri T.V. Ravichandran (IPS), reiterated the importance of technology in addressing cybersecurity challenges and of shaping India's cyber strategy proactively. The CISOs of both government and private entities were encouraged to take up multidimensional efforts encompassing not only technological upkeep but also the soft skills needed for awareness.
CyberPeace Outlook
The Bharat National Cybersecurity Exercise (Bharat NCX) 2024 underscores India’s commitment to a comprehensive and inclusive approach to strengthening its cybersecurity ecosystem. By fostering collaboration between startups, government bodies, and private organizations, the initiative facilitates dialogue among CISOs and promotes a unified strategy toward cyber resilience. Platforms like Bharat NCX encourage exploration in the Indian entrepreneurial space, enabling startups to innovate and contribute to critical domains like cybersecurity. Developments such as IIT Indore’s intelligent receivers (useful for both telecommunications and military operations) and the Bangalore Metro Rail Corporation Limited’s plans to establish a dedicated Security Operations Centre (SOC) to counter cyber threats are prime examples of technological strides fostering national cyber resilience.
Cybersecurity cannot be understood in isolation: it is an integral aspect of national security, impacting the broader digital infrastructure supporting Digital India initiatives. The exercise emphasises skills training, creating a workforce adept in cyber hygiene, incident response, and resilience-building techniques. Such efforts bolster proficiency across sectors, aligning with the government’s Atmanirbhar Bharat vision. By integrating cybersecurity into workplace technologies and fostering a culture of awareness, Bharat NCX 2024 is a platform that encourages innovation and is a testament to the government’s resolve to fortify India’s digital landscape against evolving threats.
References
- https://ciso.economictimes.indiatimes.com/news/cybercrime-fraud/bharat-cisos-conclave-cybersecurity-startup-exhibition-inaugurated-under-bharat-ncx-2024/115755679
- https://pib.gov.in/PressReleasePage.aspx?PRID=2078093
- https://timesofindia.indiatimes.com/city/indore/iit-indore-unveils-groundbreaking-intelligent-receivers-for-enhanced-6g-and-military-communication-security/articleshow/115265902.cms
- https://www.thehindu.com/news/cities/bangalore/defence-system-to-be-set-up-to-protect-metro-rail-from-cyber-threats/article68841318.ece
- https://rru.ac.in/wp-content/uploads/2021/04/Brochure12-min.pdf

Introduction
Cyber-attacks are a threat of the digital world, not exclusive to any single country, that can significantly disrupt global movement, commerce, and international relations. This was experienced first-hand when a cyber-attack at Heathrow, the busiest airport in Europe, threw its electronic check-in and baggage systems into chaos. The disruption was not confined to Heathrow: airports across Europe, including Brussels, Berlin, and Dublin, experienced delays and had to conduct manual check-ins for some flights, underscoring just how interconnected aviation is today. Though Heathrow assured passengers that the "vast majority of flights" would operate, hundreds were delayed or postponed for hours as passengers stood in queues, and flight schedules across Europe were also negatively affected.
The Anatomy of the Attack
The attack specifically targeted the Muse software by Collins Aerospace, built to allow various airlines to share check-in desks and boarding gates. The disruption, initially perceived as a technical issue, soon turned into a logistical nightmare, with airlines relying on Muse forced into labour-intensive manual workarounds: hand-tagging luggage, verifying boarding passes over the phone, and boarding passengers manually. While British Airways managed to revert to a backup system, most other carriers at Heathrow and partner airports elsewhere in Europe had to resort to improvised manual solutions.
The brunt of the trauma was borne by passengers. Stories emerged of travellers stranded on the tarmac, elderly passengers left without walking assistance, and families missing important connections. It was a reminder that in aviation, with schedules interlocked tightly across borders, even a localised system failure can snowball into a continental crisis.
Cybersecurity Meets Aviation Infrastructure
In the last two decades, aviation has become one of the most digitally dependent industries in the world. From booking systems and baggage handling to navigation and air traffic control, digital systems are the invisible scaffold supporting flight operations. Though this digitalisation has increased the scale of operations and enhanced efficiency, it has also created many avenues for cyber threats. Attackers increasingly realise that targeting aviation is not just about money but about leverage. Interfering with the check-in system of a major hub like Heathrow causes more than financial disruption; it causes panic and makes headlines, which is exactly what makes it attractive to criminal gangs and state-sponsored threat actors.
The Heathrow incident recalls the worldwide IT outage of July 2024, when a botched CrowdStrike update grounded flights around the globe. Both expose the brittleness of digital dependencies in aviation, where a single point of failure can trigger uncontrollable ripple effects spanning multiple countries. Unlike conventional cyber incidents contained within corporate networks, cyber-attacks on aviation spill into the public sphere in real time, disturbing millions of lives.
Response and Coordination
Heathrow Airport first added extra employees to assist with manual check-in and told passengers to check flight statuses before traveling. The UK's National Cyber Security Centre (NCSC) collaborated with Collins Aerospace, the Department for Transport, and law enforcement agencies to investigate the extent and source of the breach. Meanwhile, the European Commission published a statement that they are "closely following the development" of the cyber incident while assuring passengers that no evidence of a "widespread or serious" breach has been observed.
According to passengers, the reality was quite different. Massive queues, bewildering announcements, and uncertain departure times cultivated an atmosphere of chaos. This dissonance between official reassurances and what passengers actually experienced needs to be resolved: during such incidents, the flow of communication is as much a strategy for retaining public trust as the technical restoration itself.
Attribution and the Shadow of Ransomware
As with many cyber-attacks, questions of attribution arose promptly. Rumours of hackers working for the Kremlin began circulating almost immediately, though cybersecurity experts justifiably advise against hasty conclusions. Extortion-driven ransomware gangs remain the most plausible culprits, but state actors cannot be ruled out, especially considering Russian military activity near European airspace. Meanwhile, Collins Aerospace has declined to comment on the attack, its precise nature, or where it originated, underscoring an inherent difficulty of cyber attribution.
What is clear is the leverage and money such attacks offer criminals. In previous ransomware attacks against critical infrastructure, cybercriminal gangs have extorted millions of dollars from their victims. In aviation, the stakes grow exponentially, not only in money but in national security, diplomatic relations, and human safety.
Broader Implications for Aviation Cybersecurity
This incident raises several core resilience issues within aviation systems. Traditionally, airports and airlines placed a premium on physical security, but today the equally important concept of digital resilience has emerged. Systems such as Muse, which bind multiple airlines into shared infrastructure, offer efficiency but at the same time concentrate risk. A cyber disruption in one place can cascade across dozens of carriers and multiple airports, amplifying its scale.
The case also makes redundancy and contingency planning an urgent concern. While British Airways was able to fall back on a backup system, most other airlines could not claim that advantage. It is about time that digital redundancies, whether parallel systems, isolated backups, or AI-driven incident response frameworks, are built into aviation as standard practice.
On the policy plane, the incident draws attention to the necessity of international collaboration. Aviation is inherently transnational, and cyber incidents in this domain cannot be handled by national agencies alone. Eurocontrol, the European Commission, and cross-border cybersecurity task forces must spearhead efforts to ensure aviation-wide resilience.
Human Stories Amid a Digital Crisis
Beyond the technical jargon and policy responses, the human stories coming out of Heathrow had perhaps the greatest impact. Passengers spoke of hours spent queuing, of journeys to funerals disrupted, and of hunger and exhaustion as they waited for their flights. For many, the cyber-attack was no mere headline; it was a lived reality of disruption.
These stories reflect the fact that cybersecurity is not an abstraction; it touches people's lives. In critical sectors such as aviation, one hour of disruption means missed connections for passengers, lost revenue for airlines, and immense emotional stress. Crisis management must therefore entail not only technical recovery but also passenger care, communication, and support on the ground.
Conclusion
The cybersecurity crisis at Heathrow and other European airports underscores the threat that cyber disruption poses to modern aviation. Increased connectivity across airport processes means that any cyber disruption, no matter how small, can affect schedules regionally or across continents, even threatening lives. The episode confirms a few things: a resilient solution must provide redundancy, not just efficiency; international networking and collaboration are paramount; and communicating with the travelling public is just as important as (if not more important than) the technical recovery process.
As governments, airlines, and technology providers analyse the disruption, the question is no longer whether aviation can withstand cyber threats, but to what extent it will be prepared to defend itself against them. The Heathrow crisis is a reminder that the stakes of cybersecurity are not just data breaches or the outright theft of money, but the very systems that keep global mobility in motion. The aviation industry is now being tested to turn this disruption into an opportunity to fortify its digital defences and prepare for the next, inevitable attack.
References
- https://www.bbc.com/news/articles/c3drpgv33pxo
- https://www.theguardian.com/business/2025/sep/21/delays-continue-at-heathrow-brussels-and-berlin-airports-after-alleged-cyber-attack
- https://www.reuters.com/business/aerospace-defense/eu-agency-says-third-party-ransomware-behind-airport-disruptions-2025-09-22/

In 2023, PIB reported that up to 22% of young women in India are affected by Polycystic Ovarian Syndrome (PCOS). However, access to reliable information regarding the condition and its treatment remains a challenge. A study by the PGIMER Chandigarh conducted in 2021 revealed that approximately 37% of affected women rely on the internet as their primary source of information for PCOS. However, it can be difficult to distinguish credible medical advice from misleading or inaccurate information online since the internet and social media are rife with misinformation. The uptake of misinformation can significantly delay the diagnosis and treatment of medical conditions, jeopardizing health outcomes for all.
The PCOS Misinformation Ecosystem Online
PCOS is one of the most common disorders diagnosed in the female endocrine system, characterized by the swelling of ovaries and the formation of small cysts on their outer edges. This may lead to irregular menstruation, weight gain, hirsutism, possible infertility, poor mental health, and other symptoms. However, there is limited research on its causes, leaving most medical practitioners in India ill-equipped to manage the issue effectively and pushing women to seek alternate remedies from various sources.
This creates space for the proliferation of rumours, unverified cures, and superstitions on social media. For example, content on YouTube, Facebook, and Instagram may promote “miracle cures” like detox teas or restrictive diets, or viral myths claiming PCOS can be “cured” through extreme weight loss or herbal remedies. Such misinformation not only creates false hope for women but also delays treatment and may worsen symptoms.
How Tech Platforms Amplify Misinformation
- Engagement vs. Accuracy: Social media algorithms are designed to reward viral content, even if it is misleading or incendiary, since it generates advertising revenue. Further, non-medical health influencers often dominate health conversations online, offering advice that promises to cure the condition.
- Lack of Verification: Although platforms like YouTube try to provide verified health-related videos through content shelves, and label unverified content, the sheer volume of content online means that a significant chunk of content escapes the net of content moderation.
- Cultural Context: In India, discussions around women’s health, especially reproductive health, are stigmatized, making social media the go-to source for private, albeit unreliable, information.
Way Forward
a. Regulating Health Content on Tech Platforms: Social media is a significant source of health information to millions who may otherwise lack access to affordable healthcare. Rather than rolling back content moderation practices as seen recently, platforms must dedicate more resources to identify and debunk misinformation, particularly health misinformation.
b. Public Awareness Campaigns: Governments and NGOs should run nationwide digital literacy campaigns that educate people on women’s health issues in vernacular languages, and utilise online platforms for culturally sensitive messaging that reaches rural and semi-urban populations. This is vital for countering the stigma and lack of awareness that enable misinformation to proliferate.
c. Empowering Healthcare Communication: Several studies suggest a widespread dissatisfaction among women in many parts of the world regarding the information and care they receive for PCOS. This is what drives them to social media for answers. Training PCOS specialists and healthcare workers to provide accurate details and counter misinformation during patient consultations can improve the communication gaps between healthcare professionals and patients.
d. Strengthening the Research for PCOS: The allocation of funding for research in PCOS is vital, especially in the face of its growing prevalence amongst Indian women. Academic and healthcare institutions must collaborate to produce culturally relevant, evidence-based interventions for PCOS. Information regarding this must be made available online since the internet is most often a primary source of information. An improvement in the research will inform improved communication, which will help reduce the trust deficit between women and healthcare professionals when it comes to women’s health concerns.
Conclusion
In India, the PCOS misinformation ecosystem is shaped by a mix of local and global factors such as health communication failures, cultural stigma, and tech platform design prioritizing engagement over accuracy. With millions of women turning to the internet for guidance regarding their conditions, they are increasingly vulnerable to unverified claims and pseudoscientific remedies which can lead to delayed diagnoses, ineffective treatments, and worsened health outcomes. The rising number of PCOS cases in the country warrants the bridging of health research and communications gaps so that women can be empowered with accurate, actionable information to make the best decisions regarding their health and well-being.
Sources
- https://pib.gov.in/PressReleasePage.aspx?PRID=1893279#:~:text=It%20is%20the%20most%20prevailing%20female%20endocrine,neuroendocrine%20system%2C%20sedentary%20lifestyle%2C%20diet%2C%20and%20obesity.
- https://www.thinkglobalhealth.org/article/india-unprepared-pcos-crisis?utm_source=chatgpt.com
- https://www.bbc.com/news/articles/ckgz2p0999yo
- https://pmc.ncbi.nlm.nih.gov/articles/PMC9092874/

Introduction: Why These Amendments Have Been Suggested
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic information and deepfakes. These reforms stem from the pressing necessity to govern risks within the digital ecosystem, rather than being routine reform.
The Emergence of the Digital Menace
Generative AI tools have made it easy to produce highly realistic images, videos, audio, and text in recent years. Such synthetic media have been abused to portray people in situations they were never in or making statements they never made. The generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, resulting in a market volume of US$400.00bn by 2031. Tight regulatory controls are therefore necessary to curb the high prevalence of harm in the Indian digital space.
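The scale of that projection can be sanity-checked with simple compound-growth arithmetic. In the sketch below, the 2025 base figure is back-computed from the quoted CAGR and 2031 volume; it is an illustration, not a figure taken from the source:

```python
# Sanity-check the quoted generative-AI market projection:
# a 37.57% CAGR over the six compounding years from 2025 to 2031,
# reaching roughly US$400bn, implies a 2025 base of about US$59bn.

cagr = 0.3757
years = 2031 - 2025          # 6 years of compounding
target_2031 = 400.0          # US$ billions (quoted figure)

growth_factor = (1 + cagr) ** years
implied_2025_base = target_2031 / growth_factor

print(f"growth factor over {years} years: {growth_factor:.2f}x")
print(f"implied 2025 market size: US${implied_2025_base:.1f}bn")
```

In other words, the quoted CAGR corresponds to the market multiplying roughly 6.8-fold over the period.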
The Legal and Institutional Gap
The IT Rules, 2021, did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation, and violation of privacy, intermediaries were under no explicit obligations regarding synthetic media. This left a loophole in enforcement, particularly since AI-generated content could slip past older moderation systems. The amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting such requirements to India's constitutional and digital-ecosystem needs.
II. Explanation of the Amendments
The amendments of 2025 present five alternative changes in the current IT Rules framework, which address various areas of synthetic media regulation.
A. Definitional Clarification: Introducing “Synthetically Generated Information”
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of “synthetically generated information”: information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that it can reasonably be perceived to be genuine. This definition is intentionally broad; it is not limited to deepfakes in the strict sense but covers any synthetic media that has undergone algorithmic manipulation so as to have a semblance of authenticity.
Expansion of Legal Scope:
Rule 2(1A) also makes it clear that any reference to “information” in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretive safeguard: it prevents intermediaries from claiming that synthetic versions of illegal material fall outside the regulation because they are algorithmic creations rather than depictions of what actually occurred.
B. Safe Harbour Protection and Content Removal Requirements
Amendment to Rule 3(1)(b): Safe Harbour Clarification
The amendments add a proviso to Rule 3(1)(b) clarifying that the removal or disabling of access to synthetically generated information (or any information falling within the specified categories), carried out by an intermediary in good faith as part of reasonable efforts or on receipt of a complaint, shall not be considered a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is especially relevant because it insures intermediaries against liability in situations where they act on synthetic content ahead of a court order or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable the Creation of Synthetic Content
The amendments establish a new due diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must be:
- Visibly displayed or made audible in a prominent manner on or within that synthetically generated information.
- It should cover at least 10% of the surface of the visual display or, in the case of audio content, during the initial 10% of its duration.
- It can be used to immediately identify that such information is synthetically generated information which has been created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary must not enable the modification, suppression, or removal of such label, permanent unique metadata, or identifier, by whatever name called.
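As an illustration only (the rule text does not prescribe any implementation, and the function names here are hypothetical), the 10% prominence thresholds could be checked mechanically along these lines, assuming a rectangular visual label and an audio disclosure placed at the start of the clip:

```python
# Hypothetical sketch: checking the amendments' 10% prominence
# thresholds for a synthetic-content label.

def visual_label_ok(label_w: int, label_h: int,
                    frame_w: int, frame_h: int) -> bool:
    """Label must cover at least 10% of the visual display surface."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_label_ok(disclosure_end_s: float, total_s: float) -> bool:
    """Audio disclosure must fall within the initial 10% of duration."""
    return disclosure_end_s <= 0.10 * total_s

# A 1920x1080 frame has 2,073,600 px^2, so the label needs >= 207,360 px^2.
print(visual_label_ok(640, 360, 1920, 1080))   # 230,400 px^2 -> True
print(audio_label_ok(8.0, 60.0))               # ends after the first 6s -> False
```

The point of the sketch is simply that both thresholds are objective and auditable, which is what makes the labelling mandate enforceable in practice.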
D. Important Social Media Intermediaries- Pre-Publication Checking Responsibilities
The amendments introduce a three-step verification mechanism under Rule 4(1A) for Significant Social Media Intermediaries (SSMIs): before any content is displayed, uploaded, or published on their computer resource, three steps must be followed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically generated. This places the initial burden on users.
Step 2 - Technical Verification: To verify the accuracy of the user's declaration, SSMIs must deploy reasonable technical measures, such as automated tools or other suitable mechanisms. This duty is contextual, based on the nature, format, and source of the content; intermediaries cannot escape it simply because not every type of content can be verified against the same standard.
Step 3 - Prominent Labelling: Where the synthetic origin is confirmed by user declaration or technical verification, SSMIs must display a notice or label prominently so that users can see it before publication.
The amendments also provide a sharper system of accountability: intermediaries will be deemed to have failed due diligence where it is established that they knowingly permitted, promoted, or otherwise failed to act on synthetically generated information in contravention of these requirements. This introduces a knowledge element, so intermediaries are not penalised for accidental errors, nor can they plead ignorance where non-compliance was knowingly tolerated.
An explanation clause makes it clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations and ensure that no synthetic content is published without an adequate declaration or label. This eliminates confusion over the intermediaries' role in relation to user declarations.
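The three-step flow for SSMIs described above can be sketched as a simple gating pipeline. Everything in this sketch is a hypothetical illustration, not anything prescribed by the Rules; in particular, the stub detector stands in for whatever classifiers a real platform would deploy:

```python
# Hypothetical sketch of the Rule 4(1A) pre-publication flow:
# (1) user declaration, (2) technical verification, (3) prominent label.

from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool  # Step 1: the user's own declaration

def looks_synthetic(content: bytes) -> bool:
    """Toy stand-in for Step 2's automated verification; a real SSMI
    would use detectors suited to the content's nature and format."""
    return b"synthetic" in content

def pre_publication_check(upload: Upload) -> dict:
    # Steps 1 and 2: treat content as synthetic if either the user
    # declared it or the technical check flags it.
    is_synthetic = upload.user_declared_synthetic or looks_synthetic(upload.content)
    # Step 3: attach a prominent label before publication if synthetic.
    return {"publish": True,
            "label": "AI-generated content" if is_synthetic else None}

print(pre_publication_check(Upload(b"holiday photo", False)))
print(pre_publication_check(Upload(b"face-swap video", True)))
```

The design point the Rules seem to make is that the declaration and the verification are complementary gates: either one triggering is enough to require the label, so a false user declaration alone cannot defeat the scheme.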
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability.
The amendments commendably balance two extreme regulatory postures, neither prohibiting synthetic media outright nor allowing it to run out of control. They recognise the legitimate uses of synthetic media in entertainment, education, research, and artistic expression by adopting a transparency and traceability mandate that preserves innovation while ensuring accountability.
- Explicit Intermediary Liability with a Knowledge-Based Standard
Rule 4(1A) introduces a highly significant deeming provision: where an intermediary knowingly permits, or refrains from acting on, synthetic content in violation of the rules, it is considered to have failed its due diligence obligations. This closes the loophole of wilfully negligent supervision, under which intermediaries could argue they were simply unaware. The scienter standard promotes material investment in detection tools and moderation mechanisms, while offering security to platforms that maintain sound systems even when those tools occasionally fail to capture violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of “synthetically generated information” and the interpretive guidance provided in Rule 2(1A) are an admirable attempt to resolve the ambiguity of the previous regulatory framework. Instead of forcing reliance on conflicting case law or regulatory direction, the amendments set specific definitional limits. The deliberately broad formulation (artificially or algorithmically created, generated, modified or altered) ensures the framework cannot be evaded through semantic games over what counts as truly synthetic content versus a slight algorithmic alteration.
- Protection from Liability that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment clearly safeguards intermediaries who voluntarily remove synthetic content without a court order or government notification. This is an important incentive scheme that prompts platforms to implement sound self-regulation. In the absence of such protection, platforms might rationally settle into a passive compliance posture, deleting content only under pressure from an external authority; with it, they are better positioned to keep users safe from dangerous synthetic media.
IV. Conclusion
The 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 offer a structured, transparent, and accountable way of curbing the rising predicament of synthetic media and deepfakes. They address regulatory and interpretive gaps that have long existed in determining what counts as synthetically generated information, the scope of intermediary liability, and mandatory labelling and metadata requirements. Safe harbour protection will encourage proactive moderation, and a scienter-based liability rule will not permit intermediaries to escape liability where they are aware of non-compliance but tolerate it. The pre-publication verification requirement for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, make the process more transparent through proper definitions, promote responsible conduct by platforms, and set new standards for synthetic media regulation in India. Together, they strengthen the authenticity, user protection, and transparency of India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide