#FactCheck - Viral Claim About Nitish Kumar’s Resignation Over UGC Protests Is Misleading
Executive Summary
A news video is being widely circulated on social media with the claim that Bihar Chief Minister Nitish Kumar has resigned from his post in protest against the ongoing UGC-related controversy. Several users are sharing the clip while alleging that Kumar stepped down after opposing the issue. However, CyberPeace research has found the claim to be false. The research revealed that the video being shared is from 2022 and has no connection whatsoever with the UGC or any recent protests related to it. An old video has been misleadingly linked to a current issue to spread misinformation on social media.
Claim:
An Instagram user shared a video on January 26 claiming that Bihar Chief Minister Nitish Kumar had resigned. The post further alleged that the news was first aired on Republic channel and that Kumar had submitted his resignation to then-Governor Phagu Chauhan. The link to the post, its archived version, and screenshots can be seen below. (Links as provided)

Fact Check:
To verify the claim, CyberPeace first conducted a keyword-based search on Google. No credible or established media organisation reported any such resignation, clearly indicating that the viral claim lacked authenticity.

Further, the voiceover in the viral video states that Nitish Kumar handed over his resignation to Governor Phagu Chauhan. However, Phagu Chauhan ceased to be the Governor of Bihar in February 2023. The current Governor of Bihar is Arif Mohammad Khan, making the claim in the video factually incorrect and misleading.

In the next step, keyframes from the viral video were extracted and reverse-searched using Google Lens. This led to the official YouTube channel of Republic Bharat, where the full version of the same video was found. The video was uploaded on August 9, 2022. This clearly establishes that the clip circulating on social media is not recent and is being shared out of context.
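For illustration, a minimal Python sketch of the keyframe-extraction step is shown below. It assumes OpenCV (cv2) is installed and that the clip has been saved locally as viral_video.mp4 (a hypothetical filename); the saved frames can then be uploaded to Google Lens for a reverse image search.

```python
# A minimal sketch of keyframe extraction for reverse image search.
# Assumes OpenCV is installed and "viral_video.mp4" is a local copy
# of the clip (hypothetical filename).
import cv2

video = cv2.VideoCapture("viral_video.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS metadata is missing
frame_index, saved = 0, 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Keep one frame roughly every two seconds of footage.
    if frame_index % int(fps * 2) == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Extracted {saved} keyframes for reverse image search")
```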

Conclusion
CyberPeace’s research confirms that the viral video claiming Nitish Kumar resigned over the UGC issue is false. The video dates back to 2022 and has no link to the current UGC controversy. An old political video has been deliberately circulated with a misleading narrative to create confusion on social media.

Introduction
The rapid growth of high-capability AI systems has raised growing concerns about safety, accountability, and governance worldwide. California has responded by passing the Transparency in Frontier Artificial Intelligence Act (TFAIA), the first state statute focused on "frontier" (highly capable) AI models. The statute is unique in that it does not only target harms caused by AI models through consumer protection, as the majority of state statutes do; rather, it addresses the catastrophic and systemic risks to society associated with large-scale AI systems. As California is a global technology leader, the TFAIA is positioned to have a significant impact on both domestic regulation and the evolution of international legal frameworks for AI, with the potential to influence corporate compliance practices and the establishment of global norms for the use of AI.
Understanding the Transparency in Frontier Artificial Intelligence Act
The Transparency in Frontier Artificial Intelligence Act provides a specific regulatory process for companies that create sophisticated AI systems with societal, economic, or national security implications. Covered developers are required to publish an extensive safety and transparency policy that details how they navigate risk throughout the artificial intelligence lifecycle. The act requires developers to notify the government of any significant incidents or failures with their deployed frontier models on a timely basis.
A significant aspect of the TFAIA is its concept of "process transparency": the act does not explicitly control how AI developers create their models, but holds them accountable for their internal safety governance by mandating documented safety frameworks that outline risk assessment, mitigation, and monitoring processes. The act allows developers to protect trade secrets, patents, and national defense concerns by providing limited opportunities for exemption or redaction of their documents, maintaining a balance between openness and the safeguarding of sensitive information.
Extraterritorial Impact on Global AI Developers
While the Act is a state law, its implementation has far-reaching effects. Many of the largest AI companies have facilities, research labs, or customers in California, so compliance with the TFAIA becomes a commercial necessity for them. Rather than building duplicate compliance regimes for different jurisdictions, many of these companies are likely to adopt a single, unified compliance model across all the regions in which they operate.
The same pattern has occurred in other regulatory areas, such as data protection, where one region's rules effectively became the global compliance benchmark. The TFAIA could similarly serve as a global standard for transparency in frontier AI and shape how companies build their governance structures worldwide, even where no explicit regulations exist in the regions in which they operate.
Influence on International AI Regulatory Models
The TFAIA offers a unique perspective on global discussions about regulating AI. In contrast to legislation that defines different levels of risk depending on the type of AI application, the TFAIA specifically targets the most capable, high-impact models. Other nations may see value in this model of tiered regulation based on capability and apply it to their own AI rules, with the strictest obligations placed on the systems with the most critical potential for harm.
The TFAIA may serve as a guide for international public policy makers by showing how they can reference existing standards and best practices in developing regulations, thus improving interoperability and potentially lessening regulatory barriers to cross-border AI innovations.
Corporate Governance, Compliance Costs, and Competition
From an industry perspective, the Act revolutionizes the way companies govern themselves. Developers are now required to conduct thorough risk assessments and red-teaming exercises, maintain incident response protocols, and establish board oversight for AI safety. These requirements increase accountability, but they also create a significant cost burden for all involved.
The burden of compliance will be easier for large tech companies to bear than for smaller firms and start-ups, and large tech companies may thereby solidify their dominance over the development of frontier AI. Smaller and newer developers may be blocked from entering the market unless some form of proportional or scaled compliance mechanism emerges. These developments raise issues of innovation policy and competition law at a global scale that regulators will need to address alongside AI safety concerns.
Transparency, Public Trust, and Accountability
The TFAIA bolsters the capability of citizens, researchers, and journalists to oversee the development and use of artificial intelligence (AI) through its requirement for public disclosure of the safety frameworks of AI systems. These disclosures will allow them to critically evaluate corporate claims of responsible AI development. Over time, such scrutiny could increase trust in publicly regulated AI systems and expose businesses with poor risk management processes.
However, how useful this transparency proves depends on the quality and comparability of the information being disclosed. Many current disclosures are either too vague or too complex, limiting the ability to conduct meaningful oversight. There should be a push for clearer guidance and the establishment of standardised disclosure formats, both for public accountability and for uniformity between countries.
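Purely as an illustration of what such a standardised form could look like, the Python sketch below defines a hypothetical machine-readable disclosure record; none of the field names are drawn from the TFAIA text, and they are assumptions for demonstration only.

```python
# A purely hypothetical sketch of a standardised, machine-readable
# safety disclosure; field names are illustrative assumptions, not
# requirements taken from the TFAIA.
from dataclasses import dataclass, field

@dataclass
class SafetyDisclosure:
    developer: str
    model_name: str
    risk_assessment_summary: str                 # how catastrophic risks were evaluated
    mitigations: list[str] = field(default_factory=list)
    incident_reporting_contact: str = ""
    redactions: list[str] = field(default_factory=list)  # trade-secret carve-outs

disclosure = SafetyDisclosure(
    developer="Example AI Labs",
    model_name="frontier-model-v1",
    risk_assessment_summary="Red-teamed for misuse in cyber and bio domains.",
    mitigations=["staged deployment", "usage monitoring", "external audits"],
    incident_reporting_contact="safety@example.com",
    redactions=["model weights", "training data sources"],
)
print(disclosure)
```

A common schema of this kind would let regulators, researchers, and journalists compare disclosures across companies rather than parsing each developer's bespoke document.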
Conclusion
The Transparency in Frontier Artificial Intelligence Act is a transformative development in the regulation of AI, addressing the wholly new risk profile of this generation of advanced, high-capability systems. The new California law will have global impact: it will change how technology companies operate, shape regulatory frameworks, and inform the standards used to govern frontier AI. Crucially, the Act relies on transparency as the means of governing these systems rather than solely on technical controls. As other regions confront the same challenges posed by this new generation of AI, California's approach will likely serve as an example of how AI laws are written in the future and contribute to a more unified and responsible international AI regulatory framework.
References
- https://www.whitecase.com/insight-alert/california-enacts-landmark-ai-transparency-law-transparency-frontier-artificial
- https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/
- https://www.mofo.com/resources/insights/251001-california-enacts-ai-safety-transparency-regulation-tfaia-sb-53
- https://www.dlapiper.com/en/insights/publications/2025/10/california-law-mandates-increased-developer-transparency-for-large-ai-models

Executive Summary:
A critical vulnerability, CVE-2024-3400, was recently discovered in PAN-OS, the Palo Alto Networks software that powers the company's next-generation firewalls. It is a command injection vulnerability that allows unauthenticated attackers to execute arbitrary code with root privileges on the attacked system. It has been actively exploited by threat actors, leaving many organizations at risk of severe cyberattacks. This report helps to understand the exploitation, detection, mitigations, and recommendations for this vulnerability.

Understanding The CVE-2024-3400 Vulnerability:
CVE-2024-3400 affects particular versions of PAN-OS under certain configurations that are susceptible to this kind of security issue. It is a command injection flaw in the GlobalProtect module of the PAN-OS software, which an unauthenticated attacker can exploit to run arbitrary code on the firewall with root privileges. In observed attacks, exploitation has been used to target the Active Directory database (ntds.dit), DPAPI-protected data, and Windows event logs (Microsoft-Windows-TerminalServices-LocalSessionManager%4Operational.evtx), as well as login data, cookies, and local state data for Chrome and Microsoft Edge on specific targets, enabling attackers to capture the browser master key and steal sensitive organizational information.
CVE-2024-3400 has been assigned a critical severity rating of 10.0. The following two weaknesses make this CVE highly severe:
- CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')
- CWE-20: Improper Input Validation.
Impacted Products:
The PAN-OS versions affected by CVE-2024-3400 are:
- PAN-OS 10.2: versions earlier than 10.2.9-h1
- PAN-OS 11.0: versions earlier than 11.0.4-h1
- PAN-OS 11.1: versions earlier than 11.1.2-h3
Only versions 10.2, 11.0, and 11.1, configured with GlobalProtect Gateway or GlobalProtect Portal, are exploitable through this vulnerability. Cloud NGFW, Panorama appliances, and Prisma Access are not affected.
Detecting Potential Exploitation:
Palo Alto Networks has confirmed that it is aware of active exploitation of this vulnerability by threat actors, and in a recent publication it credited Volexity with identifying the vulnerability. A growing number of organizations face severe and immediate risk from this exploitation, and third parties have also released proof-of-concept code for the vulnerability.
Palo Alto Networks has provided guidance for detecting exploitation of this critical vulnerability. To check for signs of compromise, run the following command on the command-line interface of the PAN-OS device:
grep pattern "failed to unmarshal session(.\+.\/" mp-log gpsvc.log*
This command searches the device logs for specific entries related to the vulnerability.
These log entries should contain a long, random-looking code called a GUID (Globally Unique Identifier) between the words "session(" and ")". If an attacker has tried to exploit the vulnerability, this section might contain a file path or malicious code instead of a GUID.
The presence of such entries in your logs could be a sign of an attempted attack on your device, and may look like:
- failed to unmarshal session(../../some/path)
A normal, harmless log entry would look like this:
- failed to unmarshal session(01234567-89ab-cdef-1234-567890abcdef)
If the value between the parentheses is not a GUID but something suspicious, such as a file path, further investigation and remediation actions are needed to secure the system.
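For administrators reviewing exported copies of these logs offline, the following Python sketch illustrates the same check; it assumes the log lines follow the format shown above and simply flags any session value that is not a well-formed GUID.

```python
# A minimal sketch that scans exported gpsvc log files for
# "failed to unmarshal session(...)" entries whose value is not a GUID.
import re
import sys

ENTRY_RE = re.compile(r"failed to unmarshal session\(([^)]*)\)")
# A benign value is a standard GUID, e.g. 01234567-89ab-cdef-1234-567890abcdef
GUID_RE = re.compile(r"^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$")

def scan(path):
    suspicious = []
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            m = ENTRY_RE.search(line)
            if m and not GUID_RE.match(m.group(1)):
                suspicious.append((lineno, m.group(1)))
    return suspicious

if __name__ == "__main__":
    for log in sys.argv[1:]:
        for lineno, value in scan(log):
            print(f"{log}:{lineno}: non-GUID session value: {value!r}")
```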
Mitigation and Recommendations:
The risks posed by the critical CVE-2024-3400 vulnerability can be mitigated by following the recommended steps below:
- Immediately Update Software: This vulnerability is fixed in PAN-OS 10.2.9-h1, PAN-OS 11.0.4-h1, PAN-OS 11.1.2-h3, and all later versions. Updating to these releases will fully protect your systems against potential exploitation (see the version-check sketch after this list).
- Leverage Hotfixes: Palo Alto Networks has released hotfixes for commonly deployed maintenance releases of PAN-OS 10.2, 11.0, and 11.1 for users who cannot upgrade to the latest versions immediately. These hotfixes provide a temporary solution while you prepare for the full upgrade.
- Enable Threat Prevention: If you have a Threat Prevention subscription, enable Threat IDs 95187, 95189, and 95191 to block attacks targeting the CVE-2024-3400 vulnerability. These Threat IDs are available in Applications and Threats content version 8836-8695 and later.
- Apply Vulnerability Protection: Ensure that vulnerability protection has been applied to the GlobalProtect interface to prevent exploitation on the device. It can be implemented using these instructions.
- Monitor Advisory Updates: Regularly check for updates to the official Palo Alto Networks advisory to stay current on new guidance and threat prevention IDs for CVE-2024-3400.
- Disable Device Telemetry (Optional): As an additional precautionary measure, consider disabling device telemetry.
- Remediation: If active exploitation is observed, follow the steps outlined in this Knowledge Base article by Palo Alto Networks.
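As referenced in the first item above, the following Python sketch illustrates one way to check whether an installed PAN-OS version predates the fixed release for its train. The fixed versions come from the advisory quoted above; the version strings tested are hypothetical examples.

```python
# A minimal sketch comparing a PAN-OS version string against the fixed
# release for its train. Assumes dotted version numbers with an optional
# "-h<N>" hotfix suffix; sample inputs are illustrative.
FIXED = {"10.2": "10.2.9-h1", "11.0": "11.0.4-h1", "11.1": "11.1.2-h3"}

def parse(v):
    # "10.2.9-h1" -> (10, 2, 9, 1); a missing hotfix counts as 0
    base, _, hotfix = v.partition("-h")
    return tuple(int(x) for x in base.split(".")) + (int(hotfix or 0),)

def is_vulnerable(version):
    train = ".".join(version.split(".")[:2])
    fixed = FIXED.get(train)
    return fixed is not None and parse(version) < parse(fixed)

for v in ["10.2.8", "11.1.2-h3", "11.0.3"]:
    print(v, "vulnerable" if is_vulnerable(v) else "patched or unaffected")
```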
Implementing the above mitigation measures and recommendations will greatly reduce the risk of exploitation from cyberattacks targeting the CVE-2024-3400 vulnerability in Palo Alto Networks' PAN-OS software.
Conclusion:
Organizations should respond immediately to the active exploitation of the critical CVE-2024-3400 vulnerability found in Palo Alto Networks' PAN-OS platform. They should implement the suggested mitigation measures, such as upgrading to the patched versions, enabling threat prevention, and applying vulnerability protection, to protect against this vulnerability without delay. Regular monitoring, strong security defence mechanisms, and periodic security audits are necessary measures that help combat emerging threats and safeguard critical resources.

Introduction
In a hyperconnected world, cyber incidents can no longer be treated as sporadic disruptions; they have become an everyday occurrence. Today's attack landscape is highly consequential and growing rapidly in frequency, with ransomware attacks incapacitating health systems, phishing campaigns hitting financial institutions, and state-sponsored attacks targeting critical infrastructure. Traditional approaches alone are not enough to counteract such threats, as they rely heavily on manual research and human intellect. Attackers exercise speed, scale, and stealth, and defenders are perpetually several steps behind. With this widening gap, it has become necessary to support incident response and crisis management with automation and artificial intelligence (AI) for faster detection, context-driven decision-making, and collaborative response beyond human capabilities.
Incident Response and Crisis Management
Incident response is the structured way in which organisations detect, contain, and recover from security incidents. Crisis management takes this further, dealing not only with the technical fallout of a breach but also with its business, reputational, and regulatory implications. Incident response used to depend on teams of people manually sorting through logs, cross-correlating alarms, and generating responses, a paradigm effective at small scale but quickly inadequate in today's threat climate. Today's adversaries attack at machine speed, employing automation to launch attacks; under such circumstances, responding with slow, manual methods means delay and severe consequences. The introduction of AI and automation is a paradigm shift that allows organisations to match the pace and precision with which attackers operate when responding to incidents.
How Automation Reinvents Response
Security automation liberates analysts from the boring, repetitive tasks that consume their time. Where an analyst would manually triage hundreds of potential threats each day, automated systems sift through the noise and surface only genuine threats. Infected computers can be automatically disconnected from the network to stop malware from spreading, and suspicious account permissions can be revoked without human intervention. Security orchestration systems go further by introducing playbooks: predefined steps describing how incidents of a certain type (e.g., phishing attempts or malware infections) should be handled, as sketched below. This ensures fast containment while maintaining consistency and minimising human error amid the urgency of dealing with thousands of alerts.
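To make the idea concrete, the following Python sketch shows a minimal playbook structure of this kind; the incident types and response actions (quarantine_host, disable_account, notify_analyst) are hypothetical stand-ins, not any particular orchestration product's API.

```python
# A minimal sketch of automated response playbooks: each incident type
# maps to an ordered list of containment steps. Actions are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Incident:
    kind: str        # e.g. "phishing", "malware"
    host: str
    account: str

def quarantine_host(host): print(f"[action] isolating {host} from the network")
def disable_account(account): print(f"[action] revoking permissions for {account}")
def notify_analyst(incident): print(f"[action] escalating {incident.kind} on {incident.host}")

PLAYBOOKS: dict[str, list[Callable[[Incident], None]]] = {
    "malware": [
        lambda i: quarantine_host(i.host),
        lambda i: notify_analyst(i),
    ],
    "phishing": [
        lambda i: disable_account(i.account),
        lambda i: notify_analyst(i),
    ],
}

def respond(incident: Incident):
    # Unknown incident types fall back to human escalation.
    for step in PLAYBOOKS.get(incident.kind, [lambda i: notify_analyst(i)]):
        step(incident)

respond(Incident(kind="malware", host="ws-042", account="jdoe"))
```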
Automation takes care of threat detection, prioritisation, and containment, allowing human analysts to refocus on more complex decision-making. Instead of drowning in the sea of trivial alerts, security teams can now devote their efforts to more strategic areas: threat hunting and longer-term resilience. Automation is a strong tool of defence, cutting response times down from hours to minutes.
The Intelligence Layer: AI in Action
If automation provides speed, AI provides the intelligence and flexibility. Unlike older, fixed-rule systems, AI-enabled solutions learn from experience, adapt to changes in threats, and discover hidden patterns of which human analysts would otherwise be unaware. For instance, machine learning algorithms learn what normal behaviour looks like on a corporate network and raise alerts on anomalies that could indicate an insider attack or an advanced persistent threat, as in the sketch below. Similarly, AI systems sift through global threat intelligence to predict likely attack vectors so organisations can fix their vulnerabilities before they are exploited.
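As a minimal illustration of this kind of anomaly detection, the sketch below trains scikit-learn's IsolationForest on synthetic "normal" network sessions and flags a large off-hours transfer; the features and numbers are assumptions for demonstration only.

```python
# A minimal sketch of ML-based anomaly detection on network sessions.
# Assumes scikit-learn is available; features (MB transferred, hour of
# day) and all values are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline: typical daytime sessions with moderate data transfer.
normal = np.column_stack([rng.normal(50, 10, 500),   # MB transferred
                          rng.normal(13, 2, 500)])   # hour of day
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one ordinary session and one large 3 a.m. transfer.
sessions = np.array([[55, 14], [900, 3]])
for session, label in zip(sessions, model.predict(sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(f"session {session.tolist()} -> {status}")
```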
AI also boosts forensic analysis. Instead of searching endlessly for clues, analysts can let AI-driven systems trace an event back to its origin, identify the vulnerabilities attackers exploited, and flag systems that are still under attack. During a crisis, AI serves as decision support, predicting the outcomes of different scenarios and recommending the best response. In response to a ransomware attack, for example, AI might advise isolating a single network segment, restoring from backup, or alerting law enforcement, depending on context.
Real-World Applications and Case Studies
Automation and AI are already delivering these benefits in real-world applications. Consider IBM Watson for Cybersecurity, which has been applied to analysing unstructured threat intelligence, providing analysts with actionable results in minutes rather than days. Similarly, AI-driven systems in DARPA's Cyber Grand Challenge demonstrated the ability to automatically identify and patch vulnerabilities in near real time, revealing the potential of self-healing systems. AI-powered fraud detection systems stop suspicious transactions mid-execution and operate around the clock to prevent losses. What these examples have in common is that automation and AI reduce human effort, increase accuracy, and, in the event of a cyberattack, buy precious time.
Challenges and Limitations
While promising, the technology is still not fully mature. The quality of an AI system depends heavily on its training data; poor training can generate false positives that drown teams in alerts or, worse, false negatives that let attackers proceed unnoticed. Attackers have also started targeting AI itself, poisoning datasets or designing malware that evades detection. Beyond these technical risks, the operational and financial costs of implementing advanced AI-based systems can be prohibitive for many companies. Organisations must invest not only in the technology but also in training staff to use these tools well. There are ethical and privacy issues to consider too: because these systems may process sensitive personal data, they must be operated in compliance with data protection laws such as the GDPR or India's DPDP Act.
Creating a Human-AI Collaboration
The future is not going to be one of substitution by machines but of creating human-AI synergy. Automation can do the drudgery, AI can provide smarts, and human professionals can use judgment, imagination, and ethical decisions. One would want to build AI-fuelled Security Operations Centres where technology and human experts work in tandem. Continuous training must be provided to AI models to reduce false alarms and make them most resistant against adversarial attacks. Regular conduct of crisis drills that combine AI tools and human teams can ensure preparedness for real-time events. Likewise, it is worth integrating ethical AI guidelines into security frameworks to ensure a stronger defence while respecting privacy and regulatory compliance.
Conclusion
Cyber-attacks are an eventuality in modern times, but their actual impact need not be so harsh. By systematically integrating automation and AI into incident response and crisis management, organisations can shift their posture from reactive firefighting to proactive resilience. Automation brings speed and efficiency while AI brings intelligence and foresight, putting defenders on par with, and possibly ahead of, the speed and sophistication of attackers. But even the most capable system would remain imperfect without human inquisitiveness, ethical reasoning, and strategic foresight. The best defence lies in a symbiotic human-machine relationship in which automation and AI handle the speed and volume of incoming cyber threats, while human intellect ensures that every response is aligned with larger organisational goals. That synergy is where the future of cybersecurity resilience resides: defenders will not merely react to emergencies but will drive the way forward.
References
- https://www.sisainfosec.com/blogs/incident-response-automation/
- https://stratpilot.ai/role-of-ai-in-crisis-management-and-its-critical-importance/
- https://www.juvare.com/integrating-artificial-intelligence-into-crisis-management/
- https://www.motadata.com/blog/role-of-automation-in-incident-management/