#FactCheck - Claim of Jaguar Jet Failing to Land During IAF Drill Is Fake; Viral Video Digitally Manipulated
Executive Summary
A video circulating on social media claims that a Jaguar fighter jet of the Indian Air Force (IAF) failed to land during a takeoff and landing exercise held on April 22, 2026, at the Purvanchal Expressway in Uttar Pradesh. The claim suggests that the incident disrupted preparations for “Operation Sindoor.” However, research by the CyberPeace Research Wing has found the claim to be false.
Claim
The video was shared by a Facebook user, ‘Meera MJ,’ alleging that the Jaguar aircraft could not land during the exercise conducted near Sultanpur. To verify the authenticity of the video, multiple keyframes were extracted and analyzed using reverse image search tools. This led to the original footage shared by ANI on its official X (formerly Twitter) handle on April 22, 2026. The authentic video of the air show does not show any such incident of a failed landing.

Fact Check
A detailed review of ANI’s social media posts also revealed no evidence supporting the viral claim. This strongly indicates that the circulating clip has been digitally manipulated by altering the original footage.

Further corroboration came from a report published by Bhaskar.com, which extensively covered the air show. According to the report, the event featured successful operations by multiple aircraft, including the C-295 transport aircraft landing on the expressway airstrip, followed by Jaguar jets taking off. Sukhoi and Mirage fighter jets also performed takeoff and landing drills, while M17 helicopters carried out commando mock operations. Additionally, the M32 Bhishma aircraft conducted ‘touch and go’ drills.

Conclusion:
The viral claim that a Jaguar fighter jet failed to land during the Indian Air Force drill is baseless. The video being circulated is digitally manipulated and does not reflect any real incident.

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The Act, which was in development for two years, entered into force across all 27 EU Member States on 1 August 2024; enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI that threaten citizens' rights, including biometric categorization, untargeted scraping of faces, emotion-recognition systems in the workplace and schools, and social scoring systems. It also prohibits the use of predictive policing tools in some instances. Implementation follows a phased approach, with various deadlines between now and then at which different legal provisions will start to apply.
The framework puts different obligations on AI developers, depending on use cases and perceived risk. The bulk of AI uses will not be regulated as they are considered low-risk, but a small number of potential AI use cases are banned under the law. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed under the law but developers of such apps face obligations in areas like data quality and anti-bias considerations. A third risk tier also applies some lighter transparency requirements for makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk uses such as social scoring systems and manipulative AI, and its regulation mostly addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The bulk of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers where the high-risk AI system’s output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General purpose AI or GPAI model providers must provide technical documentation, and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Free and open license GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, and adversarial testing, and track and report serious incidents and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but not necessarily be limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type, nature, and sources of systemic risks, and the modalities of risk management, accounting for the specific challenges of addressing risks as they emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts will apply sooner: the ban on AI systems posing unacceptable risks will apply six months after entry into force, the Codes of Practice nine months after, and rules on general-purpose AI systems subject to transparency requirements 12 months after. High-risk systems will have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply, prohibiting certain AI systems.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as “Operators”). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, or to providers outside the EU, if they make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is a landmark piece of legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, categorizing potential risks into four tiers: unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: €35 million, or 7 percent of global annual revenue, whichever is higher. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for GPAI models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and have a significant impact on the internal market. For high-risk AI systems, the Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, setting a global benchmark for regulating AI. Companies within its scope will need to align their practices accordingly, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose their own conundrums. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are therefore key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to strike the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide
Executive Summary:
On July 4, 2024, a giant password dump, “RockYou2024,” was posted on a cybercrime marketplace, containing 9,948,575,739 plain-text credentials. This blog explains the technical aspects of the leak and its consequences for information security.
RockYou2024 is a list of passwords obtained from different data breaches spanning more than twenty years. It combines older password lists with credentials from recent hacks, producing a cumulative database of genuine, in-use passwords. The compilation is said to contain data from more than 4,000 databases, a powerful tool in the hands of potential attackers. The name derives from a 2009 breach of the social media company RockYou, which exposed roughly 32 million users’ passwords in a plain-text file; since then, the term has become synonymous with mass password compilations.
Technical Implications:
- Credential Stuffing Attacks: The RockYou2024 list comprises a large number of real passwords, which increases the likelihood of credential stuffing attacks. Attackers can use it to attempt unlawful access to the many online accounts a user may hold, particularly where the same password is reused across services.
- Brute-Force Attacks: The collection is extensive enough to power brute-force attacks on systems that lack protection against them. This is especially true for internet-exposed devices and services that still use weak or default passwords.
- Password Cracking: Compilations like this are often employed by security specialists and penetration testers, who feed them into tools such as John the Ripper or Hashcat to check password strength or a system’s susceptibility to attack.
- Machine Learning Models: The dataset could be used to train machine learning models for password prediction or analysis, yielding ever more effective attack methods.
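To make the cracking risk above concrete, here is a minimal dictionary-attack sketch: if passwords are stored as unsalted MD5 hashes, a leaked wordlist can recover them almost instantly. The wordlist and hash below are hypothetical stand-ins for illustration, not data from RockYou2024.

```python
import hashlib

# Hypothetical excerpt from a leaked wordlist (stand-in for rockyou-style data)
wordlist = ["123456", "password", "qwerty", "letmein", "sunshine"]

# An unsalted MD5 hash as it might appear in a breached database
# (here simply md5("letmein"), for demonstration)
stolen_hash = hashlib.md5(b"letmein").hexdigest()

def dictionary_attack(target_hash, candidates):
    """Try each candidate password until its MD5 digest matches the target."""
    for candidate in candidates:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(dictionary_attack(stolen_hash, wordlist))  # → letmein
```

With a ten-billion-entry compilation, this same loop, run on GPU-accelerated tools like Hashcat, covers an enormous share of real-world passwords, which is why unsalted fast hashes are considered broken for password storage.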
Countermeasures / Mitigation:
Below are technical and process controls proposed to reduce the risks associated with RockYou2024:
- Password Hashing: Ensure that all stored passwords are hashed with a modern, cracking-resistant algorithm such as bcrypt, Argon2, or PBKDF2, configured with an appropriately high work factor or iteration count.
- Salt and Pepper: Apply both salting and peppering to complicate the cracking of passwords even after a hashed password database has been obtained.
- Multi-Factor Authentication (MFA): Require strong passwords and deploy MFA across all systems and services within the organization.
- Password Strength Policies: Enforce password policies covering length, complexity, and rotation frequency.
- Rate Limiting and Account Lockouts: Apply rate limiting to consecutive login attempts, with temporary account lockouts after a set number of failures, to discourage brute-force attacks.
- Monitoring and Alerting: Put measures in place to monitor for anomalies such as suspicious login attempts or credential stuffing, with real-time alerts where security risks are likely to arise.
- API Security: Apply proper API security measures, such as rate limiting, input validation, and token-based authentication, to prevent automated attacks.
- Web Application Firewalls (WAF): Deploy WAFs at the application layer to defend authentication endpoints against credential stuffing and brute-force attempts from the internet.
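As a concrete illustration of the hashing and salting recommendations above, here is a minimal sketch using Python's standard-library PBKDF2 implementation (`hashlib.pbkdf2_hmac`). The iteration count is an illustrative assumption and should follow current guidance (e.g., OWASP) in production.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # illustrative work factor; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using salted PBKDF2-HMAC-SHA256."""
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("123456", salt, digest))                        # → False
```

Because each password gets its own random salt and each guess costs hundreds of thousands of hash iterations, a precomputed list like RockYou2024 cannot be checked against the whole database at once, and per-account guessing becomes far slower.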
Analyzing the Impact:
To understand the potential impact of RockYou2024, organizations should:
- Conduct Password Audits: Compare current password databases against RockYou2024 (using ethical and safe methods) to see which accounts have been compromised.
- Implement Continuous Monitoring: Stay current with new information on data breaches, on a weekly or monthly cadence, and act on it with corresponding security changes.
- Educate Users: Provide continued security awareness training on protecting individual passwords, combined with the use of a password generator or manager.
- Perform Penetration Testing: Conduct penetration testing at least twice a year to find vulnerabilities in the systems and applications currently in use.
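The password-audit step above can be sketched as a set-membership check: hash the leaked passwords once, then test whether any current or newly chosen password appears in the leak. The sample data here is hypothetical, standing in for the real compilation.

```python
import hashlib

# Hypothetical sample of leaked passwords (stand-in for the RockYou2024 list)
leaked = ["123456", "password", "iloveyou", "dragon"]

# Precompute SHA-256 digests so the audit set need not hold plaintext
leaked_digests = {hashlib.sha256(p.encode()).hexdigest() for p in leaked}

def is_compromised(password: str) -> bool:
    """Return True if the candidate password appears in the leaked set."""
    return hashlib.sha256(password.encode()).hexdigest() in leaked_digests

print(is_compromised("dragon"))       # → True
print(is_compromised("Tr0ub4dor&3"))  # → False
```

In practice, the check is typically run at password-change time, or against a breach-lookup service using a k-anonymity scheme, so that neither user passwords nor the full leak list need to be handled in plaintext.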
Conclusion:
The RockYou2024 leaked password database is a serious security risk: it contains almost 10 billion account credentials. This unprecedented leak further increases exposure to credential stuffing, brute-force, and password-cracking attacks. To deal with these threats, organizations need measures that include password hashing, multi-factor authentication, password-strength policies, and password audits. Patching, user awareness training, and regular audits are imperative to prevent future intrusions and strengthen the cyber security posture.
References:
- https://statanalytica.com/blog/rockyou-2024-txt-password/
- https://dig.watch/updates/rockyou2024-password-leak-exposes-nearly-10-billion-unique-passwords
- https://complexdiscovery.com/rockyou2024-leak-nearly-10-billion-passwords-exposed-heightening-cybersecurity-risks-for-businesses/

March 3rd 2023, New Delhi: If you have received a message containing a link that asks you to download an application to avail an Income Tax refund or KYC benefits in the name of the Income Tax Department or a reputed bank, beware!
CyberPeace Foundation and Autobot Infosec Private Limited along with the academic partners under CyberPeace Center of Excellence (CCoE) recently conducted five different studies on phishing campaigns that have been circulating on the internet by using misleading tactics to convince users to install malicious applications on their devices. The first campaign impersonates the Income Tax Department, while the rest of the campaigns impersonate ICICI Bank, State Bank of India, IDFC Bank and Axis bank respectively. The phishing campaigns aim to trick users into divulging their personal and financial information.
After a detailed study, the research team found that:
- All campaigns appear to be offers from reputed entities but are hosted on third-party domains instead of the official websites of the Income Tax Department or the respective banks, raising suspicion.
- The applications ask for several device access permissions, and some seek full control of the device. Granting such permissions could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications.
- Cybercriminals created malicious applications using icons that closely resemble those of legitimate entities with the intention of enticing users into downloading the malicious applications.
- The applications collect users’ personal and banking information. Falling into this type of trap could lead to significant financial losses.
- While investigating the impersonated Income Tax Department application, the research team identified that it sends HTTP traffic to a remote server acting as a Command and Control (CnC/C2) server for the application.
- Customers who wish to avail benefits or refunds from the respective banks download apps believing they will assist them, often unaware that the app may be fraudulent.
“The Research highlights the importance of being vigilant while browsing the internet and not falling prey to such phishing attacks. It is crucial to be cautious when clicking on links or downloading attachments from unknown sources, as they may contain malware that can harm the device or compromise the data.” spokesperson, CyberPeace added.
In addition, in an earlier report released last month, the same research team had drawn attention to WhatsApp messages masquerading as an offer from Tanishq Jewellers, with links luring unsuspecting users with the promise of free Valentine’s Day presents making the rounds on the app.
CyberPeace Advisory:
- The research team recommends that people avoid opening such messages sent via social platforms. Always think before clicking on links or downloading attachments from unauthorised sources.
- Avoid downloading applications from third-party sources instead of the official app store. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for developers and review each app before it is published.
- Even if you download the application from an authorised source, check the app’s permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it’s best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications and could lead users to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing scams.
- Never share or forward fake messages containing links on any social platform without proper verification.