#FactCheck: Fake Claim That the US Used Indian Airspace to Attack Iran
Executive Summary:
An online claim alleging that U.S. bombers used Indian airspace to strike Iran has been widely circulated, particularly on Pakistani social media. However, official briefings from the U.S. Department of Defense and visuals shared by the Pentagon confirm that the bombers flew over Lebanon, Syria, and Iraq. Indian authorities have also refuted the claim, and the Press Information Bureau (PIB) has issued a fact-check dismissing it as false. The available evidence clearly indicates that Indian airspace was not involved in the operation.
Claim:
Various Pakistani social media users [archived here and here] have alleged that U.S. bombers used Indian airspace to carry out airstrikes on Iran. One widely circulated post claimed, “CONFIRMED: Indian airspace was used by U.S. forces to strike Iran. New Delhi’s quiet complicity now places it on the wrong side of history. Iran will not forget.”

Fact Check:
Contrary to viral social media claims, official details from U.S. authorities confirm that American B-2 bombers followed a Middle Eastern flight path, flying over Lebanon, Syria, and Iraq, to reach Iran during Operation Midnight Hammer.

The Pentagon released visuals and unclassified briefings showing this route, with Joint Chiefs of Staff Chairman Gen. Dan Caine explaining that the bombers coordinated with support aircraft over the Middle East in a highly synchronized operation.

Additionally, Indian authorities have denied any involvement, and India’s Press Information Bureau (PIB) issued a fact-check debunking the false narrative that Indian airspace was used.

Conclusion:
Official U.S. briefings and visuals confirm that the B-2 bombers flew over the Middle East, not India, to strike Iran. Both the Pentagon and Indian authorities have denied any use of Indian airspace, and the Press Information Bureau has labelled the viral claims false.
- Claim: The US used Indian airspace to attack Iran
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
India's digital landscape has reached a critical point in its evolution. The rapid adoption of technologies such as cloud computing, mobile payment systems, artificial intelligence, and smart infrastructure has deeply integrated digital systems into governance, commercial activity, and everyday life. As dependence on these systems grows, a wide range of complex, multi-layered, and closely interconnected cyber threats has emerged. By 2026, cyber security threats directed at India are expected to include a growing number of targeted, well-organised, and strategic cyber attacks, focused on exploiting the trust placed in technology, institutions, automation, and the fast pace of technological change.
1. Social Engineering 2.0: Hyper-Personalised AI Phishing & Mobile Banking Malware
Cybercriminals have moved from generalised methods to hyper-targeted attacks built on AI-based psychological manipulation. Drawing on social media profiles, data breaches, and digital tracking footprints, the cybercrimes expected in 2026 will use AI to analyse this information and generate hyper-personalised phishing emails at scale.
These emails can impersonate banks, employers, and even family members, matching the regionally and culturally appropriate tone, language, and context that the genuine sender would use.
Using malicious applications disguised as legitimate service apps, cybercriminals can intercept One-Time Passwords (OTPs), hijack user sessions, and steal money from user accounts in a matter of minutes.
These attacks succeed not only because of their technical sophistication but because they exploit human trust at scale, giving attackers near-limitless reach into people's financial lives through their computers and mobile devices.
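Defenders counter such phishing with layered detection. As a minimal sketch only, the snippet below scores an email against a few classic indicators: a lookalike sender domain, urgency language, and raw-IP links. The keyword list, the one-character-substitution rule, and the scoring weights are all illustrative assumptions, not a real product's logic; production systems combine many more signals, including machine-learned ones.

```python
import re

# Illustrative heuristic scorer for phishing indicators. The keyword
# list, weights, and trusted-domain comparison are assumptions chosen
# for the sketch, not any vendor's actual detection rules.

URGENCY_WORDS = {"immediately", "urgent", "suspended", "verify", "expires"}

def lookalike_domain(sender_domain, trusted):
    """Flag domains one character-substitution away from a trusted domain."""
    for t in trusted:
        if sender_domain == t:
            return False
        if len(sender_domain) == len(t):
            diffs = sum(a != b for a, b in zip(sender_domain, t))
            if diffs == 1:  # e.g. "sbl-bank.com" impersonating "sbi-bank.com"
                return True
    return False

def phishing_score(sender, body, trusted):
    """Higher score = more phishing indicators present."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if lookalike_domain(domain, trusted):
        score += 2
    words = set(re.findall(r"[a-z]+", body.lower()))
    score += len(words & URGENCY_WORDS)       # one point per urgency word
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):  # raw-IP link
        score += 2
    return score
```

A message from `alerts@sbl-bank.com` urging the reader to "verify immediately" via a raw-IP link would score far higher than ordinary correspondence; a threshold over such a score is the crudest form of the detection the AI-assisted systems described above perform statistically.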
2. Cloud and Supply Chain Vulnerabilities
As Indian organisations increasingly migrate to cloud infrastructure, cloud misconfigurations are emerging as a major cybersecurity risk. Weak identity controls, exposed storage, and improper access management can allow attackers to bypass traditional network defences. Alongside this, supply chain attacks are expected to intensify in 2026.
In supply chain attacks, cybercriminals compromise a trusted software vendor or service provider to infiltrate multiple downstream organisations. Even entities with strong internal security can be affected through third-party dependencies. For India’s startup ecosystem, government digital platforms, and IT service providers, this presents a systemic risk. Strengthening vendor risk management and visibility across digital supply chains will be essential.
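One concrete vendor-risk control is artifact pinning: record a cryptographic digest of each third-party dependency when it is first vetted, and refuse any later download whose bytes differ. The sketch below shows the idea with a hypothetical lockfile dictionary; real tooling such as pip's hash-checking mode (`--require-hashes`) applies the same principle.

```python
import hashlib

# Minimal sketch of dependency pinning for supply-chain defence.
# The lockfile structure and artifact name are hypothetical.

def sha256_hex(data: bytes) -> str:
    """Digest of the artifact's raw bytes (real tools stream from disk)."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes, lockfile: dict) -> bool:
    """Accept the artifact only if its digest matches the pinned value."""
    expected = lockfile.get(name)
    return expected is not None and sha256_hex(data) == expected

# Digest recorded once, when the dependency was first reviewed.
lockfile = {"vendor-sdk-1.2.0.tar.gz": sha256_hex(b"vetted release bytes")}
```

If a compromised vendor later ships altered bytes under the same version number, verification fails and the install is blocked, which is exactly the downstream protection the paragraph above calls for.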
3. Threats to IoT and Critical Infrastructure
India's push for smart cities, digital utilities, and connected public services has greatly expanded the country's Internet of Things (IoT) and operational technology (OT) footprint. However, many IoT devices still lack strong authentication, encryption, and secure update mechanisms, and by 2026 attackers are expected to exploit these weaknesses far more aggressively than they do today.
Cyberattacks on critical infrastructure such as energy, transportation, healthcare, and telecom systems have consequences that extend well beyond data loss: they directly disrupt essential services, endanger public safety, and raise national security concerns. Securing this infrastructure effectively requires dedicated security solutions tailored to its specific operational needs, rather than conventional IT security alone.
4. Hidden File Vectors and Stealth Payload Delivery
SVG File Abuse in Stealth Attacks
Cybercriminals are continually searching for ways to bypass security filters, and hidden file vectors are emerging as a preferred tactic. One such method involves the abuse of SVG (Scalable Vector Graphics) files. Although commonly perceived as harmless image files, SVGs can contain embedded scripts capable of executing malicious actions.
By 2026, SVG-based attacks are expected to be used in phishing emails, cloud file sharing, and messaging platforms. Because these files often bypass traditional antivirus and email security systems, they provide an effective stealth delivery mechanism. Indian organisations will need to rethink assumptions about “safe” file formats and strengthen deep content inspection capabilities.
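Deep content inspection of "safe" formats can be illustrated with a small sketch. The function below parses an SVG and flags the three simplest active-content vectors: `<script>` elements, `on*` event-handler attributes, and `javascript:` URLs. This is an assumption-laden toy, not a complete scanner; real attacks also use obfuscation, external references, and nested documents that this check would miss.

```python
import xml.etree.ElementTree as ET

# Illustrative deep-inspection check for SVG uploads. Catches only the
# most obvious embedded-script vectors; a real scanner needs far more.

def svg_is_suspicious(svg_text: str) -> bool:
    root = ET.fromstring(svg_text)
    for el in root.iter():
        tag = el.tag.rsplit("}", 1)[-1].lower()   # strip XML namespace
        if tag == "script":                        # embedded <script> block
            return True
        for name, value in el.attrib.items():
            attr = name.rsplit("}", 1)[-1].lower()
            if attr.startswith("on"):              # e.g. onload, onclick
                return True
            if "javascript:" in value.lower():     # javascript: href
                return True
    return False
```

A plain `<circle>` drawing passes, while the same file with an `onload="..."` handler or a `<script>` child is flagged, which is precisely why SVGs cannot be waved through as mere images.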
5. Quantum-Era Cyber Risks and “Harvest Now, Decrypt Later” Attacks
Although practical quantum computers are still emerging, quantum-era cyber risks are already a present-day concern. Adversaries are believed to be intercepting and storing encrypted data now with the intention of decrypting it in the future once quantum capabilities mature—a strategy known as “harvest now, decrypt later.” This poses serious long-term confidentiality risks.
Recognising this threat, the United States took early action during the Biden administration through National Security Memorandum 10, which directed federal agencies to prepare for the transition to quantum-resistant cryptography. For India, similar foresight is essential, as sensitive government communications, financial data, health records, and intellectual property could otherwise be exposed retrospectively. Preparing for quantum-safe cryptography will therefore become a strategic priority in the coming years.
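A practical first step toward that transition is crypto-agility: routing every cryptographic operation through a named-algorithm registry so that adopting a quantum-resistant scheme later is a configuration change, not a codebase-wide rewrite. The sketch below uses HMAC constructions purely as stand-ins for real signature schemes, since the standard library has no post-quantum primitives; the registry keys and the `ml-dsa-65` placeholder name (from NIST's FIPS 204 ML-DSA standard) are illustrative.

```python
import hashlib
import hmac

# Crypto-agility sketch: all signing goes through a registry keyed by
# algorithm name. HMAC stands in for real signature schemes here.

SIGNERS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-512": lambda key, msg: hmac.new(key, msg, hashlib.sha3_512).digest(),
    # "ml-dsa-65": a post-quantum signer would plug in here during migration
}

ACTIVE_ALGORITHM = "hmac-sha256"  # the one line that changes at migration time

def sign(key, msg, alg=None):
    """Sign with the active (or an explicitly chosen) algorithm."""
    alg = alg or ACTIVE_ALGORITHM
    return alg, SIGNERS[alg](key, msg)

def verify(key, msg, alg, sig):
    """Recompute under the named algorithm and compare in constant time."""
    return hmac.compare_digest(SIGNERS[alg](key, msg), sig)
```

Because each signature is stored alongside its algorithm name, old and new schemes can coexist during the transition window, which is what makes a gradual, inventory-driven PQC migration feasible.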
6. AI Trust Manipulation and Model Exploitation
Poisoning the Well – Direct Attacks on AI Models
As artificial intelligence systems are increasingly used for decision-making—ranging from fraud detection and credit scoring to surveillance and cybersecurity—attackers are shifting focus from systems to models themselves. “Poisoning the well” refers to attacks that manipulate training data, feedback mechanisms, or input environments to distort AI outputs.
In the context of India's rapidly growing digital ecosystem, compromised AI models can produce biased decisions, false security alerts, or denial of legitimate services. What makes these attacks particularly dangerous is that they can occur without triggering conventional security controls. Transparency, integrity, and continuous monitoring of AI systems will therefore be key to creating and maintaining stakeholder confidence in automated decision-making.
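The mechanics of poisoning can be shown with a deliberately tiny example. Below, a nearest-centroid classifier trained on synthetic one-dimensional data separates "legit" from "fraud" transactions; flipping the labels on just two training samples drags the "legit" centroid toward the fraud region and silently flips a prediction. All data, labels, and the classifier itself are toy assumptions for illustration.

```python
# Toy demonstration of training-data poisoning ("poisoning the well").
# All samples are synthetic; the classifier is the simplest possible one.

def train_centroids(samples):
    """samples: list of (value, label) pairs -> {label: mean value}."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: sums[lbl] / counts[lbl] for lbl in sums}

def predict(centroids, value):
    """Assign the label whose centroid is nearest to the value."""
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - value))

clean = [(1.0, "legit"), (2.0, "legit"), (3.0, "legit"),
         (8.0, "fraud"), (9.0, "fraud"), (10.0, "fraud")]

# The attacker flips the labels on two extreme "fraud" samples,
# dragging the "legit" centroid toward the fraud region.
poisoned = [(v, "legit") if v >= 9.0 else (v, lbl) for v, lbl in clean]
```

On the clean data a borderline transaction at 6.0 is classified as fraud; after the two-label flip the same input is classified as legit, with no error raised and no anomaly visible to conventional perimeter controls, which is exactly the stealth property described above.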
Recommendations
Despite the increasing sophistication of malicious cyber actors, India is entering this phase with a growing level of preparedness and institutional capacity. The country has strengthened its cyber security posture through dedicated mechanisms and relevant agencies such as the Indian Cyber Crime Coordination Centre, which play a central role in coordination, threat response, and capacity building. At the same time, sustained collaboration among government bodies, non-governmental organisations, technology companies, and academic institutions has expanded cyber security awareness, skill development, and research. These collective efforts have improved detection capabilities, response readiness, and public resilience, placing India in a stronger position to manage emerging cyber threats and adapt to the evolving digital environment.
Conclusion
By 2026, complexity, intelligence, and strategic intent will increasingly define cyber threats to the digital ecosystem. Cyber criminals are expected to use advanced methods of attack, including artificial intelligence assisted social engineering and the exploitation of cloud supply chain risks. As these threats evolve, adversaries may also experiment with quantum computing techniques and the manipulation of AI models to create new ways of influencing and disrupting digital systems. In response, the focus of cybersecurity is shifting from merely preventing breaches to actively protecting and restoring digital trust. While technical controls remain essential, they must be complemented by strong cybersecurity governance, adherence to regulatory standards, and sustained user education. As India continues its digital transformation, this period presents a valuable opportunity to invest proactively in cybersecurity resilience, enabling the country to safeguard citizens, institutions, and national interests with confidence in an increasingly complex and dynamic digital future.
References
- https://www.seqrite.com/india-cyber-threat-report-2026/
- https://www.uscsinstitute.org/cybersecurity-insights/blog/ai-powered-phishing-detection-and-prevention-strategies-for-2026
- https://www.expresscomputer.in/guest-blogs/cloud-security-risks-that-should-guide-leadership-in-2026/130849/
- https://www.hakunamatatatech.com/our-resources/blog/top-iot-challenges
- https://csrc.nist.gov/csrc/media/Presentations/2024/u-s-government-s-transition-to-pqc/images-media/presman-govt-transition-pqc2024.pdf
- https://www.cyber.nj.gov/Home/Components/News/News/1721/214

Introduction
Google is committed to supporting the upcoming elections in India by providing high-quality information to voters, safeguarding its platforms from abuse, and helping people navigate AI-generated content. Google will connect voters to helpful information through enhanced features, collaborating with the Election Commission of India (ECI) to provide voting information in both English and Hindi. Emphasis is also placed on showcasing authoritative information on YouTube, which will highlight authoritative news sources, offer context on topics prone to misinformation, and append information panels directing viewers to the ECI's FAQs. This support will help millions of eligible voters navigate the electoral process and promote a fair and transparent election.
Key Highlights of Google’s Approach
The steps taken by Google will support the democratic process during the upcoming General Election in India. The initiative focuses on three main pillars: disseminating information, tackling misinformation, and navigating AI-generated content. Google is enhancing its Search and YouTube features to provide essential election-related information, including voter registration, polling guidelines, and candidate profiles. It is also addressing the challenges posed by AI-generated content by offering clarity on content origins, particularly for election-related ads and YouTube videos. Google enforces strict policies and restrictions on who can run election-related advertising on its platforms, including identity verification, pre-certification, and in-ad disclosures. Additionally, Google is using tools and policies such as ads disclosures, content labels on YouTube, and digital watermarking to help users identify AI-generated content.
Google has joined hands with ECI
The tech giant Google is partnering with the Election Commission of India (ECI) to provide voting information on Google Search in both English and Hindi. YouTube will feature election information panels, including candidate profiles and registration guidelines, ensuring users have access to authoritative sources. Google's recommendation system will display content from trusted publishers on election-related topics. Protecting the integrity of elections is a top priority, and the company is employing advanced AI models and machine learning techniques to identify and remove content that violates its policies at scale. A dedicated team of local experts across major Indian languages is assigned to provide relevant context and ensure swift action against emerging threats. Google is also tightening up who can advertise on its platforms, requiring advertisers to undergo an identity verification process and obtain a pre-certificate from the ECI or authorised entities for each election ad they wish to run.
Tackling Electoral Misinformation
Google is enhancing its platform security measures to prevent misinformation. It is using AI models and human expertise to identify and address policy violations, while stringent verification processes and disclosures are being implemented to maintain user trust.
Collaborations to promote reliable information
Google is supporting Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers working to detect online misinformation, including deepfakes. The project will provide news entities and fact-checkers with essential training in fact-checking methodologies, deepfake detection, and the latest Google tools to streamline verification processes, as stated in Google’s blog post.
Conclusion
Google has taken proactive steps to ensure a secure electoral process during the upcoming general elections in India: helping voters navigate AI-generated content, safeguarding its platforms from abuse, and curbing the spread of false information. With recent advances in its Large Language Models (LLMs), Google India has built faster and more adaptable enforcement systems, enabling the company to remain nimble and act quickly when new threats emerge. Google is also dedicated to collaborating with government, industry, and civil society to provide voters with reliable and trustworthy online information. This comprehensive strategy to empower voters, safeguard platforms, and combat misinformation is commendable, and it will help millions of citizens exercise their democratic rights securely.
References:
- https://blog.google/intl/en-in/company-news/outreach-initiatives/supporting-the-2024-indian-general-election/
- https://inc42.com/buzz/following-gemini-row-google-strengthens-checks-on-ai-generated-content-before-elections/#:~:text=In%20an%20effort%20to%20ensure,safeguarding%20its%20platforms%20from%20abuse
- https://www.indiatvnews.com/technology/news/google-introduces-enhanced-tools-for-supporting-elections-in-india-2024-03-12-921096
- https://economictimes.indiatimes.com/news/elections/lok-sabha/india/google-ties-up-with-eci-to-prevent-spread-of-false-information/articleshow/108431021.cms?from=mdr
- https://www.businesstoday.in/technology/news/story/google-joins-hands-with-election-commission-of-india-to-help-voters-via-search-youtube-421112-2024-03-12
- https://indianexpress.com/article/technology/tech-news-technology/google-2024-general-elections-support-9209588/

Introduction
In the age of social media, news can spread like wildfire. A recent viral message claimed that police have launched a nationwide scheme offering women free travel at night. It stated that any woman who is alone and unable to find a vehicle home between 10 PM and 6 AM can call the numbers provided to request a free ride. The message further urged recipients to share and forward it widely so that women would learn of the free vehicle service offered by police at night. However, on fact-checking, the claim was found to be misleading.
Social Impact of Misleading Information
Such misleading information goes viral quickly because of its emotional resonance. Especially at a time when women's safety dominates media coverage following highlighted incidents of rape and sexual violence, fake viral claims of this kind spark widespread public concern, and people share or forward them on the strength of their emotional and sensational appeal without pausing to verify. The emotional nature of these viral messages often overrides scepticism, leading to immediate sharing without verification.
Viral messages of this nature can prompt people to protest, raise awareness, and build support networks, but the same emotional resonance also makes people targets of misinformation, turning them into unintended superspreaders of fake news fuelled by emotional, social-media-driven reactions. Women's safety is a sensitive topic, and when such viral claims are exposed as misleading or fake, it hurts public sentiment and causes significant social impacts, including distrust of social media, unnecessary panic, and confusion.
CyberPeace Policy Vertical Advisory for Social Media Users
- Think before sharing: Exercise caution and double-check the authenticity of any content before sharing, forwarding, or reposting it on your social media.
- Don't become an unintended superspreader of misinformation: Emotionally resonant misinformation that is widely shared can turn ordinary netizens into "superspreaders of misinformation", making it go viral quickly. Avoid this by staying vigilant and relying on reliable sources.
- Exercise vigilance and scepticism: Build the cognitive habits needed to recognise the red flags of misleading information. Follow official communication channels, look for discrepancies in suspicious content, and double-check its authenticity before sharing it with anyone.
- Verify information from official sources: Follow the official communication channels of the concerned authorities for information, circulars, notifications, etc. If you find any piece of information suspicious or misleading, report it to the relevant authority and to fact-checking organisations.
- Stay in touch with expert organisations: Cybersecurity experts and civil society organisations combine large-scale impact potential with technical expertise. Through them, netizens can stay updated on developments in the tech-policy sphere and learn internet best practices and counter-misinformation measures such as prebunking and debunking.
Connect with CyberPeace
As an expert organisation, we have the ability to educate and empower huge numbers, along with the skills and policy acumen needed to be able to not just make people aware of the problem but also teach them how to solve it for themselves. At CyberPeace we regularly produce fact-check reports, blogs & advisories, and insights on prebunking & debunking measures and capacity-building programs with the aim of empowering netizens at the heart of our initiatives. CyberPeace has established the largest network of CyberPeace Corps volunteers globally. These volunteers play a crucial role in assisting victims, raising awareness, and promoting proactive measures.