#FactCheck: Viral Video Claims Pakistan Shot Down Indian Air Force's MiG-29 Fighter Jet
Executive Summary
Recent claims circulating on social media allege that an Indian Air Force MiG-29 fighter jet was shot down by Pakistani forces during "Operation Sindoor." These reports suggest the incident involved a jet crash attributed to hostile action. However, these assertions have been officially refuted. No credible evidence supports the existence of such an operation or the downing of an Indian aircraft as described. The Indian Air Force has not confirmed any such event, and the claim appears to be misinformation.

Claim
A rumour circulating on social media suggests that an Indian Air Force MiG-29 fighter jet was shot down by the Pakistan Air Force during "Operation Sindoor." The claim is accompanied by images purported to show the wreckage of the aircraft.

Fact Check
The social media posts have falsely claimed that the Pakistan Air Force shot down an Indian Air Force MiG-29 during "Operation Sindoor." This claim has been confirmed to be untrue. The image being circulated is unrelated to any recent IAF operations and has previously been used in unrelated contexts. The content being shared is misleading and does not reflect any verified incident involving the Indian Air Force.

After extracting key frames from the video and performing reverse image searches, we traced the original footage to a post first published in 2024, reported in news articles by The Hindu and The Times of India.
A MiG-29 fighter jet of the Indian Air Force (IAF), engaged in a routine training mission, crashed near Barmer, Rajasthan, on Monday evening (September 2, 2024). The pilot ejected safely and escaped unscathed. The video therefore depicts an old, unrelated accident, and the claim is false, circulated to spread misinformation.
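The key-frame step of the workflow above can be illustrated with a small sketch. Real fact-checking pipelines decode video with tools such as OpenCV or ffmpeg and then run the extracted frames through a reverse image search; this self-contained example (synthetic 4-pixel "frames", an illustrative difference threshold, all assumptions of ours rather than any tool's actual API) shows only the core selection logic: keep a frame when it differs sharply from the last kept one, since such scene changes are the frames worth searching.

```python
# Illustrative key-frame selection by frame differencing.
# Frames are stand-in grayscale pixel lists; a real pipeline would decode
# them from video with OpenCV/ffmpeg before applying the same logic.

def frame_difference(frame_a, frame_b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def select_keyframes(frames, threshold=10.0):
    """Keep the first frame, then every frame that differs enough from the
    last kept frame (i.e., a likely scene change worth reverse-searching)."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[keyframes[-1]], frames[i]) > threshold:
            keyframes.append(i)
    return keyframes

# Four tiny frames: two near-identical, an abrupt change, then stillness again.
video = [
    [10, 10, 10, 10],
    [11, 10, 10, 11],       # barely different -> skipped
    [200, 200, 200, 200],   # scene change -> kept
    [201, 199, 200, 200],   # barely different -> skipped
]
print(select_keyframes(video))  # -> [0, 2]
```

Only the selected frames (here indices 0 and 2) would then be submitted to a reverse image search, which is how recycled footage like the 2024 Barmer crash video gets traced to its original context.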

Conclusion
The claims regarding the downing of an Indian Air Force MiG-29 during "Operation Sindoor" are unfounded and lack any credible verification. The image being circulated is outdated and unrelated to current IAF operations. There has been no official confirmation of such an incident, and the narrative appears to be misleading. People are advised to rely on verified sources for accurate information on defence matters.
- Claim: Pakistan Shot Down an Indian Fighter Jet, MiG-29
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
As experiments with Generative Artificial Intelligence (AI) continue, companies and individuals look for new ways to incorporate and capitalise on it, including big tech companies betting on its potential through investments. This process also sheds light on how such innovations are carried out and used, and how they affect other stakeholders. Google's AI Overview feature has drawn concern from website publishers and regulators alike. Recently, Chegg, a US-based edtech company that provides online resources for high school and college students, filed a lawsuit against Google alleging abuse of its monopoly over search.
Legal Background
Google’s AI Overview/Search Generative Experience (SGE) is a feature that incorporates AI into its standard search tool to summarise search results. The summary is presented at the top of the results page, above the published websites. Although the sources of the information are linked, they are half-hidden, and it is ambiguous which of the AI's claims come from which link; to find out, the user interface requires the searcher to click on a drop-down box. Individual publishers and companies like Chegg have argued that such summaries divert their potential traffic and lead to losses, as they continue to bid higher for the advertisement services Google offers only to have their target audience discouraged from visiting their websites.

What is unique about Chegg's lawsuit is that it is based on antitrust law rather than copyright law, which the company has dealt with previously. In August 2024, a US federal judge ruled that Google held an illegal monopoly over the internet search and search text advertising markets, and by November, the US Department of Justice (DOJ) had filed its proposed remedies. These included giving advertisers and publishers more control over their data flowing through Google's products, opening Google's search index to the rest of the market, and imposing public oversight over Google's AI investments. Currently, the DOJ has emphasised dismantling the search monopoly through structural separation, i.e., divesting Google of Chrome. The company is slated to defend itself before DC District Court Judge Amit Mehta starting April 20, 2025.
CyberPeace Insights
As per a Statista report (Global market share of leading search engines 2015-2025), Google, as the market leader, held a search traffic share of around 89.62 per cent. Its advertising services account for the majority of its revenue, which amounted to 305.63 billion US dollars in 2023. The inclusion of the AI feature is undoubtedly changing how we search online. For users, it offers an immediate, convenient scan of general information on the looked-up subject, but it raises concerns for website publishers, who stand to lose ad revenue owing to fewer impressions and clicks. Even though links (sources) are mentioned, they are usually buried. Such a search mechanism weakens the incentive on both ends: for the user to explore various viewpoints, since people are now satisfied with the first few results that pop up, and for a creator/publisher to create new content and generate an income from it. The result may be a shift towards more passive consumption of information rather than active, genuine searching.
Conclusion
AI might make life more convenient, but in this case it may also take away small businesses' finances and the fruits of their hard work. Regulators, publishers, and users must continue asking such critical questions to hold big tech giants accountable, without compromising their creations and publications.
References
- https://www.washingtonpost.com/technology/2024/05/13/google-ai-search-io-sge/
- https://www.theverge.com/news/619051/chegg-google-ai-overviews-monopoly
- https://economictimes.indiatimes.com/tech/technology/google-leans-further-into-ai-generated-overviews-for-its-search-engine/articleshow/118742139.cms?from=mdr
- https://www.nytimes.com/2024/12/03/technology/google-search-antitrust-judge.html
- https://www.odinhalvorson.com/monopoly-and-misuse-googles-strategic-ai-narrative/
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/google-leans-further-into-ai-generated-overviews-for-its-search-engine/118748621
- https://www.techpolicy.press/the-elephant-in-the-room-in-the-google-search-case-generative-ai/
- https://www.karooya.com/blog/proposed-remedies-break-googles-monopoly-antitrust/
- https://getellipsis.com/blog/googles-monopoly-and-the-hidden-brake-on-ai-innovation/
- https://www.statista.com/statistics/266249/advertising-revenue-of-google/#:~:text=Google:%20annual%20advertising%20revenue%202001,local%20products%20are%20more%20preferred.
- https://www.statista.com/statistics/1381664/worldwide-all-devices-market-share-of-search-engines/
- https://www.techpolicy.press/doj-sets-record-straight-of-whats-needed-to-dismantle-googles-search-monopoly/

Introduction
As digital platforms rapidly become repositories of information related to health, YouTube has emerged as a trusted source people look to for answers. To counter rampant health misinformation online, the platform launched YouTube Health, a program aiming to make “high-quality health information available to all” by collaborating with health experts and content creators. While this is an effort in the right direction, the program needs to be tailored to the specificities of the Indian context if it aims to transform healthcare communication in the long run.
The Indian Digital Health Context
India’s growing internet penetration and lack of accessible healthcare infrastructure, especially in rural areas, encourage a reliance on digital platforms for health information. However, these platforms, especially social media, are rife with misinformation. Compounded by varying literacy levels, access disparities, and a lack of digital awareness, health misinformation can lead to serious negative health outcomes. The report ‘Health Misinformation Vectors in India’ by DataLEADS suggests a growing reluctance surrounding conventional medicine, with people looking for affordable and accessible natural remedies instead, a shift that social media helps facilitate. Media-sharing platforms such as WhatsApp, YouTube, and Facebook host a large share of this health misinformation. The report identifies cancer, reproductive health, vaccines, and lifestyle diseases as the four key areas susceptible to misinformation in India.
YouTube’s Efforts in Promoting Credible Health Content
YouTube Health aims to provide evidence-based health information with “digestible, compelling, and emotionally supportive health videos,” from leading experts to everyone irrespective of who they are or where they live. So far, it executes this vision through:
- Content Curation: The platform has health source information panels and content shelves highlighting videos regarding 140+ medical conditions from authority sources like All India Institute of Medical Sciences (AIIMS), National Institute of Mental Health and Neurosciences (NIMHANS), Max Healthcare etc., whenever users search for health-related topics.
- Localization Strategies: The platform offers multilingual health content in regional languages such as Hindi, Tamil, Telugu, Marathi, Kannada, Malayalam, Punjabi, and Bengali, apart from English. This is to help health information reach viewers across most of the country.
- Verification of professionals: Healthcare professionals and organisations can apply to YouTube’s health feature for their videos to be authenticated as an authority health source on the platform and for their videos to show up on the ‘Health Sources’ shelf.
Challenges
- Limited Reach: India has a diverse linguistic ecosystem. While health information is made available in more than eight languages, this is not enough to reach everyone in the country, and efforts to reach more people in vernacular languages need to be ramped up. Further, while health content on YouTube drew around 50 billion views in 2023, it is difficult to measure the on-ground outcomes of those views.
- Lack of Digital Literacy: Misinformation on digital platforms cannot be entirely curtailed owing to the way algorithms are designed to enhance user engagement. However, uploading authoritative health information as a solution may not be enough, if users lack awareness about misinformation and the need to critically evaluate and trust only credible sources. In India, this critical awareness remains largely underdeveloped.
Conclusion
Considering that India has over 450 million YouTube users, by far the highest number of any country in the world, the platform has recognised that it can play a transformative role in the country’s digital health ecosystem. To accomplish its mission “to combat the societal threat of medical misinformation,” YouTube will have to continue taking proactive measures. There is scope for strengthening collaborations with Indian public health agencies and trusted public figures, national and regional, to provide credible health information to all. The approach will have to be tailored to India’s vast linguistic diversity by encouraging capacity-building for vernacular creators to produce credible content. Finally, multiple stakeholders will need to come together to promote digital literacy through education campaigns about identifying trustworthy sources.
Sources
- https://indianexpress.com/article/technology/tech-news-technology/youtube-health-dr-garth-graham-interview-9746673/
- https://economictimes.indiatimes.com/news/india/cancer-misinformation-extremely-prevalent-in-india-trust-in-science-medicine-crucial-report/articleshow/115931783.cms?from=mdr
- https://health.youtube/our-mission/
- https://health.youtube/features-application/
- https://backlinko.com/youtube-users
Introduction
The unprecedented rise of social media, challenges with regional languages, and the heavy use of messaging apps like WhatsApp have all contributed to an increase in misinformation in India. False stories spread quickly and can cause significant harm, from political propaganda to health-related mis- and disinformation. Programs that teach people how to use social media responsibly and to check facts are essential, but they do not always engage people deeply. Traditional media literacy programs rely on standard passive learning methods: reading stories, attending lectures, and using fact-checking tools.
Adding game-like features to non-game settings is called "gamification", and it could be a new and interesting way to address this challenge. Gamification engages people by making them active players instead of passive consumers of information. Research shows that interactive learning improves interest, critical thinking, and retention. By turning fact-checking into a game, people can learn to recognise fake news in a safe environment before encountering it in real life. A study by Roozenbeek and van der Linden (2019) showed that playing misinformation games can significantly enhance people's capacity to recognise and avoid false information.
Several misinformation-related games have been successfully implemented worldwide:
- The Bad News Game – This browser-based game by Cambridge University lets players step into the shoes of a fake news creator, teaching them how misinformation is crafted and spread (Roozenbeek & van der Linden, 2019).
- Factitious – A quiz game where users swipe left or right to decide whether a news headline is real or fake (Guess et al., 2020).
- Go Viral! – A game designed to inoculate people against COVID-19 misinformation by simulating the tactics used by fake news peddlers (van der Linden et al., 2020).
For programs to effectively combat misinformation in India, they must consider factors such as the responsible use of smartphones, evolving language trends, and common misinformation patterns in the country. Here are some key aspects to keep in mind:
- Vernacular Languages
Games should be available in Hindi, Tamil, Bengali, Telugu, and other major languages, since rumours spread through different regional and cultural contexts. AI voice interfaces and translation can help bridge literacy gaps. Research shows that people are more likely to engage with and trust information in their native language (Pennycook & Rand, 2019).
- Games Based on WhatsApp
Since WhatsApp is a significant hub for false information, interactive quizzes and chatbot-powered games can educate users directly within the app they use most frequently. A game with a WhatsApp-like interface, where players must decide whether to ignore, fact-check, or forward messages that are going viral, could be particularly effective in India.
- Detecting False Information
In a mobile-friendly game, players can take on the role of reporters or fact-checkers who must verify viral stories, using real-life tools such as reverse image searches or reliable fact-checking websites. Research shows that interactive tasks for spotting fake news make people more aware of it over time (Lewandowsky et al., 2017).
- Reward-Based Participation
Participation could be increased by offering rewards for completing misinformation challenges, such as badges, certificates, or even mobile data incentives, which partnerships with telecom providers could make easier. Reward-based learning has been shown to increase interest and motivation in digital literacy programmes (Deterding et al., 2011).
- Universities and Schools
Educational institutions can help people spot false information by adding game-like elements to their lessons. Hamari et al. (2014) found that students are more likely to participate and retain what they learn when learning includes competitive and interactive elements. Misinformation games can be used in media studies classes at schools and universities to teach students how to check sources, spot bias, and understand the psychological tricks that misinformation campaigns use.
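The quiz mechanics behind games like Factitious, combined with the reward ideas above, can be sketched in a few lines. Everything in this example is an illustrative assumption of ours (the sample headlines, the scoring rule, the badge threshold), not the design of any actual game; it shows only the core loop of judging headlines as real or fake and earning a reward for accuracy.

```python
# Minimal sketch of a Factitious-style quiz round: the player labels each
# headline real or fake and earns points plus a badge for high accuracy.
# Headlines and the badge threshold are illustrative, not from any real game.

QUESTIONS = [
    {"headline": "Government launches new digital literacy portal", "is_fake": False},
    {"headline": "Drinking hot water cures all viral infections", "is_fake": True},
    {"headline": "Forwarded image shows 2024 crash, not a new strike", "is_fake": True},
]

def score_round(answers, questions=QUESTIONS, badge_threshold=0.66):
    """answers: list of booleans, the player's 'this is fake' verdicts.
    Returns (correct_count, badge_earned)."""
    correct = sum(1 for ans, q in zip(answers, questions) if ans == q["is_fake"])
    badge = (correct / len(questions)) >= badge_threshold
    return correct, badge

# Player correctly spots both fakes but mislabels the real headline.
print(score_round([True, True, True]))  # -> (2, True)
```

A WhatsApp chatbot version would present the same decision per forwarded message (ignore, fact-check, or forward) and exchange the badge for the telecom-partnered incentives discussed above.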
What Artificial Intelligence Can Do for Gamification
Artificial intelligence can tailor learning experiences to each player in misinformation games. AI-powered misinformation detection bots could guide participants through scenarios matched to their learning level, ensuring they are consistently challenged. Recent natural language processing (NLP) developments enable AI to identify nuanced misinformation patterns and adjust gameplay accordingly (Zellers et al., 2019). This could be especially helpful in India, where fake news spreads differently depending on language and region.
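The idea of pattern-based detection driving adaptive gameplay can be illustrated with a deliberately simple sketch. This is a toy keyword heuristic, not a real NLP model; the marker phrases, weights, and difficulty rule are all illustrative assumptions, standing in for what a trained classifier would learn.

```python
# Toy heuristic (not a real NLP model): score messages by common
# misinformation markers such as urgency cues and forward requests,
# then adapt game difficulty to the player's performance.
# Marker list and weights are illustrative assumptions.

MARKERS = {
    "forward this": 2,           # chain-message pressure
    "share before deleted": 3,   # artificial urgency
    "doctors won't tell you": 3, # conspiracy framing
    "100% cure": 2,              # miracle-cure claim
    "urgent": 1,
}

def misinformation_score(message):
    """Sum the weights of markers found in the message (case-insensitive)."""
    text = message.lower()
    return sum(w for marker, w in MARKERS.items() if marker in text)

def next_difficulty(score, current_level):
    """Sketch of adaptive gameplay: once the player handles an obvious,
    high-scoring message, serve subtler examples at the next level."""
    return current_level + 1 if score >= 3 else current_level

msg = "URGENT: forward this to all groups, doctors won't tell you this 100% cure!"
print(misinformation_score(msg))  # -> 8
```

An actual game would replace the keyword table with an NLP classifier and localise the marker patterns per language, but the adaptation loop (score the content, adjust the challenge) stays the same.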
Possible Opportunities
Augmented reality (AR) scavenger hunts for misinformation, interactive misinformation events, and educational misinformation tournaments are all examples of games that help fight misinformation. India can help millions, especially young people, think critically and combat the spread of false information by making media literacy fun and interesting. Using Artificial Intelligence (AI) in gamified treatments for misinformation could be a fascinating area of study in the future. AI-powered bots could mimic real-time cases of misinformation and give quick feedback, which would help students learn more.
Problems and Moral Consequences
While gamification is a promising way to fight false information, it also comes with problems that must be considered:
- Ethical Concerns: Games that try to imitate how fake news spreads must ensure players do not learn how to spread false information by accident.
- Scalability: Although worldwide misinformation initiatives exist, developing and expanding localised versions for India's varied linguistic and cultural contexts presents significant challenges.
- Assessing Impact: There is a necessity for rigorous research approaches to evaluate the efficacy of gamified treatments in altering misinformation-related behaviours, keeping cultural and socio-economic contexts in the picture.
Conclusion
A gamified approach can serve as an effective tool in India's fight against misinformation. By integrating game elements into digital literacy programs, it can encourage critical thinking and help people recognize misinformation more effectively. The goal is to scale these efforts, collaborate with educators, and leverage India's rapidly evolving technology to make fact-checking a regular practice rather than an occasional concern.
As technology and misinformation evolve, so must the strategies to counter them. A coordinated and multifaceted approach, one that involves active participation from netizens, strict platform guidelines, fact-checking initiatives, and support from expert organizations that proactively prebunk and debunk misinformation can be a strong way forward.
References
- Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining "gamification". Proceedings of the 15th International Academic MindTrek Conference.
- Guess, A., Nagler, J., & Tucker, J. (2020). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances.
- Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work?—A literature review of empirical studies on gamification. Proceedings of the 47th Hawaii International Conference on System Sciences.
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition.
- Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using “accuracy prompts”. Nature Human Behaviour.
- Roozenbeek, J., & van der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research.
- van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology.
- Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems.