#FactCheck: Viral AI-generated image falsely shown as Air India Flight AI-171 on fire after the crash
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
In our research to validate the authenticity of the viral image, we conducted a reverse image search and analyzed it using AI-detection tools like Hive Moderation. The image showed clear signs of manipulation, distorted details, and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created and not a real photograph.

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies like The Indian Express and Hindustan Times and confirmed by aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking for all deep-synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards enable users to trace the source and provenance of an item of content.
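To make the idea of an imperceptible marker concrete, here is a deliberately simplified sketch in Python using least-significant-bit (LSB) embedding. This is a toy illustration only: production systems such as SynthID use far more robust, model-based techniques, and LSB marks do not survive compression. The function names and the fake pixel data are invented for this example.

```python
# Toy illustration of imperceptible watermarking: hide a short identifier
# in the least-significant bits of pixel intensity values. Each pixel
# changes by at most 1, which is invisible to the eye.

def embed_watermark(pixels, tag):
    """Embed the bits of `tag` (bytes) into the LSBs of `pixels`."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite only the lowest bit
    return marked

def extract_watermark(pixels, length):
    """Recover `length` bytes from the LSBs of `pixels`."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Example with fake 8-bit pixel intensities:
image = [120, 121, 119, 200, 201, 199, 50, 51] * 10
marked = embed_watermark(image, b"AI")
assert extract_watermark(marked, 2) == b"AI"
```

The fragility of this scheme also illustrates the enforcement problem discussed below: simply re-saving the image with lossy compression would destroy the mark, which is why robust watermarking remains an active research area.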
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: adversaries with expertise can use cropping, editing, or AI tools to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is another issue: embedding and verifying watermarks across billions of items of online content requires huge computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which verify authenticity without sacrificing privacy. These new techniques have the potential to overcome current technical deficiencies and make watermarking more secure.
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or understand the label. This mirrors the broader misinformation problem: it is not sufficient just to flag fake content; users need to be taught how to think critically about the information they consume. Public education campaigns, digital media literacy, and embedding watermarking labels within user-friendly UI elements are necessary to make this technology actually effective.
Balancing Privacy and Transparency
While watermarking serves digital transparency, it also raises privacy concerns. In certain instances, watermarking might require embedding metadata that discloses the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists who use AI tools for creative or informative purposes. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to strike a balance by employing privacy-protecting watermarking strategies that verify the origin of content without revealing personally identifiable data. Cryptographic techniques such as zero-knowledge proofs may help build watermarking systems that guarantee authentication without undermining user anonymity.
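One simple building block for this balance is a salted hash commitment: the public watermark carries only a commitment to the creator's identity, which reveals nothing on its own, yet the creator can later open it to prove origin if they choose. The sketch below illustrates the idea with Python's standard `hashlib`; it is not a zero-knowledge proof (real ZK schemes are far more sophisticated), and the identity string and function names are invented for illustration.

```python
import hashlib
import os

# Privacy-preserving provenance sketch: publish commit(identity, salt)
# instead of the identity itself. Observers learn nothing; the creator
# can later reveal (identity, salt) to prove authorship.

def commit(identity: bytes, salt: bytes) -> str:
    """Salted SHA-256 commitment to an identity."""
    return hashlib.sha256(salt + identity).hexdigest()

def open_commitment(commitment: str, identity: bytes, salt: bytes) -> bool:
    """Check a revealed (identity, salt) pair against the public commitment."""
    return commit(identity, salt) == commitment

salt = os.urandom(16)                        # kept secret by the creator
public_tag = commit(b"journalist-42", salt)  # embedded in content metadata

# Later, only if the creator chooses to reveal themselves:
assert open_commitment(public_tag, b"journalist-42", salt)
assert not open_commitment(public_tag, b"impostor", salt)
```

The design choice here is that disclosure is opt-in: without the salt, the commitment cannot be brute-forced over plausible identities, so anonymity is preserved by default.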
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, AI-amplified misinformation about vaccines, treatments, and public health interventions had a widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish authentic sources from manipulated media and thereby protected public health efforts.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The C2PA framework, developed collaboratively by Adobe, Microsoft, and others, embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed with its Imagen text-to-image model, imperceptibly watermarks AI-generated images and is designed to survive common edits such as compression and resizing. The Partnership on AI (PAI) is also taking a leadership role by building ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks can serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media are an essential step toward a safer and more credible digital world. As synthetic media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance, and accountability grows. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, enforceable, and privacy-friendly. India in particular is at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. By drawing on global lessons, adopting best-in-class watermarking frameworks, and promoting public awareness, the nation can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency
Introduction
Social media platforms have begun to shape the public understanding of history in today’s digital landscape. You may have encountered videos, images, and posts that claim to reveal an untold story about our past. For example, you might have seen a post on your feed with a painted or black-and-white image of a princess, labelled “the most beautiful princess of Rajasthan, who fought countless wars but has been erased from history.” Such emotionally charged narratives spread quickly, without any academic scrutiny or citation, and those who share them often believe them to be true.
Such unverified content may look harmless, but it contributes profoundly to the systematic distortion of historical information. Such misinformation recurs on feeds and becomes embedded in popular memory. It misguides public discourse and undermines scholarly research on the topic, and sometimes it also fuels communal outrage and social tension. It is time to recognise that protecting the integrity of our cultural and historical narratives is not only an academic concern but a legal and institutional responsibility. This is where the role of the Ministry of Culture becomes critical.
Pseudohistorical News Information in India
Fake news and misinformation are frequently disseminated via images, pictures, and videos on messaging applications, a phenomenon referred to derogatorily as “WhatsApp University”. WhatsApp has become India’s favourite method of communication, so users must stay conscious of what they consume from forwarded messages. Academic historians strive to understand the past in its own context, distinct from the present, whereas pseudo-historians manipulate history to serve political agendas. Unfortunately, this wave of pseudo-history is expanding rapidly, with platforms like “WhatsApp University” playing a significant role in amplifying its spread. This has led to an increase in fake historical news and paid journalism. Unlike pseudo-history, academic history is produced by professional historians in academic contexts, adhering to strict disciplinary guidelines, including peer review and expert examination of justifications, assertions, and publications.
How to Identify Pseudo-Historic Misinformation
1. Lack of Credible Sources: There is a lack of reliable primary and secondary sources. Instead, pseudohistorical works depend on hearsay and unreliable eyewitness accounts.
2. Selective Use of Evidence: Misinformative posts present only those facts that support their argument and downplay facts that contradict their assertions.
3. Incorporation of Conspiracy Theories: They often include conspiracy theories that postulate secret groups, suppressed knowledge, or sinister powers influencing historical events. Such hypotheses frequently lack any supporting data.
4. Extravagant Claims: Pseudo-historic tales sometimes present unbelievable assertions about historic persons or events.
5. Lack of Peer Review: Such work is rarely published on credible academic platforms; it circulates instead on social media such as Instagram and Facebook, because it is never submitted for academic publication. Authentic historical research is examined by subject-matter experts.
6. Neglect of Established Historiographical Methods: Such posts lack knowledge of a recognised methodology and procedures, like the critical study of sources.
7. Ideologically Driven Narratives: Political, communal, ideological, and personal opinions are prioritised in such posts. The author begins with a predetermined conclusion rather than a search for the truth.
8. Exploitation of Gaps in the Historical Record: Pseudo-historians often use missing or unclear parts of history to suggest that regular historians are hiding important secrets. They make the story sound more mysterious than it is.
9. Rejection of Scholarly Consensus: Pseudo-historians often reject the views of experts and historians, choosing instead to believe and promote their own fringe theories.
10. Emphasis on Sensationalism: Pseudo-historical works may put more emphasis on sensationalism than academic rigour to pique public interest rather than offer a fair and thorough account of the history.
Legal and Institutional Responsibility
Public opinion is the heart of democracy, and it should not be distorted by misinformation or disinformation; vested interests cannot be allowed to sabotage it. When content concerns academia in particular, it should not be shared unverified and without fact-checking. Such unverified claims can be called out, and action taken, only if the relevant authorities take charge. In India, historical scholarship is overseen by the Indian Council of Historical Research (ICHR), whose stated aim, per its official website, is to “take all such measures as may be found necessary from time to time to promote historical research and its utilisation in the country”. However, it is now essential to modernise the functioning of the ICHR to meet the demands of the digital era. Concerned authorities can run campaigns and awareness programmes to question the validity and sourcing of such misinformative posts. Just as there are fact-checking mechanisms for news, there must also be an institutional push to fact-check and regulate historical content online. The following measures can be taken by authorities to counter such misinformation online:
- Launch a nationwide awareness campaign about historical misinformation.
- Work with scholars, historians, and digital platforms to promote verified content.
- Encourage social media platforms to introduce fact-check labels for historical posts.
- Consider legal frameworks that penalise the deliberate spread of false historical narratives.
History is part of our national heritage, and preserving its accuracy is a matter of public interest. Misinformation and pseudo-history are a combination that misleads the public and weakens the foundation of shared cultural identity. In this digital era, false narratives spread rapidly, and it is important to promote critical thinking, encourage responsible academic work, and ensure that the public has access to accurate and well-researched historical information. Protecting the integrity of history is not just the work of historians — it is a collective responsibility that serves the future of our democracy.
References:
- https://kuey.net/index.php/kuey/article/view/4091
- https://www.drishtiias.com/daily-news-editorials/social-media-and-the-menace-of-false-information

Executive Summary:
We have identified a post addressing a scam email that falsely claims to offer a download link for an e-PAN Card. This deceptive email is designed to mislead recipients into disclosing sensitive financial information by impersonating official communication from Income Tax Department authorities. Our report aims to raise awareness about this fraudulent scheme and emphasize the importance of safeguarding personal data against such cyber threats.

Claim:
Scammers are sending fake emails, asking people to download their e-PAN cards. These emails pretend to be from government authorities like the Income Tax Department and contain harmful links that can steal personal information or infect devices with malware.
Fact Check:
Through our research, we have found that scammers are sending fake emails, posing as the Income Tax Department, to trick users into downloading e-PAN cards from unofficial links. These emails contain malicious links that can lead to phishing attacks or malware infections. Genuine e-PAN services are only available through official platforms such as the Income Tax Department's website (www.incometaxindia.gov.in) and the NSDL/UTIITSL portals. Despite repeated warnings, many individuals still fall victim to such scams. To combat this, the Income Tax Department has a dedicated page for reporting phishing attempts: Report Phishing - Income Tax India. It is crucial for users to stay cautious, verify email authenticity, and avoid clicking on suspicious links to protect their personal information.
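The advice above, verifying a link before clicking, can be partially automated by checking whether a URL's hostname actually belongs to an official domain rather than merely containing an official-looking string. The sketch below uses Python's standard `urllib.parse`; the allowlist is illustrative (only `incometaxindia.gov.in` is named in this article, and the second entry is an assumption), so always confirm current official domains yourself.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official domains. incometaxindia.gov.in is
# cited in the article; incometax.gov.in is an assumed additional portal.
OFFICIAL_DOMAINS = {"incometaxindia.gov.in", "incometax.gov.in"}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's hostname is an official domain
    or a subdomain of one. Substring matches are deliberately rejected,
    since scammers embed official names inside lookalike hostnames."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A genuine-looking subdomain of the official site passes:
assert is_official_link("https://www.incometaxindia.gov.in/Pages/default.aspx")
# A scam URL that merely *contains* the official name fails:
assert not is_official_link("http://incometaxindia.gov.in.scam-site.top/epan")
```

The key design choice is matching on the parsed hostname suffix, not the raw URL string: `incometaxindia.gov.in.scam-site.top` contains the official name but its registrable domain is `scam-site.top`, which is exactly the trick phishing emails rely on.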

Conclusion:
The emails currently in circulation claiming to provide e-PAN card downloads are fraudulent and should not be trusted. These deceptive messages often impersonate government authorities and contain malicious links that can result in identity theft or financial fraud. Clicking on such links may compromise sensitive personal information, putting individuals at serious risk. To ensure security, users are strongly advised to verify any such communication directly through official government websites and avoid engaging with unverified sources. Additionally, any phishing attempts should be reported to the Income Tax Department and also to the National Cyber Crime Reporting Portal to help prevent the spread of such scams. Staying vigilant and exercising caution when handling unsolicited emails is crucial in safeguarding personal and financial data.
- Claim: Fake emails claim to offer e-PAN card downloads.
- Claimed On: Social Media
- Fact Check: False and Misleading