#FactCheck - Viral Photos Falsely Linked to Iranian President Ebrahim Raisi's Helicopter Crash
Executive Summary:
On 20th May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. Images circulated on social media claiming to show the crash site were found to be false. The CyberPeace Research Team’s investigation revealed that these images show the wreckage of a training plane that crashed in Iran's Mazandaran province in 2019 or 2020. Reverse image searches, along with confirmation from the Tehran-based Rokna Press and Ten News, verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we reverse-searched each of the images. All of them, except the image of the blue plane, traced back to an earlier air crash incident. We found a website that had uploaded the viral plane crash images on April 22, 2020.

According to that website, a police training plane crashed in the forests of Mazandaran, near the Swan Motel. We also found the images on another Iranian news outlet, ‘Ten News’.

The photos uploaded to this website were posted in May 2019. The news report reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the viral photos are not from Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
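The reverse-image workflow described above ultimately rests on matching near-duplicate images: search engines fingerprint pictures so that a re-uploaded copy of an old photo can be traced to its original publication. The sketch below is a toy illustration of one common fingerprinting idea, the "average hash", using synthetic 8x8 grayscale grids; it is not the tooling used in this investigation, and the images are stand-ins, not real crash photos.

```python
# Toy illustration of perceptual ("average") hashing, the kind of
# image fingerprinting that underpins reverse-image search.
# Synthetic 8x8 grayscale grids stand in for real images.

def average_hash(pixels):
    """Return a 64-bit hash: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Stand-ins: a "viral" image, a slightly re-compressed archive copy of it,
# and an unrelated image.
viral     = [[(r * 8 + c) % 256 for c in range(8)] for r in range(8)]
archive   = [[min(255, (r * 8 + c) % 256 + 2) for c in range(8)] for r in range(8)]
unrelated = [[(255 - r * c) % 256 for c in range(8)] for r in range(8)]

d_match = hamming(average_hash(viral), average_hash(archive))
d_other = hamming(average_hash(viral), average_hash(unrelated))
print(d_match, d_other)  # the near-duplicate scores far lower than the unrelated image
```

Because the hash keeps only coarse brightness structure, it survives re-compression and resizing, which is why a 2019 photo re-uploaded in 2024 can still be traced back to its first appearance.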
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province in 2019 or 2020 (the exact year remains uncertain). This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading
What are Deepfakes?
A deepfake is essentially a video (or image or audio clip) in which a person's face, body, or voice has been digitally altered so that they appear to be someone else, typically for malicious purposes or to spread false information. Deepfake technology manipulates videos, images, and audio using powerful computers and deep learning. It is used to generate fake news and commit financial fraud, among other wrongdoings: cybercriminals use Artificial Intelligence to overlay a digital composite onto an already-existing video, picture, or audio clip. The term ‘deepfake’ was first coined in 2017 by an anonymous Reddit user who went by the name ‘deepfakes’.
Deepfakes work on a combination of AI and ML, which makes them hard for Web 2.0 applications to detect, and it is almost impossible for a layperson to tell whether an image or video is genuine or has been created using deepfake techniques. In recent times, we have seen a wave of AI-driven tools that have impacted industries and professions across the globe. Deepfakes are often created to spread misinformation. A key difference separates image morphing from deepfakes: image morphing is primarily used for evading facial recognition, whereas deepfakes are created to spread misinformation and propaganda.
Issues Pertaining to Deepfakes in India
Deepfakes are a threat to any nation, as the impact can be devastating in terms of monetary losses, social and cultural unrest, and actions against the sovereignty of India by anti-national elements. Deepfake detection is difficult but not impossible. The following threats/issues are seen to originate from deepfakes:
- Misinformation: One of the biggest issues with deepfakes is misinformation. This was seen during the Russia-Ukraine conflict, wherein a deepfake of Ukraine’s president, Mr Zelensky, surfaced on the internet and caused mass confusion and propaganda-based misappropriation among Ukrainians.
- Instigation against the Union of India: Deepfake poses a massive threat to the integrity of the Union of India, as this is one of the easiest ways for anti-national elements to propagate violence or instigate people against the nation and its interests. As India grows, so do the possibilities of anti-national attacks against the nation.
- Cyberbullying/Harassment: Deepfakes can be used by bad actors to harass and bully people online and to extort money from them.
- Exposure to Illicit Content: Deepfakes can easily be used to create illicit content, which is often seen circulating on online gaming platforms, where children engage the most.
- Threat to Digital Privacy: Deepfakes are created using existing videos. Bad actors often take photos and videos from social media accounts to create deepfakes, which directly threatens the digital privacy of netizens.
- Lack of Grievance Redressal Mechanism: The majority of nations currently lack a concrete policy addressing deepfakes. Hence, it is of paramount importance to establish legal and industry-based grievance redressal mechanisms for victims.
- Lack of Digital Literacy: Despite high internet and technology penetration rates in India, digital literacy lags behind. This is a massive concern because it keeps netizens from understanding the technology, which results in the under-reporting of crimes. Large-scale awareness and sensitisation campaigns need to be undertaken in India to address misinformation and the influence of deepfakes.
How to spot deepfakes?
At first glance, deepfakes look like the original video, but as we move deeper into the digital world, it is pertinent to build deepfake identification into our digital routines and netiquette, so that we stay protected in the future and address this issue before it is too late. The following aspects can be kept in mind while differentiating between a real video and a deepfake:
- Look for facial expressions and irregularities: When differentiating between an original video and a deepfake, always look for irregularities in facial expressions. Unnatural eye movement or a temporary twitch on the face can be a sign that a video is a deepfake.
- Listen to the audio: The audio in a deepfake also has variations, as it is imposed on an existing video, so check whether the sound in a video is in congruence with the actions or gestures shown.
- Pay attention to the background: The easiest way to spot a deepfake is to pay attention to the background. In most deepfakes you can spot irregularities there because the background is usually created using virtual effects, so deepfakes tend to have an element of artificiality in the background.
- Context and Content: Most instances of deepfakes have been focused on creating or spreading misinformation; hence, the context and content of any video are an integral part of differentiating between an original video and a deepfake.
- Fact-Checking: As a basic cyber safety and digital hygiene protocol, one should always fact-check every piece of information they come across on social media. As a preventive measure, always fact-check any information or post before sharing it with others.
- AI Tools: When in doubt, check it out, and never refrain from using deepfake detection tools such as Sentinel, Intel’s real-time deepfake detector FakeCatcher, WeVerify, and Microsoft’s Video Authenticator to analyse videos, combating technology with technology.
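The "look for irregularities" advice above can be given a rough computational shape: many deepfakes exhibit temporal flicker, where a manipulated face region changes abruptly between frames. The toy sketch below flags frames whose brightness jumps sharply from the previous one; the frame values are synthetic stand-ins, and real detectors such as FakeCatcher rely on far richer signals, so treat this purely as an illustration of the idea.

```python
# Toy temporal-consistency check: flag abrupt frame-to-frame jumps in a
# tracked face region, a crude proxy for the flicker some deepfakes show.
# The "frames" here are synthetic average-brightness values, not real video.

def flicker_frames(brightness, threshold=10.0):
    """Return indices of frames whose change from the previous frame
    exceeds `threshold`."""
    return [
        i for i in range(1, len(brightness))
        if abs(brightness[i] - brightness[i - 1]) > threshold
    ]

# Smoothly varying "natural" footage vs. a clip with two sudden jumps.
natural = [100 + i * 0.5 for i in range(30)]
tampered = list(natural)
tampered[10] += 25   # sudden spike in the face region
tampered[20] -= 30   # sudden drop

print(flicker_frames(natural))   # no frames flagged
print(flicker_frames(tampered))  # flags the frames around both jumps
```

The same pattern, measuring a signal per frame and flagging statistical outliers, generalises to the other cues in the list, such as audio-video mismatch or background artefacts.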
Recent Instance
A deepfake video of actress Rashmika Mandanna recently went viral on social media, creating quite a stir. The video showed a woman entering an elevator who looked remarkably like Mandanna. However, it was later revealed that the woman in the video was not Mandanna, but rather, her face was superimposed using AI tools. Some social media users were deceived into believing that the woman was indeed Mandanna, while others identified it as an AI-generated deepfake. The original video was actually of a British-Indian girl named Zara Patel, who has a substantial following on Instagram. This incident sparked criticism from social media users towards those who created and shared the video merely for views, and there were calls for strict action against the uploaders. The rapid changes in the digital world pose a threat to personal privacy; hence, caution is advised when sharing personal items on social media.
Legal Remedies
Although deepfakes are not explicitly recognised by Indian law, they are indirectly addressed by Section 66E of the IT Act, which makes it illegal to capture, publish, or transmit someone's image in the media without that person's consent, thus violating their privacy. The maximum penalty for this violation is a fine of ₹2 lakh or three years in prison. With the DPDP Act coming into force in 2023, the creation of deepfakes directly affects an individual's right to digital privacy and also implicates the IT Intermediary Guidelines, as platforms are required to exercise caution over the dissemination and publication of misinformation through deepfakes. The indirect provisions of the Indian Penal Code, which cover the sale and dissemination of derogatory publications, songs and actions, deception in the delivery of property, cheating and dishonestly inducing the delivery of property, and forgery with the intent to defame, are the only other legal remedies available for deepfakes. Deepfakes must be recognised legally due to the growing power of misinformation. The Data Protection Board and the soon-to-be-established fact-checking body must recognise crimes related to deepfakes and provide an efficient system for filing complaints.
Conclusion
Deepfakes are an outgrowth of the advancements of Web 3.0 and hence are just the tip of the iceberg in terms of the issues and threats posed by emerging technologies. It is pertinent to upskill and educate netizens about the key aspects of deepfakes so that they can stay safe in the future. At the same time, developing and developed nations need to create policies and laws to efficiently regulate deepfakes and to set up redressal mechanisms for victims and industry. As we move ahead, it is pertinent to address the threats originating from emerging technologies and, at the same time, build robust resilience against them.
Introduction
In a setback to the Centre, the Bombay High Court on Friday 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Chronological Overview
- On 6th April 2023, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023 (IT Amendment Rules, 2023). These rules introduced new provisions to establish a fact-checking unit with respect to “any business of the central government”. The amendment was made in exercise of the powers conferred by Section 87 of the Information Technology Act, 2000 (IT Act).
- On 20 March 2024, the Central Government notified the Press Information Bureau (PIB) as FCU under rule 3(1)(b)(v) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023 (IT Amendment Rules 2023).
- The next day, on 21st March 2024, the Supreme Court stayed the Centre's decision notifying the PIB as FCU, considering the pendency of the proceedings before the High Court of Judicature at Bombay. A detailed analysis by CyberPeace of the Supreme Court's stay decision can be accessed here.
- In the latest development, the Bombay High Court on 20th September 2024, struck down the provisions under IT Amendment Rules 2023, which empowered the Central Government to establish Fact Check Units (FCUs) to identify ‘fake and misleading’ information about its business on social media platforms.
Brief Overview of Bombay High Court decision dated 20th September 2024
Justice A.S. Chandurkar was appointed as the third judge after a split verdict in January 2024 by a division bench consisting of Justices Gautam Patel and Neela Gokhale. As the tie-breaker judge, Justice Chandurkar delivered the decision striking down the provisions for setting up a Fact Check Unit under the IT Amendment Rules 2023. Striking down the Centre's proposed fact-check unit provision, Justice Chandurkar also opined that there was no rationale for determining whether information relating to the business of the Central Government was fake, false, or misleading when in digital form while not doing the same when such information was in print. It was also contended that there is no justification for introducing an FCU only in relation to the business of the Central Government. Rule 3(1)(b)(v) has a serious chilling effect on the exercise of the freedom of speech and expression under Article 19(1)(a) of the Constitution, since the communication of the FCU's view will result in intermediaries simply pulling down content for fear of consequences or of losing the safe harbour protection under the IT Act.
Justice Chandurkar held that the expressions ‘fake, false or misleading’ are ‘vague and overbroad’ and that the ‘test of proportionality’ is not satisfied. Rule 3(1)(b)(v) was violative of Articles 14, 19(1)(a), and 19(1)(g) of the Constitution and is “ultra vires”, or beyond the powers, of the IT Act.
Role of Expert Organisations in Curbing Mis/Disinformation and Fake News
In light of the recent developments and the rising incidents of mis/disinformation and fake news, it becomes significantly important that we all stand together in the fight against these challenges. Action against mis/disinformation and fake news should be strengthened by collective efforts; expert organisations like the CyberPeace Foundation play a key role in enabling and encouraging netizens to exercise caution and rely on authenticated sources, rather than relying solely on a government FCU to block content.
Mis/disinformation and fake news should be stopped, identified, and countered by netizens at the very first stage of their spread. The Bombay High Court's decision to strike down the provision for setting up the FCU suggests that the government's intention to address misinformation relating solely to its own business/operations may not have been effectively communicated in the eyes of the judiciary.
It is high time to exercise collective efforts against mis/disinformation and fake news and to support expert organisations actively engaged in proactive measures and campaigns targeting these challenges, specifically in the online information landscape. CyberPeace actively publishes fact-checking reports and insights on prebunking and debunking, conducts expert sessions, and takes various key steps aimed at empowering netizens to build cognitive defences: to recognise suspect information, disregard misleading claims, and prevent further spread, helping to preserve a trustworthy online information landscape.
References:
- https://www.scconline.com/blog/post/2024/09/20/bombay-high-court-it-rules-amendment-2023-fact-check-units-article14-article19-legal-news/#:~:text=Bombay%20High%20Court%3A%20A%20case,grounds%20that%20it%20violated%20constitutional
- https://indianexpress.com/article/cities/mumbai/bombay-hc-strikes-down-it-act-amendment-fact-check-unit-9579044/
- https://www.cyberpeace.org/resources/blogs/supreme-court-stay-on-centres-notification-of-pibs-fact-check-unit-under-it-amendment-rules-2023

Introduction
On the occasion of National Press Day, Union Minister of Information and Broadcasting Ashwini Vaishnaw addressed the Press Council of India on emergent concerns in the digital media and technology landscape. He identified four major challenges facing news media in India: fake news, algorithmic bias, artificial intelligence, and fair compensation. Arguing that platforms do not verify information posted online, which leads to the spread of false and misleading content, he emphasised the need for greater accountability and fairness from Big Tech and called on online platforms to combat misinformation and protect democracy.
Key Concerns Highlighted by Union Minister Ashwini Vaishnaw
- Misinformation: Due to India's unique sensitivities, digital platforms should adopt country-specific responsibilities and metrics. The Minister also questioned the safe harbour principle, which shields platforms from liability for user-generated content.
- Algorithmic Biases: The prioritisation of viral content, which is often divisive, by social media algorithms can have serious implications on societal peace.
- Impact of AI on Intellectual Property: The training of AI on pre-existing datasets presents the ethical challenge of robbing original creators of their rights to their intellectual property.
- Fair compensation: Traditional news media is increasingly facing financial strain since news consumption is shifting rapidly to social media platforms, creating uneven compensation dynamics.
Cyberpeace Insights
- Misinformation: Marked by routine upheavals and moral panics, Indian society is vulnerable to the severe impacts of fake news, including mob violence, political propaganda, health misinformation, and more. Inspired by the EU's Digital Services Act, 2022, and other related legislation addressing hate speech and misinformation, the Minister has called for revisiting the safe harbour protection under Section 79 of the IT Act, 2000. However, any legislation on misinformation must strike a balance between protecting the fundamental rights to freedom of speech and privacy and safeguarding citizens from its harmful effects.
- Algorithmic Biases: Social media algorithms are designed to boost user engagement, since this increases advertisement revenue. This leads to the creation of filter bubbles (exposure only to personalised information online) and echo chambers (interaction only with users whose opinions align with one's worldview). These phenomena induce the radicalisation of views, increase intolerance, fuel polarisation in public discourse, and trigger the spread of more misinformation. Tackling this requires algorithmic design changes, such as disincentivising sensationalism, labelling content, and funding fact-checking networks, to improve transparency.
- Impact of AI on Intellectual Property: AI models are trained on data that may contain copyrighted material. This can lead to a loss of revenue for primary content creators, while the tech companies that own the AI models may benefit disproportionately by re-rendering those original works. Large-scale uptake of AI models will significantly impact fields such as advertising, journalism, and entertainment by disrupting their markets. Managing this requires a push for ethical AI regulations and the protection of original content creators.
Conclusion: Charting a Balanced Path
The socio-cultural and economic fabric of the Indian subcontinent is not only distinct from the rest of the world but has cross-cutting internal diversities, too. Its digital landscape stands at a crossroads as rapid global technological advancements present growing opportunities and challenges. In light of rising incidents of misinformation on social media platforms, it is also crucial that regulators consider framing rules that encourage and mandate content verification mechanisms for online platforms, incentivising them to adopt advanced AI-driven fact-checking tools and other relevant measures. Additionally, establishing public-private partnerships to monitor misinformation trends is crucial to rapidly debunking viral falsehoods. However, ethical concerns and user privacy should be taken into consideration while taking such steps. Addressing misinformation requires a collaborative approach that balances platform accountability, technological innovation, and the protection of democratic values.
Sources
- https://www.indiatoday.in/india/story/news-media-4-challenges-ashwini-vaishnaw-national-press-day-speech-big-tech-fake-news-algorithm-ai-2634737-2024-11-17
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_881
- https://www.legaldive.com/news/digital-services-act-dsa-eu-misinformation-law-propaganda-compliance-facebook-gdpr/691657/
- https://www.fondationdescartes.org/en/2020/07/filter-bubbles-and-echo-chambers/