#FactCheck - Viral Photos Falsely Linked to Iranian President Ebrahim Raisi's Helicopter Crash
Executive Summary:
On 20th May 2024, Iranian President Ebrahim Raisi and several others died in a helicopter crash in northwestern Iran. Images circulated on social media claiming to show the crash site have been found to be false. The CyberPeace Research Team's investigation revealed that these images show the wreckage of a training plane crash in Iran's Mazandaran province in 2019 or 2020. Reverse image searches and confirmations from Tehran-based Rokna Press and Ten News verified that the viral images originated from an incident involving a police force's two-seater training plane, not the recent helicopter crash.
Claims:
The images circulating on social media claim to show the site of Iranian President Ebrahim Raisi's helicopter crash.



Fact Check:
After receiving the posts, we reverse-searched each of the images and traced them to a 2020 air crash; only the blue plane visible in one viral image did not match at first. We then found a website that had uploaded the viral plane crash images on April 22, 2020.

According to that website, a police training plane crashed in the forests of Mazandaran, in the Swan Motel area. We also found the images on another Iranian news outlet, ‘Ten News’.

The photos uploaded to this website were posted in May 2019. The report reads, “A training plane that was flying from Bisheh Kolah to Tehran. The wreckage of the plane was found near Salman Shahr in the area of Qila Kala Abbas Abad.”
Hence, we concluded that the recent viral photos are not of Iranian President Ebrahim Raisi's helicopter crash; the claim is false and misleading.
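Reverse image search engines typically rely on perceptual hashing: visually similar images map to similar fingerprints, so a re-uploaded copy can be matched to its original even after resizing or recompression. The sketch below is a simplified toy illustration of that idea (a difference hash over raw pixel values), not the actual tooling used in this investigation:

```python
# Toy difference-hash (dHash) sketch: each bit records whether a pixel is
# brighter than its right-hand neighbour. Similar images produce similar
# bit strings, so a small Hamming distance suggests the same source photo.

def dhash(pixels):
    """Return a bit list: 1 where a pixel is brighter than its right neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x5 "image" of grayscale values and a slightly altered copy,
# standing in for a recompressed re-upload of the same photo.
original = [[10, 20, 30, 25, 15],
            [12, 22, 28, 24, 14],
            [11, 19, 31, 26, 16],
            [13, 21, 29, 23, 13]]
copy = [row[:] for row in original]
copy[2][2] = 18  # simulate mild recompression/editing noise

print(hamming(dhash(original), dhash(copy)))  # prints 2 (of 16 bits)
```

Real systems hash downscaled images and search large indexes of published photos, which is how a 2019/2020 crash photo can be traced back to its first appearance online.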
Conclusion:
The images being shared on social media as evidence of the helicopter crash involving Iranian President Ebrahim Raisi are misattributed. They actually show the aftermath of a training plane crash that occurred in Mazandaran province in 2019 or 2020 (the exact date is uncertain). This has been confirmed through reverse image searches that traced the images back to their original publication by Rokna Press and Ten News. Consequently, the claim that these images are from the site of President Ebrahim Raisi's helicopter crash is false and misleading.
- Claim: Viral images of Iranian President Raisi's fatal chopper crash.
- Claimed on: X (Formerly known as Twitter), YouTube, Instagram
- Fact Check: Fake & Misleading

Introduction
As our experiments with Generative Artificial Intelligence (AI) continue, companies and individuals look for new ways to incorporate and capitalise on it, including big tech companies betting on its potential through investments. This process also sheds light on how such innovations are carried out and used, and how they affect other stakeholders. Google’s AI Overview feature has raised concerns among website publishers and regulators alike. Recently, Chegg, a US-based tech education company that provides online resources for high school and college students, filed a lawsuit against Google alleging abuse of its monopoly over search.
Legal Background
Google’s AI Overview/Search Generative Experience (SGE) is a feature that incorporates AI into its standard search tool and summarises search results. The summary is presented at the top of the results page, above links to published websites. Although the sources of the information are linked, they are partially hidden, and it is difficult to tell which of the AI's claims come from which link; to find out, the user must take the additional step of clicking a drop-down box. Individual publishers and companies like Chegg argue that such summaries divert their potential traffic and cause losses, as they continue to bid ever higher for Google's advertising services only to have their target audience discouraged from visiting their websites. What is unique about Chegg's lawsuit is that it is based on antitrust law rather than copyright law, which Google has dealt with previously. In August 2024, a US federal judge ruled that Google holds an illegal monopoly over the internet search and search text advertising markets, and by November the US Department of Justice (DOJ) had filed its proposed remedies. These included giving advertisers and publishers more control over their data flowing through Google's products, opening Google's search index to the rest of the market, and imposing public oversight over Google's AI investments. Currently, the DOJ has emphasised dismantling the search monopoly through structural separation, i.e., divesting Google of Chrome. The company is slated to defend itself before DC District Court Judge Amit Mehta starting April 20, 2025.
CyberPeace Insights
As per a report by Statista (Global market share of leading search engines 2015-2025), Google, the market leader, held a search traffic share of around 89.62 per cent. Its advertising services account for the majority of its revenue, which amounted to 305.63 billion U.S. dollars in 2023. The inclusion of the AI feature is undoubtedly changing how we search for things online. For users, it offers an immediate, convenient scan of general information about the looked-up subject; for website publishers, it raises concerns about lost ad revenue owing to fewer impressions and clicks. Even though links to sources are mentioned, they are usually buried. Such a search mechanism weakens incentives on both ends: for users to explore various viewpoints, as people are now satisfied with the first few results that pop up, and for creators and publishers to produce new content and earn an income from it. The result may be a shift towards passive consumption rather than active, genuine searching for information.
Conclusion
AI might make life more convenient, but in this case it might also take away small businesses' finances and the fruits of their hard work. Regulators, publishers, and users must continue asking such critical questions to hold big tech giants accountable, whilst not compromising publishers' creations and publications.
References
- https://www.washingtonpost.com/technology/2024/05/13/google-ai-search-io-sge/
- https://www.theverge.com/news/619051/chegg-google-ai-overviews-monopoly
- https://economictimes.indiatimes.com/tech/technology/google-leans-further-into-ai-generated-overviews-for-its-search-engine/articleshow/118742139.cms?from=mdr
- https://www.nytimes.com/2024/12/03/technology/google-search-antitrust-judge.html
- https://www.odinhalvorson.com/monopoly-and-misuse-googles-strategic-ai-narrative/
- https://cio.economictimes.indiatimes.com/news/artificial-intelligence/google-leans-further-into-ai-generated-overviews-for-its-search-engine/118748621
- https://www.techpolicy.press/the-elephant-in-the-room-in-the-google-search-case-generative-ai/
- https://www.karooya.com/blog/proposed-remedies-break-googles-monopoly-antitrust/
- https://getellipsis.com/blog/googles-monopoly-and-the-hidden-brake-on-ai-innovation/
- https://www.statista.com/statistics/266249/advertising-revenue-of-google/#:~:text=Google:%20annual%20advertising%20revenue%202001,local%20products%20are%20more%20preferred.
- https://www.statista.com/statistics/1381664/worldwide-all-devices-market-share-of-search-engines/
- https://www.techpolicy.press/doj-sets-record-straight-of-whats-needed-to-dismantle-googles-search-monopoly/

Introduction
In an era of digital trust and technological innovation, artificial intelligence has added a new dimension to how people communicate and how they create and consume content. Like any powerful tool, however, AI can be misused, with serious consequences. One recent, dark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. The scheme, which drew in millions of reais, shows how readily AI-generated content can be turned to criminal ends.
Scam in Motion
Brazil's federal police state that the scheme has been in circulation since 2024, using AI-generated video and images to make the ads appear genuine. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals deliberately scaled the operation around small losses per victim, an approach investigators dubbed "statistical immunity". Because each victim lost only a few dollars, most never filed a complaint, allowing the scheme to continue unchecked. Over time, authorities estimate, the group gathered over 20 million reais ($3.9 million) through this elaborate con.
The scam came to light when a victim reported that an Instagram advertisement featuring a deepfake video of Gisele Bündchen recommending a skincare company was false. The video was well produced and convincing. Further investigation uncovered a network of deceptive social media pages, payment gateways, and laundering channels spread across five states in Brazil.
The Role of AI and Deepfakes in Modern Fraud
This is among the first large-scale cases in Brazil where AI-generated deepfakes have been used to perpetrate financial fraud. Deepfake technology, powered by machine learning algorithms, can realistically mimic human appearance and speech and has become increasingly accessible and sophisticated. Where expertise and significant computing resources were once required, an online tool or app now suffices.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar, trusted face, a celebrity known for integrity and success. The human brain is wired to trust certain visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing terrain of AI-enabled misinformation: from financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms can be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment is likely to shape platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people." Meta claims to use advanced detection mechanisms, trained review teams, and user tools for reporting violations. Yet the persistence of such scams shows that enforcement mechanisms still lag behind the pace and scale of AI-based deception.
Why These Scams Succeed
There are many reasons for the success of these AI-powered scams.
- Trust Due to Familiarity: Human beings tend to believe anything put forth by a known individual.
- Micro-Fraud: Keeping the amount taken from each victim small keeps complaints rare.
- Speed of Content Creation: Criminals use AI tools to generate new ads faster than platforms can detect and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networking platforms, compounding the problem.
- Absence of Public Awareness: Most users still cannot discern manipulated media, especially when high-quality deepfakes come into play.
Wider Implications on Cybersecurity and Society
The Brazilian case is but a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities for corporate fraud, the possibilities for abuse are endless.
Moreover, as cybercriminals adopt generative AI, law enforcement faces new obstacles in attribution, evidence validation, and digital forensics. Distinguishing the authentic from the manipulated now demands forensic AI tools, while attackers deploy generative counterparts of their own, fuelling a technological arms race between the two sides.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team has encouraged the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that appear "too good to be true" and double-check URLs for authenticity before sharing any personal information.
Individually, a few simple steps go a long way in reducing risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
Collaboration will be the way ahead for governments and technology companies. Investing in AI-based detection systems, cooperating on international law enforcement, and building capacity for digital literacy programs will enable us to stem this rising tide of synthetic media scams.
Conclusion
The Gisele Bündchen deepfake case in Brazil serves as a wake-up call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In this new digital frontier, the line between authenticity and manipulation grows thinner by the day.
Keeping the public safe in this environment will certainly require strong cybersecurity measures, but it will demand equal contributions of vigilance, awareness, and ethical responsibility. Deepfakes are not only a technological problem but a societal one, demanding global cooperation, media literacy, and accountability at every level of the digital ecosystem.

Introduction
The much-awaited DPDP Rules were finally published in the official Gazette on 3rd January 2025 for consultation. The draft Digital Personal Data Protection Rules, 2025 (DPDP Rules) invite objections and suggestions from stakeholders, which can be submitted on MyGov (https://mygov.in) by 18th February 2025.
DPDP Rules at a Glance
- Processing of Children's Data: The draft rules state that ‘A Data Fiduciary shall adopt appropriate technical and organisational measures to ensure that verifiable consent of the parent is obtained before the processing of any personal data of a child’. This means that children below 18 will need parental consent to create social media accounts.
- The identity and age of the parent can be verified through reliable details of identity and age already available with the Data Fiduciary, through voluntarily provided identity proof, or through a virtual token mapped to the same. Data Fiduciaries are also required to exercise due diligence in checking that the individual identifying themselves as the parent is an adult who is identifiable, if required, in connection with compliance with any law for the time being in force in India. Additionally, the government will extend exemptions from these provisions on the processing of children's data to educational institutions and child welfare organisations.
- Processing of Personal Data Outside India: The draft rules specify that the transfer of personal data outside India, whether it is processed within the country or outside in connection with offering goods or services to individuals in India, is permitted only if the Data Fiduciary complies with the conditions prescribed by the Central Government through general or specific orders.
- Intimation of Personal Data Breach: On becoming aware of a personal data breach, the Data Fiduciary must promptly notify the affected Data Principals in a clear and concise manner through their user account or registered communication method. This notification should include a description of the breach (nature, extent, timing, and location), potential consequences for the Data Principal, measures taken or planned to mitigate risks, recommended safety actions for the Data Principal, and contact information of a representative to address queries. Additionally, the Data Fiduciary must inform the Board without delay, providing details of the breach, its likely impact, and initial findings. Within 72 hours (or a longer period allowed by the Board upon request), the Data Fiduciary must submit updated information, including the facts and circumstances of the breach, mitigation measures, findings about the cause, steps to prevent recurrence, and a report on notifications given to affected Data Principals.
- Data Protection Board: The draft rules propose establishing the Data Protection Board, which will function as a digital office, enabling remote hearings, and will hold powers to investigate breaches, impose penalties, and perform related regulatory functions.
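As a hedged illustration of the breach-intimation duty above, the notification contents the draft rules require can be modelled as a simple data structure, and the 72-hour deadline for the updated report to the Board as a small helper. The field names, types, and contact address below are my own sketch, not an official schema from the rules:

```python
# Illustrative model of a DPDP-style breach notification (field names are
# hypothetical; the required contents are paraphrased from the draft rules).
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BreachNotification:
    description: str          # nature, extent, timing, and location of the breach
    likely_consequences: str  # potential impact on the Data Principal
    mitigation_measures: str  # steps taken or planned to reduce risk
    recommended_actions: str  # safety steps the Data Principal should take
    contact: str              # representative able to answer queries

def board_update_deadline(became_aware: datetime,
                          extension: timedelta = timedelta()) -> datetime:
    """Updated details are due to the Board within 72 hours of becoming aware
    of the breach, unless the Board allows a longer period on request."""
    return became_aware + timedelta(hours=72) + extension

notice = BreachNotification(
    description="Unauthorised access to the user database on 10 Jan 2025",
    likely_consequences="Possible exposure of email addresses",
    mitigation_measures="Credentials rotated; affected access revoked",
    recommended_actions="Reset your password and watch for phishing",
    contact="dpo@example.com",  # hypothetical contact address
)
print(board_update_deadline(datetime(2025, 1, 10, 9, 0)))  # 2025-01-13 09:00:00
```

The two-stage structure mirrors the rule: an immediate intimation with initial findings, followed by the fuller 72-hour report covering cause, mitigation, and notifications given to affected Data Principals.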
Journey of Digital Personal Data Protection Act, 2023
The foundation for a single statute on data protection was laid in 2017 in the famous ‘Puttaswamy judgment,’ also well recognised as the Aadhaar judgment. In that case, ‘privacy’ was recognised as intrinsic to the right to life and personal liberty guaranteed by Article 21 of the Constitution of India, making the ‘Right to Privacy’ a fundamental right. In the landmark Puttaswamy ruling, the apex court of India also stressed the need for a comprehensive data protection law.
Six years and several draft bills later, the Union Cabinet approved the Digital Personal Data Protection Bill (DPDP) on 5th July 2023. The bill was tabled in the Lok Sabha on 3rd August 2023, passed by the Lok Sabha on 7th August and by the Rajya Sabha on 9th August, and received the President's assent on 11th August 2023, giving India the ‘Digital Personal Data Protection Act, 2023’. This is a significant development with the potential to bring major improvements to online privacy and to how platforms handle digital personal data.
The Digital Personal Data Protection Act, 2023 is designed to protect individuals' digital personal data. It aims to ensure compliance by Data Fiduciaries and imposes specific obligations on both Data Principals and Data Fiduciaries. The Act promotes consent-based data collection practices and establishes the Data Protection Board to oversee compliance and address grievances. It also provides for penalties of up to ₹250 crore in the event of a data breach. However, despite being passed by Parliament in 2023, the Act has not yet taken effect, since its rules and regulations are still being finalised.
Conclusion
It is heartening to see that the Ministry of Electronics and Information Technology (MeitY) has finally released the draft of the much-awaited DPDP Rules for consultation with stakeholders. While the draft has certain positive aspects, there is still room to address gaps, and multiple aspects of the draft rules require attention. The public consultation, including inputs from tech platforms, is likely to produce critical feedback on several fronts. One key area of interest will be the requirement of verifiable parental consent, which is likely to attract recommendations for a balanced approach that maintains children's safety while keeping the consent mechanism workable. The provisions permitting government access to personal data on grounds of national security are also expected to face scrutiny. After the consultation process, the proposed rules will be taken up for finalisation after 18th February 2025. The move towards a robust data protection law in India signals a significant step toward enhancing trust and accountability in the digital ecosystem. However, its success will hinge on effective implementation, clear compliance mechanisms, and the adaptability of stakeholders to this evolving regulatory landscape.