#FactCheck - Viral image circulating on social media depicts a natural optical illusion from Epirus, Greece.
Executive Summary:
A viral image circulating on social media is claimed to show a natural optical illusion from Epirus, Greece. However, fact-checking found that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team traced it through a reverse image search and analysed it with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon from Epirus, Greece, is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media depicts a natural optical illusion from Epirus, Greece. Users are sharing it on X (formerly known as Twitter), YouTube, and Facebook, and it is spreading rapidly across social media.

Similar Posts:


Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic media check: the Hive AI Detection tool rated the image as 100% AI-generated. We then looked for the source of the image through a reverse image search, which led to similar posts linking to an Instagram account named hamidreza.edalatnia, whose creator posts visuals of the same style.
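For readers who want to see what such an automated check looks like in practice, below is a minimal sketch of submitting an image to an AI-content detection service and reading back a confidence score. The endpoint URL, header, and response field are hypothetical placeholders, not the actual Hive API; they only illustrate the general pattern.

```python
# Illustrative sketch only: the URL, headers, and response field below are
# hypothetical placeholders, not the actual Hive AI Detection API.
import requests

def check_ai_generated(image_path: str, api_key: str) -> float:
    """Submit an image to a (hypothetical) AI-content detection endpoint
    and return the reported likelihood that it is AI-generated (0.0-1.0)."""
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://api.example-detector.com/v1/detect",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Placeholder field name; a real service documents its own response schema.
    return result.get("ai_generated_score", 0.0)

if __name__ == "__main__":
    score = check_ai_generated("viral_image.jpg", "YOUR_API_KEY")
    print(f"Reported likelihood of AI generation: {score:.0%}")
```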

We searched for the viral image in this account and confirmed that it was created by this artist.

The photo was posted on 10th December 2023, and the artist mentioned that it was generated using the Stable Diffusion AI tool. Hence, the claim that the viral image shows a natural optical illusion from Epirus, Greece, is misleading.
Conclusion:
The image claiming to show a natural optical illusion in Epirus, Greece, is not genuine. It is an AI-generated artwork created by Hamidreza Edalatnia, an artist from Iran, using the Stable Diffusion tool. Hence, the claim is false.

Executive Summary:
A video has gone viral on social media, claiming to have been taken during the recent earthquake in Taiwan. However, fact-checking reveals it to be old footage: the video is from September 2022, when Taiwan was struck by another earthquake of magnitude 7.2. A reverse image search and comparison with older footage establish that the viral video is from the 2022 earthquake and not the recent 2024 event. Several news outlets covered the 2022 incident, providing additional confirmation of the video's origin.

Claims:
News is circulating on social media about the recent earthquake in Taiwan and Japan. A post on X states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.

Similar Posts:


Fact Check:
We started our investigation by watching the video thoroughly and dividing it into frames. We then performed a reverse image search on those frames, which led us to an X (formerly Twitter) post where a user had shared the same viral video on September 18, 2022. Notably, the post carried the caption:
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”
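For anyone who wants to reproduce the first step of this check, the video can be split into still frames that are then fed into a reverse image search. Below is a minimal sketch using OpenCV; the file name and frame interval are placeholders.

```python
# Minimal sketch: split a video into still frames for reverse image search.
# "viral_video.mp4" and the frame interval are placeholders.
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every n-th frame of the video as a JPEG and return how many were saved."""
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = extract_frames("viral_video.mp4", ".", every_n=30)
    print(f"Saved {count} frames for reverse image search.")
```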

The same viral video was posted by several news media outlets in September 2022.

The viral video was also shared on September 18, 2022, on the NDTV news channel, as shown below.

Conclusion:
To conclude, the viral video claimed to depict the 2024 Taiwan earthquake is actually from September 2022. A careful comparison of the old footage with the new posts makes it clear that the video does not show the recent earthquake as stated. Hence, the viral video is misleading. It is important to validate information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake & Misleading; the video actually shows an incident from 2022.

Introduction
Deepfake technology, a term that combines "deep learning" and "fake", uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can make false information look credible, there are concerns about its misuse, including identity theft and the spread of disinformation. Cybercriminals leverage AI tools and technologies for malicious activities and for committing various cyber frauds, and the misuse of advanced technologies such as AI, deepfakes, and voice clones has given rise to new cyber threats.
India: A Top Destination for Deepfake Attacks
According to the 2023 Identity Fraud Report from Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become significant participants in the Asia-Pacific identity fraud landscape, with India's fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most affected by the misuse of deepfake technology. The report also notes that deepfake technology features in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud becomes a growing concern in the region.
How Deepfake Technology Works
Deepfakes are a fascinating yet worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and have worked their way into the fabric of our digital lives. The attraction is irresistible, and the consequences are enormous.
Deep Learning Algorithms
Deepfake systems examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially generative adversarial networks. By learning from and mimicking gestures, speech patterns, and facial expressions, these algorithms extract the information needed to reproduce them, and generative models then create material that blends seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a concern, and sophisticated detection techniques are increasingly necessary to separate real content from manipulated content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a generator and a discriminator, the two networks engage in an ongoing cycle of competition: the generator aims to create fake material, such as realistic voice patterns or facial expressions, while the discriminator assesses how authentic the generated content is. This continuous cycle of creation and evaluation improves the output over time, as the discriminator becomes more perceptive and the generator adapts to produce ever more convincing content.
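To make the generator-discriminator loop concrete, here is a heavily simplified PyTorch sketch. It trains on random placeholder tensors rather than real face data and omits everything (convolutional architectures, large datasets, long training) that makes production deepfakes convincing; it only illustrates the adversarial training pattern described above.

```python
# Simplified GAN training loop (illustrative only, random placeholder data).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image vector looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, image_dim)      # placeholder for real training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Real deepfake pipelines apply this same adversarial loop to face images or audio with far larger models, which is what makes the results so lifelike.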
Effect on Community
The widespread use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote the ethical use of such tools, including strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they can spread instability and make it difficult for the public to understand the true state of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. As it becomes harder to tell fake content from authentic content, it becomes easier for attackers to deceive people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Priyanka's deepfake takes a different approach from other examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance unchanged but modifies her voice and replaces real interview quotes with fabricated commercial lines. The deceptive video shows Priyanka promoting a product and talking about her annual income, highlighting the worrying evolution of deepfake technology and its potential impact on prominent personalities.
Actions Considered by Authorities
A PIL was filed requesting the Delhi High Court to block access to websites that produce deepfakes. The petitioner's counsel argued that the government should at the very least establish guidelines to hold individuals accountable for misusing deepfake and AI technology. He also proposed that websites be required to label information produced through AI as such and be prevented from producing unlawful content. A division bench highlighted how complicated the problem is and suggested that the government (Centre) arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon, we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools like reverse image searches, keeping up with the latest deepfake trends, and rigorously fact-checking against reputable media sources. Important steps to improve resilience against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can manage the problems presented by deepfake technology effectively and mindfully.
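One lightweight aid alongside reverse image search is perceptual hashing, which flags whether a suspect frame is a near-duplicate of known original footage. The sketch below uses the open-source imagehash library; it is not a deepfake detector, only a quick provenance check, and the file names are placeholders.

```python
# Quick provenance check with perceptual hashing (not a deepfake detector).
# File names are placeholders; requires: pip install pillow imagehash
from PIL import Image
import imagehash

def near_duplicate(path_a: str, path_b: str, threshold: int = 10) -> bool:
    """Return True if two images are perceptually similar (small Hamming distance)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # Hamming distance between 64-bit perceptual hashes
    print(f"Hamming distance: {distance}")
    return distance <= threshold

if __name__ == "__main__":
    if near_duplicate("suspect_frame.jpg", "known_original.jpg"):
        print("Frames are near-duplicates; the 'new' clip likely reuses old footage.")
    else:
        print("No close match found for this pair.")
```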
Conclusion
Deepfake technology, powered by advanced artificial intelligence, produces extraordinarily lifelike computer-generated content, raising both creative and ethical questions. Its misuse presents major challenges, such as identity theft and the propagation of misleading information, as demonstrated by recent examples in India, including the deepfake video involving Priyanka Chopra. Countering this danger requires developing critical thinking skills, using detection strategies such as analysing audio quality and facial expressions, and keeping up with current trends. A comprehensive strategy that combines fact-checking, preventive tactics, strong security policies, advanced detection technologies, ethical AI development, awareness-raising, and open cooperation is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/
Introduction: The Internet’s Foundational Ideal of Openness
The Internet was built as a decentralised network to foster open communication and global collaboration. Unlike traditional media or state infrastructure, no single government, company, or institution controls the Internet. Instead, it has historically been governed by a consensus of the multiple communities, like universities, independent researchers, and engineers, who were involved in building it. This bottom-up, cooperative approach was the foundation of Internet governance and ensured that the Internet remained open, interoperable, and accessible to all. As the Internet began to influence every aspect of life, including commerce, culture, education, and politics, it required a more organised governance model. This compelled the rise of the multi-stakeholder internet governance model in the early 2000s.
The Rise of Multistakeholder Internet Governance
Representatives from governments, civil society, technical experts, and the private sector congregated at the United Nations World Summit on Information Society (WSIS), and adopted the Tunis Agenda for the Information Society. Per this Agenda, internet governance was defined as “… the development and application by governments, the private sector, and civil society in their respective roles of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.” Internet issues are cross-cutting across technical, political, economic, and social domains, and no one actor can manage them alone. Thus, stakeholders with varying interests are meant to come together to give direction to issues in the digital environment, like data privacy, child safety, cybersecurity, freedom of expression, and more, while upholding human rights.
Internet Governance in Practice: A History of Power Shifts
While the idea of democratising Internet governance is a noble one, the Tunis Agenda has been criticised for reflecting geopolitical asymmetries and relegating the roles of technical communities and civil society to the sidelines. Throughout the history of the internet, certain players have wielded more power in shaping how it is managed. Accordingly, internet governance can be said to have undergone three broad phases.
In the first phase, the Internet was managed primarily by technical experts in universities and private companies, which contributed to building and scaling it up. The standards and protocols set during this phase are in use today and make the Internet function the way it does. This was the time when the Internet was a transformative invention and optimistically hailed as the harbinger of a utopian society, especially in the USA, where it was invented.
In the second phase, the ideal of multistakeholderism was promoted, in which all those who benefit from the Internet work together to create processes that will govern it democratically. This model also aims to reduce the Internet’s vulnerability to unilateral decision-making, an ideal that has been under threat because this phase has seen the growth of Big Tech. What started as platforms enabling access to information, free speech, and creativity has turned into a breeding ground for misinformation, hate speech, cybercrime, Child Sexual Abuse Material (CSAM), and privacy concerns. The rise of generative AI only compounds these challenges. Tech giants like Google, Meta, X (formerly Twitter), OpenAI, Microsoft, Apple, etc. have amassed vast financial capital, technological monopoly, and user datasets. This gives them unprecedented influence not only over communications but also culture, society, and technology governance.
The anxieties surrounding Big Tech have fed into the third phase, with increasing calls for government regulation and digital nationalism. Governments worldwide are scrambling to regulate AI, data privacy, and cybersecurity, often through processes that lack transparency. An example is India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which was passed without parliamentary debate. Governments are also pressuring platforms to take down content through opaque takedown orders. Laws like the UK’s Investigatory Powers Act, 2016, are criticised for giving the government the power to indirectly mandate encryption backdoors, compromising the strength of end-to-end encryption systems. Further, the internet itself is fragmenting into the “splinternet” amid rising geopolitical tensions, in the form of Russia’s “sovereign internet” or through China’s Great Firewall.
Conclusion
While multistakeholderism is an ideal, Internet governance is a playground of contesting power relations in practice. As governments assert digital sovereignty and Big Tech consolidates influence, the space for meaningful participation of other stakeholders has been negligible. Consultation processes have often been symbolic. The principles of openness, inclusivity, and networked decision-making are once again at risk of being sidelined in favour of nationalism or profit. The promise of a decentralised, rights-respecting, and interoperable internet will only be fulfilled if we recommit to the spirit of Multi-Stakeholder Internet Governance, not just its structure. Efficient internet governance requires that the multiple stakeholders be empowered to carry out their roles, not just talk about them.
References
- https://www.newyorker.com/magazine/2024/02/05/can-the-internet-be-governed
- https://www.internetsociety.org/wp-content/uploads/2017/09/ISOC-PolicyBrief-InternetGovernance-20151030-nb.pdf
- https://itp.cdn.icann.org/en/files/government-engagement-ge/multistakeholder-model-internet-governance-fact-sheet-05-09-2024-en.pdf
- https://nrs.help/post/internet-governance-and-its-importance/
- https://daidac.thecjid.org/how-data-power-is-skewing-internet-governance-to-big-tech-companies-and-ai-tech-guys/