#FactCheck - Debunked: AI-Generated Image Circulating as April Solar Eclipse Snapshot
Executive Summary:
An image purporting to show the April 8 solar eclipse has been spreading on social media, but it was created by AI and is not a real photograph of the astronomical event. Despite claims of its authenticity, CyberPeace's analysis showed that the image was produced using AI image-generation algorithms. The total solar eclipse of April 8 was observable only from locations on the North American continent that lay within the path of totality, with partial visibility elsewhere. NASA provided a live broadcast of the eclipse for people outside the path of totality. The spread of false information about rare celestial events underscores the need to rely on trustworthy sources such as NASA for accurate information.
Claims:
An image making the rounds on social networks purports to be a real photograph of the solar eclipse of April 8.
Fact Check:
On receiving the claim, we first ran a keyword search to check whether NASA had posted any image resembling the viral photo, or reported any celestial event that could have produced it, on its official social media accounts or website. The total eclipse of April 8 was experienced only in those parts of North America that lay along the eclipse path; the sky above Mazatlan, Mexico, was the first to witness it. A partial eclipse was visible to those outside the path of totality.
Next, we ran the image through Hive Moderation's AI image detection tool, which found it to be 99.2% AI-generated.
Following that, we applied another AI image detection tool, Isitai, which found the image to be 96.16% AI-generated.
With the help of AI detection tools, we came to the conclusion that the claims made by different social media users are fake and misleading. The viral image is AI-generated and not a real photograph.
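For readers who want to reproduce this kind of check programmatically, the sketch below shows how an image could be submitted to an AI-content-detection service over HTTP. It is a minimal illustration only: the endpoint URL, authentication header and response field are hypothetical placeholders, not the documented APIs of Hive Moderation or Isitai.

```python
# Minimal sketch: submitting an image to an AI-content-detection service.
# The endpoint, API key and response field below are illustrative placeholders,
# not the documented Hive Moderation or Isitai APIs.
import requests

DETECTION_URL = "https://ai-detector.example/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                   # hypothetical credential

def ai_generated_probability(image_path: str) -> float:
    """Upload an image and return the service's reported probability that it is AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.992}
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = ai_generated_probability("viral_eclipse.jpg")
    print(f"Reported AI-generated probability: {score:.1%}")
```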
Conclusion:
Hence, the image circulating on the internet as a real photo of the April 8 eclipse is AI-generated. Despite claims to the contrary, the analysis showed that the photo was created using an artificial intelligence algorithm. The total eclipse was not visible everywhere in North America, but only along the eclipse path, with partial visibility elsewhere. Using AI detection tools, we were able to establish definitively that the image is fake. When discussing rare celestial phenomena, it is important to rely on information provided by trusted sources such as NASA.
- Claim: A viral image of a solar eclipse claimed to be a real photograph of the celestial event on April 8
- Claimed on: X, Facebook, Instagram, website
- Fact Check: Fake & Misleading
Related Blogs
Executive Summary:
The rise in cybercrime targeting vulnerable individuals, particularly students and their families, has reached alarming levels. Impersonation scams, where fraudsters pose as Law Enforcement Officers, have become increasingly sophisticated, exploiting fear, urgency, and social stigma. This report delves into recent incidents of ransom scams involving fake CBI officers, highlighting the execution methods, psychological impact on victims, and preventive measures. The goal is to raise public awareness and equip individuals with the knowledge needed to protect themselves from such fraudulent activities.
Introduction:
Cybercriminals are evolving their tactics, with impersonation and social engineering at the forefront. Scams involving fake law enforcement officers have become rampant, preying on the fear of legal repercussions and the desire to protect loved ones. This report examines incidents where scammers impersonated CBI officers to extort money from families of students, emphasizing the urgent need for awareness, verification, and preventive measures.
Case Study:
This case study explains how scammers impersonate law enforcement officers to extort money from students' families.
Targets receive calls from scammers posing as CBI officers. The fraudsters mostly target students' families, using sophisticated impersonation and emotional manipulation tactics. In the cases we studied, the targets received calls from unknown international numbers falsely claiming that the students, along with their friends, were involved in a fabricated rape case. The calls were placed during school or college hours, when it is particularly difficult and chaotic for parents to reach their children, adding to the panic and sense of urgency. The scammers manipulated the parents by stating that, because of the students' clean records, they had not been formally arrested but would face severe legal consequences unless a sum of money was paid immediately.
Although in these specific cases the parents did not pay, many parents in our country fall victim to such scams, paying large sums out of fear and desperation to protect their children's futures. Playing on the fear of legal repercussions, social stigma, and potential damage to the students' reputations, the scammers used high-pressure tactics to force compliance.
These incidents may result in significant financial losses, emotional trauma, and a profound loss of trust in communication channels and authorities, underscoring the urgent need for awareness, verification of authority, and prompt reporting of such scams to prevent further victimisation.
Modus Operandi:
- Caller ID Spoofing: The scammer used an unknown number and spoofing techniques to mimic a legitimate law enforcement authority.
- Fear Induction: The fraudster played on the family's fear of social stigma, manipulating them into compliance through emotional blackmail.
Analysis:
Our research found that the unknown international numbers used in these scams are not genuine subscriber numbers but disposable "puppet" numbers, often used for prank calls and fraudulent activities. The incidents also raise concerns about data breaches, as the scammers accurately recited the students' details, including their names and their parents' information, which added a layer of credibility and increased the pressure on the victims.
Impact on Victims:
- Financial and Psychological Losses: The family may face substantial financial losses, coupled with emotional and psychological distress.
- Loss of Trust in Authorities: Such scams undermine trust in official communication and law enforcement channels.
- Exploitation of Fear and Urgency: Scammers prey on emotions such as fear, urgency, and social stigma to manipulate victims.
- Sophisticated Impersonation Techniques: Caller ID spoofing, virtual/temporary numbers, and impersonation of law enforcement officers add credibility to the scam.
- Lack of Verification: Victims often do not verify the caller's identity, leading to successful scams.
- Significant Psychological Impact: Beyond financial losses, these scams cause lasting emotional trauma and distrust in institutions.
Recommendations:
- Cross-Verification: Always cross-verify with official sources before acting on such claims; contact the official numbers listed on trusted government websites to verify any claims made by callers posing as law enforcement.
- Promote Awareness: Educational institutions should conduct regular awareness programs to help students and families recognize and respond to scams.
- Encourage Prompt Reporting: Reporting such incidents to authorities can help track scammers and prevent future cases. Encourage victims to report incidents promptly to local authorities and cybercrime units.
- Enhance Public Awareness: Continuous public awareness campaigns are essential to educate people about the risks and signs of impersonation scams.
- Educational Outreach: Schools and colleges should include Cybersecurity awareness as part of their curriculum, focusing on identifying and responding to scams.
- Parental Guidance and Support: Parents should be encouraged to discuss online safety and scam tactics with their children regularly, fostering a vigilant mindset.
Conclusion:
The rise of impersonation scams targeting students and their families is a growing concern that demands immediate attention. By raising awareness, encouraging verification of claims, and promoting proactive reporting, we can protect vulnerable individuals from falling victim to these manipulative and harmful tactics. It is high time for the authorities, educational institutions, and the public to collaborate in combating these scams and safeguarding our communities. Strengthening data protection measures and enhancing public education on the importance of verifying claims can significantly reduce the impact of these fraudulent schemes and prevent further victimisation.
Executive Summary
A misleading advertisement circulating on social media promises attractive prizes such as the iPhone 15, AirPods and smartwatches in the name of the Indian e-commerce platform ‘Myntra’. This “Myntra - Festival Gifts” scam aims to lure unsuspecting users into a series of redirects and fake interactions designed to compromise their personal information and devices. It is important to stay vigilant against such misleadingly attractive offers. In this report, the Research Wing of CyberPeace explains the series of events triggered when the link is clicked. Through this knowledge, we aim to raise awareness and empower users to guard themselves against deceptive offers designed to scam them.
False Claim
The widely shared WhatsApp message claims that Myntra is offering a range of high-value prizes, including the latest iPhone 15, AirPods and various smartwatches, as part of a Festival Gifts promotion. The campaign invites users to click on the link provided and take a short quiz to become eligible for the prize.
The Deceptive Scheme
- The link in the social media post is tailored to work only on mobile devices; once opened, users are taken through a chain of redirects (a sketch for tracing such redirect chains follows this list).
- On reaching the landing page, users are greeted with Myntra's "Big Fashion Festival" branding and logo, which gives an impression of authenticity.
- Next, a simple quiz asks basic questions about the user's shopping experience with Myntra, their age, and gender.
- At the bottom of the quiz there is a comment section showing fabricated comments from users who have supposedly received prizes, intended to make the offer look genuine.
- After completing the quiz, users are presented with a Spin-to-Win wheel to win the prize.
- After winning, a congratulatory message is displayed which says that the user has won an iPhone 15.
- The final step requires the user to share the campaign over WhatsApp in order to claim the prize.
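As noted in the first point above, the link resolves only on mobile devices and passes through a chain of redirects before reaching the scam page. The sketch below shows one way an analyst might trace such a chain and identify the final landing domain; the URL and User-Agent string are placeholders rather than the actual scam link.

```python
# Minimal sketch: tracing a suspicious link's redirect chain to find the final
# landing domain. The URL and User-Agent below are placeholders.
from urllib.parse import urlparse
import requests

SUSPICIOUS_URL = "https://short-link.example/festival-gift"        # placeholder
MOBILE_UA = "Mozilla/5.0 (Linux; Android 13) AppleWebKit/537.36"   # mimic a mobile device

def trace_redirects(url: str) -> None:
    """Print each hop in the redirect chain and the domain the link finally lands on."""
    resp = requests.get(
        url,
        headers={"User-Agent": MOBILE_UA},
        allow_redirects=True,
        timeout=15,
    )
    for hop in resp.history:
        print(f"{hop.status_code} -> {hop.headers.get('Location')}")
    print(f"Final landing domain: {urlparse(resp.url).netloc}")

if __name__ == "__main__":
    trace_redirects(SUSPICIOUS_URL)
```

Comparing the final landing domain against the brand's official domain (here, myntra.com) is usually enough to flag the campaign as a phishing attempt.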
Analyzing the Fraudulent Campaign
- The use of Myntra's branding and the promise of exclusive, high-value prizes are designed to attract users' interest.
- The fake comments and social proof elements aim to create a false sense of legitimacy and widespread participation, making the offer seem more credible.
- The series of redirects, quizzes, and Spin-to-Win mechanics are tactics to keep users engaged and increase the likelihood of them falling for the scam.
- The final step of sharing the post on WhatsApp is a way for the scammers to spread the campaign further and compromise more victims. By sharing the link over WhatsApp, users become unwitting accomplices, helping the scammers reach an even bigger audience.
- The primary objectives of such scams are to gather users' personal information and potentially gain access to their devices. By luring users with the promise of exclusive gifts and creating a false sense of legitimacy, the scammers aim to exploit user trust and compromise their data, leading to potential identity theft, financial fraud, or the installation of potentially unwanted software.
- We also cross-checked and, as of now, found no credible source or official notification confirming any such offer advertised by Myntra.
- Domain Analysis: A close look at the viral message shows that the scammers display myntra.com in the URL text, but the actual link takes the user to a different domain; the campaign is hosted on a third-party domain instead of Myntra's official website, which raised suspicion. This is a common way of deceiving users into falling for a phishing scam. WHOIS information reveals that the domain was registered only recently, on 8 April 2024, just a few days before the campaign appeared, and the cybercriminals used Cloudflare to mask the actual IP address of the fraudulent website. The key WHOIS fields are listed below, followed by a short sketch of how such a lookup can be reproduced programmatically.
- Domain Name: MYTNRA.CYOU
- Registry Domain ID: D445770144-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-04-08T03:27:58.0Z
- Creation Date: 2024-04-08T02:58:14.0Z
- Registry Expiry Date: 2025-04-08T23:59:59.0Z
- Registrar: West263 International Limited
- Registrant State/Province: Delhi
- Registrant Country: IN
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
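The WHOIS record above can be retrieved programmatically as well. The following is a minimal sketch, assuming the third-party python-whois package (installed with `pip install python-whois`); it prints the same fields that flagged this domain as suspicious, in particular the very recent creation date.

```python
# Minimal sketch: pulling registration details for a suspicious domain.
# Assumes the third-party `python-whois` package (pip install python-whois).
import whois

def inspect_domain(domain: str) -> None:
    """Print the WHOIS fields most useful for spotting freshly registered scam domains."""
    record = whois.whois(domain)
    print("Registrar:   ", record.registrar)
    print("Created:     ", record.creation_date)
    print("Expires:     ", record.expiration_date)
    print("Name servers:", record.name_servers)

if __name__ == "__main__":
    # A domain registered only days before a "promotion" appears is a strong red flag.
    inspect_domain("mytnra.cyou")
```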
CyberPeace Advisory and Best Practices
- Do not open messages received on social platforms that appear suspicious or unsolicited; your own discretion is your first line of defence.
- Falling prey to such scams could compromise your entire device, potentially granting attackers unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more. Keep your digital environment protected against such attacks.
- Never reveal sensitive data such as login credentials or banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking on links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- To confirm the truthfulness of offers and messages, go directly to the official sources and companies; verify the authenticity of alluring offers before taking any action.
Conclusion:
The “Myntra - Festival Gifts” scam is a form of manipulation in which fraudsters exploit users' trust in a popular e-commerce platform. It is equally crucial to equip users with knowledge of fraud tactics such as brand impersonation, fake social proof and engagement-driven schemes. We must remain alert and stand firm against cyber attacks: be careful, verify information, and spread awareness to help create a safe online environment for all users.
Introduction
The spread of information in the quickly changing digital age presents both advantages and difficulties. The phrases "misinformation" and "disinformation" are commonly used in conversations concerning information inaccuracy. It's important to counter such prevalent threats, especially in light of how they affect countries like India. It becomes essential to investigate the practical ramifications of misinformation/disinformation and other prevalent digital threats. Like many other nations, India has had to deal with the fallout from fraudulent internet actions in 2023, which has highlighted the critical necessity for strong cybersecurity safeguards.
The Emergence of AI Chatbots: OpenAI's ChatGPT and Google's Bard
The launch of OpenAI's ChatGPT in November 2022 was a major turning point in the AI space, inspiring the creation of a rival chatbot, Google's Bard (launched in 2023). These chatbots represent a significant breakthrough in artificial intelligence (AI), producing replies by drawing on information gathered from huge datasets and driven by Large Language Models (LLMs). Similarly, AI image generators that use diffusion models trained on existing datasets attracted a great deal of interest in 2023.
Deepfake Proliferation in 2023
Deepfake technology's proliferation in 2023 contributed to misinformation/disinformation in India, affecting politicians, corporate leaders, and celebrities. Some of these fakes were used for political purposes, while others were used to create pornographic or entertainment content. Social turmoil, political instability, and financial damage were among the outcomes. The lack of technical countermeasures made detection and prevention difficult, allowing synthetic content to spread widely.
Challenges of Synthetic Media
Problems with synthetic media, especially AI-generated or manipulated audio-video content, proliferated widely in India during 2023. These included political manipulation, identity theft, disinformation, legal and ethical issues, security risks, difficulties with identification, and threats to media integrity. The consequences ranged from financial deception and the dissemination of false information to the swaying of elections and the intensification of intercultural conflicts.
Biometric Fraud Surge in 2023
Biometric fraud in India, especially through the Aadhaar-enabled Payment System (AePS), has become a major threat in 2023. Due to the AePS's weaknesses being exploited by cybercriminals, many depositors have had their hard-earned assets stolen by fraudulent activity. This demonstrates the real effects of biometric fraud on those who have had their Aadhaar-linked data manipulated and unauthorized access granted. The use of biometric data in financial systems raises more questions about the security and integrity of the nation's digital payment systems in addition to endangering individual financial stability.
Government strategies to counter digital threats
- The Indian Union Government has sent a warning to the country's largest social media platforms, highlighting the importance of exercising caution when spotting and responding to deepfake and false material. The advice directs intermediaries to delete reported information within 36 hours, disable access in compliance with IT Rules 2021, and act quickly against content that violates laws and regulations. The government's dedication to ensuring the safety of digital citizens was underscored by Union Minister Rajeev Chandrasekhar, who also stressed the gravity of deepfake crimes, which disproportionately impact women.
- The government has recently come up with an advisory to social media intermediaries to identify misinformation and deepfakes and to make sure of the compliance of Information Technology (IT) Rules 2021. It is the legal obligation of online platforms to prevent the spread of misinformation and exercise due diligence or reasonable efforts to identify misinformation and deepfakes.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were amended in 2023, bringing the online gaming industry under a set of obligations. These include not hosting harmful or unverified online games, not promoting games without approval from the self-regulatory body (SRB), labelling real-money games with a verification mark, educating users about deposit and winning policies, setting up a quick and effective grievance redressal process, requesting user information, and forbidding the offering of credit or financing for real-money gaming. These steps are intended to guarantee ethical and transparent behaviour throughout the online gaming industry.
- With an emphasis on Personal Data Protection, the government enacted the Digital Personal Data Protection Act, 2023. It is a brand-new framework for digital personal data protection which aims to protect the individual's digital personal data.
- The "Cyber Swachhta Kendra" (Botnet Cleaning and Malware Analysis Centre) is part of the Government of India's Digital India initiative under the Ministry of Electronics and Information Technology (MeitY), created to foster a secure cyberspace. It tackles cybersecurity through malware analysis and botnet identification, working with antivirus software providers and internet service providers to establish a safer digital environment.
Strategies by Social Media Platforms
Various social media platforms, such as YouTube and Meta, have reformed their policies on misinformation and disinformation, reflecting a comprehensive strategy for combating deepfake and misinformation/disinformation content on their networks. YouTube prioritizes removing content that violates its policies, reducing recommendations of questionable information, endorsing reliable news sources, and supporting reputable creators. It uses established facts and expert consensus to counter misrepresentation. Enforcement combines content reviewers and machine learning to quickly remove policy-violating material, and policies are designed in partnership with external experts and creators. To improve the overall quality of information available to users, the platform also lets users flag material, places a strong emphasis on media literacy, and prioritizes providing context.
Meta’s policies address different misinformation categories, aiming for a balance between expression, safety, and authenticity. Content directly contributing to imminent harm or political interference is removed, with partnerships with experts for assessment. To counter misinformation, the efforts include fact-checking partnerships, directing users to authoritative sources, and promoting media literacy.
Promoting ‘Tech for Good’
By 2024, the vision for "Tech for Good" will have expanded to include programs that enable people to understand the ever-complex digital world and promote a more secure and reliable online community. The emphasis is on using technology to strengthen cybersecurity defenses and combat dishonest practices. This entails encouraging digital literacy and providing users with the knowledge and skills to recognize and stop false information, online dangers, and cybercrimes. Furthermore, the focus is on promoting and exposing effective strategies for preventing cybercrime through cooperation between citizens, government agencies, and technology businesses. The intention is to employ technology's good aspects to build a digital environment that values security, honesty, and moral behaviour while also promoting innovation and connectedness.
Conclusion
In the evolving digital landscape, difficulties are presented by false information powered by artificial intelligence and the misuse of advanced technology by bad actors. Notably, there are ongoing collaborative efforts and progress in creating a secure digital environment. Governments, social media corporations, civil societies and tech companies have shown a united commitment to tackling the intricacies of the digital world in 2024 through their own projects. It is evident that everyone has a shared obligation to establish a safe online environment with the adoption of ethical norms, protective laws, and cybersecurity measures. The "Tech for Good" goal for 2024, which emphasizes digital literacy, collaboration, and the ethical use of technology, seems promising. The cooperative efforts of people, governments, civil societies and tech firms will play a crucial role as we continue to improve our policies, practices, and technical solutions.
References:
- https://news.abplive.com/fact-check/deepfakes-ai-driven-misinformation-year-2023-brought-new-era-of-digital-deception-abpp-1651243
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445