#FactCheck - Debunked: AI-Generated Image Circulating as April Solar Eclipse Snapshot
Research Wing
Innovation and Research
PUBLISHED ON
Apr 18, 2024
Executive Summary:
An AI-generated image purporting to show the April 8 solar eclipse has been circulating on social media as though it were a genuine photograph of the astronomical event. Despite claims of its authenticity, CyberPeace's analysis showed that the image was created using Artificial Intelligence image-generation algorithms. The total solar eclipse on April 8 was observable only from those places on the North American continent that lay in the path of totality, with partial visibility elsewhere. NASA live-streamed the eclipse for people outside the path of totality. The spread of false information about rare celestial events underscores the need to rely on trustworthy sources such as NASA for accurate information.
Claims:
An image circulating on social networks purports to be a genuine photograph of the solar eclipse of April 8.
After encountering the claim, we first ran a keyword search to check whether NASA had posted any image resembling the viral photo, or reported any celestial event that could explain it, on its official social media accounts or website. The total eclipse on April 8 was visible only from the parts of North America that lay in the eclipse's path; the sky above Mazatlán, Mexico, was the first to witness it. A partial eclipse was visible to those outside the path of totality.
Next, we ran the image through the AI image detection tool by Hive Moderation, which rated it 99.2% likely to be AI-generated.
We then applied another AI image detection tool, Isitai, which found the image to be 96.16% AI-generated.
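Combining the two detectors' scores into a single verdict can be sketched in a few lines of Python. The averaging scheme and the 90% threshold below are illustrative assumptions for this sketch, not part of either tool's API:

```python
def aggregate_verdict(scores: dict[str, float], threshold: float = 90.0) -> str:
    """Average per-tool 'AI-generated' confidence scores (in percent)
    and return a verdict. The threshold is an illustrative assumption."""
    mean = sum(scores.values()) / len(scores)
    return "likely AI-generated" if mean >= threshold else "inconclusive"

# Scores reported by the two tools used in this fact check
scores = {"Hive Moderation": 99.2, "Isitai": 96.16}
print(aggregate_verdict(scores))  # likely AI-generated
```

In practice a fact-checker would weigh each tool's known accuracy rather than take a plain average; the point here is only that multiple independent detectors agreeing above a high threshold strengthens the verdict.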
Based on these AI detection tools, we concluded that the claims made by various social media users are fake and misleading: the viral image is AI-generated, not a real photograph.
Conclusion:
Hence, the image circulating on the internet as a real photo of the April 8 eclipse is AI-generated. Despite claims to the contrary, our analysis showed that the photo was created using an artificial intelligence algorithm. The total eclipse was not visible everywhere in North America, only along the eclipse path, with partial visibility elsewhere. AI detection tools allowed us to establish definitively that the image is fake. When discussing rare celestial phenomena, it is essential to rely on information from trusted sources such as NASA.
Claim: A viral image of a solar eclipse purporting to be a real photograph of the celestial event on April 8.
Ministry of Electronics and Information Technology (MeitY) Announces that the Central Government Will Certify Permissible Online Games
In a recent update to the notification released by the Ministry of Electronics and Information Technology (MeitY) on April 6, 2023, MeitY has asked gaming entities to establish self-regulatory organisations (SROs) within 30 days, or at most 90 days, of the date of the notification. MeitY has further announced that the central government will certify which online games are permissible until the SROs are officially established. The intention behind establishing SROs is to assist intermediaries, such as Apple or Google, in determining what constitutes a permitted online game; however, setting up the SROs will take two to three months. In the meantime, the central government will step in and determine what is a permissible online game.
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 & Intermediary Guidelines and Digital Media Ethics Code Amendment Rules, 2023
By enacting these rules, the Indian government has taken decisive action to protect Indian gamers and their financial resources against scams and fraud. The rules also serve to promote responsible gaming while preventing young and vulnerable users from being exposed to indecent or abusive content.
The Amendment Rules introduce the concept of a “Permissible online real money game,” a designation reserved for games that have passed a review conducted by a self-regulatory body (SRB). The Amendment Rules also require Online Gaming Intermediaries to ensure that they do not permit any third party to host non-permissible online real money games on their platforms. This development is important because it makes it possible to distinguish between legitimate and illicit real money games.
The Amendment Rules define an online gaming provider as an “intermediary” under the Information Technology Act of 2000, creating a separate classification called ‘Online Gaming Intermediary’.
Central government to certify what is an ‘Online Permissible Game’
The industry has been wondering which games count as wagering and will be banned. Until the SROs are officially established, the government will in the interim certify what is a permissible game and what does or does not constitute wagering. Games that involve elements of wagering will be barred. The new regulations prohibit wagering on any outcome, whether in skill-based or chance-based games; hence, gaming applications that involve wagering or betting will be barred.
Self-Regulatory Organizations (SROs)
According to the new regulations by the Ministry of Electronics and Information Technology (MeitY), online gaming intermediaries must establish a Self-Regulatory Body (SRO) to approve games offered to users over the Internet. The SRO must be registered with the Ministry and develop a framework to ensure compliance with the objectives of the IT Rules 2021. An ‘online game’ can be registered by the SRO if it meets specific criteria: the game is offered by an online gaming intermediary that is a member of the self-regulatory body, the game does not contain any content harmful to India’s interests, and the game complies with all relevant Indian regulations. If these requirements are met, the intermediary can display a visible registration mark indicating its registration with the self-regulatory body.
Conclusion
MeitY found that, with the rapid growth of the gaming industry, the real money gaming (RMG) sector had to be regulated properly. The rules framed must be properly implemented to stop gambling, betting, and wagering apps.
The IT Rules 2021, along with the Amendment Rules 2023, are designed to take concrete action to curb the proliferation of gambling, betting, and wagering apps in India. These rules empower the government to issue directives banning specific apps that facilitate or promote such activities. An app ban directive allows the government to take decisive action by blocking access to these apps, making them unavailable for download or use within the country. This measure is aimed at curbing the negative impact of gambling, betting, and wagering on individuals and society, including issues related to addiction, financial loss, and illegal activities. The rules aim to actively combat the spread and influence of such apps and provide a safer online environment for gaming users.
The self-regulatory body in the context of online gaming will have the authority to grant membership to gaming intermediaries, register online games, develop a framework for regulation, interact with the Central Government, address user complaints, report instances of non-compliance, and take necessary actions to safeguard online gaming users.
With the increasing reliance on digital technologies in the banking industry, cyber threats have become a significant concern. Cyberlaw plays a crucial role in safeguarding the banking sector from cybercrimes and ensuring the security and integrity of financial systems.
The banking industry has witnessed a rapid digital transformation, enabling convenient services and greater access to financial resources. However, this digitalisation also exposes the industry to cyber threats, necessitating the formulation and implementation of effective cyber law frameworks.
Recent Trends in the Banking Industry
Digital Transformation: The banking industry has embraced digital technologies, such as mobile banking, internet banking, and financial apps, to enhance customer experience and operational efficiency.
Open Banking: The concept of open banking has gained prominence, enabling data sharing between banks and third-party service providers, which introduces new cyber risks.
How Cyber Law Helps the Banking Sector
The banking sector and cybercrime are closely intertwined due to the mass digitisation of banking services. Thanks to QR codes, UPI, and online banking payments, India is now home to 40% of global online banking transactions. Some critical aspects of cyber law in the banking sector are as follows:
Data Protection: Cyberlaw mandates banks to implement robust data protection measures, including encryption, access controls, and regular security audits, to safeguard customer data.
Incident Response and Reporting: Cyberlaw requires banks to establish incident response plans, promptly report cyber incidents to regulatory authorities, and cooperate in investigations.
Customer Protection: Cyberlaw enforces regulations related to online banking fraud, identity theft, and unauthorised transactions, ensuring that customers are protected from cybercrimes.
Legal Framework: Cyberlaw provides a legal foundation for digitalisation in the banking sector, assuring customers that regulations protect their digital transactions and data.
Cybersecurity Training and Awareness: Cyberlaw encourages banks to conduct regular training programs and create awareness among employees and customers about cyber threats, safe digital practices, and reporting procedures.
RBI Guidelines
The RBI, as India’s central banking institution, has issued comprehensive guidelines to enhance cyber resilience in the banking industry. These guidelines address various aspects, including:
Technology Risk Management
Cyber Security Framework
IT Governance
Cyber Crisis Management Plan
Incident Reporting and Response
Recent Trends in Banking Sector Frauds and the Role of Cyber Law
Phishing Attacks: Cyberlaw helps banks combat phishing attacks by imposing penalties on perpetrators and mandating preventive measures like two-factor authentication.
Insider Threats: Cyberlaw regulations emphasise the need for stringent access controls, employee background checks, and legal consequences for insiders involved in fraudulent activities.
Ransomware Attacks: Cyberlaw frameworks assist banks in dealing with ransomware attacks by enabling legal actions against hackers and promoting preventive measures, such as regular software updates and data backups.
Master Directions on Cyber Resilience and Digital Payment Security Controls for Payment System Operators (PSOs)
The Reserve Bank of India (RBI) has issued a draft of the Master Directions on Cyber Resilience and Digital Payment Security Controls for Payment System Operators (PSOs). The directions provide guidelines and requirements for PSOs to improve the safety and security of their payment systems, with a focus on cyber resilience. These guidelines cover PSOs such as mobile payment service providers like Paytm and digital wallet payment platforms.
Here are the highlights:
The Directions aim to improve the safety and security of payment systems operated by PSOs by providing a framework for overall information security preparedness, with an emphasis on cyber resilience.
The Directions apply to all authorised non-bank PSOs.
PSOs must ensure adherence to these Directions by unregulated entities in their digital payments ecosystem, such as payment gateways, third-party service providers, vendors, and merchants.
The PSO’s Board of Directors is responsible for ensuring adequate oversight over information security risks, including cyber risk and cyber resilience. A sub-committee of the Board may be delegated with primary oversight responsibilities.
PSOs must formulate a Board-approved Information Security (IS) policy that covers roles and responsibilities, measures to identify and manage cyber security risks, training and awareness programs, and more.
PSOs should have a distinct Board-approved Cyber Crisis Management Plan (CCMP) to detect, contain, respond, and recover from cyber threats and attacks.
A senior-level executive, such as a Chief Information Security Officer (CISO), should be responsible for implementing the IS policy and the cyber resilience framework and assessing the overall information security posture of the PSO.
PSOs need to define Key Risk Indicators (KRIs) and Key Performance Indicators (KPIs) to identify potential risk events and assess the effectiveness of security controls. The sub-committee of the Board is responsible for monitoring these indicators.
PSOs should conduct a cyber risk assessment when launching new products, services, technologies, or significant changes to existing infrastructure or processes.
PSOs should implement various baseline information security measures and controls, including inventory management, identity and access management, network security, application security life cycle, security testing, vendor risk management, data security, patch and change management life cycle, incident response, business continuity planning, API security, and employee awareness and training.
PSOs should ensure that payment transactions involving debit to accounts conducted electronically are permitted only through multi-factor authentication, except where explicitly permitted/relaxed.
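The multi-factor authentication the Directions mandate for electronic debits is commonly implemented with time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only the Python standard library; the secret and parameters below are the RFC's published test values, used here purely for illustration:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA-1).

    The counter is the number of `step`-second intervals since the epoch.
    """
    counter = timestamp // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 s, 8 digits
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

In a real payment flow the bank's server and the customer's authenticator app share the secret and each compute the code independently; the debit is authorised only when the codes match within the allowed time window.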
Conclusion
The relationship between cyber law and the banking industry is crucial in ensuring a secure and trusted digital environment. Recent trends indicate that cyber threats are evolving and becoming more sophisticated. Compliance with cyber law provisions and adherence to guidelines such as those provided by the RBI is essential for banks to protect themselves and their customers from cybercrimes. By embracing robust cyber law frameworks, the banking industry can foster a resilient ecosystem that enables innovation while safeguarding the interests of all stakeholders or users.
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. Using the prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With millions of searches conducted every second, and Google handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithm bias, refers to biased results that occur when human biases skew the training data or the AI algorithm itself, distorting outputs and potentially producing harmful outcomes. This bias can be algorithmic, data-driven, or interpretive, and it can emerge from user history, geographical data, or broader societal biases embedded in training data.
Common harms include excluding certain groups of people from opportunities because of AI bias. In healthcare, underrepresentation of women or minority groups in the data can skew predictive AI algorithms. While AI helps streamline automated resume screening to identify ideal candidates, the information requested and the answers screened out can yield biased outcomes when the training dataset or other input data is biased.
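The effect of a skewed dataset on screening outcomes can be made concrete with a toy selection-rate check. All numbers below are invented for illustration; the "four-fifths" comparison is a common rule of thumb for flagging disparate impact, not a feature of any particular screening system:

```python
from collections import Counter

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

# Toy resume-screening outcomes: group A is well represented in the
# training data, group B is underrepresented (illustrative numbers).
records = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 3 + [("B", False)] * 7)
rates = selection_rates(records)
print(rates)  # {'A': 0.8, 'B': 0.3}

# Four-fifths rule of thumb: flag if B's rate is below 80% of A's
print(rates["B"] / rates["A"] < 0.8)  # True -> potential disparate impact
```

Auditing an AI screening pipeline with simple checks like this is one practical way to surface the dataset biases described above before they translate into lost opportunities.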
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
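The filter-bubble feedback loop described above can be simulated in a few lines: a recommender that over-weights whatever the user last clicked quickly collapses the feed toward a single topic. This is a deliberately simplified model, not how any real search ranker works:

```python
def simulate_feed(topics, clicks):
    """Return each topic's share of the feed after a naive recommender
    doubles the weight of a topic every time the user clicks it."""
    weights = {t: 1.0 for t in topics}
    for clicked in clicks:
        weights[clicked] *= 2.0  # reinforce the clicked topic
    total = sum(weights.values())
    return {t: round(w / total, 3) for t, w in weights.items()}

topics = ["news", "sports", "science"]
# A user who clicks "sports" five times in a row
shares = simulate_feed(topics, ["sports"] * 5)
print(shares)
```

After only five clicks, "sports" dominates the simulated feed (over 90% of it) while the other topics nearly vanish, which is the bubble effect in miniature: each click makes the next diverse result less likely to be shown.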
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep pace, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU’s Artificial Intelligence Act aims to establish a regulatory framework that categorises AI systems by risk and enforces strict standards for transparency, accountability, and fairness, especially for high-risk AI applications, which may include search engines. In 2023, India proposed the Digital India Act, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.