#FactCheck: An image claims to show Sunita Williams with Trump and Elon Musk after her return from space.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as AI-generated and therefore fake. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports confirm that such a meeting took place. The image appears to be a digital fabrication designed to mislead viewers.
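For readers who want a quick first-pass check of their own before relying on a detection service, here is a minimal sketch that simply inspects an image's embedded metadata with the Pillow library. The file name is a placeholder, and this is not the Hive Moderation analysis described above, only a rough sanity check: AI-generated images often carry little or no camera metadata, though its absence alone proves nothing.

```python
# Minimal sketch: inspect an image's metadata as a first-pass check.
# Requires Pillow (pip install Pillow). "viral_image.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    img = Image.open(path)
    print(f"Format: {img.format}, size: {img.size}, mode: {img.mode}")

    exif = img.getexif()
    if not exif:
        # Many AI-generated or re-encoded images carry no EXIF data at all.
        print("No EXIF metadata found - treat the image with extra caution.")
        return

    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to readable names
        print(f"{tag}: {value}")

if __name__ == "__main__":
    inspect_metadata("viral_image.jpg")
```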

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark suggest digital manipulation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

The World Wide Web was created as a portal for communication, connecting people across great distances. It began with electronic mail, which gave way to instant messaging and let people converse and interact in real time from afar. The new paradigm, however, is the Internet of Things: machines communicating with one another. A wearable gadget can now unlock the front door when its owner arrives home and message the air conditioner so that it switches on. This is IoT.
WHAT EXACTLY IS IoT?
The term ‘Internet of Things’ was coined in 1999 by Kevin Ashton, a computer scientist who put Radio Frequency Identification (RFID) chips on products to track them through the supply chain while he worked at Procter & Gamble (P&G). By the time the iPhone launched in 2007, there were already more connected devices than people on the planet.
Fast forward to today, and we live in a more connected world than ever, so much so that even our handheld devices and household appliances can connect and communicate through a vast network built to transfer and receive data between devices. There are currently more IoT devices than users in the world, and according to the WEF’s State of the Connected World report, there will be more than 40 billion such devices by 2025, recording data so it can be analyzed.
IoT finds use in many parts of our lives. It has helped businesses streamline their operations, reduce costs, and improve productivity. IoT also helped during the Covid-19 pandemic, with devices that could help with contact tracing and wearables that could be used for health monitoring. All of these devices are able to gather, store and share data so that it can be analyzed. The information is gathered according to rules set by the people who build these systems.
APPLICATION OF IoT
IoT is used by both consumers and the industry.
Some widely used examples of CIoT (Consumer IoT) are wearables such as health and fitness trackers, smart rings with near-field communication (NFC), and smartwatches, which gather a great deal of personal data. Smart clothing fitted with sensors can monitor the wearer’s vital signs, and there is even smart jewelry that can monitor sleeping patterns and stress levels.
With the advent of virtual and augmented reality, the gaming industry can now make the experience even more immersive and engrossing. Smart glasses and headsets are used, along with armbands fitted with sensors that can detect the movement of arms and replicate the movement in the game.
At home, there are smart TVs, security cameras, smart bulbs, home control devices, and other IoT-enabled ‘smart’ appliances such as coffee makers that can be turned on through an app or at a set time in the morning so that they double as an alarm. There are also voice-command assistants like Alexa and Siri, which work with software written by manufacturers to understand simple instructions.
Industrial IoT (IIoT) mainly uses connected machines for the purposes of synchronization, efficiency, and cost-cutting. For example, smart factories gather and analyze data as the work is being done. Sensors are also used in agriculture to check soil moisture levels, and these then automatically run the irrigation system without the need for human intervention.
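To make the soil-moisture example above concrete, here is a minimal, hypothetical sketch of a moisture-driven irrigation loop using MQTT, a protocol commonly used by IoT devices. The broker address, topic names, and the 30% threshold are placeholder assumptions, not details of any particular product.

```python
# Hypothetical sketch: an MQTT subscriber that switches irrigation on or off
# based on soil-moisture readings. Requires paho-mqtt (pip install paho-mqtt).
# Uses the paho-mqtt 1.x Client constructor; version 2.x also takes a callback API version.
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"        # placeholder broker address
SENSOR_TOPIC = "farm/field1/moisture"  # placeholder topic carrying sensor readings
PUMP_TOPIC = "farm/field1/pump"        # placeholder topic for the pump actuator
THRESHOLD = 30.0                       # assumed moisture threshold, in percent

def on_message(client, userdata, msg):
    try:
        moisture = float(msg.payload.decode())
    except ValueError:
        return  # ignore malformed readings
    # Below the threshold: command the pump on; otherwise switch it off.
    command = "ON" if moisture < THRESHOLD else "OFF"
    client.publish(PUMP_TOPIC, command)
    print(f"moisture={moisture:.1f}% -> pump {command}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC)
client.loop_forever()
```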
Statistics
- The IoT device market is poised to reach $1.4 trillion by 2027, according to Fortune Business Insight.
- The number of cellular IoT connections is expected to reach 3.5 billion by 2023. (Forbes)
- The amount of data generated by IoT devices is expected to reach 73.1 ZB (zettabytes) by 2025.
- 94% of retailers agree that the benefits of implementing IoT outweigh the risk.
- 55% of companies believe that 3rd party IoT providers should have to comply with IoT security and privacy regulations.
- 53% of all users acknowledge that wearable devices will be vulnerable to data breaches and viruses.
- Companies could invest up to $15 trillion in IoT by 2025. (Gigabit)
CONCERNS AND SOLUTIONS
- Two of the biggest concerns with IoT devices are user privacy and securing the devices against attacks by bad actors. This makes knowledge of how these devices work absolutely imperative.
- It is worth noting that these devices typically work with a central hub, like a smartphone: the device pairs with the smartphone through an app, and the phone acts as a gateway. If a hacker were to target that IoT device, the smartphone could be compromised as well.
- With technology like smart television sets that have cameras and microphones, the major concern is that hackers could take over the functioning of the television, since these are often not adequately secured by the manufacturer.
- A hacker could control the camera and cyberstalk the victim, so it is very important to become familiar with the features of a device and ensure that it is well protected from unauthorized use. Even simple measures, such as keeping the camera covered when it is not in use, help.
- There is also the concern that, since IoT devices gather and share data without human intervention, they could be transmitting data that the user does not want to share. This is true of health trackers: data from heart and blood pressure monitors may be sent to an insurance company, which could then decide to raise the premium on the user's life insurance based on what it receives.
- IoT devices often keep functioning as normal even after they have been compromised. Most devices do not log an attack or alert the user, and changes such as higher power or bandwidth usage go unnoticed. It is therefore very important to make sure the device is properly protected; a toy traffic-monitoring sketch follows this list.
- It is also important to keep the software of the device updated as vulnerabilities are found in the code and fixes are provided by the manufacturer. Some IoT devices, however, lack the capability to be patched and are therefore permanently ‘at risk’.
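Because compromised devices rarely announce themselves, one practical habit (referenced in the list above) is to watch for unexplained jumps in a device's network usage. The sketch below is a toy, self-contained illustration of that idea: it flags readings that deviate sharply from a rolling average. The sample numbers are invented; a real setup would pull per-device byte counts from the home router.

```python
# Toy sketch: flag unusual spikes in a device's outbound traffic.
# The readings below are invented; a real monitor would query the router.
from collections import deque

def spike_detector(window: int = 12, factor: float = 3.0):
    """Return a check(reading) function that flags values far above the rolling mean."""
    history = deque(maxlen=window)

    def check(mb_per_hour: float) -> bool:
        if len(history) >= 3:
            baseline = sum(history) / len(history)
            if mb_per_hour > factor * baseline:
                history.append(mb_per_hour)
                return True  # suspicious spike versus the device's recent baseline
        history.append(mb_per_hour)
        return False

    return check

check = spike_detector()
hourly_mb = [2.1, 1.8, 2.4, 2.0, 2.2, 55.0, 2.3]  # invented sample data
for reading in hourly_mb:
    if check(reading):
        print(f"Alert: device sent {reading} MB this hour, far above its usual level.")
```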
CONCLUSION
Humanity now inhabits a world made up of nodes that talk to each other and get things done. Users can harmonize their devices so that everything runs like a tandem bike, completely in sync with every other part. But while we enjoy the benefits, it is also important to understand what we are using, how it functions, and how to tackle issues should they arise. This matters all the more because once people get used to IoT, it becomes that much harder to give up the comfort and ease these systems provide, so it pays to be prepared for any eventuality. Much of the time, sensible usage alone can keep devices safe and services intact, but users should stay alert to any issues, because forewarned is forearmed.

March 3rd 2023, New Delhi: If you have received a message containing a link that asks you to download an application to avail an Income Tax refund or KYC benefits in the name of the Income Tax Department or reputed banks, beware!
CyberPeace Foundation and Autobot Infosec Private Limited, along with the academic partners under the CyberPeace Center of Excellence (CCoE), recently conducted five different studies on phishing campaigns circulating on the internet that use misleading tactics to convince users to install malicious applications on their devices. The first campaign impersonates the Income Tax Department, while the others impersonate ICICI Bank, State Bank of India, IDFC Bank and Axis Bank respectively. The phishing campaigns aim to trick users into divulging their personal and financial information.
After a detailed study, the research team found that:
- All the campaigns pose as offers from reputed entities but are hosted on third-party domains instead of the official websites of the Income Tax Department or the respective banks, which raises suspicion.
- The applications request several device permissions, and some even ask users to grant full control of the device. Allowing such access could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications.
- Cybercriminals created malicious applications using icons that closely resemble those of legitimate entities with the intention of enticing users into downloading the malicious applications.
- The applications collect users’ personal and banking information. Falling into this type of trap could lead to significant financial losses.
- While investigating the impersonated Income Tax Department application, the research team identified that it sends HTTP traffic to a remote server that acts as a Command and Control (CnC/C2) server for the application.
- Customers who want to avail benefits or refunds from their banks download the relevant apps believing they will help, without always being aware that the app may be fraudulent.
“The research highlights the importance of being vigilant while browsing the internet and not falling prey to such phishing attacks. It is crucial to be cautious when clicking on links or downloading attachments from unknown sources, as they may contain malware that can harm the device or compromise the data,” a CyberPeace spokesperson added.
In an earlier report released last month, the same research team had drawn attention to WhatsApp messages masquerading as an offer from Tanishq Jewellers, with links luring unsuspecting users with the promise of free Valentine’s Day presents, making the rounds on the app.
CyberPeace Advisory:
- The research team recommends that people avoid opening such messages sent via social platforms, and always think before clicking on such links or downloading attachments from unauthorised sources.
- Avoid downloading applications from third-party sources instead of the official app store. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for developers and review each app before it is published.
- Even if you download an application from an authorised source, check the app’s permissions before you install it; a small permission-listing sketch follows this advisory. Some malicious apps request access to sensitive information or resources on your device, and if an app asks for too many permissions, it is best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications, and could lead to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing messages.
- Never share or forward fake messages containing links on any social platform without proper verification.
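To make the permission check in the advisory above concrete, here is a small, hedged sketch that lists the permissions an Android APK requests, using the `aapt` tool that ships with the Android SDK build tools. The file name is a placeholder, and the set of "risky" permissions is an illustrative assumption, not an official classification.

```python
# Sketch: list the permissions an APK requests before sideloading it.
# Assumes the Android SDK's "aapt" tool is installed and on the PATH.
import subprocess

RISKY = {  # illustrative subset, not an official classification
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.SYSTEM_ALERT_WINDOW",
}

def list_permissions(apk_path: str) -> list[str]:
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    perms = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("uses-permission"):
            # Newer aapt prints: uses-permission: name='android.permission.INTERNET'
            # Older versions print the permission name after the colon.
            if "'" in line:
                perms.append(line.split("'")[1])
            else:
                perms.append(line.split(":", 1)[1].strip())
    return perms

if __name__ == "__main__":
    for perm in list_permissions("suspicious_app.apk"):  # placeholder file name
        flag = "  <-- review carefully" if perm in RISKY else ""
        print(perm + flag)
```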

Executive Summary:
With AI technologies evolving at a fast pace in 2024, an AI-driven phishing attack on a large Indian financial institution illustrates the scale of the threat. This case study documents the specifics of the attack, including the techniques used, the ramifications for the institution, the intervention conducted, and the resulting effects. It also examines the challenges of building better protection and raising awareness of automated threats.
Introduction
With the advancement of AI technology, its use in cybercrime against financial institutions has grown significantly across the world. This report analyses a serious incident from early 2024 in which a leading Indian bank was hit by a highly complex, AI-supported phishing operation. The attack exploited AI's capacity for data analysis and persuasion, leading to a severe compromise of the bank's internal systems.
Background
The financial institution in question, one of the largest banks in India, had a strong record of rigorous cybersecurity policies. However, AI-based methods introduced new threats that earlier forms of security could not fully counter. The attackers concentrated on the bank's top managers, since compromising such individuals offers a route into internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that were near-exact look-alikes of internal messages exchanged between employees. Drawing on Facebook and Twitter content, blog entries, LinkedIn connection histories, and the email tenor of the bank's executives, the AI produced highly specific emails. Some carried official formatting, internal terminology, and the CEO's writing style, which made them very realistic.
The phishing emails contained links that led users to a fake internal portal designed to harvest login credentials. Because of this sophistication, the targeted individuals believed the emails were genuine and entered their login details, giving the attackers access to the bank's network.
Impact
The attack affected the bank in every respect. Numerous executives surrendered their passwords to the fake emails, compromising several financial databases containing customer account and transaction information. The break-in allowed the criminals to disrupt a number of the institution's internet services, interrupting its operations and those of its customers for several days.
The bank also suffered a serious blow to customer trust, because the breach exposed its weakness against contemporary cyber threats. Beyond the immediate work of mitigating the breach, the institution was facing a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful NLP technology, most probably built on a large-scale transformer such as GPT (Generative Pre-trained Transformer). Because such models are trained on large data samples, the attackers fed them examples of conversation pieces from social networks, emails, and other written communications to create highly credible emails.
Key Technical Features:
- Contextual Understanding: The AI took the nature of prior interactions into account and wrote follow-up emails that were perfectly in line with earlier discourse.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, extrapolating elements such as tone, language, and the format of the signature line (a toy illustration of such stylometric signals follows this list).
- Adaptive Learning: The AI adapted from mistakes and feedback, tweaking the generated emails for subsequent attempts, which made it harder to detect.
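The style-mimicry point above is easier to see with a toy example. The sketch below computes a few crude stylometric features (sentence length, word length, punctuation rate) from two short text samples; defenders sometimes compare such features for an incoming message against a baseline of a person's known writing. This is an illustrative simplification with invented sample text, not the attackers' model or any specific vendor's detector.

```python
# Toy stylometric comparison: crude features only, for illustration.
import re
import statistics

def style_features(text: str) -> dict[str, float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_len": statistics.mean(len(s.split()) for s in sentences),
        "avg_word_len": statistics.mean(len(w) for w in words),
        "comma_rate": text.count(",") / max(len(words), 1),
    }

# Invented samples: a known-genuine snippet and an incoming message to compare.
known_ceo_text = "Please review the attached figures. I would like your comments by Friday."
incoming_email = "Kindly review the attached figures, and, as discussed, share comments today."

baseline = style_features(known_ceo_text)
candidate = style_features(incoming_email)
for key in baseline:
    print(f"{key}: baseline={baseline[key]:.2f} candidate={candidate[key]:.2f}")
```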
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this attack used spear-phishing: the attackers directly targeted specific people with tailored emails. The AI applied social engineering techniques, backed by machine-learning algorithms, that significantly increased the chances of particular individuals responding to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers scraped public profiles and messaging platforms to identify employees of the organisation and tailor messages to them.
- Behavioural Analysis: The AI used targets' recent behaviour patterns on social networking sites and other online platforms to forecast the actions they were likely to take, such as clicking on links or opening attachments.
- Real-Time Adjustments: When a target responded to a phishing email, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers also leveraged AI to evade standard email filters. These techniques modified the contents of the emails so that spam filters would not easily detect them, while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI made slight changes to different aspects of the email message to produce several versions of the phishing email, each designed to slip past different filtering algorithms.
- Polymorphic Attacks: The phishing attack used polymorphic code, meaning the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and deploy phantom domains, real websites that appear legitimate but are short-lived and created specifically for this phishing attack, adding to the difficulty of detection (an illustrative URL heuristic follows this list).
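The "phantom domain" tactic above relies on throwaway host names that a busy reader rarely scrutinises. Below is a small, illustrative heuristic that scores URLs on a few crude signals (length, digit density, subdomain depth, hyphens, missing TLS); real detection pipelines use far richer signals such as domain age and reputation feeds. The example URLs and the score threshold are invented for illustration.

```python
# Toy heuristic for flagging suspicious-looking URLs; not a real detector.
from urllib.parse import urlparse

def url_suspicion_score(url: str) -> int:
    host = urlparse(url).hostname or ""
    score = 0
    if len(host) > 30:
        score += 1                      # unusually long host name
    if sum(c.isdigit() for c in host) > 4:
        score += 1                      # many digits in the host
    if host.count(".") > 3:
        score += 1                      # deep subdomain chain
    if "-" in host:
        score += 1                      # hyphenated look-alike domains
    if not url.lower().startswith("https://"):
        score += 1                      # no TLS
    return score

examples = [  # invented for illustration
    "https://intranet.examplebank.com/login",
    "http://examplebank-secure-update.portal-7831.example-login.net/verify",
]
for url in examples:
    flag = "SUSPICIOUS" if url_suspicion_score(url) >= 3 else "ok"
    print(f"{flag}: {url}")
```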
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified specific psychological principles, chiefly urgency and familiarity, to maximise the chance that the targeted recipients would open the phishing emails.
- Multi-Layered Deception: The AI used a two-tiered approach: once a targeted individual opened the first email, a second followed under the pretext of being a follow-up from a genuine company or person.
Response
On sighting the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to help identify who originated the attack and block any further intrusion. The bank also immediately began strengthening its security, for instance by improving email filtering and tightening authentication procedures.
Recognising the risks, the bank resolved to raise its overall cybersecurity level and rolled out a new, organisation-wide cybersecurity awareness programme. The programme focused on making employees aware of AI-driven phishing in the organisation's information space and on the necessity of verifying a sender's identity before acting on a message; a small sketch of one such sender-authentication check follows.
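One concrete building block of the email-filtering and sender-verification measures mentioned above is checking the authentication results that receiving mail servers already record. The hedged sketch below uses only Python's standard library to read the Authentication-Results header from a saved message and report whether SPF, DKIM, and DMARC passed; the file name is a placeholder, and header formats vary across providers, so a production filter would need to be more careful.

```python
# Sketch: report SPF/DKIM/DMARC results recorded in a saved email (.eml file).
# Uses only the Python standard library; "suspect.eml" is a placeholder path.
import re
from email import policy
from email.parser import BytesParser

def auth_results(path: str) -> dict[str, str]:
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    results = {}
    # There may be several Authentication-Results headers (one per receiving hop).
    for header in msg.get_all("Authentication-Results", []):
        for mech, verdict in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header, re.I):
            results.setdefault(mech.lower(), verdict.lower())
    return results

if __name__ == "__main__":
    verdicts = auth_results("suspect.eml")
    for mech in ("spf", "dkim", "dmarc"):
        print(f"{mech.upper()}: {verdicts.get(mech, 'not recorded')}")
    if not verdicts or any(v != "pass" for v in verdicts.values()):
        print("Warning: sender authentication not fully verified - treat with caution.")
```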
Outcome
Although the bank was able to restore its operations after the attack without critical long-term impact, the incident raised serious issues. The losses the institution reported included compensation paid to affected customers and the cost of measures to strengthen its cybersecurity. More significantly, the incident damaged the bank's standing, as customers and shareholders began to doubt the organisation's capacity to safeguard information in an era of advanced AI-enabled cyber threats.
This case shows how important it is for financial firms to align their security plans against emerging threats. The attack is also a message to other organisations that they are not immune from such AI-assisted attacks and should take proper measures against them.
Conclusion
The 2024 AI-phishing attack on an Indian bank is one indicator of what modern attackers are capable of. As AI technology continues to progress, so do the cyberattacks built on it. Financial institutions and other organisations can keep pace only by adopting adequate AI-aware cybersecurity solutions for their systems and data.
Moreover, this case underlines how important it is to train employees to be properly prepared against such attacks. An organisation's cybersecurity awareness, secure employee behaviour, and practices that enable staff to recognise and report likely AI-driven offences all help minimise the risk from an AI-enabled attack.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should employ AI-driven detection and response products capable of mitigating AI-enabled cyber threats in real time (a toy illustration appears after this list).
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Strong multi-factor authentication and other identity checks should be enforced for access to sensitive accounts and systems.
- Collaboration with CERT-In: Continued engagement and coordination with authorities such as the Indian Computer Emergency Response Team (CERT-In) and their equivalents, to monitor new threats and receive validated recommendations.
- Public Communication Strategies: Establish effective communication plans to keep customers informed and maintain their trust even when the organisation is facing a cyber threat.
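To make the first recommendation slightly more concrete, below is a deliberately small, illustrative sketch of a text classifier trained to separate phishing-style wording from benign wording using scikit-learn. The handful of training sentences are invented, so the model is a toy; a real AI-driven defence would train on large labelled corpora and combine many more signals (headers, URLs, sender history) than message text alone.

```python
# Toy phishing-text classifier; the training sentences are invented examples.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Urgent: your account will be suspended, verify your password now",
    "Immediate action required, click this link to confirm your credentials",
    "Your refund is pending, download the attached app to claim it",
    "Minutes of yesterday's review meeting are attached for your records",
    "Please find the quarterly audit schedule and let me know your availability",
    "The cafeteria will be closed on Friday for maintenance",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features over unigrams and bigrams, fed to a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, labels)

new_email = "Action required: confirm your password to avoid account suspension"
prob = model.predict_proba([new_email])[0][1]
print(f"Estimated probability of phishing-style wording: {prob:.2f}")
```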
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled attacks and cyber terrorism pose to essential financial assets in today's complex IT environments.