#FactCheck: An image shows Sunita Williams with Trump and Elon Musk post her space return.
Executive Summary:
Our research has determined that a widely circulated social media image purportedly showing astronaut Sunita Williams with U.S. President Donald Trump and entrepreneur Elon Musk following her return from space is AI-generated. There is no verifiable evidence to suggest that such a meeting took place or was officially announced. The image exhibits clear indicators of AI generation, including inconsistencies in facial features and unnatural detailing.
Claim:
It was claimed on social media that after returning to Earth from space, astronaut Sunita Williams met with U.S. President Donald Trump and Elon Musk, as shown in a circulated picture.

Fact Check:
Following a comprehensive analysis using Hive Moderation, the image has been verified as fake and AI-generated. Distinct signs of AI manipulation include unnatural skin texture, inconsistent lighting, and distorted facial features. Furthermore, no credible news sources or official reports substantiate or confirm such a meeting. The image is likely a digitally altered post designed to mislead viewers.

While reviewing the accounts that shared the image, we found that former Indian cricketer Manoj Tiwary had also posted the same image and a video of a space capsule returning, congratulating Sunita Williams on her homecoming. Notably, the image featured a Grok watermark in the bottom right corner, confirming that it was AI-generated.

Additionally, we discovered a post from Grok on X (formerly known as Twitter) featuring the watermark, stating that the image was likely AI-generated.
Conclusion:
Our research confirms that the viral image of Sunita Williams with Donald Trump and Elon Musk is AI-generated. Indicators such as unnatural facial features, lighting inconsistencies, and a Grok watermark point to digital manipulation. No credible sources validate the meeting, and a post from Grok on X further supports this finding. This case underscores the need for careful verification before sharing online content to prevent the spread of misinformation.
- Claim: Sunita Williams met Donald Trump and Elon Musk after her space mission.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In an era of digital trust and technological innovation, artificial intelligence has added a new dimension to how people communicate and how they create and consume content. Like any powerful tool, however, AI can cause serious harm when misused. A recent, stark example is a cybercrime in Brazil: a sophisticated online scam that used deepfake technology to impersonate celebrities of global stature, including supermodel Gisele Bündchen, in misleading Instagram ads. Having netted millions of reais, the scheme shows how readily AI-generated content can be turned to criminal ends.
Scam in Motion
According to Brazil's federal police, the scheme has been in circulation since 2024, using AI-generated video and images to make the ads appear genuine. The ads showed Gisele Bündchen and other celebrities endorsing skincare products, promotional giveaways, or time-limited discounts. Victims were tricked into making small payments, mostly under 100 reais (about $19), for fake products, or were lured into paying "shipping costs" for prizes that never arrived.
The criminals scaled the scheme by accumulating small losses from many victims, a tactic investigators dubbed "statistical immunity": because each person lost only a few dollars, most never bothered to file a complaint, allowing the fraud to keep running. Over time, authorities estimate the group collected more than 20 million reais ($3.9 million) through this elaborate con.
The scam was detected when a victim reported that an Instagram advertisement featuring a deepfake video of Gisele Bündchen, apparently endorsing a skincare company, was false. The deepfake was remarkably well produced. As the matter was pursued, investigators uncovered an entire network of deceptive social media pages, payment gateways, and money-laundering channels spread across five states in Brazil.
The Role of AI and Deepfakes in Modern Fraud
This is one of the first large-scale cases in Brazil in which AI-generated deepfakes were used to perpetrate financial fraud. Deepfake technology, powered by machine learning algorithms, can realistically mimic human appearance and speech and has become increasingly accessible and sophisticated. Where producing a convincing fake once required expertise and substantial computing resources, today an online tool or app suffices.
Deepfakes give criminals a psychological advantage: audiences are more willing to accept an ad as genuine when they see a familiar, trusted face, a celebrity known for integrity and success. The human brain is wired to trust certain visual cues, and deepfakes exploit this cognitive bias. Unlike phishing emails riddled with spelling and grammatical errors, deepfake videos are immersive, emotional, and visually convincing.
This is the growing terrain of AI-enabled misinformation: from financial scams to political propaganda, manipulated media is eroding trust in the digital ecosystem.
Legalities and Platform Accountability
The Brazilian government has taken a proactive stance on the issue. In June 2025, the country's Supreme Court held that social media platforms can be held liable for failing to expeditiously remove criminal content, even in the absence of a formal court order. That judgment is likely to go a long way toward shaping platform accountability in Brazil, and potentially worldwide, as other jurisdictions adopt processes to deal with AI-generated fraud.
Meta, the parent company of Instagram, has said its policies forbid "ads that deceptively use public figures to scam people." Meta claims to use advanced detection mechanisms, trained review teams, and user tools for reporting violations. The persistence of such scams, however, shows that enforcement mechanisms still lag behind the pace and scale of AI-based deception.
Why These Scams Succeed
There are many reasons for the success of these AI-powered scams.
- Trust Due to Familiarity: Human beings tend to believe anything put forth by a known individual.
- Micro-Fraud: Keeping the amount taken from each victim small keeps the number of complaints low.
- Speed of Content Creation: Using AI tools, criminals generate new ads faster than platforms can detect and remove them.
- Cross-Platform Propagation: Once a deepfake ad gains traction, it is reshared across other social networks, compounding the problem.
- Lack of Public Awareness: Most users still cannot discern manipulated media, especially when high-quality deepfakes are involved.
Wider Implications on Cybersecurity and Society
The Brazilian case is but a microcosm of a much bigger problem. As deepfake technology evolves, AI-generated deception threatens not only individuals but also institutions, markets, and democratic systems. From investment scams and fake charities to synthetic identities for corporate fraud, the possibilities for abuse are endless.
Moreover, as cybercriminals adopt generative AI, law enforcement faces new obstacles in attribution, evidence validation, and digital forensics. Distinguishing the authentic from the manipulated now demands forensic AI models, while attackers deploy ever-better generative models in response, fueling a rising technological arms race between the two sides.
Protecting Citizens from AI-Powered Scams
Public awareness remains the best defence against such scams. Gisele Bündchen's team encouraged the public to verify any advertisement through official brand or celebrity channels before engaging with it. Consumers should be wary of offers that appear "too good to be true" and double-check URLs for authenticity before sharing any personal information.
At the individual level, a few simple practices go a long way toward reducing risk:
- Verify an advertisement's origin before clicking or sharing it
- Never share any monetary or sensitive personal information through an unverifiable link
- Enable two-factor authentication on all your social accounts
- Periodically check transaction history for any unusual activity
- Report any deepfake or fraudulent advertisement immediately to the platform or cybercrime authorities
Collaboration will be the way ahead for governments and technology companies. Investing in AI-based detection systems, cooperating on international law enforcement, and building capacity for digital literacy programs will enable us to stem this rising tide of synthetic media scams.
Conclusion
The Gisele Bündchen deepfake case in Brazil is a clarion call for citizens and legislators alike. It shows how cybercrime has evolved to profit from the very AI technologies once hailed for innovation and creativity. In the new digital frontier society is now embracing, the line between authenticity and manipulation grows thinner by the day.
Keeping the public safe in this environment will certainly require strong cybersecurity measures, but it will demand equal investment in vigilance, awareness, and ethical responsibility. Deepfakes are not only a technological problem but a societal one, demanding global cooperation, media literacy, and accountability at every level of the digital ecosystem.

Executive Summary:
Apple has responded quickly to two severe zero-day vulnerabilities, CVE-2024-44308 and CVE-2024-44309, affecting iOS, macOS, visionOS, and Safari. The flaws, actively exploited in targeted attacks presumably by state actors, allow arbitrary code execution and cross-site scripting (XSS). Reported by Google's Threat Analysis Group, they demonstrate how sophisticated modern attacks have become. Apple's mitigations strengthen memory handling and, in particular, state management to harden device security. Users are encouraged to update their devices as soon as possible, turn on automatic updates, and exercise caution online to avoid these new threats.
Introduction
Apple has demonstrated its commitment to security by releasing updates that fix two zero-day bugs actively exploited by attackers. The bugs, tracked as CVE-2024-44308 and CVE-2024-44309, are serious and can lead to arbitrary code execution and cross-site scripting attacks. Their active exploitation in the wild underscores the importance of releasing patches quickly to keep users safe.
Vulnerabilities in Detail
The discovery of vulnerabilities (CVE-2024-44308, CVE-2024-44309) is credited to Clément Lecigne and Benoît Sevens of Google's Threat Analysis Group (TAG). These vulnerabilities were found in JavaScriptCore and WebKit, integral components of Apple’s web rendering framework. The details of these vulnerabilities are mentioned below:
CVE-2024-44308
- Severity: High (CVSS score: 8.8)
- Description: A flaw in the JavaScriptCore component of WebKit. Maliciously crafted web content could trigger arbitrary code execution on the target system, potentially giving an attacker full control.
- Technical Finding: The vulnerability stems from improper memory handling during JavaScript execution, allowing attackers to remotely execute injected payloads.
CVE-2024-44309
- Severity: Moderate (CVSS score: 6.1)
- Description: A cookie management flaw in WebKit that may result in cross-site scripting (XSS). It enables attackers to inject unauthorized scripts into genuine websites, endangering users' privacy and identities.
- Technical Finding: The issue arises from improper state management of cookies when processing maliciously crafted web content, opening an unauthorized route to session data.
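CVE-2024-44309 belongs to the broader cross-site scripting class, in which untrusted content reaches a page without being escaped. The sketch below is a generic illustration of that class using Python's standard-library `html.escape`; it is not Apple's or WebKit's code, and the function names are invented for the example:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable pattern: untrusted input is interpolated directly into HTML,
    # so any <script> tag in the input becomes live code in the page.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # Escaping converts <, >, & (and quotes) to entities, so injected
    # markup is displayed as text instead of executed.
    return f"<p>{html.escape(comment)}</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))  # script tag survives: XSS
print(render_comment_safe(payload))    # entities only: inert
```

The same principle applies regardless of language or framework: output encoding at the point where untrusted data meets markup is the standard defence against this vulnerability class.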
Affected Systems
These vulnerabilities impact a wide range of Apple devices and software versions:
- iOS 18.1.1 and iPadOS 18.1.1: For devices including iPhone XS and later, iPad Pro (13-inch), and iPad mini 5th generation onwards.
- iOS 17.7.2 and iPadOS 17.7.2: Supports earlier models such as iPad Pro (10.5-inch) and iPad Air 3rd generation.
- macOS Sequoia 15.1.1: For systems running macOS Sequoia.
- visionOS 2.1.1: Exclusively for Apple Vision Pro.
- Safari 18.1.1: For Macs running macOS Ventura and Sonoma.
Apple's Mitigation Approach
Apple has implemented the following fixes:
- CVE-2024-44308: Enhanced input validation and robust memory checks to prevent arbitrary code execution.
- CVE-2024-44309: Improved state management to eliminate cookie mismanagement vulnerabilities.
These measures ensure stronger protection against exploitation and bolster the underlying security architecture of affected components.
Broader Implications
The exploitation of these zero-days highlights the evolving nature of threat landscapes:
- Increasing Sophistication: Attackers are refining techniques to target niche vulnerabilities, bypassing traditional defenses.
- Spyware Concerns: These flaws align with the modus operandi of spyware tools, potentially impacting privacy and national security.
- Call for Timely Updates: Users who delay updates inadvertently increase their risk exposure.
Technical Recommendations for Users
To mitigate potential risks:
- Update Devices Promptly: Install the latest patches for iOS, macOS, visionOS, and Safari.
- Enable Automatic Updates: Ensures timely application of future patches.
- Restrict WebKit Access: Avoid visiting untrusted websites until updates are installed.
- Monitor System Behavior: Look for anomalies that could indicate exploitation.
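The first two recommendations reduce to a simple check: is the installed version at or above the minimum patched release? A minimal sketch in Python, using the patched versions listed above (the table is simplified to one entry per product; older iPads and iPhones instead receive iOS/iPadOS 17.7.2, and the helper functions are illustrative, not an Apple API):

```python
def parse_version(v: str) -> tuple:
    # "18.1.1" -> (18, 1, 1); tuples compare element-wise, so
    # (18, 1, 0) < (18, 1, 1) as expected.
    return tuple(int(part) for part in v.split("."))

# Minimum releases containing the CVE-2024-44308/44309 fixes,
# per Apple's advisory (simplified; older devices get iOS 17.7.2).
PATCHED = {
    "iOS": "18.1.1",
    "macOS Sequoia": "15.1.1",
    "visionOS": "2.1.1",
}

def is_patched(product: str, installed: str) -> bool:
    return parse_version(installed) >= parse_version(PATCHED[product])

print(is_patched("iOS", "18.1.0"))  # False: update needed
print(is_patched("iOS", "18.1.1"))  # True
```

Automatic updates make this check unnecessary in practice, which is precisely why enabling them is the stronger recommendation.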
Conclusion
The exploitation of CVE-2024-44308 and CVE-2024-44309 on Apple devices highlights the importance of timely software updates in protecting users. Apple acted swiftly, shipping improved validation checks, state management fixes, and security patches. Users are therefore encouraged to install updates as soon as possible to guard against these zero-day flaws.
References:
- https://support.apple.com/en-us/121752
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-44308
- https://securityonline.info/cve-2024-44308-and-cve-2024-44309-apple-addresses-zero-day-vulnerabilities/

The Digital Personal Data Protection (DPDP) Act, 2023, operationalises data privacy largely through a consent management framework. It aims to give data principals, i.e., individuals, control over their personal data by giving them the power to track, change, and withdraw their consent to its processing. However, in practice, consent management is often not straightforward. For example, people may be frequently bombarded with requests, which can lead to fatigue and eventual overlooking of consent requests. This article discusses the way consent management is handled by the DPDP Act, and looks at how India can design the system to genuinely empower users while holding organisations accountable.
Consent Management in the DPDP Act
According to the DPDP Act, consent must be unambiguous, free, specific, and informed. It must also be easy for people to revoke their consent (DPO India, 2023). To this end, the Act creates Consent Managers, registered intermediaries who serve as a link between users and data fiduciaries.
The purpose of consent managers is to streamline and centralise the consent procedure. Through the dashboards they offer, users can view, grant, update, or revoke consent across various platforms. By standardising how consent is presented, they aim to improve transparency and lessen the burden on individuals of keeping track of permissions across different services (IAPP, 2024).
The Act draws inspiration from international frameworks such as the GDPR (General Data Protection Regulation), mandating that Indian users be provided with a single platform to manage permissions rather than having to deal with dispersed consent prompts from every service.
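The dashboard functions described in this section (viewing, granting, updating, and withdrawing consent across services) can be sketched as a small data model. This is a toy illustration only; the class and method names are invented here and are not part of the DPDP Act or any Consent Manager specification:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    service: str            # the data fiduciary consent is granted to
    purpose: str            # specific purpose, as the Act requires
    granted_at: datetime
    withdrawn_at: datetime = None  # None while consent is active

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

class ConsentManager:
    """Toy consent dashboard: one place to grant, view, and withdraw consent."""

    def __init__(self):
        self._records = []

    def grant(self, service: str, purpose: str) -> ConsentRecord:
        rec = ConsentRecord(service, purpose, datetime.now(timezone.utc))
        self._records.append(rec)
        return rec

    def withdraw(self, service: str, purpose: str) -> None:
        # Withdrawal must be as easy as granting: a single call here.
        for rec in self._records:
            if rec.service == service and rec.purpose == purpose and rec.active:
                rec.withdrawn_at = datetime.now(timezone.utc)

    def active_permissions(self) -> list:
        return [r for r in self._records if r.active]

cm = ConsentManager()
cm.grant("shop.example", "order delivery")
cm.grant("news.example", "newsletter")
cm.withdraw("news.example", "newsletter")
print([r.service for r in cm.active_permissions()])  # ['shop.example']
```

Even this toy model surfaces the design questions the Act leaves open: how records are shared across fiduciaries, how withdrawal propagates to downstream processors, and how the audit trail is secured.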
The Challenges
Despite the mandate for an interoperable consent-management platform, several key challenges emerge. There is a lack of clarity on how consent management will be operationalised, which creates challenges of accountability and implementation. Thus:
- If the interface is poorly designed, users could be bombarded with consent requests from apps, platforms, and services that are not fully compliant with the platform.
- If consent notices are vague, frequent, lengthy, or complex, users may continue to grant permissions without meaningful engagement.
- It leaves scope for data fiduciaries to use dark patterns to coerce customers into granting consent through poor UI/UX design.
- The lack of clear, standardised interoperability protocols across sectors could lead to a fragmented system, undermining the goal of a single, easy-to-use platform.
- Consent fatigue could easily set in across India's digital ecosystem, where apps, e-commerce websites, and government services all ask for permissions from over 950 million internet subscribers. Experience from GDPR jurisdictions shows that users who are repeatedly prompted eventually become "banner blind" and ignore notices entirely.
- Low levels of literacy (including digital literacy) and unequal access to digital devices among women and marginalised communities create complexities in the substantive coverage of privacy rights.
- Placing the burden of verification of legal guardianship for children and persons with disabilities (PwDs) on data fiduciaries might be ineffective, as SMEs may lack the resources to undertake this activity. This could create new forms of vulnerability for the two groups.
Legal experts claim this results in what they call a legal fiction, wherein consent is treated as valid by the law even though it does not represent true understanding or choice (Lawvs, 2023). Additionally, research indicates that users hardly ever read privacy policies in their entirety; people are very likely to tick boxes without fully understanding what they are agreeing to. By drastically limiting user control, this has a bearing on the privacy rights of Indian citizens and residents (IJLLR, 2023).
Impacts of Weak Consent Management:
According to the Indian Journal of Law and Technology, in an era of asymmetry and information overload, privacy cannot be sufficiently protected by relying only on consent (IJLT, 2023). Almost every individual will be impacted by inadequate consent management.
- For Users: True autonomy is replaced by the appearance of control. Individuals may unintentionally disclose private information, which undermines confidence in digital services.
- For Businesses: Compliance could become a mere formality. Further, if acquired consent is found to be manipulated or invalid, it creates space for legal risks and reputational damage.
- For Regulators: It becomes difficult to oversee a system where consent is frequently disregarded or misinterpreted. When consent is merely formal, the law's promise to protect personal information is undermined.
Way Forward
- Layered and Simplified Notices: Simple language and layers of visual cues should be used in consent requests. Important details like the type of data being gathered, its intended use, and its duration should be made clear up front. Additional explanations are available for users who would like more information. This method enhances comprehension and lessens cognitive overload (Lawvs, 2023).
- Effective Dashboards: Dashboards from consent managers should be user-friendly, cross-platform, and multilingual. Management is made simple by features like alerts, one-click withdrawal or modification, and summaries of active permissions. The system is more predictable and dependable when all services use the same format, which also reduces confusion (IAPP, 2024).
- Dynamic and Contextual Consent: Instead of appearing as generic pop-ups, consent requests should show up when they are pertinent to a user's actions. Users can make well-informed decisions without feeling overburdened by subtle cues, such as emphasising risks when sensitive data is requested (IJLLR, 2023).
- Accountability of Consent Managers: Organisations that offer consent management services must be accountable and independent, through clear certification, auditing, and specific legal accountability frameworks. Even when formal consent is given, strong trustee accountability guarantees that data is not misused (IJLT, 2023).
- Complementary Protections Beyond Consent: Consent continues to be crucial, but some high-risk data processing might call for extra protections. These may consist of increased responsibilities for fiduciaries or proportionality checks. These steps improve people's general protection and lessen the need for frequent consent requests (IJLLR, 2023).
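The "layered notice" idea above can be made concrete as a tiny data structure: the first layer always shows the essentials (data type, purpose, duration), with full details available on request. The field names and example text below are purely illustrative, not a prescribed DPDP notice format:

```python
# Layer 1 is always shown; layer 2 is revealed only when the user asks.
layered_notice = {
    "summary": {
        "data": "location",
        "purpose": "delivery tracking",
        "duration": "until order delivered",
    },
    "details": (
        "Location is sampled every 5 minutes while an order is active, "
        "shared only with the assigned courier, and deleted within 30 days."
    ),
}

def render_summary(notice: dict) -> str:
    # The up-front layer: one plain-language sentence, no legal boilerplate.
    s = notice["summary"]
    return f"We collect your {s['data']} for {s['purpose']} ({s['duration']})."

print(render_summary(layered_notice))
```

Keeping the first layer to a single sentence is what reduces cognitive overload; the second layer preserves the completeness that informed consent requires.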
Conclusion
The core of the DPDP Act is empowering users to control their data through measures such as consent management. But merely requesting consent is insufficient; the system must make it simple for people to manage, monitor, and change it. Carefully designed, managed, and executed, consent management has the potential to transform user experience and trust in India's digital ecosystem. To make it genuinely meaningful, it is imperative to standardise procedures, hold fiduciaries accountable, simplify interfaces, and investigate supplementary protections.
References
- Building Trust with Technology: Consent Management Under India’s DPDP Act, 2023
- Consent Fatigue and Data Protection Laws: Is ‘Informed Consent’ a Legal Fiction
- Beyond Consent: Enhancing India's Digital Personal Data Protection Framework
- Top 10 operational impacts of India’s DPDPA – Consent management