#FactCheck - "Deepfake Video Falsely Claims Elon Musk Is Conducting a Cryptocurrency Giveaway"
Executive Summary:
A viral online video claims that billionaire and Tesla & SpaceX founder Elon Musk is promoting a cryptocurrency giveaway. The CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Mr. Musk's facial expressions and voice; this conclusion was reached using relevant, reputable and well-verified AI detection tools and applications. The original footage has no connection to any cryptocurrency, or to any giveaway of BTC or ETH to crypto-trading followers. The claim that Mr. Musk endorses such a giveaway is therefore false and misleading.

Claims:
A viral video falsely claims that billionaire and Tesla founder Elon Musk is endorsing a crypto giveaway project for crypto enthusiasts among his followers, pledging a portion of his valuable Bitcoin and Ethereum holdings.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Mr. Elon Musk but none of them included any promotion of any cryptocurrency giveaway. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis confirmed with 99.0% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Mr. Musk revealed no mention of any such giveaway. No credible reports were found linking Elon Musk to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Elon Musk is promoting a crypto giveaway is a deepfake. Research using tools such as Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such giveaway. Thus, the CyberPeace Research Team confirms that the video was manipulated, making the claim false and misleading.
- Claim: A video of Elon Musk conducting a cryptocurrency giveaway is viral on social media.
- Claimed on: X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
In today’s digital world, where everything revolves around data, the more data you own, the more control you have over the market, which is why companies are looking for ways to use data to improve their business. But at the same time, they have to make sure they are protecting people’s privacy, and it is tricky to strike a balance between the two. Imagine you are trying to bake a cake: you need all the ingredients to make it taste great, but you also have to make sure no one can tell what’s in it. That is roughly what companies are dealing with when it comes to data. Here, ‘Pseudonymisation’ emerges as a critical technical and legal mechanism that offers a middle ground between data anonymisation and unrestricted data processing.
Legal Framework and Regulatory Landscape
Pseudonymisation, as defined by the General Data Protection Regulation (GDPR) in Article 4(5), refers to “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”. This technique represents a paradigm shift in data protection strategy, enabling organisations to preserve data utility while significantly reducing privacy risks. The growing importance of this balance is evident in the proliferation of data protection laws worldwide, from GDPR in Europe to India’s Digital Personal Data Protection Act (DPDP) of 2023.
Its legal treatment varies across jurisdictions, but a convergent approach is emerging that recognises its value as a data protection safeguard while maintaining that pseudonymised data remains personal data. Article 25(1) of the GDPR recognises it as “an appropriate technical and organisational measure” and emphasises its role in reducing risks to data subjects. It protects personal data by reducing the risk of identifying individuals during data processing. The European Data Protection Board’s (EDPB) 2025 Guidelines on Pseudonymisation provide detailed guidance emphasising the importance of defining the “pseudonymisation domain”, which specifies who is prevented from attributing data to specific individuals, and of ensuring that technical and organisational measures are in place to block unauthorised linkage of pseudonymised data to the original data subjects. In India, while the DPDP Act does not explicitly define pseudonymisation, legal scholars argue that such data would still fall under the definition of personal data, as it remains potentially identifiable. The Act defines personal data in Section 2(t) broadly as “any data about an individual who is identifiable by or in relation to such data,” suggesting that pseudonymised information, being reversible, would continue to require compliance with data protection obligations.
Further, the DPDP Act, 2023 also includes the principles of data minimisation and purpose limitation. Section 8(4) says that a “Data Fiduciary shall implement appropriate technical and organisational measures to ensure effective observance of the provisions of this Act and the Rules made under it.” Pseudonymisation fits here because it is a recognised technical safeguard, which means companies can use it as part of their compliance toolkit under Section 8(4) of the DPDP Act. However, its use should be assessed on a case-by-case basis, since encryption is also considered one of the strongest methods for protecting personal data. The suitability of pseudonymisation depends on the nature of the processing activity, the type of data involved, and the level of risk that needs to be mitigated. In practice, organisations may use pseudonymisation in combination with other safeguards to strengthen overall compliance and security.
The European Court of Justice’s recent jurisprudence has introduced nuanced considerations about when pseudonymised data might not constitute personal data for certain entities. In cases where only the original controller possesses the means to re-identify individuals, third parties processing such data may not be subject to the full scope of data protection obligations, provided they cannot reasonably identify the data subjects. The “means reasonably likely” assessment represents a significant development in understanding the boundaries of data protection law.
Corporate Implementation Strategies
Companies find that pseudonymisation is not just about following rules, but it also brings real benefits. By using this technique, businesses can keep their data more secure and reduce the damage in the event of a breach. Customers feel more confident knowing that their information is protected, which builds trust. Additionally, companies can utilise this data for their research or other important purposes without compromising user privacy.
Key Benefits of Pseudonymisation:
- Enhanced Privacy Protection: It replaces personal details such as names or IDs with artificial values or codes, reducing the risk of accidental privacy breaches.
- Preserved Data Utility: Unlike completely anonymous data, pseudonymised data keeps its usefulness by maintaining important patterns and relationships within datasets.
- Facilitated Data Sharing: It is easier to share pseudonymised data with partners or researchers because it protects privacy while still being useful.
However, implementing pseudonymisation is not easy: companies have to deal with tricky technical issues, such as choosing the right methods (for example, encryption or tokenisation) and managing security keys safely. They also have to implement strong policies to stop anyone from figuring out who the data belongs to. This can get expensive and complicated, especially when dealing with large amounts of data, and it often requires expert help and regular upkeep.
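To make the key-management point concrete, here is a minimal Python sketch of tokenisation-style pseudonymisation using a keyed HMAC. It is an illustrative assumption, not a prescribed method from the GDPR or DPDP Act: the secret key plays the role of the “additional information” that must be kept separately, while the pseudonymised record keeps its analytical utility. All names and values are hypothetical.

```python
import hmac
import hashlib

# Hypothetical illustration: keyed pseudonymisation with HMAC-SHA256.
# The secret key must be stored separately from the pseudonymised data,
# under its own technical and organisational safeguards.
SECRET_KEY = b"keep-this-key-in-a-separate-secure-store"  # placeholder only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a deterministic pseudonym.

    Deterministic output preserves data utility: the same person always
    maps to the same token, so joins and aggregate analysis still work,
    but the token cannot be reversed without the separately held key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated token for readability

record = {"name": "Asha Rao", "email": "asha@example.com", "purchase": 499}
pseudonymised_record = {
    "user_token": pseudonymise(record["email"]),  # identifier replaced
    "purchase": record["purchase"],               # non-identifying data kept
}
print(pseudonymised_record)
```

Because the mapping depends on the key, rotating or destroying the key changes or removes the ability to re-link tokens to individuals, which is one practical way to manage the re-identification risk discussed above.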
Balancing Privacy Rights and Data Utility
The primary challenge in pseudonymisation is striking the right balance between protecting individuals' privacy and maintaining the utility of the data. To get this right, companies need to consider several factors, such as why they are using the data, the skill level of a potential attacker, and the type of data involved.
Conclusion
Pseudonymisation offers a practical middle ground between full anonymisation and restricted data use, enabling organisations to harness the value of data while protecting individual privacy. Legally, it is recognised as a safeguard but still treated as personal data, requiring compliance under frameworks like the GDPR and India’s DPDP Act. For companies, it is not only a matter of regulatory adherence; it also builds trust and enhances data security. However, its effectiveness depends on robust technical methods, governance, and vigilance. Striking the right balance between privacy and data utility is crucial for sustainable, ethical, and innovation-driven data practices.
References:
- https://gdpr-info.eu/art-4-gdpr/
- https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
- https://gdpr-info.eu/art-25-gdpr/
- https://www.edpb.europa.eu/system/files/2025-01/edpb_guidelines_202501_pseudonymisation_en.pdf
- https://curia.europa.eu/juris/document/document.jsf?text=&docid=303863&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=16466915

Artificial intelligence is growing at a rapid pace, with startups promising breakthroughs in industries and attracting billions in investment. Among these was Builder.ai, a London-based company founded in 2016 by an Indian entrepreneur. Once valued at over $1.5 billion, it was known for its game-changing platform that could let anyone build custom apps quickly and affordably with the help of AI.
Yet in 2025, Builder.ai collapsed dramatically, filing for bankruptcy across multiple countries and laying off nearly 80% of its workforce. What was once a celebrated unicorn has become a cautionary tale, exposing not only the risks of hype-driven growth in AI but also inflicting reputational damage on Indian founders in the global startup ecosystem.
The Rise: Big Promises, Big Investors
Builder.ai branded itself as a no-code/low-code app development platform, where its AI assistant “Natasha” would guide customers in creating apps without technical expertise. The pitch was simple and attractive: app development was made “as easy as ordering pizza.” The story resonated with major investors. Backed by SoftBank, Microsoft, and Qatar’s sovereign wealth fund, Builder.ai raised more than $450 million. It scaled rapidly, positioning itself as one of Europe’s most promising AI startups.
The Cracks Appear
Behind the glamour, the first cracks appeared as early as 2019, when The Wall Street Journal reported that Builder.ai’s platform depended far more on human engineers than on the AI automation it advertised. In reality, the much-hyped AI assistant “Natasha” was often just “a guy instead”: skilled developers in India manually writing code behind the scenes, on whose work the company expanded aggressively.
The real blow came from Builder.ai’s finances. The company was accused of inflating revenue figures by 300%, allegedly using round-tripping tactics involving fake invoices. While it publicly projected revenues of $220 million in 2024, its actual figure was closer to $55 million. When this reality surfaced, investor confidence evaporated quickly, and the company’s liabilities ballooned to nearly $100 million, with less than $10 million in assets remaining.
Collapse and Legal Scrutiny
By 2025, the company’s foundations had crumbled. The founder stepped down as CEO but retained the unusual title of “Chief Wizard.” Massive debts to AWS, Microsoft, and other partners mounted into the hundreds of millions. Assets were seized, and the company filed for bankruptcy in the U.S., UK, India, and the UAE.
For clients, the collapse meant abandoned projects. For employees, around 1,000 of them, it meant sudden unemployment. And for investors, it was a devastating loss. The Securities and Exchange Commission and U.S. Attorney’s Office in New York have since launched investigations into potential fraud and investor misrepresentation.
Reputational Damage: Impact on Indian Founders
Perhaps the most enduring consequence of Builder.ai’s downfall is the hit to the credibility of Indian founders on the global stage.
For years, Indian entrepreneurs have earned trust in global tech circles, with leaders heading companies from Google to Microsoft. Indian-led startups abroad were viewed as reliable, innovative, and growth-driven. Builder.ai’s collapse disrupts this narrative.
The allegations of inflated revenue, AI exaggeration, and questionable governance risk reinforcing skepticism among global investors regarding Indian organisational ethics. For other Indian founders seeking international capital, the road has now become tougher: stricter due diligence, harsher scrutiny of claims, and slower trust-building.
This reputational damage arrives at a critical time when India is positioning itself as a global hub for AI and leads the world in AI skill penetration. Rather than highlighting the strength of India’s entrepreneurial and talent ecosystem, the fall of Builder.ai has drawn attention to the risks of overpromising and underdelivering.
Conclusion
The fall of Builder.ai is more than the bankruptcy of one AI unicorn. It is a warning to companies against chasing hypergrowth by riding the AI wave. While the company’s downfall exposed flaws in governance and accountability, its deeper impact lies in how it dented trust. To drive AI and technology innovation, startups must move beyond flashy valuations and commit to authentic innovation, transparency, and financial integrity.
References
- https://www.moodys.com/web/en/us/insights/lending/moodys-early-warning-in-action-builder-ai.html#:~:text=Despite%20marketing%20itself%20as%20an,fake%20invoices%20that%20inflated%20financials.
- https://today-innovation.webflow.io/unveiling-the-power-of-natasha-an-ai-assistant-of-builder-ai-to-revolutionize-app-generation
- https://finance.yahoo.com/news/builder-ais-shocking-450m-fall-170009323.html
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2108810
- https://www.cnbctv18.com/technology/how-a-london-based-startups-artificial-ai-gambit-backfired-ws-l-19613692.htm#

Introduction
A famous saying holds that “half knowledge is always dangerous”, but too much knowledge of anything can also lead to destruction. Recently, the infamous spyware strains WyrmSpy and DragonEgg were developed by APT41, a group of Chinese hackers. APT41 is a state-sponsored clandestine group based in the People’s Republic of China that has been active since 2012. In contrast to many other state-supported groups, APT41 has a track record of jeopardising both government organisations, for clandestine operations, and private organisations and enterprises, for financial gain. The group targets Android devices through the spyware WyrmSpy and DragonEgg, which masquerade as legitimate applications. According to a U.S. grand jury indictment covering 2019 to 2020, the group was implicated in threatening more than 100 public and private individuals and organisations in the United States and around the world. Moreover, a detailed analysis report was shared by Lookout threat researchers, who have been actively monitoring and tracking both spyware strains.
How these spyware attacks on Android devices take place
To begin with, the malware imitates a real Android application and shows some sort of notification. Once it is successfully installed on the user’s device, it requests multiple device permissions to enable data exfiltration.
WyrmSpy collects log files, photos, device location, SMS messages (read and write), and audio recordings. It has also been verified that no malware activity was detected on Google Play even after multiple levels of security checks. These malicious apps are built with the intent of obtaining rooting access privileges on the device and monitoring activity according to commands received from C2 servers.
Similarly, DragonEgg can collect data files, contacts, locations, and audio recordings, and it also accesses camera photos once it has successfully compromised the device. DragonEgg receives a secondary payload, known as “smallmload.jar”, delivered via an APK (Android Package Kit).
WyrmSpy initially masquerades as a default operating system application, while DragonEgg simulates a third-party keyboard or messaging application.
Overview of the APT41 group’s background
APT41 is a China-based group carrying out stealth activity that is said to have been active since mid-2006. It is rumoured that APT41 was also part of the 2nd Bureau of the People’s Liberation Army (PLA) General Staff Department’s (GSD) 3rd Department. Since 2006, the security of 140+ organisations has been compromised, including some 20 strategically crucial companies. APT41 is also known for methodically plundering hundreds of terabytes of data from at least 141 organisations between 2006 and 2013. An attack typically begins with spear-phishing emails to the targeted victims. These emails use official-looking templates and language pretending to come from a legitimate source, and carry a malicious attachment. When the victim opens the attached file, a backdoor hands control of the targeted machine to the APT group. Once unauthorised access is gained, the attacker visits and revisits the victim’s machine, often remaining dormant for lengthy durations, months or even years.
Advisory points to adhere to while using Android devices
- Install security patch updates at least once a week.
- Clear out unwanted junk files.
- Clear the cache files of frequently used applications.
- Install only required applications, and only from the Google Play Store.
- Download APK files only when they come from trusted sources.
- Before granting device permissions, it is advisable to scan your files or URLs on VirusTotal.com; the results give a good indication of any malicious intent.
- Install reputable antivirus software.
- Check the source of an email before opening any attachment.
- Never connect a randomly found device to your system.
- Moreover, keep track of your device activity. Rather than using devices just for entertainment, it is more important to look after the protection of the data on them.
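As a small illustration of the VirusTotal advice above, the sketch below hashes a downloaded file locally and builds the corresponding VirusTotal lookup URL. The public VirusTotal v3 API serves file reports at `/api/v3/files/{hash}` and requires an API key sent in an `x-apikey` header for actual queries; the helper names here are illustrative assumptions, not part of any official client, and no network request is made.

```python
import hashlib

# VirusTotal v3 file-report endpoint (an 'x-apikey' header is needed to query it).
VT_FILE_ENDPOINT = "https://www.virustotal.com/api/v3/files/{}"

def sha256_of_bytes(data: bytes) -> str:
    """SHA-256 of in-memory bytes (e.g. a small downloaded file)."""
    return hashlib.sha256(data).hexdigest()

def sha256_of_file(path: str) -> str:
    """SHA-256 of a file on disk, read in chunks so large APKs fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def vt_lookup_url(file_hash: str) -> str:
    """Build the VirusTotal v3 report URL for a file hash.

    A real lookup would send a GET request to this URL with your API key
    in the 'x-apikey' header and inspect the returned analysis stats.
    """
    return VT_FILE_ENDPOINT.format(file_hash)

# Example with in-memory bytes standing in for a downloaded APK:
demo_hash = sha256_of_bytes(b"example apk contents")
print(vt_lookup_url(demo_hash))
```

Hashing locally and looking up the hash means the file never leaves your machine unless you explicitly choose to upload it, which is a sensible first step before granting any new app its requested permissions.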
Conclusion
The Network Crack Program Hacker Group (NCPH), which grew into the malicious APT41 group, earlier played the role of a grey-hat hacker; over time the group grew greedy, seeking to make more money by hacking networks, devices, and more. The group conducts supply-chain attacks to gain unauthorised access to networks throughout the world, targeting hundreds of companies across a wide selection of industries, including social media, telecommunications, government, defence, education, and manufacturing. Last but not least, more fraudulent groups with malicious intent will form in the future; it is up to individuals and organisations to secure themselves by practising basic security hygiene to safeguard against such threats and attacks.