#FactCheck - "Deepfake Video Falsely Claims Elon Musk Is Conducting a Cryptocurrency Giveaway"
Executive Summary:
A viral online video claims that billionaire and Tesla & SpaceX founder Elon Musk is promoting a cryptocurrency giveaway. The CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate Elon Musk's facial expressions and voice; this conclusion was reached using reputable, well-verified AI detection tools and applications. The original footage had no connection to any cryptocurrency, Bitcoin (BTC), or Ethereum (ETH) giveaway for crypto-trading enthusiasts. The claim that Mr. Musk endorses such a giveaway is therefore false and misleading.

Claims:
A viral video falsely claims that billionaire and Tesla founder Elon Musk is endorsing a crypto giveaway for crypto enthusiasts among his followers, offering to give away a portion of his Bitcoin and Ethereum holdings.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on the keyframes of the video. The search led us to various legitimate sources featuring Mr. Elon Musk but none of them included any promotion of any cryptocurrency giveaway. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis confirmed with 99.0% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the facial movements and voice, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Mr. Musk revealed no mention of any such giveaway. No credible reports were found linking Elon Musk to this promotion, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming that Elon Musk is promoting a crypto giveaway is a deepfake. Research using tools such as Google Lens and AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such giveaway. The CyberPeace Research Team therefore confirms that the claim is false and misleading.
- Claim: A viral social media video shows Elon Musk conducting a cryptocurrency giveaway.
- Claimed on: X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
Intricate and winding are the passageways of the modern digital age, a place where the reverberations of truth effortlessly blend, yet hauntingly contrast, with the echoes of falsehood. Within this complex realm, the World Economic Forum (WEF) has illuminated the darkened corners with its powerful spotlight, revealing the festering, insidious network of misinformation and disinformation that snakes through the virtual and physical worlds alike. The WEF's Global Risks Report 2024 gravely identifies this malignant duo, misinformation and disinformation, as the most formidable and immediate threats to our collective well-being.
Published with the solemn tone suitable for the prelude to such a grand international gathering as the Annual Summit in Davos, the report presents a vivid tableau of our shared global landscape, one dominated by the treacherous pitfalls of deceit and unverified claims. These perils, if unrecognised and unchecked by societal checks and balances, possess the force to rip apart the intricate tapestry of our liberal institutions, shaking the pillars of democracies and endangering the vulnerable fabric of social cohesion.
Election Mania
We find ourselves perched on the edge of a future in which the voices of nearly three billion human beings will make their mark on the annals of history through the varied electoral processes of nations such as Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom, and the United States. The spectre of misinformation, however, can potentially corrode the integrity of the governing entities that will emerge from these democratic processes. The warning issued by the WEF is unambiguous: we are flirting with the possibility of disorder and turmoil, where the unchecked dispersion of fabrications and lies could kindle flames of unrest, manifesting in violent protests, hate-driven crimes, civil unrest, and the scourge of terrorism.
Derived from the collective wisdom of over 1,400 experts in global risk, esteemed policymakers, and industry leaders, the report crafts a sobering depiction of our world's journey. It paints an ominous future that increasingly endows governments with formidable power—to brandish the weapon of censorship, to unilaterally declare what is deemed 'true' and what ought to be obscured or eliminated in the virtual world of sharing information. This trend signals a looming potential for wider and more comprehensive repression, hindering the freedoms traditionally associated with the Internet, journalism, and unhindered access to a panoply of information sources—vital fora for the exchange of ideas and knowledge in a myriad of countries across the globe.
Prominence of AI
When the gaze of the report extends further over a decade-long horizon, the prominence of environmental challenges such as the erosion of biodiversity and alarming shifts in the Earth's life-support systems ascend to the pinnacle of concern. Yet, trailing closely, the digital risks continue to pulsate—perpetuated by the distortions of misinformation, the echoing falsities of disinformation, and the unpredictable repercussions stemming from the utilization and, at times, the malevolent deployment of artificial intelligence (AI). These ethereal digital entities, far from being illusory shades, are the precursors of a disintegrating world order, a stage on which regional powers move to assert and maintain their influence, instituting their own unique standards and norms.
The prophecies set forth by the WEF should not be dismissed as mere academic conjecture; they are instead a trumpet's urgent call to mobilize. With a startling 30 percent of surveyed global experts bracing for the prospect of international calamities within the mere span of the coming two years, and an even more significant portion—nearly two-thirds—envisaging such crises within the forthcoming decade, it is unmistakable that the time to confront and tackle these looming risks is now. The clarion is sounding, and the message is clear: inaction is no longer an available luxury.
Maldives and India Row
To pluck precise examples from the boundless field of misinformation, we might observe the Lakshadweep-Malé incident wherein an ordinary boat accident off the coast of Kerala was grotesquely transformed into a vessel for the far-reaching tendrils of fabricated narratives, erroneously implicating Lakshadweep in the spectacle. Similarly, the tension-laden India-Maldives diplomatic exchange becomes a harrowing testament to how strained international relations may become fertile ground for the rampant spread of misleading content. The suspension of Maldivian deputy ministers following offensive remarks, the immediate tumult that followed on social media, and the explosive proliferation of counterfeit news targeting both nations paint a stark and intricate picture of how intertwined are the threads of politics, the digital platforms of social media, and the virulent propagation of falsehoods.
Yet, these are mere fragments within the extensive and elaborate weave of misinformation that threatens to enmesh our globe. As we venture forth into this dangerous and murky topography, it becomes our collective responsibility to maintain a sense of heightened vigilance, to consistently question and verify the sources and content of the information that assails us from all directions, and to cultivate an enduring culture anchored in critical thinking and discernment. The stakes are colossal—for it is not merely truth itself that we defend, but rather the underlying tenets of our societies and the sanctity of our cherished democratic institutions.
Conclusion
In this fraught era, marked indelibly by uncertainty and perched precariously on the cusp of numerous pivotal electoral ventures, let us refuse the role of passive bystanders to the unraveling of our collective reality. We must embrace our role as active participants in the relentless pursuit of truth, fortified by the stark awareness that our entwined futures rest on our willingness and ability to distinguish the veritable from the spurious within the perilous lattice of misinformation. We must continually remind ourselves that, in the quest for a stable and just global order, the unerring discernment of fact from fiction is not only an act of intellectual integrity but a civic and moral imperative.
References
- https://www.businessinsider.in/politics/world/election-fuelled-misinformation-is-serious-global-risk-in-2024-says-wef/articleshow/106727033.cms
- https://www.deccanchronicle.com/nation/current-affairs/100124/misinformation-tops-global-risks-2024.html
- https://www.msn.com/en-in/news/India/fact-check-in-lakshadweep-male-row-kerala-boat-accident-becomes-vessel-for-fake-news/ar-AA1mOJqY
- https://www.boomlive.in/news/india-maldives-muizzu-pm-modi-lakshadweep-fact-check-24085
- https://www.weforum.org/press/2024/01/global-risks-report-2024-press-release/

Introduction
The emergence of deepfake technology has become a significant problem in an era driven by technological growth and power. The government has reacted proactively as a result of concerns about the exploitation of this technology due to its extraordinary realism in manipulating information. The national government is in the vanguard of defending national interests, public trust, and security as the digital world changes. On the 26th of December 2023, the central government issued an advisory to businesses, highlighting how urgent it is to confront this growing threat.
The directive aims to directly address the growing concerns around deepfakes, or misinformation driven by AI. This advisory is the outcome of talks that Union Minister Shri Rajeev Chandrasekhar held with intermediaries over the course of a month-long Digital India Dialogue. Its main aim is to inform users accurately and clearly about prohibited information, especially content listed under Rule 3(1)(b) of the IT Rules.
Advisory
The Ministry of Electronics and Information Technology (MeitY) has sent a formal recommendation to all intermediaries, requesting adherence to current IT regulations and emphasizing the need to address issues with misinformation, specifically those driven by artificial intelligence (AI), such as Deepfakes. Union Minister Rajeev Chandrasekhar released the recommendation, which highlights the necessity of communicating forbidden information in a clear and understandable manner, particularly in light of Rule 3(1)(b) of the IT Rules.
Advice on Prohibited Content Communication
According to MeitY's advice, intermediaries must transmit content that is prohibited by Rule 3(1)(b) of the IT Rules in a clear and accurate manner. This involves giving users precise details during enrollment, login, and content sharing/uploading on the website, as well as including such information in customer contracts and terms of service.
Ensuring Users Are Aware of the Rules
Digital platform suppliers are required to inform their users of the laws that are relevant to them. This covers provisions found in the IT Act of 2000 and the Indian Penal Code (IPC). Corporations should inform users of the potential consequences of breaking the restrictions outlined in Rule 3(1)(b) and should also urge users to notify any illegal activity to law enforcement.
Talks Concerning Deepfakes
For more than a month, Union Minister Rajeev Chandrasekhar had a significant talk with various platforms where they addressed the issue of "deepfakes," or computer-generated fake videos. The meeting emphasized how crucial it is that everyone abides by the laws and regulations in effect, particularly the IT Rules to prevent deepfakes from spreading.
Addressing the Danger of Disinformation
Minister Chandrasekhar underlined the grave issue of disinformation, particularly in the context of deepfakes, which are false pieces of content produced using the latest developments such as artificial intelligence. He emphasized the dangers this deceptive data posed to internet users' security and confidence. The Minister emphasized the efficiency of the IT regulations in addressing this issue and cited the Prime Minister's caution about the risks of deepfakes.
Rule Against Spreading False Information
The Minister referred particularly to Rule 3(1)(b)(v), which states unequivocally that it is forbidden to disseminate false information, even when doing so involves cutting-edge technology like deepfakes. He called on intermediaries—the businesses that offer digital platforms—to take prompt action to take such content down from their systems. Additionally, he ensured that everyone is aware that breaking such rules has legal implications.
Analysis
The Central Government's latest advisory on deepfake technology demonstrates a proactive strategy to deal with new issues. It also highlights the necessity of comprehensive legislation to directly regulate AI material, particularly with regard to user interests.
There is a wider regulatory vacuum for content produced by artificial intelligence, even though the current guideline concentrates on the precision and lucidity of information distribution. While some limitations are mentioned in the existing laws, there are no clear guidelines for controlling or differentiating AI-generated content.
Positively, it is laudable that the government has recognized the dangers posed by deepfakes and is making appropriate efforts to counter them. As AI technology develops, there is a chance to create thorough laws that not only solve problems but also create a supportive environment for the creation of ethical AI content. User protection, accountability, openness, and moral AI use would all benefit from such laws. This offers an opportunity for regulatory development to guarantee the successful and advantageous incorporation of AI into our digital environment.
Conclusion
The Central Government's preemptive advice on deepfake technology shows a great dedication to tackling new risks in the digital sphere. The advice highlights the urgent need to combat deepfakes, but it also highlights the necessity for extensive legislation on content produced by artificial intelligence. The lack of clear norms offers a chance for constructive regulatory development to protect the interests of users. The advancement of AI technology necessitates the adoption of rules that promote the creation of ethical AI content, guaranteeing user protection, accountability, and transparency. This is a turning point in the evolution of regulations, making it easier to responsibly incorporate AI into our changing digital landscape.
References
- https://economictimes.indiatimes.com/tech/technology/deepfake-menace-govt-issues-advisory-to-intermediaries-to-comply-with-existing-it-rules/articleshow/106297813.cms
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542#:~:text=Ministry%20of%20Electronics%20and%20Information,misinformation%20powered%20by%20AI%20%E2%80%93%20Deepfakes.
- https://www.timesnownews.com/india/centres-deepfake-warning-to-it-firms-ensure-users-dont-violate-content-rules-article-106298282#:~:text=The%20Union%20government%20on%20Tuesday,actors%2C%20businesspersons%20and%20other%20celebrities

Introduction
Generative Artificial Intelligence (GenAI) is changing the employee workday: its use is no longer limited to writing emails or debugging code, but now also includes analysing contracts, generating reports, and much more. The use of AI tools in everyday work has become commonplace, but the speed at which companies have adopted these technologies has created a new kind of risk. Unlike threats that come from an outside attacker, Shadow AI is created inside an organisation by a legitimate employee who uses unapproved AI tools to make their work more efficient and productive. In many cases, the employee is unaware of the potential security, data privacy, and compliance risks involved in using such tools to perform their job duties.
What Is Shadow AI?
Shadow AI is the use of AI tools at work that aren't provided by the company, such as chatbots or other software programs, without the knowledge or permission of the employer. Examples of Shadow AI include:
- Using personal ChatGPT or other chatbot accounts to complete tasks at the office
- Uploading business-related documents to online AI tools for analysis or summarisation
- Copying proprietary source code into an online AI model for debugging
- Installing browser extensions and add-ons that are not approved by IT or security personnel
How Shadow AI Is Harmful
1. Uncontrolled Data Exposure
When employees input information into unapproved AI tools, that information moves outside the company's controls. This can include employees' personal information, third-party personal data, private company information (such as source code or contracts), and internal strategy documents. Once data is entered into such a tool, the company loses all ability to monitor how it is stored, processed, or retained. A data leak can thus exist without any malicious cyberattack: the biggest risk is not malice but the loss of control and governance over sensitive data.
2. Regulatory and Legal Non-Compliance
Data protection laws like GDPR, India’s Digital Personal Data Protection (DPDP) Act, HIPAA, and other relevant sectoral laws require businesses to process data in accordance with the law, to minimise the amount of data they use, and to be accountable for their actions. Shadow AI often results in the unlawful use of personal data due to a lack of a legal basis for the processing, unauthorised cross-border data transfers, and not having appropriate contractual protections in place with their AI service providers. Regulators do not see the convenience of employees as an excuse for not complying with the law, and therefore, the organisation is ultimately responsible for any violations that occur.
3. Loss of Intellectual Property
Employees frequently use AI tools to speed up tasks involving proprietary information—debugging code, reviewing contracts, or summarising internal research. When done using unapproved AI platforms, this can expose trade secrets and intellectual property, eroding competitive advantage and creating long-term business risk.
Real-Life Example: Samsung’s ChatGPT Data Leak
In 2023, a case study exemplifying the Shadow AI risk occurred when Samsung Electronics placed a temporary ban on employee access to ChatGPT and other AI tools after reports from engineers revealed they were using ChatGPT to create debugging processes for internal source code and to summarise meeting notes. Consequently, confidential source code related to semiconductors was inadvertently uploaded onto a public AI platform. While there were no known incursions into the company’s system due to this incident, Samsung faced a significant challenge: once sensitive information is input into a public AI tool, it exists on external servers that are outside of the company’s purview or control.
As a result of this incident, Samsung restricted employee use of ChatGPT on corporate devices, issued a series of internal communications prohibiting the sharing of corporate data with public AI tools, and increased the urgency of their discussions regarding the adoption of secure, enterprise-level AI (artificial intelligence) solutions.
What Organisations Are Doing Today
Many organisations respond to Shadow AI risk by:
- Blocking access at the network level
- Circulating warning emails or policies
While these actions may reduce immediate exposure, they fail to address the root cause: employees still need AI to perform their jobs efficiently. As a result, bans often push AI usage underground, increasing Shadow AI rather than eliminating it.
Why Blocking AI Does Not Work—Governance Does
History has demonstrated that prohibition does not work; we saw this with attempts to block cloud storage, instant messaging, and collaboration tools. When employers block AI, employees fall back on personal devices and accounts, leaving employers with no real-time visibility into how these technologies are used and creating friction with the security and compliance teams trying to enforce which tools employees may use. Prohibiting AI will not stop its adoption; it will only make that adoption less safe and less responsible. The challenge for effective organisations is therefore to shift from denial to governance-first AI strategies aimed at controlling data usage, protection, and security, rather than merely restricting access to a list of specific tools.
Shadow AI: A Silent Legal Liability Under the GDPR
Shadow AI is not merely a problem for the information technology department; it is a failure of governance, compliance, and law. When unapproved AI tools are used, the organisation processes personal data without a lawful basis (Article 6 of the General Data Protection Regulation (GDPR)), repurposes data beyond its original intent in breach of purpose limitation (Article 5(1)(b)), and routinely exceeds necessity in breach of data minimisation (Article 5(1)(c)). Such tools frequently involve unauthorised international data transfers in breach of Chapter V, and violate Article 32 because no enforceable safeguards are in place. Most significantly, the failure to demonstrate oversight, logging, and control under Articles 5(2) and 24 constitutes a failure of accountability. From a regulatory perspective, Shadow AI is therefore neither accidental nor defensible.
The Right Solution: Secure and Governed AI Adoption
1. Provide Approved AI Tools
Employers should supply business-approved AI tools that help workers stay productive while maintaining strong protections: storing data separately, not using employees' data to train models, defining how long data is kept, and setting rules for deleting it. When employees are given verified, secure AI options that fit their work processes, they rely significantly less on Shadow AI.
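The safeguards above (separate storage, no training on user data, bounded retention, deletion on request) can be captured as a simple policy record an employer checks before approving a tool. The field names and the 90-day retention ceiling below are illustrative assumptions, not drawn from any vendor's actual API:

```python
# Hypothetical tenant policy for an enterprise AI deployment.
approved_tool_policy = {
    "tenant_isolated_storage": True,   # data stored separately per tenant
    "train_on_user_data": False,       # provider may not train on inputs
    "retention_days": 30,              # how long prompts/outputs are kept
    "deletion_on_request": True,       # users can trigger deletion
}

def policy_acceptable(policy: dict) -> bool:
    """Minimal acceptance check an employer might run before approving a tool."""
    return (policy["tenant_isolated_storage"]
            and not policy["train_on_user_data"]
            and policy["retention_days"] <= 90
            and policy["deletion_on_request"])

print(policy_acceptable(approved_tool_policy))  # True
```

A tool that trains on user inputs or keeps data indefinitely would fail this check and stay off the approved list.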
2. Enforce Zero-Trust Data Access
AI governance must follow "zero trust" principles: grant data access only on a least-privilege basis, continuously verify user identity and context, and establish context-aware controls that monitor and track all user activity. This will become especially important as agent-like AI systems grow increasingly autonomous and operate at machine speed, where even small configuration errors can result in rapid, large-scale data exposure.
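The least-privilege idea can be sketched as a deny-by-default gate in front of an internal AI proxy. The roles, data classifications, and grants below are purely illustrative:

```python
# Explicit (role, data_classification) pairs that are allowed; everything
# else is denied by default, in keeping with least privilege.
GRANTS = {
    ("analyst", "public"),
    ("analyst", "internal"),
    ("legal", "confidential"),
}

def can_access(role: str, classification: str) -> bool:
    """Allow only pairs on the explicit grant list; deny everything else."""
    return (role, classification) in GRANTS

print(can_access("analyst", "internal"))      # True
print(can_access("analyst", "confidential"))  # False: never explicitly granted
```

A real deployment would also verify identity continuously and log every decision, but the deny-by-default core stays the same.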
3. Apply DLP and Audit Logging
It is important to have robust data loss prevention (DLP) measures in place to protect sensitive data that is sent outside an organisation. Every end user or machine that accesses the data should be recorded in a comprehensive audit log indicating when and how the data was accessed. In combination with other controls, these measures create accountability, support regulatory compliance, and assist in detecting and responding to incidents.
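A minimal sketch of these two controls combined: a pattern-based check on outbound text, with every decision written to an audit log. The two patterns and field names are illustrative only; a production DLP engine uses far richer detectors:

```python
import re
from datetime import datetime, timezone

# Illustrative detectors for sensitive content in outbound text.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(sk|pk)_[A-Za-z0-9]{16,}\b"),
}

audit_log: list[dict] = []

def check_outbound(user: str, destination: str, text: str) -> bool:
    """Return True if the text may leave the organisation; log every decision."""
    hits = [name for name, pat in PATTERNS.items() if pat.search(text)]
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                  # who
        "destination": destination,                    # where it was going
        "blocked": bool(hits),
        "matched": hits,
    })
    return not hits

# A prompt containing an email address is blocked, and the block is logged.
print(check_outbound("alice", "chat.example-ai.com", "Summarise: contact bob@corp.com"))  # False
```

The log entry captures who tried to send what where and when, which is exactly the evidence an incident-response or compliance review needs.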
4. Maintain Visibility Across AI, Cloud, and SaaS
Security teams need unified visibility across AI tools, personal cloud applications, and SaaS platforms. Risks move across systems, and controls must follow the data wherever it flows.
Conclusion
This new threat exposes an organisation to the risk of data loss through leaks, regulatory fines, liability for the loss of intellectual property, and reputational damage, all of which can occur without any intent to cause harm. The way forward is not to block AI, but to adopt a clear framework built on governance, visibility, and secure enablement. This approach allows organisations to use AI with confidence, while ensuring trust, accountability, and effective oversight to protect data and support AI in reaching its full transformative potential. AI use is encouraged, but it must be done responsibly, ethically, and securely.
References
- https://bronson.ai/resources/shadow-ai/
- https://www.varonis.com/blog/shadow-ai
- https://www.waymakeros.com/learn/gdpr-hipaa-shadow-ai-compliance-nightmare
- https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/
- https://www.usatoday.com/story/special/contributor-content/2025/05/23/shadow-ai-the-hidden-risk-in-todays-workplace/83822081007