#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims Union Finance Minister Nirmala Sitharaman endorsed a new government investment project, and it has been widely shared. However, our research indicates that the video has been AI-altered and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated investment system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live stream of Sitharaman’s press conference after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, showing that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip had been manipulated through voice cloning.

Conclusion:
The viral social media video showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

Executive Summary:
In the digital world, people are increasingly becoming targets of online scams that rely on deception. One recent example exploiting the election season on social media is the "BJP - Election Bonus" offer, which promises a cash prize of Rs. 5,000 or more for completing a simple questionnaire. This article details the swindle, exposes its deceptive tricks, and offers recommendations on how to protect yourself from such online fraud, especially during the upcoming elections.
False Claim:
The "BJP - Election Bonus" campaign claims that with just a few clicks, users will receive a cash prize. The scheme has no real association with the Bharatiya Janata Party (BJP)-led Government or Prime Minister Shri Narendra Modi; it merely uses their images and branding to appear legitimate. The imposters exploit the public's trust in the Government and the widespread desire for easy money to ensnare unaware victims, specifically ahead of the upcoming Lok Sabha elections.

The Deceptive Scheme:
- Tempting Social Media Offer: The fraud begins with an attractive link on social media platforms. The scammers claim the offer comes from the Bharatiya Janata Party (BJP), with the caption “The official party has prepared many gifts for their supporters,” accompanied by an image of Prime Minister Shri Narendra Modi.
- Luring with Money: The offer promises Rs. 5,000 or more, exploiting people’s desire for financial gain, particularly during election campaigns.
- Tricking with Questions: Clicking the link takes the person to a page with simple questions. These questions are designed to make people feel safe and believe they have been selected for a genuine government programme.
- The Open-the-Box Trap: Once the questions are answered, the final instruction is to open a box for the prize. This is simply a tactic to build curiosity about the reward.
- Fake Reward and Spreading the Scam: Upon opening the box, the recipient is shown a message claiming they have won Rs. 5,000. The prize is not real; it is merely a hook to make them share the link on WhatsApp, helping the scammers reach more victims.
The fraudsters use the political party's name and the Prime Minister's image to lend the scheme plausibility, although there is no real connection. They exploit people's desire for monetary help, together with the timing of the elections, to make victims susceptible to their tricks.
Analytical Breakdown:
- The campaign is a cleverly crafted scheme that lures people by misusing their trust in the Government. By using the BJP's branding and the Prime Minister's photo, the fraudsters make their misleading offer look credible. Fake reviews and the promised cash reward are the two main components designed to draw users in, setting them on a path of deception.
- By sharing the link over WhatsApp, users become unwitting accomplices, helping the scammers reach an even bigger audience, especially with the elections around the corner.
- The timing of the fraud is particularly concerning, coming just before the election. Scammers exploit the political turmoil and the spread of unconfirmed rumours and speculation about the upcoming polls, linking their scam to the party and its leadership to capitalise on political affiliations.
- We have also cross-checked, and as of now there is no established, credible source or official notification confirming any such offer from the party.
- Domain Analysis: The campaign is hosted on a third-party domain, different from the official website, which raises doubts. WHOIS information reveals that the domain was registered only recently, on 29 March 2024, just days before the campaign appeared.

- Domain Name: PSURVEY[.]CYOU
- Registry Domain ID: D443702580-CNIC
- Registrar WHOIS Server: whois.hkdns.hk
- Registrar URL: http://www.hkdns.hk
- Updated Date: 2024-03-29T16:18:00.0Z
- Creation Date: 2024-03-29T15:59:17.0Z (Recently Created)
- Registry Expiry Date: 2025-03-29T23:59:59.0Z
- Registrant State/Province: Anhui
- Registrant Country: CN (China)
- Name Server: NORMAN.NS.CLOUDFLARE.COM
- Name Server: PAM.NS.CLOUDFLARE.COM
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
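The "recently created domain" red flag above can be checked programmatically. Below is a minimal Python sketch, using only the standard library and the WHOIS record excerpt quoted above as input (rather than a live WHOIS query), that parses the creation date and computes the domain's age; the reference date used is illustrative.

```python
from datetime import datetime, timezone

# Excerpt of the WHOIS record for the scam domain, as quoted in this article.
WHOIS_RECORD = """\
Domain Name: PSURVEY.CYOU
Creation Date: 2024-03-29T15:59:17.0Z
Registry Expiry Date: 2025-03-29T23:59:59.0Z
Registrant Country: CN
"""


def parse_creation_date(whois_text):
    """Extract the 'Creation Date' field from raw WHOIS text."""
    for line in whois_text.splitlines():
        if line.strip().lower().startswith("creation date:"):
            raw = line.split(":", 1)[1].strip()
            # Timestamps like 2024-03-29T15:59:17.0Z: parse with an explicit format.
            return datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
                tzinfo=timezone.utc
            )
    raise ValueError("No 'Creation Date' field found")


def domain_age_days(whois_text, as_of):
    """Whole days between registration and the reference time."""
    return (as_of - parse_creation_date(whois_text)).days


# A domain that is only days old when a "government offer" appears on it
# is a strong scam signal.
print(domain_age_days(WHOIS_RECORD, datetime(2024, 4, 5, tzinfo=timezone.utc)))
```

A real investigation would query a WHOIS server directly or use a library such as python-whois to fetch the record; the age check itself stays the same.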
CyberPeace Advisory and Best Practices:
- Be careful and watch out for online offers that seem too good to be true, particularly during election periods. Exercise a high level of caution with such offers, as they usually conceal dishonest schemes.
- Carefully verify the authenticity of every campaign or offer before interacting with it. Do not click on suspicious links, and do not share private data that could be used to further the scam.
- If you come across any such suspicious activity, or if you believe you have been scammed, report it to the relevant authorities, such as the local police or the cybercrime cell. Reporting is one of the most effective tools for preventing the spread of these misleading schemes, and it can support investigations.
- Educate yourself and your family about scammers' usual tricks, including their election-related strategies. Encourage people to think critically and apply a healthy dose of skepticism to online offers and promotions that promise easy money or rewards.
- Stay highly alert as you navigate the digital landscape, especially during elections. Always verify the authenticity of the information you encounter before acting on it or passing it on to someone else.
- If you have any doubt or concern about a particular offer or campaign, don't hesitate to seek help from reliable sources such as cybersecurity experts or government agencies. Consulting credible sources will help you make informed decisions and guard against falling prey to these schemes.
Conclusion:
The "BJP - Election Bonus" campaign is a real-world case study of how Internet fraud is becoming increasingly common, particularly before elections. By understanding the tactics these scammers employ and how they abuse the community's trust in the Government and political figures, we can equip ourselves and our communities to avoid falling victim to such fraudulent schemes. Together, we can strive for a digital environment free of threats and security breaches, even amid the heightened political tension that accompanies elections.

Introduction
A bill requiring social media companies, providers of encrypted communications, and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) has advanced to the Senate floor, alarming privacy advocates who claim the legislation transforms businesses into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.
Why is there a requirement for online companies to report drug activity?
The impetus behind the bill was the death of a Kansas teenager who unknowingly took a fentanyl-laced pill he had purchased on Snapchat. The bill requires social media companies and other web communication providers to give the DEA users’ names and other information when the companies have “actual knowledge” that illicit drugs are being distributed on their platforms.
There is an urgent need to look into this matter, as platforms like Snapchat and Instagram are in constant use by netizens. If such apps allow the sale of drugs to go unchecked, they risk becoming major drug-selling platforms.
Threat to end-to-end encryption
End-to-end encryption has long been criticised by law enforcement for creating a “lawless space” that criminals, terrorists, and other bad actors can exploit for illicit purposes. End-to-end encryption is important for privacy, but it draws criticism because criminals can also use it to facilitate cyber fraud and other cybercrimes.
Cases of drug peddling on social media platforms
Getting drugs on social media can be as easy as ordering an Uber. A survey found that access to illegal drugs on social media applications is “staggering”, contributing to a rising number of fentanyl overdoses, which now rank alongside suicide, gun violence, and accidents among the leading causes of death.
According to another survey, drug dealers use slang, emoticons, QR codes, and disappearing messages to reach customers while evading content-monitoring measures on social networking platforms. Dealers are frequently active across numerous platforms, advertising their products on Instagram while providing their WhatsApp or Snapchat handles for queries, making it difficult for law enforcement officials to crack down on the transactions.
Social media platforms therefore need to report such drug-selling activity to the Drug Enforcement Administration. The bill requires online companies to report drug activity occurring on their platforms, such as in the Snapchat case mentioned above. In many other cases, dealers sell drugs through Instagram, Snapchat, and similar apps; if Instagram blocks one account, they simply create another. Blocking accounts alone does not stop drug trafficking on social media platforms.
Will this put the privacy of users at risk?
It is important to report the crime of selling drugs through social media platforms. Under the bill, companies would detect and report only drug-related activity, targeting the bad actors and cybercriminals involved rather than monitoring users broadly. Detection would focus on the specific activities on the applications where they occur. Because social media platforms currently lack regulations governing such conduct, their convenience has made them a major vehicle for drug sales.
Conclusion
Social media companies should be required to report such activities on their platforms immediately to the Drug Enforcement Administration so that the DEA can take the necessary steps, rather than the platform merely blocking the account; blocking alone does not shut down these online drug markets. Proper reporting mechanisms, and broader social media regulation, are needed, given the enormous influence these platforms have on people.

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This action places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and strengthen digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking for all deep synthesis content. Service providers must embed digital identifiers in AI media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes that survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards enable users to trace the source and provenance of an item of content.
Nonetheless, watermarking is not infallible. Most watermarking methods are susceptible to tampering: adversaries with expertise can use cropping, editing, or AI software to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and authenticating watermarks for billions of items of online content requires huge computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which maintain authenticity without sacrificing privacy. These new techniques hold promise for overcoming current technical deficiencies and making watermarking more secure.
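To make the imperceptible-marker idea concrete, here is a deliberately simplified Python sketch. It is not how SynthID or any production system actually works (those techniques are far more robust and largely proprietary); it only hides a bit string in the least significant bits of pixel values, and then shows why such a naive scheme breaks under even mild image processing.

```python
def embed_watermark(pixels, bits):
    """Set the least significant bit of the first len(bits) pixel values
    to the watermark bits; changes of +/-1 are visually imperceptible."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out


def extract_watermark(pixels, length):
    """Read the least significant bits back out as a bit string."""
    return "".join(str(p & 1) for p in pixels[:length])


pixels = [200, 113, 54, 78, 91, 160, 33, 240]  # toy grayscale values
mark = "1011"

stamped = embed_watermark(pixels, mark)
print(extract_watermark(stamped, 4))  # recovers "1011" from the intact image

# Fragility: even a simple brightness change scrambles the LSBs and
# destroys the mark, which is why robust schemes spread the signal
# redundantly across many image features instead.
degraded = [p // 2 for p in stamped]
print(extract_watermark(degraded, 4) == mark)
```

The second check fails, illustrating the tampering and robustness concerns discussed above.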
Challenges in Enforcement
Though increasing agreement exists for watermarking, implementation of such policies is still a major issue. Jurisdictional constraints prevent enforceability globally. A watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an exigency for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not see or comprehend its meaning. This aligns with issues of dealing with misinformation, wherein it's not sufficient just to mark off fake content, users need to be taught how to think critically about the information they're using. Public education campaigns, digital media literacy and embedding watermarking labels within user-friendly UI elements are necessary to ensure this technology is actually effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might necessitate the embedding of metadata that will disclose the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists utilising AI tools for creative or informative reasons. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to achieve a balance by employing privacy-protection watermarking strategies that verify the origin of the content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, misinformation spread by AI on vaccines, treatments and public health interventions caused widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish between authentic sources and manipulated media and protected public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework from Adobe, Microsoft, and others embeds tamper-evident metadata into images and videos, enabling traceability of content origin. SynthID from Google is already implemented on its Imagen text-to-image model and invisibly watermarks AI-generated images in a way designed to resist common manipulations. The Partnership on AI (PAI) is also taking a leadership role by building out ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to adopt watermarking standards consistent with global practices while safeguarding civil liberties.
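The provenance idea behind C2PA can be illustrated with a small Python sketch. This is not the real C2PA manifest format (which uses signed, standardised metadata assertions); it only shows the core mechanism of binding a content hash and an origin label into a verifiable record, so that any alteration of the content is detectable. The origin label used here is hypothetical.

```python
import hashlib
import json


def make_manifest(content, origin, prev_hash=""):
    """Build a provenance record binding a content hash to its stated
    origin and (optionally) the previous manifest in an edit chain."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,
        "prev_manifest": prev_hash,
    }
    # Hash the manifest itself so later records can chain to it.
    manifest["manifest_hash"] = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return manifest


def verify(content, manifest):
    """True only if the content still matches the hash recorded at creation."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()


original = b"synthetic image bytes"
m1 = make_manifest(original, origin="generator:model-x")  # hypothetical label

print(verify(original, m1))   # untouched content checks out: True
edited = b"tampered image bytes"
print(verify(edited, m1))     # any edit breaks the binding: False
```

A real provenance system additionally signs each manifest cryptographically, so the record itself cannot be forged; the hash binding shown here is the part that makes tampering with content detectable.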
Conclusion
Watermarking regulations for synthetic media content are an essential step toward creating a safer and more credible digital world. As artificial media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance, and accountability grows. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, compliant, and privacy-friendly. India is at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. By drawing on global lessons, adopting best-in-class watermarking frameworks, and promoting public awareness, the nation can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency