#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that went viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment project. The video has been widely shared. However, our research indicates that it has been AI-altered and is being used to spread misinformation.

Claim:
The video suggests that Finance Minister Nirmala Sitharaman is endorsing an automated investment platform that promises daily earnings of ₹15,00,000 with an initial investment of ₹21,000.

Fact Check:
To check the genuineness of the claim, we performed a keyword search for “Nirmala Sitharaman investment program” but found no such investment-related scheme. We also observed that the lip movements appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, showing that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip was manipulated through voice cloning.

Conclusion:
The viral video on social media showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment project is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
The Ministry of Electronics and Information Technology recently released the IT Intermediary Guidelines 2023 Amendment for social media and online gaming. The notification comes at a crucial time, as the drafting of the Digital India Bill is underway. There is no denying that this amendment, part of a series of bills focused on amending and adding new provisions, will significantly improve the dynamics of cyberspace in India in terms of reporting, grievance redressal, accountability, and the protection of digital rights and duties.
What is the Amendment?
The amendment introduces fact-checking, a crucial mechanism for validating information on the various platforms operating in cyberspace. Misinformation and disinformation rose significantly during the Covid-19 pandemic, making fact-checking more important than ever. Policymakers have taken this into consideration and incorporated it into the Intermediary Guidelines. The key features of the guidelines are as follows –
- The phrase “online game,” which is now defined as “a game that is offered on the Internet and is accessible by a user through a computer resource or an intermediary,” has been added.
- A clause has been added that emphasises that if an online game poses a risk of harm to the user, intermediaries and complaint-handling systems must advise the user not to host, display, upload, modify, publish, transmit, store, update, or share any data related to that risky online game.
- A proviso to Rule 3(1)(f) has been added, which states that if an online gaming intermediary has provided users access to any legal online real-money game, it must promptly notify its users of the change within 24 hours.
- Sub-rules have been added to Rule 4 that focus on any legal online real money game and require large social media intermediaries to exercise further due diligence. In certain situations, online gaming intermediaries:
- Are required to display a demonstrable and prominent mark of verification by an online gaming self-regulatory organisation on each permitted online real-money game.
- Must not offer financing themselves, or allow financing to be provided by a third party, for playing online games.
- Verification of real money online gaming has been added to Rule 4-A.
- The Ministry may name as many self-regulatory organisations for online gaming as it deems necessary for confirming an online real-money game.
- Each online gaming self-regulatory body will prominently publish on its website/mobile application the procedure for filing complaints and the appropriate contact information.
- After reviewing an application, the self-regulatory authority may declare a real-money online game to be a legal game if it is satisfied that:
- There is no wagering on the outcome of the game.
- The game complies with the regulations governing the legal age at which a person can enter into a contract.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have a new rule 4-B (Applicability of certain obligations after an initial period) that states that the obligations of the rule under rules 3 and 4 will only apply to online games after a three-month period has passed.
- According to Rule 4-C (Obligations in Relation to Online Games Other Than Online Real Money Games), the Central Government may direct the intermediary to make necessary modifications without affecting the main idea if it deems it necessary in the interest of India’s sovereignty and integrity, the security of the State, or friendship with foreign States.
- Intermediaries, such as social media companies or internet service providers, will have to take action against such content identified by this unit or risk losing their “safe harbour” protections under Section 79 of the IT Act, which lets intermediaries escape liability for what third parties post on their websites. This is problematic and unacceptable. Additionally, these notified revisions can circumvent the takedown-order process described in Section 69A of the IT Act, 2000. They also violate the ruling in Shreya Singhal v. Union of India (2015), which established precise rules for content blocking.
- The government should not decide whether material is “fake” or “false” without a right of appeal or judicial oversight, since this power could be abused to thwart scrutiny or investigation by media groups. Government takedown orders have already been issued for critical remarks or opinions posted on social media sites; most platforms have had to abide by them, and only a few, like Twitter, have challenged them in court.
Conclusion
The new rules briefly cover fact-checking, content takedown by the Government, and the relevance and scope of Sections 69A and 79 of the Information Technology Act, 2000. Hence, it is pertinent that intermediaries maintain compliance with the rules to ensure the regulations remain sustainable and efficient for the future. Despite these rules, the responsibility of netizens cannot be neglected; active civic participation, coupled with efficient regulation, will go a long way in safeguarding the Indian cyber ecosystem.

Introduction
On 20 August 2024, the Telecom Regulatory Authority of India (TRAI) issued directives requiring Access Service Providers to adhere to specific guidelines designed to protect consumer interests and prevent fraudulent activities. These steps advance TRAI's efforts to promote a secure messaging ecosystem, protecting consumers and eliminating fraudulent conduct.
Key Highlights of the TRAI’s Directives
- For improved monitoring and control, TRAI has directed Access Service Providers to move telemarketing calls beginning with the 140 series to an online DLT (Distributed Ledger Technology) platform by September 30, 2024, at the latest.
- All Access Service Providers are forbidden from delivering messages that contain URLs, APKs, OTT links, or callback numbers that the sender has not whitelisted; this rule takes effect on September 1, 2024.
- In an effort to improve message traceability, TRAI has made it mandatory for all messages, starting on November 1, 2024, to include a traceable trail from sender to receiver. Any message with an undefined or mismatched telemarketer chain will be rejected.
- To discourage the exploitation or misuse of templates for promotional content, TRAI has introduced punitive actions in case of non-compliance. Content Templates registered in the wrong category will be banned, and subsequent offences will result in a one-month suspension of the Sender's services.
- To ensure compliance with the rules, all Headers and Content Templates registered on the DLT platform must follow the prescribed requirements. Furthermore, a single Content Template cannot be linked to multiple headers.
- If any misuse of headers or content templates by a sender is discovered, TRAI has instructed an immediate suspension of traffic from all of that sender's headers and content templates pending verification. Such suspension can be revoked only after the sender has taken legal action against such misuse. Furthermore, Delivery-Telemarketers must identify and report entities guilty of such misuse within two business days or risk comparable repercussions.
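The whitelisting rule above essentially requires providers to screen each message's URLs and callback numbers against what the sender has pre-registered. The following is a minimal illustrative sketch of that screening logic; the whitelist structure, sender IDs, and patterns are hypothetical and not part of TRAI's actual system.

```python
# Illustrative sketch (not TRAI's actual implementation): screen a message
# against a sender's whitelisted URLs and callback numbers before delivery.
import re

# Hypothetical whitelist a sender might register on the DLT platform.
WHITELIST = {
    "sender-A": {
        "urls": {"https://example-bank.in/offers"},
        "callbacks": {"1800123456"},
    },
}

URL_RE = re.compile(r"https?://\S+")        # crude URL matcher
PHONE_RE = re.compile(r"\b1800\d{6}\b")     # crude toll-free number matcher

def screen_message(sender: str, text: str) -> bool:
    """Return True if the message may be delivered, False if it must be blocked."""
    entry = WHITELIST.get(sender)
    if entry is None:
        return False  # unknown sender: reject outright
    for url in URL_RE.findall(text):
        if url not in entry["urls"]:
            return False  # message carries a non-whitelisted URL
    for num in PHONE_RE.findall(text):
        if num not in entry["callbacks"]:
            return False  # message carries a non-whitelisted callback number
    return True
```

A message from a registered sender containing only whitelisted links passes, while any unregistered URL or callback number, or any unknown sender, is rejected, mirroring the directive's default-deny posture.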
CyberPeace Policy Outlook
TRAI’s measures are aimed at curbing the misuse of messaging services, including spam. TRAI has mandated that headers and content templates follow defined requirements, and punitive actions such as blacklisting and service suspension are introduced for non-compliance. These measures should help curb the rising rate of scams such as phishing and spamming, ultimately protecting consumers' interests and establishing a truly cyber-safe messaging ecosystem.
The official text of the TRAI directives is available on the official website of TRAI; see the references below.
References
- https://www.trai.gov.in/sites/default/files/Direction_20082024.pdf
- https://www.trai.gov.in/sites/default/files/PR_No.53of2024.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2046872
- https://legal.economictimes.indiatimes.com/news/regulators/trai-issues-directives-to-access-providers-to-curb-misuse-fraud-through-messaging/112669368

The Ghibli trend has been in the news for the past couple of weeks, for reasons both good and bad. The nostalgia that people feel for the art form has made them turn a blind eye to what the trend means for the artists who painstakingly create it. Open-source platforms may be trained on artistic material without the artists' explicit permission, effectively downgrading their rights. The artistic community has reached a point where artists are questioning the value of creating work that this software can recreate in seconds, without any thought about what it is doing. OpenAI's update to ChatGPT makes it simple for users to turn anything from personal pictures to movie scenes into illustrations in the style created by Hayao Miyazaki. Advances in AI-generated art, including Ghibli-style imagery, raise critical questions about artistic integrity, intellectual property, and data privacy.
AI and the Democratization of Creativity
AI-powered tools have lowered barriers to artistic expression, allowing people to create appealing art regardless of their artistic capabilities. The ChatGPT update has, in effect, democratized art: the user's abilities no longer matter. It makes art accessible and efficient, and opens creative experimentation to many.
Unfortunately, these developments also pose challenges to original artistry and the labour of human creators. The concern is not just that AI may replace artists, but also the potential misuse it enables, including the unauthorized replication of distinct styles and deepfake applications. When used ethically, AI can enhance artistic processes: it can assist with repetitive tasks, improve efficiency, and enable creative experimentation.
However, its ability to mimic existing styles raises concerns. AI-generated content could devalue human artists' work, create copyright issues, and even pose data privacy risks. Models trained on art without authorization can be exploited for misinformation and deepfakes, making human oversight essential. Some artists believe that AI artworks are disrupting the accepted norms of the art world. Additionally, AI can misinterpret prompts, producing distorted or unethical imagery that contradicts artistic intent and cultural values, further highlighting the critical need for human oversight.
The Ethical and Legal Dilemmas
The main dilemma surrounding trends such as the Ghibli trend is whether they compromise human effort by blurring the line between inspiration and infringement. A further issue that most users do not consider is whether the personal content (personal pictures, in this case) they upload to AI models poses a risk to their privacy. Finally, AI-generated content can be misused to spread misinformation through misleading or inappropriate visuals.
These negative effects can only be balanced by a policy framework that ensures the fair use of AI in art. Such a framework should also ensure that AI models are trained in a manner that is fair to the artists who originally created a style. Human oversight is needed to moderate AI-generated content, and it can be institutionalised through ethical AI usage guidelines for platforms that host AI-generated art.
Conclusion: What Can Potentially Be Done?
AI should ease human effort, not replace it. We need to promote a balanced approach that protects the integrity of artists while continuing to foster innovation, and we need to strengthen copyright laws to address AI-generated content. Labelling AI content and ensuring that it is disclosed as AI-generated is the first step. Furthermore, human artists whose work an AI model is trained on should be fairly compensated. There is an increasing need for global AI ethics guidelines to ensure transparency, ethical use, and human oversight in AI-driven art. The need of the hour is for industries to work collaboratively with regulators to ensure the responsible use of AI.
References
- https://medium.com/@haileyq/my-experience-with-studio-ghibli-style-ai-art-ethical-debates-in-the-gpt-4o-era-b84e5a24cb60
- https://www.bbc.com/future/article/20241018-ai-art-the-end-of-creativity-or-a-new-movement