#FactCheck: Fake video falsely claims FM Sitharaman endorsed investment scheme
Executive Summary:
A video that has gone viral on Facebook claims that Union Finance Minister Nirmala Sitharaman endorsed a new government investment scheme. However, our research indicates that the video has been AI-altered and is being used to spread misinformation.

Claim:
The video claims that Finance Minister Nirmala Sitharaman is endorsing an automated trading system that promises daily earnings of ₹15,00,000 on an initial investment of ₹21,000.

Fact Check:
To check the authenticity of the claim, we ran a keyword search for “Nirmala Sitharaman investment program” but found no such investment scheme. We also observed that the lip movements in the video appeared unnatural and did not align with the speech, leading us to suspect that the video had been AI-manipulated.
A reverse search of the video led us to a DD News live-stream of Sitharaman’s press conference after she presented the Union Budget on February 1, 2025. Sitharaman never mentioned any investment or trading platform during the press conference, confirming that the viral video was digitally altered. Technical analysis using Hive Moderation further found that the viral clip had been manipulated through voice cloning.

Conclusion:
The viral video showing Union Finance Minister Nirmala Sitharaman endorsing a new government investment scheme is voice-cloned, manipulated, and false. This highlights the risk of online manipulation, making it crucial to verify news with credible sources before sharing it. With the growing risk of AI-generated misinformation, promoting media literacy is essential in the fight against false information.
- Claim: Fake video falsely claims FM Nirmala Sitharaman endorsed an investment scheme.
- Claimed On: Social Media
- Fact Check: False and Misleading

A few of us were sitting together, talking shop - which, for moms, inevitably circles back to children, their health and education. Mothers of teenagers were concerned that their children seemed to spend an excessive amount of time online and had significantly reduced verbal communication at home.
Reena shared that she was struggling to understand her two boys, who had suddenly transformed from talkative, lively children into quiet, withdrawn teenagers.
Naaz nodded. “My daughter is glued to her device. I just can’t get her off it! What do I do, girls? Any suggestions?”
Mou sighed, “And what about the rising scams? I keep warning my kids about online threats, but I’m not sure I’m doing enough.”
“Not just scams, those come later. What worries me more are the videos and photos of unsuspecting children being edited and misused on digital platforms,” added Reena.
The Digital Parenting Dilemma
For parents, it’s a constant challenge—allowing children internet access means exposing them to potential risks while restricting it invites criticism for being overly strict.
‘What do I do?’ is a question that troubles many parents, as they know how addictive phones and gaming devices can be. (Fun fact: Even parents sometimes struggle to resist endlessly scrolling through social media!)
‘What should I tell them, and when?’ This becomes a pressing concern when parents hear about cyberbullying, online grooming, or even cyberabduction.
‘How do I ensure they stay cybersafe?’ This remains an ongoing worry, as children grow and their online activities evolve.
Whether it’s a single-child, dual-income household, a two-child, single-income family, or any other combination, parents have their hands full managing work, chores, and home life. Sometimes, children have to be left alone—with grandparents, caregivers, or even by themselves for a few hours—making it difficult to monitor their digital lives. While smartphones help parents stay connected and track their child’s location, they can also expose children to risks if not used responsibly.
Breaking It Down
Start cybersafety discussions early and tailor them to your child’s age.
For simplicity, let’s categorize learning into five key age groups:
- 0 – 2 years
- 3 – 7 years
- 8 – 12 years
- 13 – 16 years
- 16 – 19 years
Let’s explore the key safety messages for each stage.
Reminder:
Children will always test boundaries and may resist rules. The key is to lead by example—practice cybersafety as a family.
0 – 2 Years: Newborns & Infants
Pediatricians recommend avoiding screen exposure for children under two years old. If you occasionally allow screen time (for example, while changing them), keep it to a minimum. Children are easily distracted—use this to your advantage.
What can you do?
- Avoid watching TV or using mobile devices in front of them.
- Keep activity books, empty boxes, pots, and ladles handy to engage them.
3 – 7 Years: Toddlers & Preschoolers
Cybersafety education should ideally begin when a child starts engaging with screens. At this stage, parents have complete control over what their child watches and for how long.
What can you do?
- Keep screen time limited and fully supervised.
- Introduce basic cybersecurity concepts, such as stranger danger and good picture vs. bad picture.
- Encourage offline activities—educational toys, books, and games.
- Restrict your own screen time when your child is awake to set a good example.
- Set up parental controls and create child-specific accounts on devices.
- Secure all devices with comprehensive security software.
8 – 12 Years: Primary & Preteens
Cyber-discipline should start now. Strengthen rules, set clear boundaries, and establish consequences for rule violations.
What can you do?
- Increase screen time gradually to accommodate studies, communication, and entertainment.
- Teach them about privacy and the dangers of oversharing personal information.
- Continue stranger-danger education, including safe/unsafe websites and apps.
- Emphasize reviewing T&Cs before downloading apps.
- Introduce concepts like scams, phishing, deepfakes, and virus attacks using real-life examples.
- Keep banking and credit card credentials private—children may unintentionally share sensitive information.
Cyber Safety Mantras:
- STOP. THINK. ACT.
- Do Not Trust Blindly Online.
13 – 16 Years: The Teenage Phase
Teenagers are likely to resist rules and demand independence, but if cybersecurity has been part of their upbringing, they are more likely to accept parental oversight.
What can you do?
- Continue parental controls but allow greater access to previously restricted content.
- Encourage open conversations about digital safety and online threats.
- Respect their need for privacy but remain involved as a silent observer.
- Discuss cyberbullying, harassment, and online reputation management.
- Keep phones out of bedrooms at night and maintain device-free zones during family time.
- Address online relationships and risks like dating scams, sextortion, and trafficking.
16 – 19 Years: The Transition to Adulthood
By this stage, children have developed a sense of responsibility and maturity. It’s time to gradually loosen control while reinforcing good digital habits.
What can you do?
- Monitor their online presence without being intrusive.
- Maintain open discussions—teens still value parental advice.
- Stay updated on digital trends so you can offer relevant guidance.
- Encourage digital balance by planning device-free family outings.
Final Thoughts
As a parent, your role is not just to set rules but to empower your child to navigate the digital world safely. Lead by example, encourage responsible usage, and create an environment where your child feels comfortable discussing online challenges with you.
Wishing you a safe and successful digital parenting journey!
Introduction
In the fast-paced digital age, misinformation spreads faster than actual news. This was evident recently when inaccurate information spread on social media, claiming that the Election Commission of India (ECI) had taken down e-voter rolls for some states from its website overnight. The rumour caused public confusion and political debate in states such as Maharashtra, Madhya Pradesh, Bihar, Uttar Pradesh, and Haryana. The ECI quickly called the viral information "fake news" and confirmed that voters could still access the electoral rolls of all States and Union Territories at voters.eci.gov.in. The incident shows how misinformation can undermine trust in electoral information and how important it is to verify authenticity before sharing.
The Incident and Allegations
On August 7, 2025, social media posts on platforms like X and WhatsApp claimed that the Election Commission of India had removed e-voter lists from its website. The posts appeared after public allegations about irregularities in certain constituencies. However, the claims about the removal of voter lists were unverified.
The Election Commission’s Response
In a formal post on X, the Commission stated categorically:
“This is a fake news. Anyone can download the Electoral Roll for any of 36 States/UTs through this link: https://voters.eci.gov.in/download-eroll.”
The Commission clarified that no deletion had taken place and that all voter rolls remained intact and accessible to the public. In keeping with the spirit of transparency, the ECI reaffirmed its practice of providing open public access to electoral information for inspection.
Importance of Timely Clarifications
By countering factually incorrect information the moment it spread on a large scale, the ECI prevented possible harm to public trust. Election bodies rely on being trusted, and any speculation about their integrity can prove harmful to democracy. Such prompt action stops false information from becoming entrenched in public discourse.
Misinformation in the Electoral Space
- How False Narratives Gain Traction
Election misinformation thrives in charged political environments. Social media amplification, confirmation bias, and heightened emotions during elections enable rumours to spread. On this occasion, the unfounded report struck a chord with widespread political distrust, and people readily believed and shared it without checking whether it was true.
- Risks to Democratic Integrity
When misinformation impacts election procedures, the consequences can be profound:
- Erosion of Trust: People can lose faith in the neutrality of election administrators quite easily.
- Polarization: Untrue assertions tend to reinforce political divides, rendering constructive communication more difficult.
- The Role of Media Literacy
Combating such mis- and disinformation requires more than official statements. Media literacy training can equip individuals with the ability to recognise warning signs in suspect messages. Even basic actions, like checking official sources before sharing, can go a long way in keeping falsehoods from spreading.
Strategies to Counter Electoral Misinformation
Multi-Stakeholder Action
Countering electoral disinformation effectively requires coordination among election officials, fact-checkers, media, and platforms. Suggested actions include:
- Rapid Response Protocols: Institutions should maintain dedicated monitoring teams for quick rebuttals.
- Verified Communication Channels: Maintaining official sites and pages for authentic electoral news.
- Proactive Transparency: Regular publication of updates on electoral processes can pre-empt rumours.
- Platform Accountability: Social media sites must label or limit the visibility of information found to be false by credentialed fact-checkers.
Conclusion
The recent allegations of e-voter roll deletion underscore the susceptibility of contemporary democracies to mis- and disinformation. Although the ECI's swift and unambiguous denial brought the situation under control, the incident emphasises the need for preventive steps to maintain faith in elections. Fact-checking alone may not suffice in an information space that is growing more polarised and algorithmic; the long-term solution is to cultivate a resilient democratic culture in which individuals, organisations, and platforms value truth over clickbait. The lesson is clear: in the age of instant news, accurate communication is a necessity for maintaining democratic integrity, not a luxury.
References
- https://www.newsonair.gov.in/election-commission-dismisses-fake-news-on-removal-of-e-voter-rolls/
- https://economictimes.indiatimes.com/news/india/eci-dismisses-claims-of-removing-e-voter-rolls-from-its-website-calls-it-fake-news/articleshow/123190662.cms
- https://www.thehindu.com/news/national/vote-theft-claim-of-congress-factually-incorrect-election-commission/article69921742.ece
- https://www.thehindu.com/opinion/editorial/a-crisis-of-trust-on-the-election-commission-of-india/article69893682.ece

Introduction
The integration of Artificial Intelligence into our daily workflows has compelled global policymakers to develop legislative frameworks to govern its impact effectively. The question we arrive at is: while AI is undoubtedly transforming global economies, who governs the transformation? The EU AI Act was the first-of-its-kind legislation to govern Artificial Intelligence, making the EU a pioneer in the emerging-technology regulation space. This blog analyses the EU's Draft AI Rules and Code of Practice, exploring their implications for ethics, innovation, and governance.
Background: The Need for AI Regulation
AI adoption has been happening at a rapid pace and is projected to contribute $15.7 trillion to the global economy by 2030, with the AI market expected to grow by at least 120% year-over-year. These figures are frequently cited in arguments alongside concrete examples of AI risks (e.g., bias in recruitment tools, misinformation spread through deepfakes). Unlike the U.S., which relies on sector-specific regulations, the EU proposes a unified framework to address AI's challenges comprehensively, filling the vacuum that exists in the governance of emerging technologies such as AI. It should be noted that the General Data Protection Regulation (GDPR) has been a success, with its global influence on data privacy laws starting a domino effect for the creation of privacy regulations worldwide. This precedent underscores the EU's proactive, citizen-centric approach to regulation.
Overview of the Draft EU AI Rules
The Draft General-Purpose AI Code of Practice details the rules of the AI Act for providers of general-purpose AI models, including those with systemic risks. The European AI Office facilitated the drawing up of the code, which was chaired by independent experts and involved nearly 1,000 stakeholders, including EU member state representatives and both European and international observers.
The first draft of the EU's General-Purpose AI Code of Practice, established under the EU AI Act, was published on 14 November 2024. As per Article 56 of the EU AI Act, the code outlines the rules that operationalise the requirements set out for General-Purpose AI (GPAI) models under Article 53 and for GPAI models with systemic risks under Article 55. The AI Act is legislation rooted in product safety and relies on harmonised standards to support compliance. These harmonised standards are essentially sets of operational rules established by the European standardisation bodies: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI). Industry experts, civil society, and trade unions translate the requirements set out by EU sectoral legislation into the specific mandates set by the European Commission. The AI Act obligates developers, deployers, and users of AI to comply with mandates for transparency, risk management, and compliance mechanisms.
The Code of Practice for General Purpose AI
The most popular GPAI applications include ChatGPT and other foundational models, such as Microsoft's Copilot, Google's BERT, and Meta AI's Llama, all of which are under constant development and upgrading. The 36-page draft Code of Practice for General-Purpose AI is meant to serve as a roadmap for tech companies to comply with the AI Act and avoid penalties. It focuses on transparency, copyright compliance, risk assessment, and technical/governance risk mitigation as the core areas for companies developing GPAIs. It also lays down guidelines intended to enable greater transparency about what goes into developing GPAIs.
The Draft Code's provisions for risk assessment focus on preventing cyber attacks, large-scale discrimination, nuclear and misinformation risks, and the risk of the models acting autonomously without oversight.
Policy Implications
The EU’s Draft AI Rules and Code of Practice represent a bold step in shaping the governance of general-purpose AI, positioning the EU as a global pioneer in responsible AI regulation. By prioritising harmonised standards, ethical safeguards, and risk mitigation, these rules aim to ensure AI benefits society while addressing its inherent risks. While the code is a welcome step, the compliance burden on MSMEs and startups could hinder innovation, and the voluntary nature of the Code raises concerns about accountability. Additionally, harmonising these ambitious standards with varying global frameworks, especially in regions like the U.S. and India, presents a significant challenge to achieving a cohesive global approach.
Conclusion
The EU’s initiative to regulate general-purpose AI aligns with its legacy of proactive governance, setting the stage for a transformative approach to balancing innovation with ethical accountability. However, challenges remain. Striking the right balance is crucial to avoid stifling innovation while ensuring robust enforcement and inclusivity for smaller players. Global collaboration is the next frontier. As the EU leads, the world must respond by building bridges between regional regulations and fostering a unified vision for AI governance. This demands active stakeholder engagement, adaptive frameworks, and a shared commitment to addressing emerging challenges in AI. The EU’s Draft AI Rules are not just about regulation, they are about leading a global conversation.
References
- https://indianexpress.com/article/technology/artificial-intelligence/new-eu-ai-code-of-practice-draft-rules-9671152/
- https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
- https://copyrightblog.kluweriplaw.com/2024/12/16/first-draft-of-the-general-purpose-ai-code-of-practice-has-been-released/