#FactCheck: Viral AI video shows the Finance Minister of India endorsing an investment platform offering high returns
Executive Summary:
A video circulating on social media falsely claims that India’s Finance Minister, Smt. Nirmala Sitharaman, has endorsed an investment platform promising unusually high returns. Upon investigation, it was confirmed that the video is a deepfake—digitally manipulated using artificial intelligence. The Finance Minister has made no such endorsement through any official platform. This incident highlights a concerning trend of scammers using AI-generated videos to create misleading and seemingly legitimate advertisements to deceive the public.

Claim:
A viral video falsely claims that the Finance Minister of India, Smt. Nirmala Sitharaman, is endorsing an investment platform, promoting it as a secure and highly profitable scheme for Indian citizens. The video alleges that individuals can start with an investment of ₹22,000 and earn up to ₹25 lakh per month in guaranteed income.

Fact check:
A reverse image search on key frames of the viral video led to the original YouTube clip of the Finance Minister delivering a speech at a webinar on 'Regulatory, Investment and EODB reforms'. Further review found no mention of the viral investment scheme anywhere in the original video.
The manipulated video had AI-generated audio and scripted text injected into it to make it appear as though she had approved an investment platform.

Deepfakes can appear convincingly realistic in their facial movement; however, close inspection reveals mismatched lip-syncing and unnatural visual transitions, both of which are evident in this video.


Also, there is no acknowledgment of any such endorsement on any legitimate government website or from any credible news outlet. The video is a fabricated piece of misinformation that attempts to scam viewers by leveraging the image of a trusted public figure.
Conclusion:
The viral video showing the Finance Minister of India, Smt. Nirmala Sitharaman, promoting an investment platform is fake and AI-generated. This is a clear case of deepfake misuse aimed at misleading the public and luring individuals into fraudulent schemes. Citizens are advised to exercise caution, verify any such claims through official government channels, and refrain from clicking on unknown investment links circulating on social media.
- Claim: Nirmala Sitharaman promoted an investment app in a viral video.
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
India's digital landscape has reached a critical point in its evolution. The rapid adoption of technologies such as cloud computing, mobile payment systems, artificial intelligence, and smart infrastructure has led to a high degree of integration between digital systems and governance, commercial activity, and everyday life. As dependence on these systems continues to grow, a wide range of cyber threats has emerged that are complex, multi-layered, and closely interconnected. By 2026, cyber security threats directed at India are expected to include an increasing number of targeted, well-organised, and strategic cyber attacks. These attacks are likely to focus on exploiting the trust placed in technology, institutions, automation, and the fast pace of technological change.
1. Social Engineering 2.0: Hyper-Personalised AI Phishing & Mobile Banking Malware
Cybercriminals are moving from generalised methods to hyper-targeted attacks built on AI-based psychological manipulation. By 2026, attackers are expected to combine social media profiles, breached data, and digital tracking footprints with AI-based analysis of that information to craft hyper-targeted phishing emails at scale.
These emails can impersonate banks, employers, and even family members, matching the regionally and culturally appropriate tone, language, and context those senders would actually use.
Meanwhile, malicious applications disguised as legitimate service apps allow cybercriminals to intercept One-Time Passwords (OTPs), hijack user sessions, and drain user accounts within minutes.
These attacks succeed not only because of their technical sophistication, but because they exploit human trust at scale, giving attackers near-limitless reach into people's finances through their computers and mobile devices.
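As a toy illustration of the indicators defenders look for, the sketch below scores an email against a few classic phishing cues: a brand-claiming display name that does not match the sending domain, pressure-inducing language, and links that point at raw IP addresses. The indicator list and weights are illustrative assumptions for demonstration only; real defences rely on ML classifiers, sender authentication (SPF/DKIM/DMARC), and URL reputation services.

```python
import re

# Illustrative heuristics only: the indicator list and weights below are
# assumptions for demonstration, not a production phishing filter.
URGENCY_PHRASES = ("urgent", "immediately", "verify", "suspended", "act now")

def phishing_risk_score(sender_domain: str, display_name: str, body: str) -> int:
    """Score an email against a few classic phishing indicators."""
    score = 0
    lower_body = body.lower()
    # Display name claims to be a bank while the sending domain does not match.
    if "bank" in display_name.lower() and "bank" not in sender_domain.lower():
        score += 2
    # Pressure-inducing language is a classic social-engineering cue.
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in lower_body)
    # Links that point at a raw IP address instead of a named domain.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 2
    return score

# Hypothetical examples (domains and names are made up for illustration):
suspicious = phishing_risk_score(
    "secure-login.example", "Example Bank Support",
    "URGENT: verify your account immediately at http://203.0.113.7/login")
benign = phishing_risk_score(
    "examplebank.com", "Example Bank", "Your monthly statement is attached.")
```

Here `suspicious` scores 7 (brand/domain mismatch, three urgency phrases, raw-IP link) while `benign` scores 0, showing how several weak signals combine into a usable triage score.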
2. Cloud and Supply Chain Vulnerabilities
As Indian organisations increasingly migrate to cloud infrastructure, cloud misconfigurations are emerging as a major cybersecurity risk. Weak identity controls, exposed storage, and improper access management can allow attackers to bypass traditional network defences. Alongside this, supply chain attacks are expected to intensify in 2026.
In supply chain attacks, cybercriminals compromise a trusted software vendor or service provider to infiltrate multiple downstream organisations. Even entities with strong internal security can be affected through third-party dependencies. For India’s startup ecosystem, government digital platforms, and IT service providers, this presents a systemic risk. Strengthening vendor risk management and visibility across digital supply chains will be essential.
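The misconfiguration risks above can be made concrete with a small audit sketch. The configuration fields below (`public_read`, `encryption_at_rest`, `mfa_required`, `allowed_principals`) are hypothetical names chosen for illustration; a real audit would query cloud provider APIs or CSPM tooling rather than static dictionaries.

```python
# Minimal sketch of an automated storage-bucket configuration audit.
# Field names are hypothetical; real audits use provider APIs/CSPM tools.

def audit_bucket(config: dict) -> list[str]:
    """Return human-readable findings for one bucket's configuration."""
    findings = []
    if config.get("public_read", False):
        findings.append("bucket is publicly readable")
    if not config.get("encryption_at_rest", False):
        findings.append("encryption at rest is disabled")
    if not config.get("mfa_required", False):
        findings.append("admin access does not require MFA")
    if "*" in config.get("allowed_principals", []):
        findings.append("access policy grants a wildcard principal")
    return findings

# A deliberately misconfigured bucket triggers every check:
bad = audit_bucket({"public_read": True, "allowed_principals": ["*"]})
# A locked-down bucket produces no findings:
good = audit_bucket({"public_read": False, "encryption_at_rest": True,
                     "mfa_required": True, "allowed_principals": ["app-role"]})
```

The point of the sketch is that misconfigurations are mechanical to detect once checks are codified; the hard part is maintaining visibility across every bucket, account, and third-party dependency.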
3. Threats to IoT and Critical Infrastructure
India's push towards smart cities, digital utilities, and connected public services has greatly expanded the IoT and operational technology (OT) attack surface. However, many IoT devices still lack strong authentication, encryption, and reliable update mechanisms. By 2026, attackers are expected to exploit these weaknesses on a far greater scale than they already do.
Cyberattacks on critical infrastructure such as energy, transportation, healthcare, and telecom systems have far-reaching consequences that extend well beyond data loss; they directly affect the provision of essential services, can endanger public safety, and raise national security concerns. Securing critical infrastructure therefore requires dedicated, purpose-built security solutions rather than conventional IT security alone.
4. Hidden File Vectors and Stealth Payload Delivery
SVG File Abuse in Stealth Attacks
Cybercriminals are continually searching for ways to bypass security filters, and hidden file vectors are emerging as a preferred tactic. One such method involves the abuse of SVG (Scalable Vector Graphics) files. Although commonly perceived as harmless image files, SVGs can contain embedded scripts capable of executing malicious actions.
By 2026, SVG-based attacks are expected to be used in phishing emails, cloud file sharing, and messaging platforms. Because these files often bypass traditional antivirus and email security systems, they provide an effective stealth delivery mechanism. Indian organisations will need to rethink assumptions about “safe” file formats and strengthen deep content inspection capabilities.
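As a rough sketch of such deep content inspection, the snippet below flags the most common active-content vectors inside an SVG: embedded `<script>` elements, `on*` event-handler attributes, and `javascript:` URLs. It is a minimal illustration, not a complete scanner; real-world attacks also use CDATA tricks, obfuscation, and external references that would need additional handling.

```python
import xml.etree.ElementTree as ET

def svg_is_suspicious(svg_text: str) -> bool:
    """Flag SVG content that carries common script-execution vectors."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        # Embedded <script> elements can execute arbitrary JavaScript.
        if elem.tag.endswith("script"):
            return True
        for attr, value in elem.attrib.items():
            # Event-handler attributes (onload, onclick, ...) run script.
            if attr.lower().startswith("on"):
                return True
            # javascript: URLs in href/xlink:href are another vector.
            if "javascript:" in value.lower():
                return True
    return False

malicious = '<svg xmlns="http://www.w3.org/2000/svg"><script>alert(1)</script></svg>'
benign = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
```

Checks like these belong at the mail gateway or file-upload boundary, alongside serving user-supplied SVGs with headers that prevent inline script execution.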
5. Quantum-Era Cyber Risks and “Harvest Now, Decrypt Later” Attacks
Although practical quantum computers are still emerging, quantum-era cyber risks are already a present-day concern. Adversaries are believed to be intercepting and storing encrypted data now with the intention of decrypting it in the future once quantum capabilities mature—a strategy known as “harvest now, decrypt later.” This poses serious long-term confidentiality risks.
Recognising this threat, the United States took early action during the Biden administration through National Security Memorandum 10, which directed federal agencies to prepare for the transition to quantum-resistant cryptography. For India, similar foresight is essential, as sensitive government communications, financial data, health records, and intellectual property could otherwise be exposed retrospectively. Preparing for quantum-safe cryptography will therefore become a strategic priority in the coming years.
6. AI Trust Manipulation and Model Exploitation
Poisoning the Well – Direct Attacks on AI Models
As artificial intelligence systems are increasingly used for decision-making—ranging from fraud detection and credit scoring to surveillance and cybersecurity—attackers are shifting focus from systems to models themselves. “Poisoning the well” refers to attacks that manipulate training data, feedback mechanisms, or input environments to distort AI outputs.
In the context of India's rapidly growing digital ecosystem, compromised AI models can result in biased decisions, false security alerts, or denial of legitimate services. What makes these attacks especially dangerous is that they can occur without triggering conventional security controls. Transparency, integrity, and continuous monitoring of AI systems will be key to maintaining stakeholder confidence in automated decision-making.
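A toy, self-contained illustration of training-data poisoning: a simple nearest-centroid classifier is trained once on clean data and once on data into which an attacker has injected mislabelled outliers, collapsing its accuracy. Real poisoning attacks on production models are far subtler, but the mechanism is the same: corrupting what the model learns from corrupts what it decides.

```python
import random

def make_data(n_per_class: int, seed: int):
    """1-D synthetic data: class 0 clusters near 0.0, class 1 near 4.0."""
    rng = random.Random(seed)
    data = []
    for label, mean in ((0, 0.0), (1, 4.0)):
        data += [(rng.gauss(mean, 1.0), label) for _ in range(n_per_class)]
    return data

def train_centroids(data):
    """'Training' is just averaging each class's feature values."""
    return {label: sum(x for x, y in data if y == label) /
                   sum(1 for _, y in data if y == label)
            for label in (0, 1)}

def accuracy(centroids, data):
    """Predict by nearest centroid; return fraction classified correctly."""
    def predict(x):
        return min(centroids, key=lambda label: abs(x - centroids[label]))
    return sum(predict(x) == y for x, y in data) / len(data)

train = make_data(300, seed=1)
test = make_data(200, seed=2)

# Poisoning: inject mislabelled outliers (feature ~8.0 tagged as class 0),
# dragging the class-0 centroid past the class-1 centroid.
poisoned = train + [(8.0, 0)] * 600

clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(poisoned), test)
```

On this toy setup the clean model exceeds 90% test accuracy while the poisoned one falls below 60%, even though the attacker never touched the model itself, only its training data.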
Recommendations
Despite the increasing sophistication of malicious cyber actors, India is entering this phase with a growing level of preparedness and institutional capacity. The country has strengthened its cyber security posture through dedicated mechanisms and relevant agencies such as the Indian Cyber Crime Coordination Centre, which play a central role in coordination, threat response, and capacity building. At the same time, sustained collaboration among government bodies, non-governmental organisations, technology companies, and academic institutions has expanded cyber security awareness, skill development, and research. These collective efforts have improved detection capabilities, response readiness, and public resilience, placing India in a stronger position to manage emerging cyber threats and adapt to the evolving digital environment.
Conclusion
By 2026, complexity, intelligence, and strategic intent will increasingly define cyber threats to the digital ecosystem. Cyber criminals are expected to use advanced methods of attack, including artificial intelligence assisted social engineering and the exploitation of cloud supply chain risks. As these threats evolve, adversaries may also experiment with quantum computing techniques and the manipulation of AI models to create new ways of influencing and disrupting digital systems. In response, the focus of cybersecurity is shifting from merely preventing breaches to actively protecting and restoring digital trust. While technical controls remain essential, they must be complemented by strong cybersecurity governance, adherence to regulatory standards, and sustained user education. As India continues its digital transformation, this period presents a valuable opportunity to invest proactively in cybersecurity resilience, enabling the country to safeguard citizens, institutions, and national interests with confidence in an increasingly complex and dynamic digital future.
References
- https://www.seqrite.com/india-cyber-threat-report-2026/
- https://www.uscsinstitute.org/cybersecurity-insights/blog/ai-powered-phishing-detection-and-prevention-strategies-for-2026
- https://www.expresscomputer.in/guest-blogs/cloud-security-risks-that-should-guide-leadership-in-2026/130849/
- https://www.hakunamatatatech.com/our-resources/blog/top-iot-challenges
- https://csrc.nist.gov/csrc/media/Presentations/2024/u-s-government-s-transition-to-pqc/images-media/presman-govt-transition-pqc2024.pdf
- https://www.cyber.nj.gov/Home/Components/News/News/1721/214
The spread of misinformation has become a cause for concern for all stakeholders, be it governments, policymakers, business organisations, or citizens. The current push to combat misinformation is rooted in the growing awareness that it exploits public sentiment and can result in economic instability, personal risks, and a rise in political, regional, and religious tensions. The circulation of misinformation poses significant challenges for organisations, brands, and administrators of all types, and it creates risks not only for everyday content consumers but also for those who share it and for the platforms themselves. Sharing misinformation in the digital realm, intentionally or not, can have real consequences.
Consequences for Platforms
Platforms have come under scrutiny both for the content they allow and for the content they remove. It is important to understand not only how misinformation affects platform users, but also its impact and consequences for the platforms themselves. These consequences highlight the complex environment in which social media platforms operate, where the stakes are high from both a business and a societal perspective. They are:
- Legal Consequences: Regulators can fine platforms that fail to comply with content moderation or misinformation-related laws; a prime example is the EU's Digital Services Act, which regulates digital services acting as intermediaries for consumers, goods, services, and content. Platforms can also face lawsuits from individuals, organisations, or governments for damages caused by misinformation, and defamation suits are standard practice when dealing with misinformation-causing vectors. In India, the Prohibition of Fake News on Social Media Bill, 2023 is in the pipeline and would establish a regulatory body for fake news on social media platforms.
- Reputational Consequences: Platforms depend on a trust model in which users trust both the platform and its content. If users lose that trust because of misinformation, engagement can fall. This can in turn lead to negative coverage that affects public opinion of the brand, its value, and its long-term viability.
- Financial Consequences: Businesses may end their engagement with platforms accused of hosting misinformation, leading to a drop in revenue. This can also harm the platform's long-term financial health, for example through a decline in stock price.
- Operational Consequences: To counter scrutiny from regulators, platforms may need to adopt stricter content moderation policies and other resource-intensive measures, increasing their operational costs.
- Market Position Loss: If a platform's reliability is in question, its users can migrate to other platforms, ceding market share to competitors that manage misinformation more effectively.
- Freedom of Expression vs. Censorship Debate: There needs to be a balance between freedom of expression and the prevention of misinformation. Stricter content moderation can expose platforms to accusations of censorship if users feel their opinions are being unfairly suppressed.
- Ethical and Moral Responsibilities: Platforms' accountability extends to moral accountability, as they host content that affects spheres of users' lives such as public health and democracy. Misinformation can cause real-world harm, from health misinformation to incitement to violence, which means platforms carry a social responsibility as well.
Misinformation has turned into a global issue and because of this, digital platforms need to be vigilant while they navigate the varying legal, cultural and social expectations across different jurisdictions. Efforts to create standardised practices and policies have been complicated by the diversity of approaches, leading platforms to adopt flexible strategies for managing misinformation that align with global and local standards.
Addressing the Consequences
These consequences can be addressed by undertaking the following measures:
- The implementation of a more robust content moderation system by the platforms using a combination of AI and human oversight for the identification and removal of misinformation in an effective manner.
- Enhancing the transparency in platform policies for content moderation and decision-making would build user trust and reduce the backlash associated with perceived censorship.
- Collaborations with fact checkers in the form of partnerships to help verify the accuracy of content and reduce the spread of misinformation.
- Engage with regulators proactively to stay ahead of legal and regulatory requirements and avoid punitive actions.
- Platforms should invest in media literacy initiatives and help users critically evaluate the content available to them.
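The first recommendation, combining AI with human oversight, often takes the shape of a triage pipeline: the model acts automatically only at high confidence and routes uncertain cases to human moderators. The sketch below is purely illustrative; the thresholds and the idea of a classifier returning a 0-to-1 misinformation score are assumptions, not any platform's actual policy.

```python
# Hypothetical AI-plus-human moderation triage. Thresholds and the 0-1
# misinformation score are illustrative assumptions for this sketch.

def triage(model_score: float,
           auto_remove: float = 0.95,
           needs_review: float = 0.60) -> str:
    """Route a post based on a classifier's misinformation score."""
    if model_score >= auto_remove:
        return "remove"        # high confidence: act automatically
    if model_score >= needs_review:
        return "human_review"  # uncertain: queue for a human moderator
    return "publish"           # low risk: allow, but keep collecting signals
```

Keeping the human-review band wide is a deliberate design choice: it limits wrongful automated takedowns (the censorship concern above) at the cost of moderator workload.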
Final Takeaways
The accumulation of misinformation on digital platforms presents significant legal, reputational, financial, and operational challenges for all stakeholders. As a result, the interlinked but seemingly exclusive priorities of preventing misinformation and upholding freedom of expression must be balanced. Platforms must invest in robust, transparent content moderation systems, collaborate with fact-checkers, and support media literacy efforts to mitigate the adverse effects of misinformation. In addition, adapting to diverse international standards is essential to maintaining their global presence and societal trust.
References
- https://pirg.org/edfund/articles/misinformation-on-social-media/
- https://www.mdpi.com/2076-0760/12/12/674
- https://scroll.in/article/1057626/israel-hamas-war-misinformation-is-being-spread-across-social-media-with-real-world-consequences
- https://www.who.int/europe/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds
Introduction
Cyber slavery is a form of modern exploitation that begins with online deception and evolves into physical human trafficking. In recent times it has emerged as a serious threat in which individuals are exploited through digital means under coercive or deceptive conditions. Offenders target innocent individuals and lure them with false promises of employment or similar inducements. Cyber slavery can occur on a global scale, targeting vulnerable individuals worldwide through the internet. It is a disturbing continuum of online manipulation leading to real-world abuse, in which individuals are entrapped by false promises and subjected to severe human rights violations. It can take many forms, such as coercive involvement in cybercrime, forced employment in online frauds, exploitation in the gig economy, or involuntary servitude. The issue has escalated to the point where Indians are being trafficked for jobs in countries like Laos and Cambodia. Recently, over 5,000 Indians were reported to be trapped in Southeast Asia, where they are allegedly being coerced into carrying out cyber fraud. Indian techies in particular were reportedly lured to Cambodia with high-paying jobs, only to find themselves trapped in cyber fraud schemes and forced to work 16 hours a day under severe conditions. This is the harsh reality for thousands of Indian tech professionals who are lured under false pretences into employment in Southeast Asia, where they are forced to commit cyber crimes.
Over 5,000 Indians Held in Cyber Slavery and Human Trafficking Rings
India has rescued 250 citizens in Cambodia who were forced to run online scams, with more than 5,000 Indians stuck in Southeast Asia. The victims, mostly young and tech-savvy, are lured into illegal online work ranging from money laundering and crypto fraud to love scams, where they pose as lovers online. It was reported that Indians are being trafficked for jobs in countries like Laos and Cambodia, where they are forced to conduct cybercrime activities. Victims are often deceived about where they would be working, thinking it will be in Thailand or the Philippines. Instead, they are sent to Cambodia, where their travel documents are confiscated and they are forced to carry out a variety of cybercrimes, from stealing life savings to attacking international governmental or non-governmental organizations. The Indian embassy in Phnom Penh has also released an advisory warning Indian nationals of advertisements for fake jobs in the country through which victims are coerced to undertake online financial scams and other illegal activities.
Regulatory Landscape
Trafficking in Human Beings (THB) is prohibited under Article 23(1) of the Constitution of India. The Immoral Traffic (Prevention) Act, 1956 (ITPA) is the premier legislation for the prevention of trafficking for commercial sexual exploitation. Section 111 of the Bharatiya Nyaya Sanhita (BNS), 2023, is a comprehensive legal provision aimed at combating organised crime and will be useful in prosecuting people involved in such large-scale scams. India has also ratified bilateral agreements with several countries to facilitate intelligence sharing and coordinated efforts to combat transnational organised crime and human trafficking.
CyberPeace Policy Recommendations
● The misuse of technology has given rise to a new genre of cybercrimes in which cybercriminals use social media platforms as a tool for targeting innocent individuals. Addressing these emerging crimes requires collective effort from social media companies and regulatory authorities to identify new threats promptly and develop robust preventive measures against them.
● Despite the regulatory mechanisms in place, challenges such as jurisdictional hurdles, detection difficulties caused by anonymity, and investigative obstacles make cyber human trafficking a serious and evolving threat. International collaboration between countries is therefore essential in today's technologically driven world. Robust legislation that addresses both national and transnational human trafficking, and that carries strict penalties for offenders, must be enforced.
● Cybercriminals target innocent people by offering fake, high-paying job opportunities, building trust, and luring them in. All netizens should be aware of these tactics and learn to recognise the early warning signs. By staying vigilant and cross-verifying details with authentic sources, netizens can protect themselves from threats that can endanger their lives once they have been trafficked and placed under restrictions. While the Indian government and its agencies are making continuous efforts to rescue victims of cyber human trafficking and cyber slavery, they must further develop robust mechanisms for specialised government agencies to conduct rescue operations in a timely manner.
● Government entities, cyber security experts, and Non-Governmental Organisations (NGOs) should promote capacity building and support mechanisms that empower netizens to follow best practices while navigating the online landscape, provide helplines or help centres for reporting suspicious activity or behaviour, and help users feel safe on the Internet while building defences against cyber threats.
References:
- https://www.bbc.com/news/world-asia-india-68705913
- https://therecord.media/india-rescued-cambodia-scam-centers-citizens
- https://www.the420.in/rescue-indian-tech-workers-cambodia-cyber-fraud-awareness/
- https://www.dyami.services/post/intel-brief-250-indian-citizens-rescued-from-cyber-slavery
- https://www.mea.gov.in/human-trafficking.htm
- https://www.drishtiias.com/blog/the-vicious-cycle-of-human-trafficking-and-cybercrime