#FactCheck: Viral Kapil Mishra Video on 50% Attendance Not Recent
Executive Summary
A video of Delhi government cabinet minister Kapil Mishra is being shared on social media. In the clip, he says that from the next day only 50 percent attendance will be allowed in offices, with the remaining 50 percent of employees working from home, and that all institutions must comply. Users are sharing the video as a recent development. However, CyberPeace research found the claim to be misleading: the video is not recent but dates back to December 2025.
Claim:
An Instagram user shared the viral video on March 24, 2026. The link to the post is given below.

Fact Check:
To verify the claim, we conducted a keyword search on Google. During this process, we found a report published on December 17, 2025, on NDTV Hindi. According to the report, the Delhi government had made 50 percent work-from-home mandatory in government offices due to severe air pollution. Additional restrictions were also imposed under GRAP Stage IV.

Further, we found the original video on the official social media handle of BJP Delhi. In this video, Kapil Mishra can be heard stating that 50 percent work-from-home has been made mandatory in all government and private offices in Delhi, while health and other essential services have been exempted from this arrangement.

Conclusion:
Our research found that the viral video is not recent. It is from December 2025 and is being shared with a misleading claim.

Introduction
In today’s digital world, data has emerged as the new currency that influences global politics, markets, and societies. Companies, governments, and tech behemoths aim to control data because it accords them influence and power. However, a fundamental challenge brought about by this increased reliance on data is how to strike a balance between privacy protection and innovation and utility.
In recognition of these dangers, more than 200 Nobel laureates, scientists, and world leaders have recently signed the Global Call for AI Red Lines. Governments are urged by this initiative to create legally binding international regulations on artificial intelligence by 2026. Its goal is to stop AI from going beyond moral and security bounds, particularly in areas like political manipulation, mass surveillance, cyberattacks, and dangers to democratic institutions.
One way to address the threat to privacy is pseudonymisation, which keeps data usable for research and innovation by replacing personal identifiers with artificial ones. Pseudonymisation thus directly advances the AI Red Lines initiative's mission of enabling technological progress while lowering the risks of data misuse and privacy violations.
The Red Lines of AI: Why do they matter?
The Global Call for AI Red Lines initiative represents a collective attempt to impose precaution before catastrophe by identifying red lines for the use of AI tools. What unites these risks is the absence of global safeguards. Some of these red lines can be understood as follows:
- Cybersecurity breaches, such as the exposure of financial and personal data through AI-driven hacking and surveillance.
- Privacy invasions caused by pervasive, unending tracking.
- Generative AI producing realistic fake content that undermines trust in public discourse and fuels misinformation.
- Algorithmic amplification of polarising content, which threatens civic stability and democratic institutions.
Legal Frameworks and Regulatory Landscape
Regulation of artificial intelligence remains fragmented across jurisdictions, leaving significant loopholes. Some frameworks already provide partial guidance: the European Union's Artificial Intelligence Act 2024 bans "unacceptable" AI practices, and a US-China understanding seeks to ensure that nuclear weapons remain under human, not machine, control. The UN General Assembly has adopted resolutions urging safe and ethical AI use, but a binding global treaty remains elusive.
On the data protection front, the European Union's General Data Protection Regulation (GDPR) offers a clear definition of pseudonymisation under Article 4(5): a process whereby personal data is altered so that it can no longer be attributed to an individual without additional information, which must be stored securely and separately. Importantly, pseudonymised data still qualifies as "personal data" under the GDPR. India's Digital Personal Data Protection (DPDP) Act, 2023 does not explicitly define pseudonymisation, but it adopts a similar stance by defining "personal data" broadly enough to cover potentially reversible identifiers. Under Section 8(4) of the Act, companies must adopt appropriate technical and organisational measures. International instruments such as the OECD AI Principles and the Council of Europe's Convention 108+ emphasise accountability, transparency, and data minimisation. Collectively, these instruments point towards pseudonymisation as a best practice, though interpretations of its scope differ.
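As a minimal sketch of what GDPR-style pseudonymisation looks like in practice (the field names, values, and key handling here are illustrative assumptions, not a prescribed implementation), direct identifiers can be replaced with keyed tokens while the re-identification key is held separately:

```python
import hmac
import hashlib

# Hypothetical secret key: under GDPR Article 4(5), this "additional
# information" must be stored securely and separately from the dataset.
SECRET_KEY = b"keep-me-in-a-separate-secure-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a deterministic keyed token.

    The same input always yields the same token, so records remain
    linkable for analysis, but the identity cannot be recovered
    without the separately held key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Illustrative record: names and fields are assumptions, not real data.
record = {"name": "Asha Rao", "patient_id": "P-1001", "diagnosis": "hypertension"}

pseudonymised = {
    "name": pseudonymise(record["name"]),
    "patient_id": pseudonymise(record["patient_id"]),
    "diagnosis": record["diagnosis"],  # non-identifying data stays usable
}
print(pseudonymised)
```

Because the tokens are reversible in principle (by whoever holds the key), the output still counts as personal data under the GDPR; this is precisely what distinguishes pseudonymisation from full anonymisation.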
Strategies for Corporate Implementation
For a company, pseudonymisation is not just a compliance exercise; it is a practical measure with measurable benefits. By pseudonymising data, businesses can:
- Enhance privacy protection by masking identifiers such as names or IDs, thereby reducing the impact of data breaches.
- Preserve data utility: unlike full anonymisation, pseudonymisation retains the patterns essential for analytics and innovation.
- Facilitate data sharing, allowing organisations to collaborate with partners and researchers while maintaining trust.
These benefits translate into competitive advantage: customers are more likely to trust organisations that prioritise data protection, and pseudonymisation enables firms to engage in cross-border collaboration without violating local data laws.
Balancing Privacy Rights and Data Utility
Balancing is the central dilemma. On one side lies the need for data utility: companies, researchers, and governments rely on large datasets to scale AI innovation. On the other lies the right to privacy, a non-negotiable principle protected under international human rights law.
Pseudonymisation offers a practical compromise by enabling the use of sensitive data while reducing privacy risks. In healthcare, for example, it allows researchers to work with patient information without exposing identities; in finance, it supports fraud detection without revealing customer details.
Conclusion
The rapid rise of artificial intelligence has outpaced regulation, raising urgent questions about safety, fairness, and accountability. The global call to recognise AI red lines is a bold step towards setting universal boundaries. Yet, alongside a prospective global treaty, practical safeguards are also needed. Pseudonymisation exemplifies such a safeguard: it is legally recognised under the GDPR, increasingly relevant under India's DPDP Act, and balances the twin imperatives of privacy protection and data utility. For organisations, adopting pseudonymisation is not only about regulatory compliance; it is also about building trust, ensuring resilience, and meeting broader ethical responsibilities in the digital age. As the future of AI remains uncertain, the guiding principles must be clear. By embedding privacy-preserving techniques like pseudonymisation into AI systems, we can take a significant step towards a sustainable, ethical, and innovation-driven digital ecosystem.

Introduction
A surge of misinformation has recently swept social media, claiming that every Indian is entitled to an allowance of ₹2,000 under some "Prime Minister's scheme." The message, circulated widely across platforms such as WhatsApp, Facebook, and Telegram, urges users to click an unfamiliar link to claim the allowance in their bank accounts.
It would seem like a very attractive offer, especially at a time when common citizens are coping with rising costs of living. But upon further examination, it turns out to be an outright online scam. NewsMobile fact-checked the claim and confirmed that no such scheme exists. Thus, the message circulating is a scam that aims to mislead common citizens.
Such incidents are not isolated. Over the years, fraudulent posts falsely offering benefits in the name of the government or well-known brands have been on the rise. These scams are not just misinformation: they exploit trust, luring people into clicking links and sharing personal information, which poses serious risks to financial and personal security.
Anatomy of the Viral PM Scheme Scam
The viral message, written in Hindi, read:
“सभी नागरिकों को PM योजना के तहत दो हज़ार रुपए का भत्ता प्रदान किया गया है अपने bank खाते में प्राप्त करने के लिए click करें."
(English: “All citizens have been provided an allowance of ₹2000 under the PM scheme. Click to receive it in your bank account.”)
Beneath this was an odd link that, when tested during our investigation, turned out to be inactive and invalid. An examination of government websites, official social media handles, and related sources found no announcement of any such allowance.
This bears the hallmarks of a phishing attempt, in which a scammer manufactures urgency and temptation to lure citizens into clicking a malicious link. While the link is no longer active, it may well have once redirected users to websites that harvest personal information such as Aadhaar numbers, bank details, or login credentials.
The Broader Problem: Fake Government Scheme Scams
The ₹2,000 "PM scheme" hoax fits a wider trend of scams. How do the fraudsters work? They leverage the credibility of government initiatives to defraud citizens. In the past, fake promises have circulated about free gas cylinders, cash allowances, subsidised rations, and even job opportunities.
During the COVID-19 pandemic, for instance, fake vaccination registration links and so-called relief schemes went viral, preying on the fears and vulnerabilities of ill-informed citizens. Likewise, false offers associated with reputed companies such as Amazon, Flipkart, the TATA Group, and Hermès have gone viral, promising free gifts or allowances.
What makes scams invoking the government especially dangerous is their exploitation of people's trust in authority. The common citizen is predisposed to believe a "PM scheme" or government yojana because of the social credibility such announcements carry.
How These Scams Operate
These scams are built on deception for financial gain. Fraudsters first craft clickbait messages designed to resemble official communications, often bearing government logos and a mix of Hindi and English text with phrases like "Pradhan Mantri Yojana" to sound legitimate. The messages then redirect users to bogus websites that closely mimic government portals and ask visitors to enter personal information. Once this data is obtained, scammers use it for identity theft and bank fraud, or sell it on the dark web. Social engineering plays a large role throughout: urgency cues such as "limited time" and "last chance" push targets to act without thinking. For maximum reach, victims are also asked to forward the message to friends and family, spreading the scam virally across WhatsApp, Facebook, and Telegram.
Risks to Citizens
The risks of falling prey to these scams are serious and manifold. The most immediate is financial loss: divulging bank account details, OTPs, or credentials can give attackers the power to drain funds. Identity theft is another common outcome, with stolen Aadhaar, PAN, or other personal information later used for fake loans or SIM activations. Beyond monetary losses, opening malicious links can infect devices with spyware or ransomware, compromising privacy and security. Victims also often experience psychological distress from the betrayal and humiliation of being deceived, which discourages reporting and in turn allows such scams to go undetected.
Best Practices for Prevention
Good cyber hygiene and vigilance are the best protection against such scams. Citizens should verify every claim against government-authorised websites such as https://www.mygov.in or ministry press statements before believing it. Do not click suspicious links offering money, gifts, or subsidies. Red flags such as poor grammar, unofficial domain names, or too-good-to-be-true offers can help identify a scam in time. Two-factor authentication, up-to-date antivirus software, and secured devices drastically lower the technical risk. Equally important is reporting: report any suspicious activity to cybercrime.gov.in or the nearest cyber cell so that the authorities can trace patterns and issue advisories. Finally, sharing verified fact-checks within one's circles builds collective resilience against misinformation and scams.
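One red flag, the unofficial domain name, can even be checked programmatically. The sketch below is a minimal, hypothetical heuristic (the allow-list and suffixes are assumptions for illustration, not an official registry): it flags links whose hostname does not belong to a recognised government domain.

```python
from urllib.parse import urlparse

# Illustrative allow-list (an assumption for this sketch, not an official
# registry): known official portals plus Indian government domain suffixes.
OFFICIAL_DOMAINS = {"mygov.in", "india.gov.in", "pib.gov.in"}
OFFICIAL_SUFFIXES = (".gov.in", ".nic.in")

def looks_official(url: str) -> bool:
    """Return True only if the link's hostname matches an official domain.

    Checks the full hostname, not just a substring, so lookalike hosts
    such as "gov.in.evil.example" are rejected.
    """
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[len("www."):]
    return host in OFFICIAL_DOMAINS or host.endswith(OFFICIAL_SUFFIXES)

print(looks_official("https://www.mygov.in/schemes"))     # genuine portal
print(looks_official("http://pm-yojana-2000.xyz/claim"))  # scam-style link
```

A check like this is only a first filter: it cannot catch compromised official pages or cleverly registered lookalike domains, so manual verification against official channels remains essential.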
Policy and Community Role
While individual awareness matters, collective action is needed against fake government scheme scams. Platforms such as WhatsApp, Facebook, and X (formerly Twitter) must strengthen their fraud-detection mechanisms. Meanwhile, government bodies should periodically alert citizens to new scams through their official handles, schemes, and community outreach.
Civil society and fact-checking agencies play an important role in debunking viral hoaxes. Their work must be amplified in regional languages, precisely because forwarded messages are trusted more readily in those communities.
Conclusion
The viral ₹2,000 PM scheme scam is a reminder that not everything that goes viral online can be trusted. Scammers keep inventing new schemes to gain trust, spread misinformation, and defraud innocent citizens.
The best defence is awareness and alertness. Citizens must verify claims through official channels before clicking a link, sharing their data, or acting on a message in any way. With proper cyber hygiene and a healthy suspicion of dubious messages, we can reduce the impact of these scams and collectively build a secure digital environment.
As India moves deeper into a digital ecosystem, resilience to cyber fraud is not merely a matter of individual security but a national priority.
References
- https://www.newsmobile.in/nm-fact-checker/fact-check-viral-post-claiming-pm-scheme-offering-rs-2000-allowance-is-a-scam/
- https://timesofindia.indiatimes.com/business/financial-literacy/investing/beware-of-deepfake-scams-fraudsters-using-ai-videos-to-push-schemes-promising-unrealistic-returns-red-flags-to-watch-out-for/articleshow/124085155.cms
- https://www.business-standard.com/finance/personal-finance/invest-rs-21-000-to-earn-rs-20-lakh-monthly-viral-videos-of-fm-are-fake-125082000517_1.html
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2124728

Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day is going viral, with claims that snoring can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound. The original video, published on Arshad's Instagram account, contains no snoring, so we are certain the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing Independence Day features snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly checked the video and analyzed it with TrueMedia, an AI video-detection tool, which found little evidence of manipulation in either the voice or the face.


We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14th August 2024. No snoring sound can be heard in that video.

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is false. CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading