#FactCheck - Viral Claim About Anti-Trump Protests in the US Is Misleading
A photograph showing a massive crowd on a road is being widely shared on social media. The image is being circulated with the claim that people in the United States are staging large-scale protests against President Donald Trump.
However, CyberPeace Foundation’s research has found this claim to be misleading. Our fact-check reveals that the viral photograph is nearly eight years old and has been falsely linked to recent political developments.
Claim:
Social media users are sharing a photograph and claiming that it shows people protesting against US President Donald Trump. An X (formerly Twitter) user, Salman Khan Gauri (@khansalman88177), shared the image with the caption: “Today, a massive protest is taking place in America against Donald Trump.”
The post can be viewed here, and its archived version is available here.

FactCheck:
To verify the claim, we conducted a reverse image search of the viral photograph using Google. This led us to a report published by The Mercury News on April 6, 2018.
The report features the same image and states that the photograph was taken on March 24, 2018, during the ‘March for Our Lives’ rally in Washington, DC. The rally was organized to demand stricter gun control laws in the United States. The image shows a large crowd gathered on Pennsylvania Avenue in support of gun reform.
The report further notes that the Associated Press, on March 30, 2018, debunked false claims circulating online which alleged that liberal billionaire George Soros and his organizations had paid protesters $300 each to participate in the rally.

Further research led us to a report published by The Hindu on March 25, 2018, which also carries the same photograph. According to the report, thousands of Americans across the country participated in ‘March for Our Lives’ rallies following a mass shooting at a school in Florida. The protests were led by survivors and victims’ families, demanding stronger gun laws.
The objective of these demonstrations was to break the legislative deadlock that has long hindered efforts to tighten firearm regulations in a country frequently rocked by mass shootings in schools and colleges.

Conclusion
The viral photograph is nearly eight years old and is unrelated to any recent protests against President Donald Trump. The image actually depicts a gun control protest held in 2018 and is being falsely shared with a misleading political claim. By circulating this outdated image with an incorrect context, social media users are spreading misinformation.
Related Blogs

Executive Summary:
A video of India’s Defence Minister Rajnath Singh is going viral on social media. The post claims that Rajnath Singh is openly supporting Israeli-American attacks against Iran. In the video, he can allegedly be heard saying that Prime Minister Narendra Modi had visited Israel before the war began and warned Tehran that disturbing peace would have serious consequences.
Research by CyberPeace found that the viral video is a deepfake created using Artificial Intelligence (AI). Rajnath Singh has not made any such statement about Iran or the Israel-US conflict.
Claim
A Facebook user “Sheikh Sadeque Ali” shared the video on March 2, 2026. The caption of the post reads, “Indian Defence Minister Rajnath Singh is supporting Israel’s attack on Iran. This clearly shows that India supports the killing of Muslims.”
In the viral video, Rajnath Singh appears to say in English: “Prime Minister Modi’s visit to Israel before the attack on Iran reflects India’s solidarity with its strategic partner… He warned Tehran that hostile actions would have serious consequences for regional peace.”

Fact Check:
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search. During the research, we found the original video on Rajnath Singh’s official YouTube channel. The video was uploaded on November 23, 2025. In the original video, Rajnath Singh was addressing a Sindhi community conference in Delhi. During his speech, he was talking about Sindhi culture and the history of Partition. He did not mention Israel, Iran or any Middle East conflict during the entire program.
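The keyframe-sampling step described above can be sketched in a few lines. This is an illustrative example, not the exact tooling used in the fact-check: the frame-index arithmetic below is plain Python, and in practice the indices would be passed to a video decoder such as OpenCV's `VideoCapture` or to ffmpeg to grab the actual stills for the reverse image search.

```python
def keyframe_indices(total_frames, fps, every_sec=2.0):
    """Return frame indices sampled every `every_sec` seconds.

    Each index identifies a still to extract from the clip
    (e.g. via OpenCV's VideoCapture) so it can be run through
    a reverse image search such as Google Lens or TinEye.
    """
    # Convert the sampling interval from seconds to frames,
    # never stepping by less than one frame.
    step = max(1, int(round(fps * every_sec)))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled every 2 seconds:
# keyframe_indices(300, 30, 2.0) -> [0, 60, 120, 180, 240]
```

Sampling at a fixed interval keeps the number of search queries small while still covering the whole clip; a match on any one frame is enough to locate the original footage.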

Upon closely examining the viral video, technical inconsistencies between the lip movements and the audio (lip-sync discrepancies) can be observed, which strongly indicate that the video may have been generated using AI. To verify this, we analysed the clip using several AI-detection tools. The AI detection tool Hive Moderation indicated that the video has a 99% probability of being AI-generated.

Conclusion:
Our research found that the viral video of Rajnath Singh is a deepfake. He has not made any statement supporting Israel or opposing Iran. The original video is from a Sindhi community event in Delhi, which has been digitally altered using AI to spread a misleading claim.

Social media has become far more than a tool of communication, engagement and entertainment: it shapes politics, community identity and even public agendas. When misused, the consequences can be grave: communal disharmony, riots, false rumours, harassment or worse. Emphasising the need for digital Atmanirbharta (self-reliance), Prime Minister Narendra Modi recently urged India’s youth to develop the country’s own social media platforms as alternatives to Facebook, Instagram and X, so that the nation’s technological ecosystems remain secure and independent, reinforcing digital autonomy. This growing influence of platforms has sharpened the tussle between government regulation, the independence of social media companies, and the protection of freedom of expression in most countries.
Why Government Regulation Is Especially Needed
While self-regulation has its advantages, ‘real-world harms’ show why state oversight cannot be optional:
- Incitement to violence and communal unrest: Misinformation and hate speech can inflame tensions. In Manipur (May 2023), false posts, including unverified sexual-violence claims, spread online, worsening clashes. Authorities shut down mobile internet on 3 May 2023 to curb “disinformation and false rumours,” showing how quickly harmful content can escalate and why enforceable moderation rules matter.
- Fake news and misinformation: False content about health, elections or individuals spreads far faster than corrections. During COVID-19, an “infodemic” of fake cures, conspiracy theories and religious discrimination went viral on WhatsApp and Facebook, starting with false claims that the virus came from eating bats. The WHO warned of serious knock-on effects, and a Reuters Institute study found that although such claims by public figures were fewer, they gained the highest engagement, showing why self-regulation alone often fails to stop it.
Nepal’s Example:
Nepal provides a clear example of the tension between government regulation and platform self-regulation. In 2023, the government issued rules requiring all social media platforms, whether local or foreign, to register with the Ministry of Communication and Information Technology, appoint a local contact person, and comply with Nepali law. By 2025, major platforms such as Facebook, Instagram, and YouTube had not met the registration deadline. In response, the Nepal Telecommunications Authority began blocking unregistered platforms until they complied. While journalists, civil-rights groups and Gen Z criticised the move as potentially limiting free speech and suppressing efforts to expose government corruption, the government argued it was necessary to stop harmful content and misinformation. The case shows that without enforceable obligations, self-regulation can leave platforms unaccountable, but regulation must also be balanced with protecting free speech.
Self-Regulation: Strengths and Challenges
Most social-media companies prefer to self-regulate. They write community rules, trust & safety guidelines, and give users ways to flag harmful posts, and lean on a mix of staff, outside boards and AI filters to handle content that crosses the line. The big advantage here is speed: when something dangerous appears, a platform can react within minutes, far quicker than a court or lawmaker. Because they know their systems inside out, from user habits to algorithmic quirks, they can adapt fast.
But there’s a downside. These platforms thrive on engagement, so sensational or hateful posts often keep people scrolling longer. That means the very content that makes money can also be the content that most needs moderating: a built-in conflict of interest.
Government Regulation: Strengths and Risks
Public rules make platforms answerable. Laws can require illegal content to be removed, force transparency and protect user rights. They can also stop serious harms such as fake news that might spark violence, and they often feel more legitimate when made through open, democratic processes.
Yet regulation can lag behind technology. Vague or heavy-handed rules may be misused to silence critics or curb free speech. Global enforcement is messy, and compliance can be costly for smaller firms.
Practical Implications & Hybrid Governance
For users, regulation brings clearer rights and safer spaces, but it must be carefully drafted to protect legitimate speech. For platforms, self-regulation gives flexibility but less certainty; government rules provide a level playing field but add compliance costs. For governments, regulation helps protect public safety, reduce communal disharmony, and fight misinformation, but it requires transparency and safeguards to avoid misuse.
Hybrid Approach
A combined model of self-regulation plus government regulation is likely to be most effective. Laws should establish baseline obligations: registration, local grievance officers, timely removal of illegal content, and transparency reporting. Platforms should retain flexibility in how they implement these obligations and innovate with tools for user safety. Independent audits, civil society oversight, and simple user appeals can help keep both governments and platforms accountable.
Conclusion
Social media has great power. It can bring people together, but it can also spread false stories, deepen divides and even stir violence. Acting on their own, platforms can move fast and try new ideas, but that alone rarely stops harmful content. Good government rules can fill the gap by holding companies to account and protecting people’s rights.
The best way forward is to mix both approaches: clear laws, outside checks, open reporting, easy complaint systems and support for local platforms, so the digital space stays safer and more trustworthy.
References
- https://timesofindia.indiatimes.com/india/need-desi-social-media-platforms-to-secure-digital-sovereignty-pm/articleshow/123327780.cms#
- https://www.bbc.com/news/world-asia-india-66255989
- https://nepallawsunshine.com/social-media-registration-in-nepal/
- https://www.newsonair.gov.in/nepal-bans-26-unregistered-social-media-sites-including-facebook-whatsapp-instagram/
- https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- https://www.drishtiias.com/daily-updates/daily-news-analysis/social-media-regulation-in-india

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information intentionally (disinformation) and unintentionally (misinformation) during the election cycle can not only create grounds for voter confusion with ramifications on election results but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that shapes public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly-captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or lies. In a recent example during the 2024 US elections, fake videos using the Federal Bureau of Investigation’s (FBI) insignia alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks were circulated. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also impacted how voters viewed the news media’s coverage of the candidates’ campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, affect the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented. In contrast, content moderation and fact-checking are reactive and more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus spreads faster (virality).
- Geopolitical influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration and security machinery. In 2018, a federal grand jury indicted 12 Russian military intelligence officers for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby, and Attorney General Merrick Garland.
- Lack of Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is indirectly addressed through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but the Constitution mandates “breathing space” for protection from false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/