#FactCheck - Suryakumar Yadav–Salman Ali Agha Handshake Row: Viral Image Found AI-Generated
Executive Summary
An image circulating on social media claims to show Suryakumar Yadav, captain of the Indian cricket team, extending his hand to greet Pakistan’s skipper Salman Ali Agha, who allegedly refused the gesture during the India–Pakistan T20 World Cup match held on February 15. Users shared the image as evidence of a real incident from the high-profile clash. However, research by CyberPeace found that the image is AI-generated and was falsely circulated to mislead viewers.
Claim
On February 15, an X account named “@iffiViews,” reportedly operated from Pakistan, shared the image claiming it was taken during the India–Pakistan T20 World Cup match at the R. Premadasa Stadium in Colombo. The viral image appeared to show Yadav attempting to shake hands with Agha, who seemed to decline the gesture. The post quickly gained significant traction online, attracting around one million views at the time of reporting. Here are the link and archive link to the post.
- https://x.com/iffiViews/status/2023024665770484206?s=20
- https://archive.ph/xvtBs

Fact Check:
To verify the authenticity of the image, researchers closely examined the visual and identified a watermark associated with an AI image-generation tool. This strongly indicated that the image was digitally created and did not depict an actual event.

The image was further analysed using an AI detection tool, which indicated a 99.9 percent probability that the content was artificially generated or manipulated.

Researchers also conducted keyword searches to check whether the two captains had exchanged a handshake during the match. The search revealed media reports confirming that the traditional handshake between players has been discontinued since the Asia Cup 2025 in both men’s and women’s cricket. A report published by The Times of India on February 15 confirmed that no such customary exchange took place during the match between the two teams in Colombo.

Conclusion
The viral image claiming to show Suryakumar Yadav attempting to shake hands with Salman Ali Agha is not authentic. The visual is AI-generated and has been shared online with misleading claims.

Executive Summary
Amid the ongoing tensions between the United States, Israel, and Iran, a video circulating on social media claims that Israeli Prime Minister Benjamin Netanyahu was seen running after Iran launched an attack on Israel. However, research by CyberPeace found the viral claim to be misleading. Our research revealed that the video has no connection with the current tensions between the United States, Israel, and Iran. In reality, the clip dates back to 2021, when Netanyahu was rushing inside Israel’s parliament to cast his vote after arriving late.
Claim:
On the social media platform X (formerly Twitter), a user shared the video on March 5, 2026, claiming that Netanyahu had fled and gone into hiding due to fear of Iran. The post included inflammatory remarks suggesting that Iran had demonstrated its power and that Netanyahu had abandoned his country out of fear.

Fact Check
To verify the authenticity of the video, we extracted several keyframes and conducted a reverse image search on Google. During the research, we found the same video on the official X account of Benjamin Netanyahu, posted on December 14, 2021. In the post, Netanyahu wrote in Hebrew, which translates to: “I am always proud to run for you. Photographed half an hour ago in the Knesset.”
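Reverse image search tools typically match near-duplicate frames by comparing compact perceptual fingerprints rather than raw pixels. As a minimal illustrative sketch of one such fingerprint (a difference hash, or dHash; not the algorithm any particular search engine is confirmed to use), assuming a frame is already available as a 2D list of grayscale values:

```python
def dhash(gray, hash_w=8, hash_h=8):
    """Difference hash: downsample to (hash_w+1) x hash_h pixels, then
    record one bit per pixel pair: is a pixel brighter than its right
    neighbour? Brightness shifts and mild re-encoding leave it stable."""
    h, w = len(gray), len(gray[0])
    # Nearest-neighbour resize to hash_h rows and (hash_w + 1) columns.
    small = [
        [gray[y * h // hash_h][x * w // (hash_w + 1)] for x in range(hash_w + 1)]
        for y in range(hash_h)
    ]
    bits = 0
    for row in small:
        for x in range(hash_w):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

Near-duplicate frames (e.g. the same shot re-encoded or slightly brightened) yield hashes a few bits apart, while unrelated images diverge widely; search engines index such fingerprints to retrieve candidate matches quickly.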

Further research also led us to a Hebrew news website where the same video was published.

According to the report, voting in the Knesset (Israel’s parliament) continued throughout the night, and an explosives-related bill was passed by a very narrow margin. At the time, opposition leader Benjamin Netanyahu was in his room inside the Knesset building. When he was called for the vote, he hurried through the parliament corridors to reach the chamber in time to cast his vote.
Conclusion:
Our research found that the viral video is unrelated to the ongoing tensions involving the United States, Israel, and Iran. The footage is from 2021 and shows Benjamin Netanyahu rushing inside the Knesset to participate in a parliamentary vote after being called in at the last moment.

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information, whether intentional (disinformation) or unintentional (misinformation), during the election cycle can not only create voter confusion, with ramifications for election results, but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C. in 2021 is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that affects public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly captioned pictures and videos, fake websites, synthetic media and memes, and distorted truths or outright lies. In a recent example during the 2024 US elections, fake videos using the Federal Bureau of Investigation’s (FBI) insignia alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks were circulated. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media’s coverage of the candidates’ campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, degrade the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread are unprecedented. In contrast, content moderation and fact-checking are reactive and time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus spreads faster (virality).
- Geopolitical Influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration, and security machinery. In 2018, a federal grand jury indicted 12 Russian military intelligence officers for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby and Attorney General Merrick Garland.
- Lack of a Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is addressed only indirectly, through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws, such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California, target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but constitutional free-speech jurisprudence mandates “breathing space” that protects some false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can identify critical themes with large-scale impact, such as anti-vax content, and either censor or ban such content, or tweak recommendation algorithms to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/

Introduction
The Indian government has developed the National Cybersecurity Reference Framework (NCRF) to provide implementable cybersecurity measures based on existing legislation, policies, and guidelines. The National Critical Information Infrastructure Protection Centre (NCIIPC) is responsible for the framework. The government is expected to recommend that enterprises, particularly those in critical sectors like banking, telecom, and energy, use only security products and services developed in India. The NCRF aims to strengthen cybersecurity while encouraging the use of made-in-India products to safeguard cyber infrastructure. The Centre is expected to emphasise the significant progress made in developing indigenous cybersecurity products and solutions.
National Cybersecurity Reference Framework (NCRF)
The Indian government has developed the National Cybersecurity Reference Framework (NCRF), a guideline that sets the standard for cybersecurity in India. The framework focuses on critical sectors and provides guidelines to help organisations develop strong cybersecurity systems. It can serve as a template for critical sector entities to develop their own governance and management systems. The government has identified telecom, power, transportation, finance, strategic entities, government entities, and health as critical sectors.
The NCRF is non-binding: its recommendations are advisory rather than mandatory. It recommends that enterprises allocate at least 10% of their total IT budget to cybersecurity, with monitoring by top-level management or the board of directors. The framework may suggest that national nodal agencies evolve platforms and processes for machine-processing data from different sources to ensure proper audits and to rate auditors based on performance.
Regulators overseeing critical sectors may be given greater powers to set rules for information security and define information security requirements to ensure proper audits. They would also need an effective Information Security Management System (ISMS) in place to assess sensitive data and operational deficiencies in the critical sector. The policy is based on a Common but Differentiated Responsibility (CBDR) approach, recognising that different organisations have varying levels of cybersecurity needs and responsibilities.
India faces a barrage of cybersecurity-related incidents, such as the high-profile attack on AIIMS Delhi in 2022. Many ministries feel hamstrung by the lack of an overarching framework on cybersecurity when formulating sector-specific legislation. In recent years, threat actors backed by nation-states and organised cyber-criminal groups have attempted to target the critical information infrastructure (CII) of the government and enterprises. The current guiding framework on cybersecurity for critical infrastructure in India is the National Cybersecurity Policy of 2013. Between 2013 and 2023, however, the threat landscape evolved significantly, and the emergence of new threats necessitates new strategies.
Significance in the realm of Critical Infrastructure
India faces numerous cybersecurity incidents, exacerbated by the lack of a comprehensive framework. Critical Information Infrastructure sectors like banking, energy, healthcare, telecommunications, transportation, strategic enterprises, and government enterprises are the most targeted by threat actors, including nation-states and cybercriminals. By their very nature, these sectors hold sensitive data, which makes them prime targets for cyber threats and attacks. Cyber-attacks can compromise patient privacy, disrupt services, compromise control systems, pose safety risks, and interrupt critical services. The NCRF is therefore of paramount importance, as it can address these emerging issues by providing sector-specific guidelines.
The Indian government is considering promoting the use of made-in-India products to enhance Cyber Infrastructure
India is preparing to recommend the use of domestically developed cybersecurity products and services, particularly for critical sectors like banking, telecom, and energy, to enhance national security in the face of escalating cybersecurity threats.
Conclusion
Promoting locally made cybersecurity products and services in important industries shows India's commitment to strengthening national security. The National Cybersecurity Reference Framework (NCRF), which outlines duties, responsibilities, and recommendations for organisations and regulators, is a critical step towards the comprehensive cybersecurity policy framework the country needs. By emphasising made-in-India solutions and dedicated cybersecurity budgets, the government demonstrates its determination to protect the country's cyber infrastructure against increasing cyber threats and attacks. The NCRF is expected to help draft sector-specific guidelines on cybersecurity.
References
- https://indianexpress.com/article/business/market/overhaul-of-cybersecurity-framework-to-safeguard-cyber-infra-govt-may-push-use-of-made-in-india-products-9133687/
- https://vajiramandravi.com/upsc-daily-current-affairs/mains-articles/national-cybersecurity-reference-framework-ncrf/
- https://m.toppersnotes.com/current-affairs/blog/to-push-cyber-infra-govt-may-push-use-of-made-in-india-products-DxQP
- https://appkida.in/overhaul-of-cybersecurity-framework-in-2024/