#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The images circulating on social media were produced with AI-manipulation tools; the original photograph, available on several credible websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, Pennsylvania, killing one attendee and critically injuring two others. The Secret Service neutralised the shooter, and the circulating photos with faked smiles have stirred up suspicion about the agency's conduct. The CyberPeace Research Team verified and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the photographs in them were different from the viral ones.

The original image was published by CNN; in it, the US Secret Service agents shielding Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, Content at Scale's AI Image Detection, which also found the image to be AI-manipulated.
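These detectors are proprietary, but one classical signal that forensic tools of this kind can draw on is error level analysis (ELA). Below is a minimal, hypothetical Python sketch of ELA using the Pillow library; the file name and quality setting are assumptions for illustration, not the internals of the tools named above.

```python
# pip install Pillow
# Error level analysis (ELA) sketch: re-save the image as JPEG and measure
# how strongly each pixel changes. Regions pasted in or retouched after the
# original compression often recompress differently from the rest.
from PIL import Image, ImageChops

def mean_resave_error(path: str, quality: int = 90) -> float:
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)

    # Average per-channel error across the whole image.
    hist = diff.histogram()  # 256 bins per RGB channel
    total = sum(value * count
                for ch in range(3)
                for value, count in enumerate(hist[ch * 256:(ch + 1) * 256]))
    pixels = original.width * original.height * 3
    return total / pixels

if __name__ == "__main__":
    # "viral_photo.jpg" is a placeholder file name.
    print(f"Mean resave error: {mean_resave_error('viral_photo.jpg'):.2f}")
```

In practice, analysts inspect the ELA difference image visually rather than relying on a single number; an unusually bright region around a face, for instance, would warrant closer scrutiny.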

Comparison of both photos:
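The side-by-side images are not reproduced here, but one reproducible way to compare two versions of a photo is perceptual hashing, sketched below with the open-source imagehash library. The file names are placeholders; this illustrates a comparison technique rather than the exact method our team used.

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

# Placeholder file names for the two versions of the photo.
original = imagehash.phash(Image.open("cnn_original.jpg"))
viral = imagehash.phash(Image.open("viral_version.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
distance = original - viral
print(f"Hamming distance: {distance}")
# Near-identical images score close to 0; a localised edit such as an
# altered facial expression raises the distance while framing stays the same.
```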

Hence, given the lack of credible sources and the detection of AI manipulation, we concluded that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
Global cybersecurity spending is expected to exceed USD 210 billion in 2025, a ~10% increase over 2024 (Gartner). This growth responds to an evolving and increasingly critical threat landscape, driven by factors such as the proliferation of IoT devices, the adoption of cloud networks, and the growing size of the internet itself. Yet breaches, misuse, and resistance persist. In 2025, global attack pressure rose ~21% year-on-year (Q2 averages) (Check Point), and confirmed breaches climbed ~15% (Verizon DBIR). Rising investment in cybersecurity, in other words, may not be yielding proportionate reductions in risk. And while mechanisms to strengthen technical defences and regulatory frameworks are constantly evolving, the social element of trust, and how to embed it into cybersecurity systems, remains largely overlooked.
Human Error and Digital Trust (Individual Trust)
Human error is consistently recognised as the weakest link in cybersecurity. While campaigns promoting phishing awareness, regular password updates, and two-factor authentication (2FA) exist, relying solely on awareness measures to address human error in cyberspace is like putting a Band-Aid on a bullet wound. Rather, the problem needs to be examined through the lens of digital trust. As Chui (2022) notes, digital trust rests on security, dependability, integrity, and authenticity. These factors determine whether users comply with cybersecurity protocols. When people view rules as opaque, inconvenient, or imposed without accountability, they are more likely to cut corners, which creates vulnerabilities. Building digital trust therefore means shifting the focus from blaming people to improving design: embedding transparency, usability, and shared responsibility into a culture of cybersecurity so that users are incentivised to make secure choices.
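As an aside on the mechanics of one of these measures: 2FA apps commonly generate time-based one-time passwords (TOTP) as specified in RFC 6238. The following is a minimal sketch in Python using only the standard library; the demo secret is a placeholder, and a production system should rely on a vetted library and securely provisioned per-user secrets.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets are provisioned per user at 2FA enrolment.
    print(totp("JBSWY3DPEHPK3PXP"))
```

The usability point above applies directly: a TOTP code that silently expires mid-entry frustrates users, which is exactly the kind of friction that erodes trust and invites workarounds.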
Organisational Trust and Insider Threats (Institutional Trust)
At the organisational level, compliance with cybersecurity protocols is significantly tied to whether employees trust employers and platforms to safeguard their data and treat them with integrity. Insider threats, stemming from both malicious and non-malicious actors, account for nearly 60% of all corporate breaches (Verizon DBIR 2024). A lack of trust in leadership may cause employees to feel disengaged or even act maliciously. Further, a 2022 study published in Harvard Business Review found that adhering to cybersecurity protocols adds to employees' workload. When protocols are perceived as hindering productivity, employees are more likely to violate them intentionally. The stress of working under surveillance systems that feel cumbersome or unreasonable, especially when working remotely, also reduces employee trust and, hence, compliance.
Trust, Inequality, and Vulnerability (Structural Trust)
Cyberspace encompasses a social system of its own, since it involves patterned interactions and relationships between human beings. It also reproduces the social structures and resultant vulnerabilities of the physical world. As a result, different sections of society place varying levels of trust in digital systems. Women, rural, and marginalised groups often distrust existing digital security provisions more, and with good reason: they are targeted disproportionately by cyber attackers, yet remain underprotected by systems designed with urban, male, and elite users in mind. This leads citizens to adopt workarounds such as password sharing for "safety" and to disengage from cyber safety discourse, finding existing systems inaccessible or irrelevant to their realities. Cybersecurity governance that ignores these divides deepens exclusion and mistrust.
Laws and Compliances (Regulatory Trust)
Cybersecurity governance is operationalised in the form of laws, rules, and guidelines. However, these may backfire due to inadequate design, reducing overall trust in governance mechanisms. For example, CERT-In's mandate to report breaches within six hours of "noticing" them has been criticised on the ground that so steep a timeframe is insufficient to generate an effective breach analysis report. Further, the multiplicity of regulatory frameworks in cross-border interactions can be costly and lead to compliance fatigue for organisations. Such factors can undermine organisational and user trust in regulation's ability to protect them from cyber attacks, fuelling a box-ticking culture around cybersecurity.
Conclusion
Cybersecurity is addressed today primarily through code, firewalls, and compliance. But evidence suggests that technological and regulatory fixes, while essential, are insufficient to guarantee secure behaviour and resilient systems. Without trust in institutions, technologies, laws, or each other, cybersecurity governance will remain a cat-and-mouse game. Building a trust-based architecture requires mechanisms to improve accountability, reliability, and transparency. It requires participatory design of security systems and the recognition of unequal vulnerabilities. Thus, unless cybersecurity governance acknowledges that cyberspace is deeply social, investment may not prevent the harms it seeks to curb.
References
- https://www.gartner.com/en/newsroom/press-releases/2025-07-29
- https://blog.checkpoint.com/research/global-cyber-attacks-surge-21-in-q2-2025
- https://www.verizon.com/business/resources/reports/2024-dbir-executive-summary.pdf
- https://www.verizon.com/business/resources/reports/2025-dbir-executive-summary.pdf
- https://insights2techinfo.com/wp-content/uploads/2023/08/Building-Digital-Trust-Challenges-and-Strategies-in-Cybersecurity.pdf
- https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women
- https://www.upguard.com/blog/indias-6-hour-data-breach-reporting-rule

Introduction
Rajeev Chandrasekhar, Minister of State at the Ministry of Electronics and Information Technology, has emphasised the need for an open internet. He stated that no platform can deny content creators access to distribute and monetise their content, and that large technology companies have begun to play a significant role in the digital evolution. The government, he stressed, does not want the internet or its monetisation to be in the purview of just one or two companies, nor the 120 crore Indians expected online by 2025 to be catered to by a few big islands on the internet.
The Voice for Open Internet
India's Minister of State for IT, Rajeev Chandrasekhar, has stated that no technology company or social media platform can deny content creators access to distribute and monetise their content. Speaking at the Digital News Publishers Association Conference in Delhi, Chandrasekhar emphasised that the government does not want the internet, or its monetisation, to be in the hands of just one or two companies. He argued that the government dislikes monopoly and duopoly and does not want 120 crore Indians on the internet in 2025 to be catered to by big islands on the internet.
Chandrasekhar highlighted that large technology companies have begun to exert influence when it comes to the dissemination of content, which has become an area of concern for publishers and content creators. He stated that if any platform finds it necessary to block any content, they need to give reasons or grounds to the creators, stating that the content is violating norms.
As India tries to establish itself as an innovator in the technology sector, the government announced a corpus of Rs 1 lakh crore in the interim Budget of 2024-25. As big companies continue to tighten their hold on the sector, content moderation has become crucial. Under the IT Rules, 11 categories of content are unlawful under the IT Act and criminal law. Platforms must ensure no user posts content that falls under these categories, take down any such content, and expose violating users to de-platforming or prosecution. Chandrasekhar believes the government has to protect people's fundamental rights, and he emphasises legislative guardrails to hold platforms accountable for the correctness of content.
Monetizing Content on the Platform
'No platform can deny a content creator access to the platform to distribute and monetise content,' Chandrasekhar declared, boldly laying down a gauntlet that defies prevailing norms. This tenet signals a nascent dawn in which creators may envision reaping the rewards of their creative endeavours, unfettered by platform restrictions.
An increasingly contentious issue that shadows this debate is the moderation of content within the digital realm. In this vast, uncharted expanse, the powers within these monolithic platforms assume the mantle of vigilance, policing the digital avenues for transgressions against a prescribed code of conduct. Under the stipulations of India's IT Rules, for example, platforms are duty-bound to interdict user content that strays into any of the 11 delineated unlawful categories. Violations span the gamut from infringement of intellectual property rights to the propagation of misinformation, each category necessitating swift and decisive intervention. Chandrasekhar raised the alarm against misinformation in particular, citing media reports chillingly suggesting that up to half of the information circulating on the internet might be mere fabrication, a misleading simulacrum of authenticity.
The government's stance, as expounded by Chandrasekhar, pivots on an axis of safeguarding citizens' fundamental rights, compelling digital platforms to shoulder the responsibility of arbiters of truth. 'We are a nation of over 90 crores today, a nation progressing with vigour, yet we find ourselves beset by those who wish us ill,' he remarked.
Upcoming Digital India Act
Awaiting upon the horizon, India's proposed Digital India Act (DIA), still in its embryonic stage of pre-consultation deliberation, seeks to sculpt these asymmetries into a more balanced form. Chandrasekhar hinted at the potential inclusion within the DIA of regulatory measures that would sculpt the interactions between platforms and the mosaic of content creators who inhabit them. Although specifics await the crucible of public discourse and the formalities of consultation, indications of a maturing framework are palpable.
Conclusion
It is essential that the story of digital transformation reverberates with the voices of individual creators, the very lifeblood propelling the vibrant heartbeat of the internet's culture. These are the voices that must echo at centre stage in policy deliberations and legislative halls; these are the visions that must guide us; and these are the rights that we must uphold. As we stand upon the precipice of a nascent digital age, the decisions we forge at this moment will cascade into the morrow and define the internet of our future. That internet must eternally stand as a bastion of freedom, of ceaseless innovation, and as a realm of boundless opportunity for every soul that ventures, responsibly, into its infinite expanse.
References
- https://www.financialexpress.com/business/brandwagon-no-platform-can-deny-a-content-creator-access-to-distribute-and-monetise-content-says-mos-it-rajeev-chandrasekhar-3386388/
- https://indianexpress.com/article/india/meta-content-monetisation-social-media-it-rules-rajeev-chandrasekhar-9147334/
- https://www.medianama.com/2024/02/223-rajeev-chandrasekhar-content-creators-publishers/
Introduction
The rise of unreliable social media newsgroups has significantly altered the way people consume and interact with news, contributing to the spread of unverified and misleading content. Unlike traditional news outlets that adhere to journalistic standards, these newsgroups often lack proper fact-checking and editorial oversight, enabling the rapid dissemination of false or distorted information. Social media has transformed individuals into active content creators. Social media newsgroups (SMNs) are social media platforms used as sources of news and information. According to a Pew Research Center survey (July-August 2024), 54% of U.S. adults now at least sometimes get news from social media. The rise of SMNs has raised concerns over the integrity of online news and undermined trust in legitimate news sources. Social media users are therefore advised to consume news from authentic sources and channels on these platforms.
The Growing Issue of Misinformation in Social Media Newsgroups
Social media newsgroups have become both a source of vital information and a conduit for misinformation. While these platforms enable rapid news sharing and facilitate political and social campaigns, they also pose significant risks by circulating unverified information. Misleading content, often boosted by algorithms designed to maximise user engagement, proliferates in these spaces. This creates mounting challenges, as SMNs cater to diverse communities with varying political affiliations, gender demographics, and interests. The result is sometimes echo chambers in which information is not critically assessed, amplifying confirmation bias and enabling the unchecked spread of misinformation. A prominent example is the false narratives about COVID-19 vaccines that spread across SMNs, contributing to widespread vaccine hesitancy and public health risks.
Understanding the Susceptibility of Online Newsgroups to Misinformation
Several factors make social media newsgroups particularly susceptible to misinformation. Some of the factors are listed below:
- The lack of robust fact-checking mechanisms in social media newsgroups allows false narratives to spread easily.
- The lack of expertise among admins of online newsgroups, who are often ordinary users without journalistic training, can result in the spread of inaccurate information. Their primary goal of increasing engagement may overshadow concerns about accuracy and credibility.
- The anonymity of users exacerbates the problem, allowing them to share unverified or misleading content without accountability.
- The viral nature of social media spreads misinformation to vast audiences instantly, often outpacing efforts to correct it.
- Unlike traditional media outlets, online newsgroups rarely have formal editorial or correction processes, so inaccuracies that do surface often go unchallenged and uncorrected.
- The sheer volume of user posts makes it difficult to moderate content effectively, imposing significant challenges on platforms and group admins.
- Social media platforms use algorithms designed to maximise user engagement; these inadvertently amplify sensational or emotionally charged content, which is more likely to be false (see the sketch after this list).
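To make the last point concrete, here is a toy Python sketch of engagement-based ranking. The posts, weights, and scoring function are invented for illustration; real feed-ranking systems are far more complex, but the dynamic is the same: ranking purely on engagement favours sensational content over accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> int:
    # Hypothetical weights: shares spread content furthest, so weight them highest.
    return p.likes + 3 * p.shares + 2 * p.comments

feed = [
    Post("Measured correction with cited sources", likes=40, shares=2, comments=5),
    Post("SHOCKING claim, no source!!", likes=55, shares=30, comments=25),
]

# Sorting purely by engagement pushes the sensational post to the top,
# regardless of accuracy - the amplification dynamic described above.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.text)
```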
Consequences of Misinformation in Newsgroups
The societal impacts of misinformation in SMNs are profound. It can fuel political polarisation, entrenching one-sided views and creating deep divides in democratic societies. Health risks emerge when false information spreads about critical issues, as with anti-vaccine movements and misinformation during public health crises. In the long term, misinformation can destabilise governments and erode trust in both traditional and social media, undermining democracy. If unaddressed, the consequences will continue to ripple through society, perpetuating false narratives that shape public opinion.
Steps to Mitigate Misinformation in Social Media Newsgroups
- Educating users in social media literacy empowers them to critically assess the information they encounter, reducing the spread of false narratives.
- Introducing stricter platform policies, including penalties for deliberately sharing misinformation, may act as a deterrent against sharing unverified information.
- Collaborative fact-checking initiatives with involvement from social media platforms, independent journalists, and expert organisations can provide a unified front against the spread of false information.
- From a policy perspective, a holistic approach that combines platform responsibility with user education and governmental and industry oversight is essential to curbing the spread of misinformation in social media newsgroups.
Conclusion
The emergence of social media newsgroups has revolutionised the dissemination of information, but the rapid spread of misinformation they enable poses a significant challenge to the integrity of news in the digital age. The problem is further amplified by algorithmic echo chambers and unchecked user engagement, with profound societal implications. Tackling these issues requires a multi-faceted approach combining stringent platform policies, AI-driven moderation, and collaborative fact-checking initiatives. Empowering users through media literacy is equally important for promoting critical thinking and building cognitive defences. By adopting these measures, we can better navigate the complexities of consuming news from social media newsgroups and preserve the reliability of online information. Above all, users should consume news from authoritative sources available on social media platforms.