# FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been debunked as digitally manipulated. The pictures circulating on social media were produced with AI-powered manipulation tools; the original image, found on several credible news websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, Pennsylvania, killing one attendee and critically injuring two others. The Secret Service neutralised the shooter, and the circulating photos with faked smiles stirred up suspicion about the agents' conduct. The CyberPeace Research Team verified and debunked the manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in those reports were different from the viral ones.

The original image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, the contentatscale AI image detector, which also found the photo to be AI-manipulated.

Comparison of both photos:
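For readers who want to automate this kind of side-by-side comparison, a perceptual hash offers a quick, coarse first pass: visually similar images produce similar hashes, so a large hash distance signals substantial editing. The sketch below uses the open-source Pillow and imagehash Python packages with hypothetical file names; it is a generic illustration, not the workflow of the detection tools named above, and subtle facial edits may still require manual review.

```python
# Coarse automated comparison of two photos via perceptual hashing.
# Assumes Pillow and imagehash are installed; file names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("cnn_original.jpg"))
viral = imagehash.phash(Image.open("viral_post.jpg"))

# Subtracting two hashes yields their Hamming distance over a 64-bit hash:
# 0 means effectively identical, larger values mean heavier alteration.
distance = original - viral
print(f"Perceptual hash distance: {distance}")
if distance > 10:  # heuristic threshold; tune for your use case
    print("Images differ substantially - flag for manual review.")
```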

Hence, given the lack of credible sources supporting the claim and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Related Blogs

Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, contributing to growing communal tensions and misinformation. However, a detailed fact-check has revealed that the footage actually comes from Indonesia. The spread of such misleading content can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incidents. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.

Conclusion: The viral claim that a mosque was set on fire in India is not true. The video is actually from Indonesia and has been intentionally misrepresented to spread false information. This case underscores the need to verify information before sharing it. Misinformation spreads quickly and can cause real harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In a world where Artificial Intelligence (AI) is already changing the creation and consumption of content at a breathtaking pace, distinguishing between genuine media and false or doctored content is a serious issue of international concern. AI-generated content in the form of deepfakes, synthetic text and photorealistic images is being used to disseminate misinformation, shape public opinion and commit fraud. As a response, governments, tech companies and regulatory bodies are exploring ‘watermarking’ as a key mechanism to promote transparency and accountability in AI-generated media. Watermarking embeds identifiable information into content to indicate its artificial origin.
Government Strategies Worldwide
Governments worldwide have pursued different strategies to address AI-generated media through watermarking standards. In the US, President Biden's 2023 Executive Order on AI directed the Department of Commerce and the National Institute of Standards and Technology (NIST) to establish clear guidelines for digital watermarking of AI-generated content. This places significant responsibility on large technology firms to embed identifiers in media produced by generative models, identifiers intended to help fight misinformation and bolster digital trust.
The European Union, in its Artificial Intelligence Act of 2024, requires AI-generated content to be labelled. Article 50 of the Act specifically demands that developers indicate whenever users engage with synthetic content. In addition, the EU is a proponent of the Coalition for Content Provenance and Authenticity (C2PA), an organisation that produces secure metadata standards to track the origin and changes of digital content.
India is currently in the process of developing policy frameworks to address AI and synthetic content, guided by judicial decisions that are helping shape the approach. In 2024, the Delhi High Court directed the central government to appoint members for a committee responsible for regulating deepfakes. Such moves indicate the government's willingness to regulate AI-generated content.
China has already implemented mandatory watermarking for all deep synthesis content. Service providers must embed digital identifiers in AI-generated media, making China one of the first countries to adopt strict watermarking legislation.
Understanding the Technical Feasibility
Watermarking AI media means inserting recognisable markers into digital material. These markers can be perceptible, such as logos or overlays, or imperceptible, such as cryptographic tags or metadata. Sophisticated methods such as Google's SynthID apply imperceptible pixel-level changes designed to survive standard image manipulations such as resizing or compression. Likewise, C2PA metadata standards enable users to trace the source and provenance of an item of content.
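To make the idea of an imperceptible marker concrete, here is a minimal sketch that hides a short ASCII tag in the least significant bits of an image's red channel. It is a toy illustration of the concept under stated assumptions (NumPy and Pillow installed, a hypothetical "AI-GEN" tag), not how SynthID or any production system works; those rely on far more robust, learned embeddings.

```python
# Toy imperceptible watermark: hide a short tag in the red channel's least
# significant bits. Illustration only; not a production watermarking scheme.
import numpy as np
from PIL import Image

TAG = "AI-GEN"  # hypothetical origin marker

def embed(path_in: str, path_out: str) -> None:
    """Write TAG into the least significant bits of the red channel."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    bits = [int(b) for byte in TAG.encode() for b in f"{byte:08b}"]
    red = pixels[..., 0].flatten()                        # copy of red channel
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits   # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels[..., 0].shape)
    Image.fromarray(pixels).save(path_out, format="PNG")  # lossless output

def extract(path: str) -> str:
    """Read TAG back out of a watermarked image."""
    red = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
    nbits = len(TAG) * 8
    bitstring = "".join(str(b & 1) for b in red[:nbits])
    return bytes(int(bitstring[i : i + 8], 2) for i in range(0, nbits, 8)).decode()
```

Tellingly, this toy scheme survives neither JPEG compression nor resizing, which is exactly the fragility discussed next and the reason robust, learned watermarks exist.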
Nonetheless, watermarking is not an infallible process. Most watermarking methods are susceptible to tampering: skilled adversaries, for instance, can use cropping, editing, or AI software to delete visible watermarks or strip metadata. Further, the absence of interoperability between different watermarking systems and platforms hampers their effectiveness. Scalability is also an issue: embedding and verifying watermarks for billions of items of online content requires enormous computational effort and consistent policy enforcement across platforms. Researchers are currently working on solutions such as blockchain-based content authentication and zero-knowledge watermarking, which maintain authenticity without sacrificing privacy. These new techniques hold promise for overcoming current technical deficiencies and making watermarking more secure.
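One way to see how provenance metadata can be made tamper-evident is a hash-and-sign scheme, loosely inspired by the C2PA idea but greatly simplified and not the actual C2PA format. The sketch below uses only the Python standard library with a hypothetical key and metadata; real provenance systems use asymmetric signatures and certificate chains rather than a shared HMAC key.

```python
# Tamper-evident provenance, greatly simplified: sign a digest of the content
# plus its metadata; verification fails if either is altered afterwards.
# Standard library only; key and metadata values are hypothetical.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # real systems use asymmetric key pairs

def sign(content: bytes, metadata: dict) -> str:
    payload = hashlib.sha256(content).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(content: bytes, metadata: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(content, metadata), signature)

media = b"...image bytes..."
meta = {"generator": "example-model", "created": "2025-05-17"}
sig = sign(media, meta)
print(verify(media, meta, sig))          # True: content and metadata intact
print(verify(media + b"!", meta, sig))   # False: content was tampered with
```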
Challenges in Enforcement
Though agreement on watermarking is growing, enforcing such policies remains a major challenge. Jurisdictional constraints prevent global enforceability: a watermarking policy within one nation might not extend to content created or stored in another, particularly across decentralised or anonymous domains. This creates an urgent need for international coordination and the development of worldwide digital trust standards. While it is a welcome step that platforms like Meta, YouTube, and TikTok have begun flagging AI-generated content, there remains a pressing need for a standardised policy that ensures consistency and accountability across all platforms. Voluntary compliance alone is insufficient without clear global mandates.
User literacy is also a significant hurdle. Even when content is properly watermarked, users might not notice or understand the label. This mirrors the broader challenge of misinformation: it is not sufficient merely to flag fake content; users need to be taught how to think critically about the information they consume. Public education campaigns, digital media literacy, and embedding watermark labels within user-friendly UI elements are necessary for this technology to be genuinely effective.
Balancing Privacy and Transparency
While watermarking serves to achieve digital transparency, it also presents privacy issues. In certain instances, watermarking might necessitate the embedding of metadata that will disclose the source or identity of the content producer. This threatens journalists, whistleblowers, activists, and artists utilising AI tools for creative or informative reasons. Governments have a responsibility to ensure that watermarking norms do not violate freedom of expression or facilitate surveillance. The solution is to achieve a balance by employing privacy-protection watermarking strategies that verify the origin of the content without revealing personally identifiable data. "Zero-knowledge proofs" in cryptography may assist in creating watermarking systems that guarantee authentication without undermining user anonymity.
On the transparency side, watermarking can be an effective antidote to misinformation and manipulation. For example, during the COVID-19 crisis, misinformation spread by AI on vaccines, treatments and public health interventions caused widespread impact on public behaviour and policy uptake. Watermarked content would have helped distinguish between authentic sources and manipulated media and protected public health efforts accordingly.
Best Practices and Emerging Solutions
Several programs and frameworks are at the forefront of watermarking norms. The collaborative C2PA framework from Adobe, Microsoft and others embeds tamper-evident metadata into images and videos, enabling traceability of content origin. Google's SynthID, already deployed on its Imagen text-to-image model, imperceptibly watermarks AI-generated images in a way designed to survive common edits, though no watermark is wholly tamper-proof. The Partnership on AI (PAI) is also taking a leadership role by developing ethical standards for synthetic content, including standards around provenance and watermarking. These frameworks serve as guides for governments seeking to introduce equitable, effective policies. In addition, India's emerging legal mechanisms on misinformation and deepfake regulation present a timely opportunity to integrate watermarking standards consistent with global practices while safeguarding civil liberties.
Conclusion
Watermarking regulations for synthetic media are an essential step toward a safer and more credible digital world. As synthetic media becomes increasingly indistinguishable from authentic content, the demand for transparency, provenance, and accountability grows. Governments, platforms, and civil society organisations will have to collaborate to deploy watermarking mechanisms that are technically feasible, enforceable and privacy-friendly. India in particular stands at a turning point, with courts calling for action and regulatory agencies starting to take on the challenge. By drawing on global lessons, adopting best-in-class watermarking frameworks and promoting public awareness, the nation can build resilience against digital deception.
References
- https://artificialintelligenceact.eu/
- https://www.cyberpeace.org/resources/blogs/delhi-high-court-directs-centre-to-nominate-members-for-deepfake-committee
- https://c2pa.org
- https://www.cyberpeace.org/resources/blogs/misinformations-impact-on-public-health-policy-decisions
- https://deepmind.google/technologies/synthid/
- https://www.imatag.com/blog/china-regulates-ai-generated-content-towards-a-new-global-standard-for-transparency
The Digital Covenant: Aligning Communication with SDG Goals
“Rethinking Communication, Cyber Responsibility, and Sustainability in a Connected World”
Introduction
It is rightly said by Antonio Guterres, United Nations Secretary-General: “Everyone should be able to express themselves freely without fear of attack. Everyone should be able to access a range of views and information sources.” In 2024, the Global Alliance for PR and Communication Management, recognising an era of digital transformation in which technology is advancing at breakneck speed and bringing new risks and threats, called on global leaders and stakeholders to proclaim ‘Responsible Communication’ as the 18th Sustainable Development Goal (SDG). On May 17th, as we celebrate World Telecommunication and Information Society Day (WTISD) 2025, we must align our personal, professional, and virtual spaces with a safe and sustainable information age.
In terms of digital growth, it is indubitable that India is growing at a brisk pace consistently in alignment with its South Asian and Western counterparts and has incorporated international covenants on digital personal data and cyber crimes within its domestic regime.
UN Global Principles for Information Integrity
The United Nations has displayed its constant commitment to the seventeen SDGs, whose development was set in motion at the 2012 United Nations Conference on Sustainable Development in Rio de Janeiro and which were formally adopted in 2015. It recognises that digital transformation, technology, and digitisation cannot be isolated from other areas covered by the SDGs, such as health, education, and poverty. In June 2023, the UN Secretary-General released Policy Brief 8, which seeks to derive empirical data on the threats posed to information integrity and then formulate norms to guide member states, digital platforms, and other stakeholders. These norms must conform with the right to freedom of opinion and expression and the right to information access.
In line with its agenda, it has formulated Global Principles of Information Integrity, which include “Societal Trust and Resilience”, “Healthy Incentives”, “Public Empowerment”, “Independent, Free and Pluralistic Media” and “Transparency and Research”. The principles recognise the harm caused by hatred, misinformation, and disinformation propagated by the misuse of advances in Artificial Intelligence Technology (AI).
Breaking the Binary: Bridging the Gender Digital Divide
How far we have come, and how far we still have to go, can be captured in a single phrase: using digital technologies to promote gender equality. This can be seen both as a paradox and as a pressing call to action. As we celebrate WTISD 2025, the day highlights the fundamental role of Information and Communication Technologies (ICTs) in accelerating progress and bringing those excluded from this digital transformation into the fold, especially the female population that remains isolated from mainstream growth. As per the data given by the ITU, “Out of the world population, 70 per cent of men are using the internet, compared with 65 per cent of women.”
This exclusion is not merely a technical gap but a societal and economic chasm, reinforcing existing inequalities. Placing such an important goal at the centre of this day's theme marks a critical moment for forming gender-sensitive digital policies, promoting digital literacy among women and girls, and ensuring safe, affordable, and meaningful connectivity. We can then explore a future in which technology is a true instrument for gender parity, not a mirror of old hierarchies.
India and its courts have time and again proven their commitment to cultivating digital transformation as an inherent strength for bridging this digital divide; the recent judgement declaring the right to digital access an intrinsic part of the right to life and liberty is just one instance among many.
CyberPeace Resolution on World Telecommunication and Information Society Day
CyberPeace is actively bridging the gap between digital safety and sustainable development through its initiatives, aligning with the principles of the Sustainable Development Goals (SDGs). The ‘CyberPeace Corps’ empowers communities by fostering cyber hygiene awareness and building digital resilience. The ‘CyberPeace Initiative’, a project with Google.org, tackles digital misinformation, promoting informed online engagement. Additionally, Digital Shakti, now in its fifth phase, empowers women by enhancing their digital literacy and safety. These are just a few of CyberPeace's many impactful initiatives aimed at creating a safer and more inclusive digital future. Together, we are spreading awareness, strengthening digital resilience, and promoting responsible tech use. Let us be resolute on this World Telecommunication and Information Society Day: “Clean Data. Safe Clicks. Stronger Future. Pledge to Cyber Hygiene Today!”
References