#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The images circulating on social media were produced using AI manipulation tools; the original image, published on several credible websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, PA, leaving one attendee dead and two critically injured. The Secret Service stopped the shooter, and the circulating photos with faked smiles have stirred up suspicion. The CyberPeace Research Team has debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

This image was published by CNN. In it, we can see the US Secret Service agents protecting Donald Trump, and none of them are smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.


We then checked with another AI image detection tool, Content at Scale AI Image Detection, which also flagged the image as AI-manipulated.

Comparison of both photos:
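As an illustration of how such a comparison can work at the pixel level, here is a minimal, self-contained Python sketch of a generic average-hash comparison. This is not the method used by the detection tools named above, and the tiny 4x4 "thumbnails" are purely hypothetical stand-ins for downscaled grayscale versions of the two photos.

```python
def average_hash(pixels):
    """Compute a simple average hash: each pixel becomes 1 if it is
    brighter than the image mean, else 0. `pixels` is a 2D list."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits between two hashes; a large distance
    suggests the corresponding image regions differ."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale thumbnails: rows 3-4 of the "altered"
# image have been tampered with relative to the "original".
original = [[10, 200, 30, 220], [15, 210, 25, 215],
            [12, 205, 28, 218], [11, 202, 26, 221]]
altered  = [[10, 200, 30, 220], [15, 210, 25, 215],
            [200, 20, 210, 15], [205, 18, 212, 12]]

d = hamming_distance(average_hash(original), average_hash(altered))
```

A Hamming distance of zero means the thumbnails match; a large distance indicates the regions differ, which is how altered areas of a photo stand out against the original.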

Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
WhatsApp is one of the leading OTT messaging platforms and has been owned by the tech giant Meta since 2014. WhatsApp enjoys a user base of nearly 2.24 billion people globally, with almost 487 million users in India. Since its advent, WhatsApp has been the most commonly used messaging app, making such an impact that it is used for professional as well as personal purposes. The platform follows guidelines and policies similar to those of its parent company, Meta.
The New Feature
Users of WhatsApp on the web and desktop have long been able to access one account from various devices; with a new update from Meta, one WhatsApp account may now be used on up to four additional devices. The multi-device capability has been planned for some time and is finally being made available to stable WhatsApp users. Each linked device (up to four can be linked) functions independently, and linked devices continue to receive messages even if the primary device loses its network connection. Remember, however, that WhatsApp automatically logs out of all companion devices if the primary smartphone is dormant for an extended period. The four additional devices can be any mix of smartphones and computers. This feature is now available for update and download on both Android and iOS.
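The linked-device behaviour described above can be sketched as a simple model. This is an illustrative approximation, not WhatsApp's implementation: the four-device limit comes from the update itself, while the 14-day dormancy window used here is an assumption based on WhatsApp's public statements.

```python
import time

MAX_LINKED = 4                       # companion-device limit stated in the update
INACTIVITY_LIMIT = 14 * 24 * 3600    # assumed dormancy window in seconds (illustrative)

class LinkedAccount:
    """Hypothetical model of one account with companion devices."""

    def __init__(self):
        self.companions = []                  # currently linked companion devices
        self.last_primary_seen = time.time()  # last activity on the primary phone

    def link(self, device):
        """Link a companion device, enforcing the four-device limit."""
        if len(self.companions) >= MAX_LINKED:
            raise ValueError("companion-device limit reached")
        self.companions.append(device)

    def primary_heartbeat(self):
        """Record activity on the primary smartphone."""
        self.last_primary_seen = time.time()

    def check_dormancy(self, now=None):
        """Log out all companions if the primary phone has been dormant too long."""
        now = time.time() if now is None else now
        if now - self.last_primary_seen > INACTIVITY_LIMIT:
            self.companions.clear()
```

Note that companions keep working between dormancy checks even if the primary device is offline, which mirrors the independent operation described above.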
Potential issues
As we go deeper into the digital age, it is the responsibility of the tech giants to pilot innovation with security by design. New features should therefore be accompanied by coherent safety and security policies or advisories to ensure that users understand their implications. Choosing convenience over conditions is an entrenched habit in cyberspace: it points to the civic duty of netizens to go through the conditions of any app rather than focusing only on the convenience it creates. The following potential issues may arise from the new WhatsApp feature –
- Increased cybercrime- Bad actors no longer need access to multiple SIM cards to commit fraud over the platform, as four devices can now be used on a single number; cybercriminal activity on the platform may therefore increase. It is also pertinent for the platform to create SOPs for fake accounts that use multiple devices, as they pose a direct threat to users and their interests.
- Difficulty in identifying and tracing- Law enforcement agencies (LEAs) will face a significant issue in identifying and tracing bad actors, as an individual's involvement through a linked device needs to be given legal validity and scope for investigation. This may also cause issues in evidence handling and analysis.
- Surge in Misinformation and Disinformation- With access to multiple devices, an individual's screen time is bound to increase. More time spent online means more instances of misinformation and disinformation by bad actors, making fact-checking of prime importance.
- Potential Oversharing of Personal Data- With the increased accessibility on different devices, it is very easy for the app to seek data from all devices on which the app is running, thus leading to a bigger reservoir of personal data for the platforms and data fiduciaries.
- Higher risk of Phishing, Ransomware and Malware Attacks- As the number of devices under the same login credentials and mobile number increases, messages can be viewed on all of them, increasing the risk that embedded ransomware or malware spreads across multiple devices.
- One number, more criminals- This feature will allow cybercriminals to operate using a single SIM. Earlier they had to forge Aadhaar cards to obtain new SIMs, but this feature enables bad actors to commit crimes and attacks from one SIM across four different devices.
- Rise in Digital Footprint- As the number of devices increases, the users will generate more digital footprints. As a tech giant, Meta will have access to a bigger database, which increases the risk of data breaches by third-party actors.
Conclusion
In the fast-paced digital world, it is important to stay updated about new software, technologies and policies for our applications and other forms of tech. This was a long-awaited feature from WhatsApp, and its value lies not only in technological advancement but also in the formulation of policies to govern this technology towards the trust and safety of users. The platforms, in synergy with policymakers, need to create a robust framework to accommodate new features and add-ons on apps while staying in compliance with the laws of the land. Awareness about new features and vulnerabilities is a must for all netizens, and it is a shared responsibility to spread the word about safety and security mechanisms.
Introduction
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The mission aims to strengthen the Indian AI innovation ecosystem by democratizing computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects, and bolstering ethical AI. The mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components, such as IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI, over the next 5 years.
This financial outlay is intended to be fulfilled through a public-private partnership model to ensure a structured implementation of the IndiaAI Mission. The main objective is to create and nurture an ecosystem for India's AI innovation, with the mission acting as a catalyst for shaping the future of AI for India and the world. AI has the potential to become an active enabler of the digital economy, and the Indian government aims to harness its full potential to benefit its citizens and drive the growth of the economy.
Key Objectives of India's AI Mission
● With the advancements in data collection, processing and computational power, intelligent systems can be deployed in varied tasks and decision-making to enable better connectivity and enhance productivity.
● India’s AI Mission will concentrate on benefiting India and addressing societal needs in primary areas of healthcare, education, agriculture, smart cities and infrastructure, including smart mobility and transportation.
● This mission will work with extensive academia-industry interactions to ensure the development of core research capability at the national level. This initiative will involve international collaborations and efforts to advance technological frontiers by generating new knowledge and developing and implementing innovative applications.
The strategies developed for implementing the IndiaAI Mission are via Public-Private Partnerships, Skilling initiatives and AI Policy and Regulation. An example of the work towards the public-private partnership is the pre-bid meeting that the IT Ministry hosted on 29th August 2024, which saw industrial participation from Nvidia, Intel, AMD, Qualcomm, Microsoft Azure, AWS, Google Cloud and Palo Alto Networks.
Components of IndiaAI Mission
The IndiaAI Compute Capacity: The IndiaAI Compute pillar will build a high-end scalable AI computing ecosystem to cater to India's rapidly expanding AI start-ups and research ecosystem. The ecosystem will comprise AI compute infrastructure of 10,000 or more GPUs, built through public-private partnerships. An AI marketplace will offer AI as a service and pre-trained models to AI innovators.
The IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors. The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI innovation.
The IndiaAI Future Skills pillar will mitigate barriers to entry into AI programs and increase AI courses in undergraduate, master-level, and Ph.D. programs. Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.
The IndiaAI Startup Financing pillar will support and accelerate deep-tech AI startups, providing streamlined access to funding for futuristic AI projects.
The Safe & Trusted AI pillar recognises the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI. It will enable the implementation of responsible AI projects and the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks.
CyberPeace Considerations for the IndiaAI Mission
● Data privacy and security are paramount as emerging privacy instruments aim to ensure ethical AI use. Addressing bias and fairness in AI remains a significant challenge, especially with poor-quality or tampered datasets that can lead to flawed decision-making, posing risks to fairness, privacy, and security.
● Geopolitical tensions and export control regulations restrict access to cutting-edge AI technologies and critical hardware, delaying progress and impacting data security. In India, where multilingualism and regional diversity are key characteristics, the unavailability of large, clean, and labeled datasets in Indic languages hampers the development of fair and robust AI models suited to the local context.
● Infrastructure and accessibility pose additional hurdles in India’s AI development. The country faces challenges in building computing capacity, with delays in procuring essential hardware, such as GPUs like Nvidia’s A100 chip, hindering businesses, particularly smaller firms. AI development relies heavily on robust cloud computing infrastructure, which remains in its infancy in India. While initiatives like AIRAWAT signal progress, significant gaps persist in scaling AI infrastructure. Furthermore, the scarcity of skilled AI professionals is a pressing concern, alongside the high costs of implementing AI in industries like manufacturing. Finally, the growing computational demands of AI lead to increased energy consumption and environmental impact, raising concerns about balancing AI growth with sustainable practices.
Conclusion
We advocate for ethical and responsible AI development and adoption to ensure ethical usage, safeguard privacy, and promote transparency. By setting clear guidelines and standards, the nation would be able to harness AI's potential while mitigating risks and fostering trust. The IndiaAI Mission will propel innovation, build domestic capacities, create highly skilled employment opportunities, and demonstrate how transformative technology can be used for social good and to enhance global competitiveness.
References
● https://pib.gov.in/PressReleasePage.aspx?PRID=2012375
Smart wearable devices are designed to track several activities within defined parameters and are increasingly becoming a part of everyday life. According to a Markets and Markets report, the global wearable tech market is projected to reach a staggering USD 256.4 billion by 2026. One of the main areas of use of wearable devices is health, including biomedical research, health care, personal health practices and tracking, technology development, and engineering. These wearable devices often include digital health technologies such as consumer smartwatches that monitor an individual's heart rate and step count, and other body-worn sensors like those that continuously monitor blood glucose concentration.
Wearable devices are getting increasingly popular among the general population. Health devices like fitness trackers and smartwatches enable continuous monitoring of personal health. Privacy is an emerging concern due to the real-time collection of sensitive data, with unauthorised access and discrimination following non-consensual disclosure being the primary risks. While these concerns are genuine, a great deal of related misinformation is also emerging around them.
Wearable devices typically come with terms of use outlining how data is collected and used, and regulations such as the EU's GDPR govern compliance in the handling of personal data. However, actual implementation and compliance by manufacturers is another matter, and it raises its own questions about privacy protection. In addition, beyond the challenge of regulatory compliance, the rise of myths and misinformation surrounding wearable tech presents a separate issue.
Common Misconceptions About Privacy with Wearable Tech
- With the rapid development and growth of wearable technology, its use has been subject to countless rumours which fuel misinformation narratives in the minds of the general public. Addressing these misconceptions and privacy concerns requires targeted strategies.
- A prevalent misconception is that wearable devices are constantly spying on users. While wearables collect users' data in real time, their vulnerability to unauthorised access is similar to that of non-wearable devices. The real issue is consent: because wearables can record, data captured without the recorded person's permission may become accessible to external entities.
- There is a common myth that wearable tech is a surveillance tool. This is conjecture. These devices collect user data with prior consent and were created to provide users with real-time information, most commonly physical health information. Since users choose the information shared, the idea of wearable tech serving as a surveillance tool is unfounded.
- Another misconception about wearable tech is that it can diagnose medical conditions. While these devices collect real-time health data, such as heart rate or activity levels, they are not designed for medical diagnosis. The data collected may not be accurate or reliable enough for clinical use and should be interpreted by a healthcare professional. This is mainly because the makers of these devices are not held to the safety and liability standards that medical providers are.
- A prevalent misconception is that wearable tech can cure health issues, which is simply untrue. Wearable devices essentially track the health parameters a user sets; they are in no way a cure for any health issue. A user can manage their health based on parameters set on the device, such as step count or heart rate, but these metrics do not treat disease. Wearable tech acts as an alert system, notifying users of important health metrics and encouraging proactive health management.
Addressing Privacy and Health Concerns in Wearable Tech
Wearable technology raises privacy and health concerns due to the colossal amount of personal data collected. To address these, strong data protection measures are essential, ensuring that sensitive health information is securely stored and shared only with consent. Giving users control over their data, including the ability to opt in and to access or delete the data in question, is one of the ways to build user trust. Regulators should establish clear guidelines, ensuring that wearables comply with data protection regulations such as HIPAA, GDPR or the DPDP Act, whichever applies in the relevant jurisdiction. Furthermore, global standards for data encryption, device security, and user privacy should be implemented to mitigate risks. Transparency in data usage and consistent software security updates are also crucial for protecting users' privacy and health while promoting the responsible use of wearable tech.
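The opt-in, access, and delete controls described above can be sketched as a minimal consent-gated data store. All class and method names here are hypothetical; this is a design sketch under the stated assumptions, not any vendor's API.

```python
class HealthDataStore:
    """Hypothetical consent-gated storage for wearable health metrics."""

    def __init__(self):
        self._consented = set()   # users who have opted in
        self._records = {}        # user -> list of readings

    def opt_in(self, user):
        """Record the user's explicit consent to data collection."""
        self._consented.add(user)

    def record(self, user, reading):
        """Store a reading, but only for users who have opted in."""
        if user not in self._consented:
            raise PermissionError("user has not opted in")
        self._records.setdefault(user, []).append(reading)

    def export(self, user):
        """Give the user access to a copy of their own data."""
        return list(self._records.get(user, []))

    def delete(self, user):
        """Honour a deletion request: remove data and withdraw consent."""
        self._records.pop(user, None)
        self._consented.discard(user)
```

Refusing writes before consent, rather than filtering data afterwards, mirrors the opt-in-by-default posture the paragraph above argues for.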
CyberPeace Insights
- Making informed decisions about wearable tech starts with thorough research. Start by reading reviews and comparing products to assess their features, compatibility, and security standards.
- Investigate the manufacturer's reputation for data protection and device longevity. Understanding device capabilities is crucial: evaluate whether the wearable meets your needs, such as fitness tracking, health monitoring, or communication features. Consider software security, update frequency, and data accuracy when comparing options, and opt for devices that offer two-factor authentication for an additional layer of security.
- Check the permissions requested by the accompanying app; only grant access to data that is necessary for the device's functionality. Always read the terms of use to understand your rights and responsibilities regarding the use of the device. Review and customize data-sharing settings for better control to prevent unauthorised access.
- Staying updated on the tech is equally important. Users should follow advancements in wearable technology, whether regular security updates or regulatory changes that may affect privacy and usability. This ensures the tech aligns with their lifestyle while meeting privacy and security expectations.
Conclusion
Privacy and misinformation are key concerns arising from the use of wearable tech, which is designed to offer benefits such as health monitoring, fitness tracking, and personal convenience. Overcoming the issues that stem from misinformation about these devices requires a combination of informed decision-making by users and stringent regulatory oversight. Users must understand the capabilities and limitations of their devices, from data accuracy to privacy risks. Additionally, manufacturers and regulators need to prioritise transparency, data protection, and compliance with global standards like GDPR or the DPDP Act to build trust. As wearable tech continues to evolve, a balanced approach to innovation and privacy will be essential in fostering its responsible and beneficial use for all.
References
- https://thehealthcaretechnologyreport.com/privacy-data-security-concerns-rise-as-healthcare-wearables-gain-popularity/
- https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000104
- https://www.marketsandmarkets.com/Market-Reports/wearable-electronics-market-983.html
- https://www.cambridge.org/core/journals/legal-information-management/article/health-data-on-the-go-navigating-privacy-concerns-with-wearable-technologies/05DAF11EFA807051362BB39260C4814C