#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures appearing to show US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been identified as digitally manipulated. The images circulating on social media were produced with AI manipulation tools; the original photograph, found on several credible websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump campaign event in Butler, PA, killing one attendee and critically injuring two others before being stopped by the Secret Service. The circulating photos with fabricated smiles have stirred up suspicion. The CyberPeace Research Team verified and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

The original image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, contentatscale AI image detection, which also found the image to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources supporting the claim and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
With the rapid development of technology, voice cloning scams are one such issue that has recently come to light. Scammers are adopting AI, and their methods and plans for deceiving and scamming people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to conduct fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning fraud, how scammers operate, and how one might protect oneself.
What is Deepfake?
“Deepfake” refers to artificial intelligence (AI) techniques that can produce fake or altered audio, video, and film that pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something that never happened in audio or visual form; widely circulated voice impersonations of the American President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice, conveying a false sense of familiarity and urgency to earn the victim’s trust and raise the likelihood that the victim falls for the hoax. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities in order to mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology is also employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations. This is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and organisations and the general public alike must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics of deepfake voice con artists is the unsolicited phone call, in which they pretend to be bank personnel or government officials.
- Listen closely to the voice
- If a caller claims to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflections in their speech? Anything that doesn’t seem right could be a deepfake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to confirm they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign that they are a scammer.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deep voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from being a victim.
Conclusion
In conclusion, the field of AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as deepfake voice fraud and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions as the technology develops and becomes harder to detect. Ongoing research is also needed to develop efficient techniques to identify and control the risks related to this technology. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.
The race for global leadership in AI is in full force. As China and the US emerge as the world’s ‘AI Superpowers,’ the world grapples with questions of AI governance, ethics, regulation, and safety. Some are calling this an ‘AI Arms Race.’ Most of these AI systems are deployed as large language models for commercial use or in military applications. Countries like Germany, Japan, France, Singapore, and India are now participating in this race and are no mere spectators.
The Government of India’s Ministry of Electronics and Information Technology (MeitY) has launched the IndiaAI Mission, an umbrella program for the use and development of AI technology. This MeitY initiative lays the groundwork for supporting an array of AI goals for the country. The government has allocated INR 10,300 crore for this endeavour. This mission includes pivotal initiatives like the IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
There are several challenges and opportunities that India will have to navigate and capitalize on to become a significant player in the global AI race. The various components of India’s ‘AI Stack’ will have to work well in tandem to create a robust ecosystem that yields globally competitive results. The IndiaAI mission focuses on building large language models in vernacular languages and developing compute infrastructure. There must be more focus on developing good datasets and research as well.
Resource Allocation and Infrastructure Development
The government is focusing on building the elementary foundation for AI competitiveness. This includes the procurement of AI chips and compute capacity, about 10,000 graphics processing units (GPUs), to support India’s start-ups, researchers, and academics. These GPUs have been strategically distributed, with 70% being high-end newer models and the remaining 30% comprising lower-end older-generation models. This approach ensures that a robust ecosystem is built, which includes everything from cutting-edge research to more routine applications. A major player in this initiative is Yotta Data Services, which holds the largest share of 9,216 GPUs, including 8,192 Nvidia H100s. Other significant contributors include Amazon AWS's managed service providers, Jio Platforms, and CtrlS Datacenters.
Policy Implications: Charting a Course for Tech Sovereignty and Self-reliance
With this government initiative, there is a concerted effort to develop indigenous AI models and reduce tech dependence on foreign players. There is a push to develop local Large Language Models and domain-specific foundational models, creating AI solutions that are truly Indian in nature and application. Much of the world’s advanced chip manufacturing takes place in Taiwan, which faces a looming threat from China. India’s focus on chip procurement and GPUs thus speaks to a larger agenda of self-reliance and sovereignty, keeping the geopolitical calculus in mind. This focus is important; however, it must not come at the cost of developing technological know-how and research.
Developing AI capabilities at home also has national security implications. When it comes to defence systems, control over AI infrastructure and data becomes extremely important. The IndiaAI Mission will focus on safe and trusted AI, including developing frameworks that fit the Indian context. It has to be ensured that AI applications align with India's security interests and can be confidently deployed in sensitive defence applications.
The big problem to solve here is the ‘data problem.’ There must be a focus on developing strategies to mitigate the data disadvantages facing the Indian AI ecosystem. Some data problems are unique to India, such as generating data in local languages, while others appear in every AI ecosystem’s development lifecycle, namely sourcing publicly available data and licensed data. India must strengthen its ‘Digital Public Infrastructure’ and data commons across sectors and domains.
India has proposed setting up the India Data Management Office to serve as India’s data regulator as part of its draft National Data Governance Framework Policy. The MeitY IndiaAI expert working group report also talked about operationalizing the India Datasets Platform and suggested the establishment of data management units within each ministry.
Economic Impact: Growth and Innovation
The government’s focus on technology and industry has far-reaching economic implications. There is a push to develop the AI startup ecosystem in the country. The IndiaAI mission heavily focuses on inviting ideas and projects under its ambit. The investments will strengthen the IndiaAI startup financing system, making it easier for nascent AI businesses to obtain capital and accelerate their development from product to market. Funding provisions for industry-led AI initiatives that promote social impact and stimulate innovation and entrepreneurship are also included in the plan. The government press release states, "The overarching aim of this financial outlay is to ensure a structured implementation of the IndiaAI Mission through a public-private partnership model aimed at nurturing India’s AI innovation ecosystem.”
The government also wants to establish India as a hub for sustainable AI innovation and attract top AI talent from across the globe. One crucial aspect to work on here is fostering talent and skill development. India has a unique advantage: top-tier talent in STEM fields. Yet we suffer from a severe talent gap that needs to be addressed on a priority basis. Even though India is making strides in nurturing AI talent, out-migration of tech talent is still a reality. As the global AI economy transitions from the hardware-manufacturing “goods” side to service delivery, India will need to be ready to deploy its talent. Several structural and policy interfaces, like the New Education Policy and industry-academia partnership frameworks, allow India to capitalize on this opportunity.
India’s talent strategy must be robust and long-term, focusing heavily on multi-stakeholder engagement. The government has a pivotal role here by creating industry-academia interfaces and enabling tech hubs and innovation parks.
India's Position in the Global AI Race
India’s foreign policy and geopolitical standpoint have long favoured global cooperation, and this must not change when it comes to AI. Even though this has been dubbed an “AI Arms Race,” India should encourage worldwide collaboration on AI R&D with other countries to strengthen its own capabilities. India must prioritise open-source AI development and work with the US, Europe, Australia, Japan, and other friendly countries to prevent the unethical use of AI and to contribute to the formation of a global consensus on the boundaries for AI development.
The IndiaAI Mission will have far-reaching implications for India’s diplomatic and economic relations. The unique proposition that India comes with is its ethos of inclusivity, ethics, regulation, and safety from the get-go. We should keep up the efforts to create a powerful voice for the Global South in AI. The IndiaAI Mission marks a pivotal moment in India's technological journey. Its success could not only elevate India's status as a tech leader but also serve as a model for other nations looking to harness the power of AI for national development and global competitiveness. In conclusion, the IndiaAI Mission seeks to strengthen India's position as a global leader in AI, promote technological independence, guarantee the ethical and responsible application of AI, and democratise the advantages of AI at all societal levels.
References
- Ashwini Vaishnaw to launch IndiaAI portal, 10 firms to provide 14,000 GPUs. (2025, February 17). Business Standard. Retrieved February 25, 2025, from https://www.business-standard.com/industry/news/indiaai-compute-portal-ashwini-vaishnaw-gpu-artificial-intelligence-jio-125021700245_1.html
- Global IndiaAI Summit 2024 being organized with a commitment to advance responsible development, deployment and adoption of AI in the country. (n.d.). https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2029841
- India to Launch AI Compute Portal, 10 Firms to Supply 14,000 GPUs. (2025, February 17). apacnewsnetwork.com. https://apacnewsnetwork.com/2025/02/india-to-launch-ai-compute-portal-10-firms-to-supply-14000-gpus/
- INDIAai | Pillars. (n.d.). IndiaAI. https://indiaai.gov.in/
- IndiaAI Innovation Challenge 2024 | Software Technology Park of India | Ministry of Electronics & Information Technology Government of India. (n.d.). http://stpi.in/en/events/indiaai-innovation-challenge-2024
- IndiaAI Mission To Deploy 14,000 GPUs For Compute Capacity, Starts Subsidy Plan. (2025, February 17). BusinessWorld. Retrieved February 25, 2025, from https://www.businessworld.in/article/indiaai-mission-to-deploy-14000-gpus-for-compute-capacity-starts-subsidy-plan-548253
- India’s interesting AI initiatives in 2024: AI landscape in India. (n.d.). IndiaAI. https://indiaai.gov.in/article/india-s-interesting-ai-initiatives-in-2024-ai-landscape-in-india
- Mehra, P. (2025, February 17). Yotta joins India AI Mission to provide advanced GPU, AI cloud services. Techcircle. https://www.techcircle.in/2025/02/17/yotta-joins-india-ai-mission-to-provide-advanced-gpu-ai-cloud-services/
- IndiaAI 2023: Expert Group Report – First Edition. (n.d.). IndiaAI. https://indiaai.gov.in/news/indiaai-2023-expert-group-report-first-edition
- Satish, R., Mahindru, T., World Economic Forum, Microsoft, Butterfield, K. F., Sarkar, A., Roy, A., Kumar, R., Sethi, A., Ravindran, B., Marchant, G., Google, Havens, J., Srichandra (IEEE), Vatsa, M., Goenka, S., Anandan, P., Panicker, R., Srivatsa, R., . . . Kumar, R. (2021). Approach Document for India. In World Economic Forum Centre for the Fourth Industrial Revolution, Approach Document for India [Report]. https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
- Stratton, J. (2023, August 10). Those who solve the data dilemma will win the A.I. revolution. Fortune. https://fortune.com/2023/08/10/workday-data-ai-revolution/
- Suri, A. (n.d.). The missing pieces in India’s AI puzzle: talent, data, and R&D. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2025/02/the-missing-pieces-in-indias-ai-puzzle-talent-data-and-randd?lang=en
- The AI arms race. (2024, February 13). Financial Times. https://www.ft.com/content/21eb5996-89a3-11e8-bf9e-8771d5404543

Introduction:
The Indian Ministry of Communications has introduced a feature known as "Quick SMS Header Information" to give citizens more control over their messaging services. The feature helps users access crucial information about the sender of a text message, making those details readily available at their fingertips.
The Quick SMS Header service lets users confirm that they are receiving messages from the correct source: users can instantly learn the necessary data about the sender of a given SMS. This data is invaluable for distinguishing real messages from suspicious spam or phishing, giving the user a higher level of defence against online threats and scam activities.
Importance of Checking the Header:
1. Authenticity Verification: SMS header data offers a direct way to confirm the sender. The feature keeps the end user from wrongly assuming that an SMS from an unknown sender comes from a trusted source, so the end user can make an informed choice about the authenticity of the message.
2. Mitigating Spam and Phishing: The rise of SMS spam and phishing scams has created significant hurdles for users trying to differentiate between real and fake messages. Through the Quick SMS Header Information service, people can look up any suspicious message and take appropriate steps to avoid links that lead to malicious websites or requests for personal information.
3. Enhancing User Security: The SMS header information plays an important role in ensuring that the user is secure and has no privacy issues. The checking of the message headers will help us limit the possibilities of bad activities and reduce the chances of being a victim of cybercriminals.
4. Empowering Consumer Awareness: This feature is designed to encourage the people involved to take responsibility for the security of their devices and establish a safer and more dependable digital platform.
Benefits:
- Enhanced Transparency: Giving users access to header information promotes transparency within the telecommunications ecosystem.
- Empowered Decision-Making: Now that users have information about the SMS header, they can make informed decisions regarding their communications and privacy.
- Efficient Resolution of Concerns: When users come across a suspicious message, the Quick SMS Header Information service helps resolve the concern by revealing the message’s origin.
- User-Friendly Interface: With its easy and clear process, this feature caters to users of all technical proficiency levels, ensuring accessibility for all.
Working:
1. Compose Your SMS: Write a message naming the header you want information about. For example, to learn the details of a header labeled "SBIINB", your SMS should read "DETAILS OF SBIINB". Note that all letters must be in capitals.

2. Send it to 1909: Once your message is ready, send it to 1909. Note that you may be charged for the SMS depending on your current plan.

3. Receive Response: A few seconds after you send your message, a response will be sent by the concerned telecom service provider or directly by 1909. This response will contain the data associated with the header in question.
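The steps above can be sketched programmatically. The helper below is a minimal illustration, not an official API: it builds the query text in the required all-caps "DETAILS OF <HEADER>" format before the SMS is sent to 1909. The length check is an assumption for basic sanity, not a rule published by the service.

```python
import re

def build_header_query(header: str) -> str:
    """Build the SMS body for the 1909 header-lookup service.

    The service expects the all-caps format "DETAILS OF <HEADER>",
    so the input is stripped and upper-cased first.
    """
    header = header.strip().upper()
    # Loose sanity check (assumed): headers are short alphanumeric codes.
    if not re.fullmatch(r"[A-Z0-9]{3,9}", header):
        raise ValueError(f"unexpected header format: {header!r}")
    return f"DETAILS OF {header}"

print(build_header_query("sbiinb"))  # DETAILS OF SBIINB
```

Normalising to upper case matters because, as noted in step 1, the service expects all letters in capitals.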

Another method to find SMS header information:
TRAI (the Telecom Regulatory Authority of India) provides a tool at https://smsheader.trai.gov.in/ to look up the SMS header associated with a message.
TRAI has also mandated header registration for messages sent for transactional or promotional purposes. This helps people identify an SMS header simply by looking it up in the database TRAI maintains.
Steps:
1. Go to https://smsheader.trai.gov.in/.

2. Enter your email and name, complete the captcha under "Download/View Header Details", and click Continue.

3. Enter the OTP received on your email along with the captcha and click Continue.
4. Now enter your SMS header in the format of AA-AAAA, where “AA” is your prefix and “AAAA” is your header name. For example, we have taken “AX-HDFCBK” as our sample header, so “AX” is our prefix and “HDFCBK” is our header name.

5. As soon as we press enter, the site returns the query with the information of the header.
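Before querying the TRAI portal, it can help to split a header into its prefix and name, per the AA-AAAA format described in step 4. The sketch below encodes only what that step shows (a two-letter prefix, a hyphen, then the header name); the exact validation rules are assumptions, not TRAI's official specification.

```python
import re

# Header format as shown in step 4: a 2-letter prefix, a hyphen,
# and the registered header name (e.g. "AX-HDFCBK").
HEADER_RE = re.compile(r"([A-Z]{2})-([A-Z0-9]+)")

def split_header(header: str) -> tuple[str, str]:
    """Return (prefix, name) for a header like 'AX-HDFCBK'."""
    match = HEADER_RE.fullmatch(header.strip().upper())
    if not match:
        raise ValueError(f"not in AA-AAAA format: {header!r}")
    return match.group(1), match.group(2)

print(split_header("AX-HDFCBK"))  # ('AX', 'HDFCBK')
```

For the sample header "AX-HDFCBK" this yields the prefix "AX" and the header name "HDFCBK", matching the example in step 4.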

Conclusion:
The importance of checking SMS headers cannot be overemphasized. It is the principal procedure for verifying that incoming messages are authentic, and on that basis users can make informed choices about the messages they receive. It also contributes to user safety and privacy.
The development of more transparent controls and a stronger decision-making process will make it easier for users to handle their digital lives. The Quick SMS Header Information service is easy and convenient to use, as its interface is simple and understandable for users of all technical levels.
In addition, TRAI's online tool and its maintenance of a comprehensive database of SMS headers strengthen its commitment to ensuring user security in the telecommunications sector.