#FactCheck - Old Video of US Soldiers’ Coffins Mislinked to Iran War
Executive Summary
A video is being widely shared on social media showing soldiers carrying coffins with full military honours. Users are claiming that the footage shows the bodies of American soldiers who were killed in the war with Iran being brought back to the United States.
However, CyberPeace research found the viral claim to be misleading. Our research revealed that the video has no connection to the recent Iran-Israel conflict. The footage actually dates back to December 2025, when an Islamic State gunman in Syria killed two US soldiers and a US civilian.
Following that incident, the bodies of the victims were transported with military honours, and the ceremony was recorded in the viral video. The clip is now being circulated online with a false claim.
Claim
On March 1, 2026, an Instagram user shared the viral video claiming that it shows the bodies of American soldiers returning to the US after being killed in the war against Iran. The caption of the post reads: “Bodies of American soldiers martyred against Iran are returning to the United States. War always brings destruction, which we are now witnessing.”
The link to the post and its archived version can be seen below.

Fact check
To verify the claim, we extracted key frames from the viral video and performed a reverse image search using Google Lens. The search led us to the full version of the video in a report published by the BBC on December 18, 2025, confirming that the footage predates the Iran-Israel conflict.
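The frame-extraction step above can be sketched in a few lines. This is a generic illustration, not the team's actual tooling: the helper below only computes evenly spaced frame indices from a clip's frame count and fps (both assumed known, e.g. from OpenCV's `cv2.VideoCapture` properties); a library such as OpenCV would then seek to each index and save the frame for the reverse image search.

```python
def keyframe_indices(total_frames, fps, every_seconds=2.0):
    """Return frame indices sampled every `every_seconds` of video.

    With OpenCV one would then do, per index i:
        cap.set(cv2.CAP_PROP_POS_FRAMES, i); ok, frame = cap.read()
    and save `frame` as an image to upload to Google Lens.
    """
    step = max(1, int(fps * every_seconds))  # frames between samples
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps sampled every 2 seconds yields 5 key frames.
indices = keyframe_indices(300, 30, 2.0)
```

Sampling every couple of seconds is usually enough for reverse image search, since adjacent frames are near-duplicates.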

According to the BBC report, US President Donald Trump attended a dignified transfer ceremony for two members of the US National Guard and a US civilian who were killed in Syria. The somber ceremony took place at Dover Air Force Base in Delaware, United States. The US Central Command (Centcom) stated that the two soldiers and a civilian interpreter were killed in an ambush carried out by an Islamic State (IS) gunman in Syria. The US Army identified the two soldiers as Sgt. Edgar Brian Torres-Tovar (25) of Des Moines and Sgt. William Nathaniel Howard (29) of Marshalltown. A US civilian interpreter, Ayad Mansoor Sakat, was also killed in the attack. Officials said that three other service members were injured during the attack, and the gunman was engaged and killed. Syria’s state media also reported that two Syrian security personnel were injured in the incident.
Further research led us to a report published on the News Nation YouTube channel on December 18, 2025, which also featured the same footage related to the incident.

Conclusion
Our research found that the viral video is not related to the recent Iran-Israel conflict. The footage dates back to December 2025, when two US soldiers and a US civilian were killed in an Islamic State attack in Syria. The video shows the dignified transfer of their remains and is now being shared on social media with a misleading claim.

Recent Incidents:
Recent reports reveal a significant security threat: a new infostealer malware campaign targeting gaming accounts. The attack has affected users of Activision and other gaming platforms, capturing millions of login credentials, notably from players who downloaded third-party cheat software. Activision Blizzard, the American video game holding company, is still investigating the matter and is collaborating with cheat-software developers to minimise the impact and inform affected account holders of appropriate safety measures.
Overview:
Infostealer, also known as an information stealer, is a type of malware, typically a Trojan, designed to steal private data from the infected system. It comes in many incarnations and collects user data of various types: browser history, passwords, credit card numbers, and login credentials for social media, gaming platforms, bank accounts, and other websites. Bad actors use the resulting logs of personal records to access victims' financial accounts, hijack their online identities, and perform fraudulent actions on their behalf.
Modus Operandi:
- Infostealer is a malicious program created to illegally obtain login details such as usernames and passwords, which attackers use to enable further cyberattacks, sell on dark-web markets, or pursue other malicious aims.
- The malware targets both personal devices and corporate systems. It spreads through phishing emails, harmful websites, and compromised public sites.
- Once inside a device, an infostealer secretly gathers sensitive data such as passwords, account details, and personal information, and is designed to operate undetected. The stolen credentials are compiled into logs, which are then sold on dark-web marketplaces for profit.
Analysis:


Basic properties:
- MD5: 06f53d457c530635b34aef0f04c59c7d
- SHA-1: 7e30c3aee2e4398ddd860d962e787e1261be38fb
- SHA-256: aeecc65ac8f0f6e10e95a898b60b43bf6ba9e2c0f92161956b1725d68482721d
- Vhash: 145076655d155515755az4e?z4
- Authentihash: 65b5ecd5bca01a9a4bf60ea4b88727e9e0c16b502221d5565ae8113f9ad2f878
- Imphash: f4a69846ab44cc1bedeea23e3b680256
- Rich PE header hash: ba3da6e3c461234831bf6d4a6d8c8bff
- SSDEEP: 6144:YcdXHqXTdlR/YXA6eV3E9MsnhMuO7ZStApGJiZcX8aVEKn3js7/FQAMyzSzdyBk8:YIKXd/UgGXS5U+SzdjTnE3V
- TLSH:T1E1B4CF8E679653EAC472823DCC232595E364FB009267875AC25702D3EFBB3D56C29F90
- File type: Win32 DLL executable
- Magic: PE32+ executable (DLL) (GUI) x86-64, for MS Windows
- File size: 483.50 KB (495104 bytes)
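The "Magic" field above comes from the sample's PE headers. As an illustration of how such a field is derived (a minimal sketch, not an analysis of this specific sample), the snippet below reads the DOS header, follows the `e_lfanew` pointer to the PE signature, and returns the Machine field, where `0x8664` denotes x86-64 as reported for this file.

```python
import struct

def pe_machine(path):
    """Return the PE Machine field (0x8664 = x86-64), or None if not a PE file."""
    with open(path, "rb") as f:
        data = f.read(1024)  # headers fit well within the first kilobyte
    if data[:2] != b"MZ":  # DOS header signature
        return None
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return None
    # The COFF header's first field, immediately after the signature, is Machine.
    return struct.unpack_from("<H", data, e_lfanew + 4)[0]
```

Tools such as `file` and VirusTotal perform essentially this check (plus many more) to produce the "PE32+ executable (DLL) (GUI) x86-64" description.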
Additional Hash Files:
- 160389696ed7f37f164f1947eda00830
- 229a758e232aeb49196c862655797e12
- 23e4ac5e7db3d5a898ea32d27e8b7661
- 3440cced6ec7ab38c6892a17fd368cf8
- 36d7da7306241979b17ca14a6c060b92
- 38d2264ff74123f3113f8617fabc49f6
- 3c5c693ba9b161fa1c1c67390ff22c96
- 3e0fe537124e6154233aec156652a675
- 4571090142554923f9a248cb9716a1ae
- 4e63f63074eb85e722b7795ec78aeaa3
- 63dd2d927adce034879b114d209b23de
- 642aa70b188eb7e76273130246419f1d
- 6ab9c636fb721e00b00098b476c49d19
- 71b4de8b5a1c5a973d8c23a20469d4ec
- 736ce04f4c8f92bda327c69bb55ed2fc
- 7acfddc5dfd745cc310e6919513a4158
- 7d96d4b8548693077f79bc18b0f9ef21
- 8737c4dc92bd72805b8eaf9f0ddcc696
- 9b9ff0d65523923a70acc5b24de1921f
- 9f7c1fffd565cb475bbe963aafab77ff
Indicators of Compromise:
- Unusual Outbound Network Traffic: An increase in odd or questionable outbound traffic may be a sign that infostealer malware is exfiltrating data.
- Anomalies in Privileged User Account Activity: Irregular actions in privileged accounts, such as unusual behaviour or unauthorised access, might indicate a breach.
- Suspicious Registry or System File Changes: Unexpected changes to system files, registry settings, or configurations may mean infostealer malware is altering the system.
- Unusual DNS Queries: Infostealer malware may produce strange DNS queries when communicating with command-and-control servers or rerouting traffic.
- Unexpected System Patching: Unexpected or unauthorised patching by unidentified parties may indicate that malware has compromised the system and is trying to hide its tracks or establish persistence.
- Phishing Emails and Social Engineering Attempts: These are popular strategies employed by cybercriminals to obtain confidential data or implant malicious software. To avoid compromise, be wary of dubious communications and social engineering attempts.
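The "unusual DNS queries" indicator above is often operationalised with a simple entropy heuristic: algorithmically generated command-and-control domains tend to have longer, higher-entropy labels than human-chosen names. The sketch below shows one such heuristic; the length and entropy thresholds are illustrative assumptions, not established cut-offs, and real detectors combine many more signals.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_suspicious(domain, threshold=3.5):
    """Crude heuristic: flag long, high-entropy leftmost labels.

    Both the length cut-off (12) and entropy threshold (3.5) are
    assumptions chosen for illustration; tune them on real traffic.
    """
    label = domain.split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) > threshold
```

Applied to a DNS log, this would pass over `google.com` but flag a machine-generated name like `x7f3kq9vz2mw8h1t.example.com` for analyst review.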
Recommendations:
- Be Vigilant: Phishing tricks, fake web pages, and malicious links pose real dangers in today's digital world. Carefully check email sources, examine websites closely, use reputable security software, follow safe browsing practices, update software often, and share safety tips. These steps reduce risk and help keep your online presence secure.
- Use Anti-Virus Software Regularly to Detect Threats: Antivirus tools are vital for finding and stopping cyber threats. They use signature detection and behaviour analysis to identify known malicious code and suspicious activities. Updating virus definitions and software patches regularly improves their ability to detect new threats, helping maintain system security and data integrity.
- Provide Security Training to Employees: Staff should learn cybersecurity best practices to keep the workplace safe. Lessons on spotting risks and responding well create a culture of caution.
- Change Passwords Regularly: Rotating passwords often makes it harder for cybercriminals to compromise accounts or steal confidential data. This practice keeps intruders out and shields sensitive information.
Conclusion:
To conclude, the recent malware campaign targeting gamers is reported to have compromised millions of user credentials, and investigations and collaboration to reduce its impact are already underway. To protect sensitive data, continued use of antivirus software, reliance on trusted downloads, and regular password changes are key. Stronger cybersecurity practices, such as multi-factor authentication and frequent security audits, further reduce risk and help protect sensitive information. Be safe and be vigilant.
Reference:
- https://techcrunch.com/2024/03/28/activision-says-its-investigating-password-stealing-malware-targeting-game-players/
- https://www.bleepingcomputer.com/news/security/activision-enable-2fa-to-secure-accounts-recently-stolen-by-malware/
- https://cyber.vumetric.com/security-news/2024/03/29/activision-enable-2fa-to-secure-accounts-recently-stolen-by-malware/
- https://www.virustotal.com/
- https://otx.alienvault.com/

Introduction
AI is transforming the way work is done and redefining the nature of jobs over the next decade. In the case of India, it is not just what duties will be taken over by machines, but how millions of employees will move to other sectors, which skills will become more sought-after, and how policy will have to change in response. This article relies on recent labour data of India's Periodic Labour Force Survey (PLFS, 2023-24) and discusses the vulnerabilities to disruption by location and social groups. It recommends viable actions that can be taken to ensure that risks are minimised and economic benefits maximised.
India’s Labour Market and Its Automation Readiness
According to India’s Periodic Labour Force Survey (PLFS), the labour market is changing and growing. Labour force participation improved to 60.1 per cent in 2023-24 from 57.9 per cent the year before, and the worker-population ratio also improved, signifying increased employment uptake in both rural and urban geographies (PLFS, 2023-24). Female participation has also risen. However, a large portion of the job market remains low-wage and informal, with most jobs being routine and thus most vulnerable to automation. The statistics indicate a two-tiered reality in the Indian labour market: more people working, but persistent structural weakness.
AI-Driven Automation’s Impact on Tasks and Emerging Opportunities
AI-driven automation, for the most part, affects the task components of jobs rather than wiping out whole jobs. The most automatable tasks are routine and manual, and more recent developments in AI have extended to non-routine cognitive tasks like document review, customer query handling, basic coding and first-level decision-making. There are two concurrent findings of global studies. To start with, part of the ongoing tasks will be automated or expedited. Second, there will be completely new tasks and work positions around data annotation, the operation of AI systems, prompt engineering, algorithmic supervision and AI adherence (World Bank, 2025; McKinsey, 2017).
In the case of India, this change will be skewed by sector. The manufacturing, back-office IT services, retail and parts of financial services will see the highest rate of disruption due to the concentration of routine processes with the ease of technology adoption. In comparison, healthcare, education, high-tech manufacturing and AI safety auditing are placed to create new skilled jobs. NITI Aayog estimates huge returns in GDP with the adoption of AI but emphasises that India has to invest simultaneously in job creation and reskilling to achieve the returns (NITI Aayog, 2025).
Groups with Highest Vulnerability in the Transition to Automation
The PLFS emphasises that a large portion of the Indian population lacks formal employment, has minimal social protection, and has no access to formal training. The risk of displacement is likely greatest for informal employees, who make up almost 90% of India’s labour force and carry out low-skilled, repetitive jobs in the manufacturing and retail industries (PLFS, 2023-24). Women and young people in low-level service jobs also face greater transition pressure unless reskilling and placement efforts are tailored to them. Meanwhile, major cities and urban centres are likely to capture most of the new skilled opportunities, widening geographic and social divides.
The Skills and Supply Challenge
While India’s education and research ecosystem is expanding, there remain significant gaps in preparing the workforce for AI-driven change. Given the vulnerabilities highlighted earlier, AI-focused reskilling must be a priority to equip workers with practical skills that meet industry needs. Short modular programs in areas such as cloud technologies, AI operations, data annotation, human-AI interaction, and cybersecurity can provide workers with employable skills. Particular attention should be given to routine-intensive sectors like manufacturing, retail, and back-office services, as well as to regions with high informal employment or lower access to formal training. Public-private partnerships and localised training initiatives can help ensure that reskilling translates into concrete job opportunities rather than purely theoretical knowledge (NITI Aayog, 2025).
The Way Forward
To facilitate the transition, policy should focus on three interconnected goals: safeguarding the vulnerable, developing skills at scale, and directing innovation so that its benefits are widely shared.
- Protect the vulnerable through social buffers. Provide informal workers with social protection in the form of portable benefits, temporary income insurance based on reskilling, and earned training leave. While the new labour codes provide essential protections such as unemployment allowances and minimum wage standards, they could be strengthened by incorporating explicit provisions for reskilling. This would better support informal workers during job transitions and enhance workforce adaptability.
- Short modular courses on cloud computing, cybersecurity, data annotation, AI operations, and human-AI interaction should be planned through collaboration between public and private training providers. Special preference should be given to industry-recognised certifications and apprenticeship-based placements. These apprenticeships should be made accessible in multiple languages to ensure inclusivity. Existing government initiatives, such as NASSCOM’s Future Skills Prime, need better outreach and marketing to reach the workforce effectively.
- Enhance local labour market mediators. Close the disparity between local demand and the supply of labour in the industry by enhancing placement services and government-subsidised internship programmes for displaced employees and encouraging firms to hire and train locally.
- Invest in AI literacy, AI ethics, and basic education. Democratise access to research and learning by introducing AI literacy in schools, increasing STEM seats in universities, and creating AI labs in the region (NITI Aayog, 2025).
- Encourage AI adoption that creates jobs rather than replaces them. Fiscal and regulatory incentives should prioritise AI tools that augment worker productivity in routine roles instead of eliminating positions. Public procurement can support firms that demonstrate responsible and inclusive deployment of AI, ensuring technology benefits both business and workforce.
- Supervise and oversee the transition. Use PLFS and real-time administrative data to monitor shrinking and expanding occupations. High-frequency labour market dashboards will enable targeted interventions in the regions where displacement is accelerating.
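The monitoring step above reduces to a simple computation once survey rounds are in hand: compare occupation-level employment across rounds and flag shrinking occupations. The sketch below illustrates this with entirely hypothetical figures (they are not actual PLFS data) and plain dictionaries; a real dashboard would sit on pandas and the published PLFS unit-level files.

```python
# Hypothetical employment counts (millions) for two survey rounds.
# These figures are illustrative assumptions, not PLFS estimates.
round_2023 = {"manufacturing": 62.0, "retail": 48.0, "it_services": 5.4}
round_2024 = {"manufacturing": 60.5, "retail": 49.2, "it_services": 5.9}

def yoy_change(prev, curr):
    """Year-over-year percentage change per occupation; negative = shrinking."""
    return {k: round((curr[k] - prev[k]) / prev[k] * 100, 1) for k in prev}

changes = yoy_change(round_2023, round_2024)
shrinking = [k for k, v in changes.items() if v < 0]  # candidates for intervention
```

Occupations in `shrinking` would be where reskilling and placement resources are directed first, with the same calculation repeated at state or district level to target regions.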
Conclusion
The integration of AI will significantly impact the future of the Indian workforce, but policy will determine its effect on the labour market. The PLFS indicates rising employment alongside the structural weakness of informal and routine work. Evidence from the Indian market and international research shows that the right combination of social protection, skills building, and responsible technology implementation can turn disruption into a path of upward mobility. The window for action is very limited. Whether India realises the productivity and GDP gains projected by national research will depend on the investments made in labour market infrastructure; it is crucial that these efforts capture those gains while ensuring a fair and inclusive transition for workers.
References
- Annual Report Periodic Labour Force Survey (PLFS) JULY 2022 - JUNE 2023.
- Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific, World Bank.
- Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages, McKinsey Global Institute
- Roadmap for Job Creation in the AI Economy, NITI Aayog
- India central bank chief warns of financial stability risks from growing use of AI, Reuters
- AI Cyber Attacks Statistics 2025, SQ Magazine.

AI-generated content has been taking up space in the ever-changing dynamics of today's tech landscape. Generative AI has emerged as a powerful tool that has enabled the creation of hyper-realistic audio, video, and images. While advantageous, this ability has some downsides, too, particularly in content authenticity and manipulation.
The impact of this content spans ethical, psychological, and social harms seen in the past couple of years. A major concern is the creation of non-consensual explicit content, including fake nudes, where an individual’s face is superimposed onto explicit images or videos without their consent. This is not just a violation of privacy; it can have severe consequences for victims’ professional and personal lives. This blog examines the existing laws and whether they are equipped to deal with the challenges this content poses.
Understanding the Deepfake Technology
Deepfake technology deceptively alters a media file (image, video, or speech), typically one representing a human subject, using deep neural networks (DNNs). It is used to alter a person’s identity, usually through a “face swap” in which the identity of a source subject is transferred onto a destination subject. The destination’s facial expressions and head movements remain the same, but the appearance in the video is that of the source. In videos, identities can be substituted by way of replacement or reenactment.
This superimposition produces realistic fake content, such as fake nudes. Presently, creating a deepfake is not a costly endeavour: it requires a Graphics Processing Unit (GPU); software that is free, open-source, and easy to download; and graphics-editing and audio-dubbing skills. Common apps for creating deepfakes include DeepFaceLab and FaceSwap, both public and open source, supported by thousands of users who actively participate in the evolution of this software and its models.
Legal Gaps and Challenges
Multiple gaps and challenges exist in the legal space for deepfakes and their regulation. They are:
- The inadequate definitions governing AI-generated explicit content often lead to enforcement challenges.
- Jurisdictional challenges due to the cross-border nature of crimes and the difficulties caused by international cooperation measures are in the early stages for AI content.
- Current consent-based and harassment laws do not squarely cover AI-generated nudes, leaving a gap in protection.
- Proving intent and identifying perpetrators in digital crimes remains a challenge yet to be overcome.
Policy Responses and Global Trends
Presently, the global response to deepfakes is developing. The UK has developed the Online Safety Bill, the EU has the AI Act, the US has some federal laws such as the National AI Initiative Act of 2020 and India is currently developing the India AI Act as the specific legislation dealing with AI and its correlating issues.
The IT Rules, 2021, and the DPDP Act, 2023, regulate digital platforms by mandating content governance, privacy policies, grievance redressal, and compliance with removal orders. Emphasising intermediary liability and safe harbour protections, these laws play a crucial role in tackling harmful content like AI-generated nudes, while the DPDP Act focuses on safeguarding privacy and personal data rights.
Bridging the Gap: CyberPeace Recommendations
- Initiate legislative reforms by advocating for clear and precise definitions for the consent frameworks and instituting high penalties for AI-based offences, particularly those which are aimed at sexually explicit material.
- Advocate for global cooperation and collaborations by setting up international standards and bilateral and multilateral treaties that address the cross-border nature of these offences.
- Push for stricter platform accountability in the detection and removal of harmful AI-generated content. Platforms should introduce strong screening mechanisms to counter the huge influx of harmful content.
- Run public awareness campaigns that educate users about their rights and the resources available to them if such an act is committed against them.
Conclusion
The rapid advancement of AI-generated explicit content demands immediate and decisive action. As this technology evolves, the gaps in existing legal frameworks become increasingly apparent, leaving individuals vulnerable to profound privacy violations and societal harm. Addressing this challenge requires adaptive, forward-thinking legislation that prioritises individual safety while fostering technological progress. Collaborative policymaking is essential and requires uniting governments, tech platforms, and civil society to develop globally harmonised standards. By striking a balance between innovation and societal well-being, we can ensure that the digital age is not only transformative but also secure and respectful of human dignity. Let’s act now to create a safer future!
References
- https://etedge-insights.com/technology/artificial-intelligence/deepfakes-and-the-future-of-digital-security-are-we-ready/
- https://odsc.medium.com/the-rise-of-deepfakes-understanding-the-challenges-and-opportunities-7724efb0d981
- https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/