#FactCheck- Old US Troops Homecoming Video Falsely Linked to Iran Ceasefire
Executive Summary
Ceasefire talks between the United States and Iran, reportedly held in Islamabad on Saturday, ended without a resolution. Meanwhile, a video circulating on social media claims to show US troops returning home following a ceasefire in the Middle East conflict.
However, research by CyberPeace found the claim to be false. The viral video is not linked to any recent ceasefire. It actually dates back to March and shows the return of Iowa National Guard troops after months of deployment in the Middle East.
Claim
An X (formerly Twitter) user posted the video on April 7, 2026, claiming, “Another victory for Iran: American soldiers have started arriving home. After leaving the Middle East, American soldiers are saying, ‘Why did we fight for Israel? If Iran is talking about peace, we will also stand with them.’”

Fact Check
To verify the claim, we extracted keyframes from the viral video and conducted a reverse image search using Google Lens. This led us to posts by Newsradio 1040 WHO, which had shared the same footage on March 12 across Facebook and Instagram.
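For readers who want to reproduce the first step, here is a minimal sketch of keyframe extraction with OpenCV; the filename viral_video.mp4 and the two-second sampling interval are illustrative assumptions, and the reverse search itself is still run manually through Google Lens:

```python
# A minimal sketch of keyframe extraction with OpenCV, assuming a local
# file "viral_video.mp4"; frames saved this way can then be uploaded by
# hand to Google Lens (or another reverse image search).
import cv2

cap = cv2.VideoCapture("viral_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable
frame_idx, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Save roughly one frame every two seconds of footage.
    if frame_idx % int(fps * 2) == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} keyframes")
```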


In its caption, the radio station stated that nearly 600 Iowa soldiers had returned home after a nine-month deployment in the Middle East as part of Operation Inherent Resolve. The segment, narrated by journalist Claire Burnett, explained that the soldiers belonged to the 2nd Brigade Combat Team, 34th Infantry Division, and had been deployed to Iraq and Syria. The footage was recorded at the 132nd Wing base of the Iowa Air National Guard in Des Moines.

For further confirmation, a March 12 report by KCCI 8 News also showed the same aircraft and troops, verifying the authenticity and timeline of the footage.

Operation Inherent Resolve, launched in 2014, is a US-led campaign aimed at supporting local forces in the fight against the Islamic State (ISIS) and ensuring its lasting defeat.
https://www.kcci.com/article/iowans-welcome-national-guard-unit-home-from-deployment-in-middle-east/70729105

Conclusion
The viral claim is false and misleading. The video does not show US troops returning due to any recent ceasefire between the United States and Iran. Instead, it captures the routine homecoming of Iowa National Guard soldiers in March after completing a scheduled deployment in the Middle East. There is no evidence linking the footage to current geopolitical developments or any ceasefire agreement. The claim has been taken out of context and shared with a misleading narrative to create confusion around ongoing international events.
Related Blogs

Social media has become far more than a tool of communication, engagement and entertainment. It shapes politics, community identity and even public agendas. When misused, the consequences can be grave: communal disharmony, riots, false rumours, harassment or worse. Emphasising the need for digital Atmanirbharta (self-reliance), Prime Minister Narendra Modi recently urged India’s youth to develop homegrown alternatives to platforms like Facebook, Instagram and X, so that the nation’s technological ecosystems remain secure and independent, reinforcing digital autonomy. This growing influence of platforms has sharpened the tussle between government regulation, the independence of social media companies, and the protection of freedom of expression in most countries.
Why Government Regulation Is Especially Needed
While self-regulation has its advantages, ‘real-world harms’ show why state oversight cannot be optional:
- Incitement to violence and communal unrest: Misinformation and hate speech can inflame tensions. In Manipur (May 2023), false posts, including unverified sexual-violence claims, spread online, worsening clashes. Authorities shut down mobile internet on 3 May 2023 to curb “disinformation and false rumours,” showing how quickly harmful content can escalate and why enforceable moderation rules matter.
- Fake news and misinformation: False content about health, elections or individuals spreads far faster than corrections. During COVID-19, an “infodemic” of fake cures, conspiracy theories and religious discrimination went viral on WhatsApp and Facebook, starting with false claims that the virus came from eating bats. The WHO warned of serious knock-on effects, and a Reuters Institute study found that although false claims by public figures were fewer in number, they gained the highest engagement, showing why self-regulation alone often fails to stop such content.
Nepal’s Example:
Nepal provides a clear example of the tussle between government regulation and the self-regulation of social media. In 2023, the government issued rules requiring all social media platforms, whether local or foreign, to register with the Ministry of Communication and Information Technology, appoint a local contact person, and comply with Nepali law. By 2025, major platforms such as Facebook, Instagram, and YouTube had not met the registration deadline. In response, the Nepal Telecommunications Authority began blocking unregistered platforms until they complied. Journalists, civil-rights groups and Gen Z protesters criticised the move as limiting free speech and curbing a channel for exposing government corruption, while the government argued it was necessary to stop harmful content and misinformation. The case shows that without enforceable obligations, self-regulation can leave platforms unaccountable, but enforcement must also be balanced with protecting free speech.
Self-Regulation: Strengths and Challenges
Most social-media companies prefer to self-regulate. They write community rules and trust & safety guidelines, give users ways to flag harmful posts, and lean on a mix of staff, outside boards and AI filters to handle content that crosses the line. The big advantage here is speed: when something dangerous appears, a platform can react within minutes, far quicker than a court or lawmaker. Because they know their systems inside out, from user habits to algorithmic quirks, they can adapt fast.
But there’s a downside. These platforms thrive on engagement, and sensational or hateful posts often keep people scrolling longer. That means the very content that makes money can also be the content that most needs moderating: a built-in conflict of interest.
Government Regulation: Strengths and Risks
Public rules make platforms answerable. Laws can require illegal content to be removed, force transparency and protect user rights. They can also stop serious harms such as fake news that might spark violence, and they often feel more legitimate when made through open, democratic processes.
Yet regulation can lag behind technology. Vague or heavy-handed rules may be misused to silence critics or curb free speech. Global enforcement is messy, and compliance can be costly for smaller firms.
Practical Implications & Hybrid Governance
For users, regulation brings clearer rights and safer spaces, but it must be carefully drafted to protect legitimate speech. For platforms, self-regulation gives flexibility but less certainty; government rules provide a level playing field but add compliance costs. For governments, regulation helps protect public safety, reduce communal disharmony, and fight misinformation, but it requires transparency and safeguards to avoid misuse.
Hybrid Approach
A combined model of self-regulation plus government regulation is likely to be most effective. Laws should establish baseline obligations: registration, local grievance officers, timely removal of illegal content, and transparency reporting. Platforms should retain flexibility in how they implement these obligations and innovate with tools for user safety. Independent audits, civil society oversight, and simple user appeals can help keep both governments and platforms accountable.
Conclusion
Social media has great power. It can bring people together, but it can also spread false stories, deepen divides and even stir violence. Acting on their own, platforms can move fast and try new ideas, but that alone rarely stops harmful content. Good government rules can fill the gap by holding companies to account and protecting people’s rights.
The best way forward is to mix both approaches: clear laws, outside checks, open reporting, easy complaint systems and support for local platforms, so that the digital space stays safer and more trustworthy.
References
- https://timesofindia.indiatimes.com/india/need-desi-social-media-platforms-to-secure-digital-sovereignty-pm/articleshow/123327780.cms#
- https://www.bbc.com/news/world-asia-india-66255989
- https://nepallawsunshine.com/social-media-registration-in-nepal/
- https://www.newsonair.gov.in/nepal-bans-26-unregistered-social-media-sites-including-facebook-whatsapp-instagram/
- https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- https://www.drishtiias.com/daily-updates/daily-news-analysis/social-media-regulation-in-india

Artificial intelligence is revolutionizing industries from healthcare to finance, influencing decisions that touch the lives of millions daily. However, this power carries a hidden danger: AI systems can produce unfair results, reinforce social inequalities, and breed distrust of technology. One of the main causes is training data bias, which appears when the examples on which an AI model is trained are skewed or unrepresentative. Dealing with it successfully requires a combination of statistical methods, fairness-aware algorithmic design, and robust governance over the AI lifecycle. This article discusses the origins of bias, ways to reduce it, and the unique role of fairness-conscious algorithms.
Why Bias in Training Data Matters
Bias in AI occurs when models mirror and reproduce patterns of inequality in their training data. When a dataset under-represents a demographic group or encodes historical prejudice, the model will learn to make decisions that harm that group. This has practical implications: biased AI can cause discrimination in hiring, lending, criminal-risk assessment, and many other spheres of social life, compromising justice and equity. These problems are not only technical in nature; they also call for moral principles and a system of governance. (E&ICTA)
Bias is not uniform. It may stem from the data itself, from the algorithm’s design, or even from a lack of diversity among developers. Data bias occurs when the data does not represent the real world. Algorithmic bias may arise when design decisions inadvertently give one group an unfair advantage over another. Human bias can affect both data collection and the interpretation of the model. (MDPI)
Statistical Principles for Reducing Training Data Bias
Statistical principles are at the core of bias mitigation, reshaping how data and models interact. These approaches focus on data preparation, training-process adjustment, and model-output correction so that fairness becomes a quantifiable goal.
Balancing Data Through Re-Sampling and Re-Weighting
One way is to ensure fair representation of all relevant groups in the dataset. This can be achieved by oversampling underrepresented groups and undersampling overrepresented ones. Oversampling duplicates or synthesizes minority examples, whereas re-weighting assigns greater weight to under-represented data points during training. These methods reduce the tendency of models to overfit to dominant patterns and improve coverage of vulnerable groups, as the sketch below illustrates. (GeeksforGeeks)
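Here is a minimal sketch of both ideas in Python; the column names (group, feature, label) and the 900/100 imbalance are illustrative assumptions, and in a real pipeline the weights would be passed to a learner, for example via scikit-learn's sample_weight argument:

```python
# A minimal sketch of re-sampling and re-weighting on a toy dataset;
# all names and sizes here are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy dataset: 900 majority-group rows, 100 minority-group rows.
df = pd.DataFrame({
    "group": ["A"] * 900 + ["B"] * 100,
    "feature": rng.normal(size=1000),
    "label": rng.integers(0, 2, size=1000),
})

# Re-sampling: oversample the minority group until groups are balanced.
counts = df["group"].value_counts()
minority = counts.idxmin()
extra = df[df["group"] == minority].sample(
    counts.max() - counts.min(), replace=True, random_state=0
)
balanced = pd.concat([df, extra], ignore_index=True)
print(balanced["group"].value_counts())  # A: 900, B: 900

# Re-weighting: weight each row inversely to its group's frequency,
# then hand the weights to a learner, e.g. model.fit(X, y, sample_weight=w).
weights = df["group"].map(len(df) / (len(counts) * counts))
```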
Feature Engineering and Data Transformation
Another statistical technique is to transform data features so that sensitive characteristics have less impact on the results. For example, fair representation learning adjusts the data representation to discourage bias during model training, and the disparate impact remover technique adjusts feature values so that the influence of sensitive attributes is reduced during learning. (GeeksforGeeks)
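The sketch below captures the spirit of such a repair under strong simplifying assumptions: it only aligns group means of one illustrative feature, whereas a full disparate impact remover matches entire per-group distributions:

```python
# A minimal sketch of feature repair: shift each group's feature values
# so group means coincide, reducing how strongly "score" encodes group
# membership. Column names and the injected skew are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000),
    "score": rng.normal(size=1000),
})
df.loc[df["group"] == "B", "score"] += 1.0  # simulate a group-dependent skew

group_means = df.groupby("group")["score"].transform("mean")
df["score_repaired"] = df["score"] - group_means + df["score"].mean()

print(df.groupby("group")["score_repaired"].mean())  # now roughly equal
```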
Measuring Fairness With Metrics
Statistical fairness metrics quantify how a model’s behaviour differs across groups. Common examples include demographic parity (equal rates of positive predictions across groups), equal opportunity (equal true positive rates), and equalized odds (equal true and false positive rates).
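For illustration, the sketch below computes two of these metrics with plain NumPy on made-up labels, predictions, and group assignments:

```python
# A minimal sketch of two common fairness metrics; the arrays are
# illustrative stand-ins for real labels, predictions, and groups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def rates(mask):
    yp, yt = y_pred[mask], y_true[mask]
    positive_rate = yp.mean()                                # P(pred=1 | group)
    tpr = yp[yt == 1].mean() if (yt == 1).any() else np.nan  # P(pred=1 | y=1, group)
    return positive_rate, tpr

pr_a, tpr_a = rates(group == "A")
pr_b, tpr_b = rates(group == "B")

print("Demographic parity difference:", abs(pr_a - pr_b))
print("Equal opportunity difference:", abs(tpr_a - tpr_b))
```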
Fairness-Aware Algorithms Explained
Fair algorithms do not simply detect bias. They incorporate fairness goals into model construction and operate in three phases: pre-processing, in-processing, and post-processing.
Pre-Processing Techniques
Fairness-aware pre-processing addresses bias before the model consumes the data. Common approaches include:
- Rebalancing training data through re-sampling and re-weighting to address sample imbalances.
- Data augmentation to generate additional examples for underrepresented groups (see the sketch after this list).
- Feature transformation that removes or downplays the impact of sensitive attributes before training begins. (IJMRSET)
These methods help ensure that the model is trained on more balanced data and reduce the chance of bias carrying over from historical data.
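Here is a minimal sketch of the augmentation idea under illustrative assumptions (column names, noise scale): jittered copies of minority-group rows are added so the group is no longer rare:

```python
# A minimal sketch of data augmentation for an under-represented group:
# resample minority-group rows and perturb numeric features with small
# Gaussian noise so the copies are not exact duplicates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "group": ["A"] * 950 + ["B"] * 50,
    "x1": rng.normal(size=1000),
    "x2": rng.normal(size=1000),
    "label": rng.integers(0, 2, size=1000),
})

minority = df[df["group"] == "B"]
copies = minority.sample(200, replace=True, random_state=0).copy()
copies[["x1", "x2"]] += rng.normal(scale=0.05, size=(len(copies), 2))

augmented = pd.concat([df, copies], ignore_index=True)
print(augmented["group"].value_counts())  # B grows from 50 to 250
```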
In-Processing Techniques
The in-processing techniques alter the learning algorithm. These include:
- Fairness constraints that penalize the model for making biased predictions during training (see the sketch after this list).
- Adversarial debiasing, where a second model is used to ensure that sensitive attributes cannot be predicted from the learned representations.
- Fair representation learning that modifies internal model representations in favor of features that do not encode sensitive attributes.
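To make the first idea concrete, here is a minimal sketch, not a reference implementation, of logistic regression trained with an added demographic-parity-style penalty; the data, penalty weight, and learning rate are all illustrative assumptions:

```python
# A minimal sketch of an in-processing fairness constraint: logistic
# regression fit by gradient descent, with a penalty on the squared gap
# in mean predicted score between two groups.
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)  # sensitive group membership (0/1)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)
lam, lr = 2.0, 0.1  # penalty weight and learning rate (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w)
    # Gradient of the ordinary logistic loss.
    grad = X.T @ (p - y) / n
    # Fairness penalty: lam * (mean score of group 1 - group 0)^2.
    gap = p[g == 1].mean() - p[g == 0].mean()
    s = p * (1 - p)  # derivative of the sigmoid
    dgap = (X[g == 1] * s[g == 1, None]).mean(axis=0) - \
           (X[g == 0] * s[g == 0, None]).mean(axis=0)
    grad += lam * 2 * gap * dgap
    w -= lr * grad

p = sigmoid(X @ w)
print("score gap after training:", p[g == 1].mean() - p[g == 0].mean())
```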
Post-Processing Techniques
Fairness may be enhanced after training by adjusting the model’s outputs. These strategies include:
- Threshold adjustments for different groups to satisfy fairness conditions such as equalized odds (see the sketch after this list).
- Calibration techniques so that estimated probabilities are faithful indicators of actual outcome frequencies within each group. (GeeksforGeeks)
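As an illustration of the first strategy, the sketch below scans per-group thresholds so that true positive rates roughly match a shared target (equal opportunity); the scores, labels, and 0.8 target are made-up assumptions:

```python
# A minimal sketch of post-processing threshold adjustment: pick a
# per-group decision threshold whose true positive rate is closest to
# a shared target, compensating for systematically skewed scores.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
g = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
# Biased scores: group 1 is systematically scored lower.
scores = y * 0.6 + rng.normal(scale=0.3, size=n) - 0.2 * g

def tpr(threshold, mask):
    sel = mask & (y == 1)
    return (scores[sel] >= threshold).mean()

target = 0.8
thresholds = {}
for grp in (0, 1):
    # Scan candidate thresholds; keep the one with TPR closest to target.
    cand = np.quantile(scores[g == grp], np.linspace(0.01, 0.99, 99))
    thresholds[grp] = min(cand, key=lambda t: abs(tpr(t, g == grp) - target))

for grp in (0, 1):
    print(f"group {grp}: threshold={thresholds[grp]:.3f}, "
          f"TPR={tpr(thresholds[grp], g == grp):.3f}")
```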
Challenges
Mitigating bias is complex. Statistical bias mitigation can come at the cost of model accuracy, creating a tension between predictive performance and fairness. Defining fairness itself is difficult: different applications call for different criteria, and some criteria are mutually incompatible. (MDPI)
Obtaining varied and representative data is also a challenge because of privacy concerns, incomplete records, and limited resources. Continuous auditing and reporting are needed to keep mitigation processes up to date as models are continually updated. (E&ICTA)
Why Fairness-Aware Development Matters
The consequences of AI systems treating some groups unfairly are far-reaching. Discriminatory recruitment software may entrench inequality in the workplace. Biased credit scoring may deprive deserving people of opportunities. Biased medical forecasts might result in the flawed allocation of medical resources. In each case, prejudice undermines credibility and clouds the greater promise of AI. (E&ICTA)
Fairness-aware algorithms and statistical mitigation plans provide a way to create AI that is not only powerful but also fair and trustworthy. They acknowledge that AI systems are social tools whose effects extend across society. Responsible development requires sustained fairness measurement, model adjustment, and human oversight.
Conclusion
AI bias is not a technical malfunction. It mirrors real-world disparities present in data and amplified by models. Statistical rigor, careful algorithm design, and readiness to address the trade-offs between fairness and performance are required to reduce training data bias. Fairness-conscious algorithms, implemented in pre-processing, in-processing, or post-processing, help deliver more equitable results. As AI takes part in ever more crucial decisions, fairness must be considered from the outset so that systems serve people responsibly and fairly.
References
- Understanding Bias in Artificial Intelligence: Challenges, Impacts, and Mitigation Strategies: E&ICTA, IITK
- Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies: JRPS Shodh Sagar
- Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies: MDPI
- Ensuring Fairness in Machine Learning Algorithms: GeeksforGeeks
- Bias and Fairness in Machine Learning Models: A Critical Examination of Ethical Implications: IJMRSET
- Bias in AI Models: Origins, Impact, and Mitigation Strategies: Preprints
- Bias in Artificial Intelligence and Mitigation Strategies: TCS
- Survey on Machine Learning Biases and Mitigation Techniques: MDPI

AI systems have grown in both popularity and complexity. They are enhancing accessibility for all, including people with disabilities, by revolutionising sectors such as healthcare, education, and public services. We are at the stage where AI-powered solutions are being created that help people with mental, physical, visual or hearing impairments perform everyday and complex tasks.
Generative AI is now being used to amplify human capability. Tools for speech-to-text and image recognition are facilitating communication and interaction for visually or hearing-impaired individuals, and smart prosthetics are providing tailored support. Unfortunately, even with these developments, PWDs continue to face challenges. It is therefore important to balance innovation with ethical considerations and to ensure that these technologies are designed with privacy, equity, and inclusivity in mind.
Access to Tech: the Barriers Faced by PWDs
PWDs face several barriers while accessing technology, and identifying these challenges matters because accessible computing, in both hardware and software, has become a staple of daily life. Website functions that only work with a mouse, self-service kiosks without accessibility features, touch screens without screen reader software or tactile keyboards, and out-of-order equipment, such as lifts, captioning mirrors and description headsets, are just some of the difficulties they face day to day.
While they are helpful, much of the current technology doesn’t fully address all disabilities. For example, many assistive devices focus on visual or mobility impairments, but they fall short of addressing cognitive or sensory conditions. In addition to this, these solutions often lack personalisation, making them less effective for individuals with diverse needs. AI has significant potential to bridge this gap. With adaptive systems like voice assistants, real-time translation, and personalised features, AI can create more inclusive solutions, improving access to both digital and physical spaces for everyone.
The Importance of Inclusive AI Design
Creating inclusive AI design is important: it ensures that PWDs are not excluded from technological advancements because of their impairments. The concept of ‘inclusive’ or ‘universal’ design promotes creating products and services usable by the widest possible range of people. Tech developers have an ethical responsibility to create AI advancements that serve everyone, with accessibility features built into the core design as standard practice rather than as an afterthought. However, bias in AI development, often stemming from non-representative data or flawed assumptions, can lead to systems that overlook or poorly serve PWDs. If AI algorithms are trained on limited or biased data, they risk excluding marginalised groups, making ethical, inclusive design a necessity for equity and accessibility.
Regulatory Efforts to Ensure Accessible AI
In India, the Rights of Persons with Disabilities Act, 2016 underscores the need to provide PWDs with equal access to technology. Subsequently, the DPDP Act, 2023 addresses privacy safeguards for persons with disabilities, with Section 9 governing the processing of their personal data.
At the international level, the EU’s recently adopted AI Act mandates transparent, safe, and fair access to AI systems, including accessibility-related measures.
In the US, the Americans with Disabilities Act of 1990 and Section 508 of the Rehabilitation Act of 1973 (added by a 1998 amendment) are the primary legislations promoting digital accessibility in public services.
Challenges in implementing Regulations for AI Accessibility for PWDs
Defining the term ‘inclusive AI’ is itself a challenge: if the core concept is left undefined, building tools to address the issue becomes a problem in its own right. The rapid pace of tech and AI development has often outpaced legal frameworks, creating enforcement gaps. Countries like Canada and tech industry giants like Microsoft and Google are leading forces behind accessible AI innovations, with frameworks that focus on AI ethics, inclusivity, and collaboration with disability rights groups.
India’s efforts toward inclusive AI include the redesign of the Sugamya Bharat app. The app was created to assist PWDs and the elderly, and it will now incorporate AI features specifically to assist its intended users.
Though AI development offers opportunities for inclusivity, unregulated development can be risky. Regulation plays a critical role in ensuring that AI-driven solutions prioritise inclusivity, fairness, and accessibility, harnessing AI’s potential to empower PWDs and contribute to a more inclusive society.
Conclusion
AI development can offer PWDs unprecedented independence and accessibility in leading their lives. Developing AI with inclusivity and fairness in mind must be prioritised: AI that is free from bias, combined with robust regulatory frameworks, is essential to ensure that AI serves everyone equitably. Collaborations between tech developers, policymakers, and disability advocates need to be supported and promoted to build accessible AI systems, which will in turn help bridge the accessibility gaps faced by PWDs. As AI continues to evolve, maintaining a steadfast commitment to inclusivity will be crucial in preventing marginalisation and advancing true technological progress for all.
References
- https://www.business-standard.com/india-news/over-1-4k-accessibility-related-complaints-filed-on-govt-app-75-solved-124090800118_1.html
- https://www.forbes.com/councils/forbesbusinesscouncil/2023/06/16/empowering-individuals-with-disabilities-through-ai-technology/
- https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities
- https://blogs.microsoft.com/on-the-issues/2018/05/07/using-ai-to-empower-people-with-disabilities/