#FactCheck - "Deepfake video falsely circulated as that of a Syrian prisoner who saw sunlight for the first time in 13 years"
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate the prisoner's facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner had been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claims:
A viral video falsely claims that a Syrian prisoner is seeing sunlight for the first time in 13 years.


Factcheck:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools such as TrueMedia confirm that the video was digitally manipulated using AI technology. Furthermore, there is no supporting information in any reliable sources. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
Intricate and winding are the passageways of the modern digital age, a place where the reverberations of truth effortlessly blend, yet hauntingly contrast, with the echoes of falsehood. Within this complex realm, the World Economic Forum (WEF) has illuminated the darkened corners with its powerful spotlight, revealing the festering, insidious network of misinformation and disinformation that snakes through the virtual and physical worlds alike. This malignant duo of misinformation and disinformation has been gravely identified by the WEF's Global Risks Report 2024 as the most formidable and immediate threat to our collective well-being.
Published with the solemn tone suitable for the prelude to a grand international gathering such as the Annual Summit in Davos, the report presents a vivid tableau of our shared global landscape, one dominated by the treacherous pitfalls of deceit and unverified claims. These perils, if unrecognised and unchecked by societal checks and balances, possess the force to rip apart the intricate tapestry of our liberal institutions, shaking the pillars of democracies and endangering the vulnerable fabric of social cohesion.
Election Mania
We find ourselves perched on the edge of a future in which the voices of nearly three billion human beings will make their mark on the annals of history, within the varied electoral processes of nations such as Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom, and the United States. However, the spectre of misinformation threatens to corrode the integrity of the governing entities that will emerge from these democratic processes. The warning issued by the WEF is unambiguous: we are flirting with the possibility of disorder and turmoil, where the unchecked dispersion of fabrications and lies could kindle flames of unrest, manifesting in violent protests, hate-driven crimes, civil unrest, and the scourge of terrorism.
Derived from the collective wisdom of over 1,400 experts in global risk, esteemed policymakers, and industry leaders, the report crafts a sobering depiction of our world's journey. It paints an ominous future that increasingly endows governments with formidable power—to brandish the weapon of censorship, to unilaterally declare what is deemed 'true' and what ought to be obscured or eliminated in the virtual world of sharing information. This trend signals a looming potential for wider and more comprehensive repression, hindering the freedoms traditionally associated with the Internet, journalism, and unhindered access to a panoply of information sources—vital fora for the exchange of ideas and knowledge in a myriad of countries across the globe.
Prominence of AI
When the gaze of the report extends further over a decade-long horizon, the prominence of environmental challenges such as the erosion of biodiversity and alarming shifts in the Earth's life-support systems ascend to the pinnacle of concern. Yet, trailing closely, the digital risks continue to pulsate—perpetuated by the distortions of misinformation, the echoing falsities of disinformation, and the unpredictable repercussions stemming from the utilization and, at times, the malevolent deployment of artificial intelligence (AI). These ethereal digital entities, far from being illusory shades, are the precursors of a disintegrating world order, a stage on which regional powers move to assert and maintain their influence, instituting their own unique standards and norms.
The prophecies set forth by the WEF should not be dismissed as mere academic conjecture; they are instead a trumpet's urgent call to mobilize. With a startling 30 percent of surveyed global experts bracing for the prospect of international calamities within the mere span of the coming two years, and an even more significant portion—nearly two-thirds—envisaging such crises within the forthcoming decade, it is unmistakable that the time to confront and tackle these looming risks is now. The clarion is sounding, and the message is clear: inaction is no longer an available luxury.
Maldives and India Row
To pluck precise examples from the boundless field of misinformation, we might observe the Lakshadweep-Malé incident wherein an ordinary boat accident off the coast of Kerala was grotesquely transformed into a vessel for the far-reaching tendrils of fabricated narratives, erroneously implicating Lakshadweep in the spectacle. Similarly, the tension-laden India-Maldives diplomatic exchange becomes a harrowing testament to how strained international relations may become fertile ground for the rampant spread of misleading content. The suspension of Maldivian deputy ministers following offensive remarks, the immediate tumult that followed on social media, and the explosive proliferation of counterfeit news targeting both nations paint a stark and intricate picture of how intertwined are the threads of politics, the digital platforms of social media, and the virulent propagation of falsehoods.
Yet, these are mere fragments within the extensive and elaborate weave of misinformation that threatens to enmesh our globe. As we venture forth into this dangerous and murky topography, it becomes our collective responsibility to maintain a sense of heightened vigilance, to consistently question and verify the sources and content of the information that assails us from all directions, and to cultivate an enduring culture anchored in critical thinking and discernment. The stakes are colossal—for it is not merely truth itself that we defend, but rather the underlying tenets of our societies and the sanctity of our cherished democratic institutions.
Conclusion
In this fraught era, marked indelibly by uncertainty and perched precariously on the cusp of numerous pivotal electoral ventures, let us refuse the role of passive bystanders to the unraveling of our collective reality. We must embrace our role as active participants in the relentless pursuit of truth, fortified with the stark awareness that our entwined futures rest precariously on our willingness and ability to distinguish the veritable from the spurious within the perilous lattice of misinformation. We must continually remind ourselves that, in the quest for a stable and just global order, the unerring discernment of fact from fiction becomes not only an act of intellectual integrity but a deed of civic and moral imperative.
References
- https://www.businessinsider.in/politics/world/election-fuelled-misinformation-is-serious-global-risk-in-2024-says-wef/articleshow/106727033.cms
- https://www.deccanchronicle.com/nation/current-affairs/100124/misinformation-tops-global-risks-2024.html
- https://www.msn.com/en-in/news/India/fact-check-in-lakshadweep-male-row-kerala-boat-accident-becomes-vessel-for-fake-news/ar-AA1mOJqY
- https://www.boomlive.in/news/india-maldives-muizzu-pm-modi-lakshadweep-fact-check-24085
- https://www.weforum.org/press/2024/01/global-risks-report-2024-press-release/

The Rise of Tech Use Amongst Children
Technology today has become an invaluable resource for children: a means to research issues, stay informed about events, gather data, and share views and experiences with others. Technology is no longer limited to certain age groups or professions; children today use it for learning, entertainment, engaging with friends, online games and much more. With increased digital access, children are also exposed to online mis/disinformation and other forms of cybercrime, far more than their parents, caregivers, and educators ever were. Children are particularly vulnerable to mis/disinformation because of their still-evolving maturity and cognitive capacities: they simply do not yet possess the discernment and caution required to navigate the Internet safely. They are active users of online resources, and their presence on social media is an important factor in social, political and civic engagement, but young people often lack the cognitive and emotional capacity needed to distinguish between reliable and unreliable information. As a result, they can be targets of mis/disinformation. A UNICEF survey in 10 countries [1] reveals that up to three-quarters of children reported feeling unable to judge the veracity of the information they encounter online.
Social media has become a crucial part of children's lives, with children spending significant time on digital platforms such as YouTube, Facebook, Instagram and more. These platforms act as sources of news, educational content, entertainment, peer communication and more. They host a wide variety of content across a diverse range of subject matters, and each platform's content and privacy policies differ. Despite age restrictions under the Children's Online Privacy Protection Act (COPPA) and other applicable laws, it is easy for children to falsify their birth date or use their parents' accounts to access content that might not be age-appropriate.
The Impact of Misinformation on Children
In virtual settings, inaccurate information can come in the form of text, images, or videos shared through traditional and social media channels. Online misinformation is a significant cause for concern, especially for children, because it can cause anxiety, damage self-esteem, shape beliefs, and skew their worldview. It can distort children's understanding of reality, hinder their critical thinking skills, and cause confusion and cognitive dissonance. The growing infodemic can also result in information overload. Misinformation can influence children's social interactions, leading to misunderstandings, conflicts, and mistrust among peers. Children from low-literacy backgrounds are more susceptible to fabricated content. Mis/disinformation can exacerbate social divisions amongst peers and lead to unwanted behavioural patterns. Sometimes children themselves can unwittingly share misinformation. It is therefore important to educate and empower children to build cognitive defenses against online misinformation risks, promote media literacy skills, and equip them with the tools needed to critically evaluate online information.
CyberPeace Policy Wing Recommendations
- Role of Parents & Educators to Build Cognitive Defenses
One way parents shape their children's values, beliefs and actions is through modelling. Children observe how their parents use technology, handle challenging situations, and make decisions. For example, parents who demonstrate honesty, encourage healthy use of social media, and show kindness and empathy are more likely to raise children who hold these qualities in high regard. Parents and educators thus play an important role in shaping the minds and behaviours of their young charges, in both offline and online settings. It is important for them to pay close attention to how online content consumption affects their child's cognitive skills, and to teach children about authentic sources of information. This involves instructing children to rely on credible sources when researching any topic, and to use verification mechanisms to test suspected information. This may sound like a challenging ideal to meet, but the earlier we teach children prebunking and debunking strategies and the ability to differentiate between fact and misleading information, the sooner we can help them build cognitive defenses so that they may use the Internet safely. It is therefore of paramount importance that parents and educators encourage children to question the validity of information, verify sources, and critically analyse content. Developing these skills is essential for navigating the digital world effectively and making informed decisions.
- The Role of Tech & Social Media Companies to Fortify their Steps in Countering Misinformation
It is worth noting that all major tech/social media companies have privacy policies in place to discourage the spread of harmful content or misinformation. Social media platforms have already initiated efforts to counter misinformation by introducing features such as adding context to content, labelling content, AI watermarks, and collaboration with civil society organisations. In light of this, social media platforms must prioritise both the design and the practical implementation of policies to counter misinformation. These strategies can be further improved through government support and regulatory controls. It is recommended that social media platforms increase their efforts against the growing spread of online mis/disinformation and apply advanced countermeasures, including filtering, automated removal, detection and prevention, watermarking, stronger reporting mechanisms, providing context for suspected content, and promoting authenticated, reliable sources of information.
Social media platforms should consider developing children-specific help centres that host educational content in attractive, easy-to-understand formats so that children can learn about misinformation risks and tactics, how to spot red flags and how to increase their information literacy and protect themselves and their peers. Age-appropriate, attractive and simple content can go a long way towards fortifying young minds and making them aware and alert without creating fear.
- Laws and Regulations
It is important that the government and social media platforms work in sync to counteract misinformation. The government must consult with the concerned platforms and enact rules and regulations which strengthen the platforms' age verification mechanisms at the sign-up/account creation stage whilst also respecting user privacy. Content moderation, removal of harmful content, and strengthened reporting mechanisms are all important factors which must be prioritised at both the regulatory level and the platform operational level. Additionally, to promote healthy and responsible use of technology by children, the government should collaborate with other institutions to design information literacy programs at the school level. The government must make it a key priority to work with civil society organisations and expert groups that run programs to fight misinformation and co-create a safe cyberspace for everyone, including children.
- Expert Organisations and Civil Societies
Cybersecurity experts and civil society organisations possess a unique blend of large-scale impact potential and technical expertise. They have the ability to educate and empower huge numbers of people, along with the skills and policy acumen needed not just to make people aware of the problem but to teach them how to solve it for themselves. True, sustainable solutions to any social concern only come about when capacity-building and empowerment are at the heart of the initiative. Programs that prioritise resilience, teach prebunking and debunking, and understand the unique concerns, needs and abilities of children, designing solutions accordingly, are best suited to advance the mission of creating a safe digital society.
Final Words
Online misinformation significantly impacts child development and can hinder their cognitive abilities, color their viewpoints, and cause confusion and mistrust. It is important that children are taught not just how to use technology but how to use it responsibly and positively. This education can begin at a very young age and parents, guardians and educators can connect with CyberPeace and other similar initiatives on how to define age-appropriate learning milestones. Together, we can not only empower children to be safe today, but also help them develop into netizens who make the world even safer for others tomorrow.
References:
- [1] Digital misinformation / disinformation and children
- [2] Children's Privacy | Federal Trade Commission

Introduction
Search Engine Optimisation (SEO) is a process through which one can improve website visibility on search engine platforms like Google, Microsoft Bing, etc. There is an implicit understanding that SEO suggestions or the links that are generated on top are the more popular information sources and, hence, are deemed to be more trustworthy. This trust, however, is being misused by threat actors through a process called SEO poisoning.
SEO poisoning is a method used by threat actors to attack users and obtain their information through manipulative techniques that push their desired link, web page, etc., to the top of search engine results. The end goal is to lure the user into clicking and downloading their malware, presented in the garb of legitimate marketing or even as a valid search result.
An active example of attempts at SEO poisoning has been discussed in a report by the Hindustan Times on 11th November, 2024. It highlights that using certain keywords could make a user more susceptible to hacking. Hackers are now targeting people who enter specific words or specific combinations in search engines. According to the report, users who looked up and clicked on links at the top related to the search query “Are Bengal cats legal in Australia?” had details regarding their personal information posted online soon after.
SEO Poisoning - Modus Operandi Of Attack
Attackers rely on certain tactics to carry out SEO poisoning:
- Keyword stuffing - This method involves overloading a webpage with keywords, often irrelevant ones, which helps the false website rank higher.
- Typosquatting- This method involves creating domain names or links similar to the more popular and trusted websites. A lack of scrutiny before clicking would lead the user to download malware, from what they thought was a legitimate site.
- Cloaking - This method operates by showing different content to search engines and to users. While the search engine sees what it assumes to be a legitimate website, the user is exposed to harmful content.
- Private Link Networks- Threat actors create a group of unrelated websites in order to increase the number of referral links, which enables them to rank higher on search engine platforms.
- Article Spinning- This method involves imitating content from other pre-existing, legitimate websites, while making a few minor changes, giving the impression to search engine crawlers of it being original content.
- Sneaky redirects - This method redirects users, without their knowledge, to malicious websites instead of the ones they intended to visit.
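The typosquatting tactic above can be made concrete with a small sketch: one common defensive approach (an assumption here, not a technique named by the source) is to flag domains that sit within a small edit distance of a trusted domain without matching it exactly. The domain names below are hypothetical placeholders.

```python
# Illustrative sketch: flag lookalike (typosquatted) domains by edit distance
# to a list of trusted domains. Domain names are hypothetical examples.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = ["example.com", "mybank.com"]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """A near-miss of a trusted domain (but not an exact match) is suspicious."""
    return any(0 < levenshtein(domain, t) <= max_distance
               for t in TRUSTED_DOMAINS)

print(looks_like_typosquat("examp1e.com"))  # one character off "example.com" -> True
print(looks_like_typosquat("example.com"))  # exact trusted match -> False
```

Real Digital Risk Monitoring tools combine this kind of lexical check with homoglyph detection, newly registered domain feeds, and certificate transparency logs; the sketch only shows the core idea.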
CyberPeace Recommendations
- Employee Security Awareness Training: Security awareness training can help employees familiarise themselves with tactics of SEO poisoning, encouraging them to either spot such inconsistencies early on or even alert the security team at the earliest.
- Tool usage: Companies can use Digital Risk Monitoring tools to catch instances of typosquatting. Endpoint Detection and Response (EDR) tools also help keep an eye on client history and assess user activities during security breaches to figure out the source of the affected file.
- Internal Security Measures: Refer to lists of Indicators of Compromise (IOCs); these include URL lists that flag websites exhibiting suspicious behaviour, and can be consulted to exercise caution. Deploying Web Application Firewalls (WAFs) to detect and mitigate malicious traffic is also helpful.
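The IOC-based measure above can be sketched in a few lines: before a link is followed, its hostname (and each parent domain) is checked against a blocklist of known-bad indicators. The IOC entries below are made-up placeholders, not real indicators, and a production system would pull its list from a threat-intelligence feed rather than hard-code it.

```python
# Minimal sketch: screen a URL's hostname against an IOC blocklist before
# following it. IOC entries here are fictional placeholders.
from urllib.parse import urlparse

IOC_BLOCKLIST = {"malicious-example.net", "bad-redirect.example.org"}

def is_flagged(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, appears in the IOC list."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Build every suffix of the hostname, e.g. "cdn.evil.org" -> {"cdn.evil.org", "evil.org", "org"}
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & IOC_BLOCKLIST)

print(is_flagged("https://malicious-example.net/download"))  # direct IOC hit -> True
print(is_flagged("https://cdn.bad-redirect.example.org/x"))  # parent domain hit -> True
print(is_flagged("https://example.com"))                     # clean -> False
```

Checking parent domains matters because attackers often host payloads on throwaway subdomains of a flagged domain.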
Conclusion
The nature of SEO poisoning is such that it inherently promotes the spread of misinformation and facilitates cyberattacks. By misrepresenting the legitimacy of links and the content they display in order to lure users into clicking on them, it puts personal information under threat. Because people trust their favoured search engines and awareness of such tactics is low, one must exercise caution when clicking on links that appear popular, even when they are served up by trusted search engines.
References
- https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cyber-attack/what-is-seo-poisoning/
- https://www.vectra.ai/topics/seo-poisoning
- https://www.techtarget.com/whatis/definition/search-poisoning
- https://www.blackberry.com/us/en/solutions/endpoint-security/ransomware-protection/seo-poisoning
- https://www.coalitioninc.com/blog/seo-poisoning-attacks
- https://www.sciencedirect.com/science/article/abs/pii/S0160791X24000186
- https://www.repindia.com/blog/secure-your-organisation-from-seo-poisoning-and-malvertising-threats/
- https://www.hindustantimes.com/technology/typing-these-6-words-on-google-could-make-you-a-target-for-hackers-101731286153415.html