#FactCheck - "Deepfake video falsely circulated as that of a Syrian prisoner who saw sunlight for the first time in 13 years"
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate the prisoner’s facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner has been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claim:
A viral video claims to show a Syrian prisoner seeing sunlight for the first time in 13 years.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.
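The keyframe step described above can be sketched in code. The snippet below is a minimal illustration, not the research team's actual tooling: it samples evenly spaced frames from a video so that each frame can be fed into a reverse-image search such as Google Lens. OpenCV (`cv2`, an assumed third-party dependency) is only needed for actual decoding; the frame-selection logic is plain Python.

```python
# Sketch: pick evenly spaced keyframes from a video for reverse-image
# search. Frame selection is pure Python; cv2 is only used to decode.

def keyframe_indices(total_frames: int, n_samples: int) -> list[int]:
    """Pick n_samples frame indices spread evenly across the video."""
    if total_frames <= 0 or n_samples <= 0:
        return []
    n = min(n_samples, total_frames)
    step = total_frames / n
    return [int(i * step) for i in range(n)]

def extract_keyframes(path: str, n_samples: int = 8) -> list[str]:
    """Decode the chosen frames and save them as JPEGs (requires cv2)."""
    import cv2  # third-party: opencv-python (assumed dependency)
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for idx in keyframe_indices(total, n_samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            out = f"keyframe_{idx}.jpg"
            cv2.imwrite(out, frame)
            saved.append(out)
    cap.release()
    return saved

print(keyframe_indices(300, 4))  # e.g. [0, 75, 150, 225]
```

Each saved keyframe can then be uploaded to a reverse-image search to trace the original footage.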

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools such as TrueMedia confirm that the video was digitally manipulated using AI technology. Furthermore, no reliable source supports the claim. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading
Introduction
On the precipice of a new domain of existence, the metaverse emerges as a digital cosmos, an expanse where the horizon is not sky, but a limitless scope for innovation and imagination. It is a sophisticated fabric woven from the threads of social interaction, leisure, and an accelerated pace of technological progression. This new reality, a virtual landscape stretching beyond the mundane encumbrances of terrestrial life, heralds an evolutionary leap where the laws of physics yield to the boundless potential inherent in our creativity. Yet, the dawn of such a frontier does not escape the spectre of an age-old adversary—financial crime—the shadow that grows in tandem with newfound opportunity, seeping into the metaverse, where crypto-assets are no longer just an alternative but the currency du jour, dazzling beacons for both legitimate pioneers and shades of illicit intent.
The metaverse, by virtue of its design, is a canvas for the digital repaint of society—a three-dimensional realm where the lines between immersive experiences and entertainment blur, intertwining with surreal intimacy within this virtual microcosm. Donning headsets like armor against the banal, individuals become avatars; digital proxies that acquire the ability to move, speak, and perform an array of actions with an ease unattainable in the physical world. Within this alternative reality, users navigate digital topographies, with experiences ranging from shopping in pixelated arcades to collaborating in virtual offices; from witnessing concerts that defy sensory limitations to constructing abodes and palaces from mere codes and clicks—an act of creation no longer beholden to physicality but to the breadth of one's ingenuity.
The Crypto Assets
The lifeblood of this virtual economy pulsates through crypto-assets. These digital tokens represent value or rights held on distributed ledgers—a technology like blockchain, which serves as both a vault and a transparent tapestry, chronicling the pathways of each digital asset. To hop onto the carousel of this economy requires a digital wallet—a storeroom and a gateway for the acquisition and trade of these virtual valuables. Cryptocurrencies, together with NFTs—Non-Fungible Tokens—have accelerated from obscure digital curios to precious artifacts. According to blockchain analytics firm Elliptic, an astonishing sum surpassing US$100 million in NFTs was stolen between July 2021 and July 2022. This rampant theft underlines the captivating allure of these virtual certificates, which do not merely represent art, music, and gaming assets but embody their very soul.
Yet, as the metaverse burgeons, so does the complexity and diversity of financial transgressions. From phishing to sophisticated fraud schemes, criminals craft insidious simulacrums of legitimate havens, aiming to drain the crypto-assets of the unwary. In the preceding year, a daunting figure rose to prominence—the vanishing of US$14 billion worth of crypto-assets, lost to the abyss of deception and duplicity. Hence, social engineering emerges from the shadows, a sort of digital chicanery that preys not upon weaknesses of the system, but upon the psychological vulnerabilities of its users—scammers adorned in the guise of authenticity, extracting trust and assets with Machiavellian precision.
The New Wave of Fincrimes
Extending their tentacles further, perpetrators of cybercrime exploit code vulnerabilities, engage in wash trading to obscure the trails of money laundering, meander through sanctions evasion, and even dare to fund activities that send ripples of terror across the physical and virtual divide. The intricacies of smart contracts and the decentralised nature of these worlds, designed to be bastions of innovation, morph into paths paved for misuse and exploitation. The openness of blockchain transactions, the transparency that should act as a deterrent, becomes a paradox, a double-edged sword for the law enforcement agencies tasked with delineating the networks of faceless adversaries.
Addressing financial crime in the metaverse is a Herculean labour, requiring an orchestra of efforts—harmonious, synchronised—from individual users to mammoth corporations, from astute policymakers to vigilant law enforcement bodies. Users must furnish themselves with critical awareness, fortifying their minds against the siren calls that beckon impetuous decisions, spurred by the anxiety of falling behind. Enterprises, the architects and custodians of this digital realm, are impelled to collaborate with security specialists, to probe their constructs for weak seams, and to reinforce their bulwarks against the sieges of cyber onslaughts. Policymakers venture onto a tightrope walk, balancing the impetus for innovation against the gravitas of robust safeguards—a conundrum played out on the global stage, as epitomised by the European Union's strides to forge cohesive frameworks to safeguard this new vessel of human endeavour.
The Austrian Example
Consider the case of Austria, where the tapestry of laws entwining crypto-assets spans a gamut of criminal offences, from data breaches to the complex webs of money laundering and the financing of dark enterprises. Users and corporations alike must become cartographers of local legislation, charting their ventures and vigilances within the volatile seas of the metaverse.
Upon the sands of this virtual frontier, we must not forget: that the metaverse is more than a hive of bits and bandwidth. It crystallises our collective dreams, echoes our unspoken fears, and reflects the range of our ambitions and failings. It stands as a citadel where the ever-evolving quest for progress should never stray from the compass of ethical pursuit. The cross-pollination of best practices, and the solidarity of international collaboration, are not simply tactics—they are imperatives engraved with the moral codes of stewardship, guiding us to preserve the unblemished spirit of the metaverse.
Conclusion
The clarion call of the metaverse invites us to venture into its boundless expanse, to savour its gifts of connection and innovation. Yet, on this odyssey through the pixelated constellations, we harness vigilance as our star chart, mindful of the mirage of morality that can obfuscate and lead astray. In our collective pursuit to curtail financial crime, we deploy our most formidable resource—our unity—conjuring a bastion for human ingenuity and integrity. In this, we ensure that the metaverse remains a beacon of awe, safeguarded against the shadows of transgression, and celebrated as a testament to our shared aspiration to venture beyond the realm of the possible, into the extraordinary.
References
- https://www.wolftheiss.com/insights/financial-crime-in-the-metaverse-is-real/
- https://gnet-research.org/2023/08/16/meta-terror-the-threats-and-challenges-of-the-metaverse/
- https://shuftipro.com/blog/the-rising-concern-of-financial-crimes-in-the-metaverse-aml-screening-as-a-solution/

I. Introduction: Why These Amendments Have Been Suggested
The suggested changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, are a much-needed regulatory response to the rapid emergence of synthetic information and deepfakes. These reforms respond to the pressing need to govern risks within the digital ecosystem, rather than being routine regulatory housekeeping.
The Emergence of the Digital Menace
Generative AI tools have made it possible in recent years to produce highly realistic images, videos, audio, and text. Such synthetic media have been abused to portray people in situations they were never in or making statements they never made. The generative AI market is expected to grow at a compound annual growth rate (CAGR) of 37.57% from 2025 to 2031, reaching a market volume of US$400.00bn by 2031. Tight regulatory controls are therefore necessary to curb the high potential for harm in the Indian digital sphere.
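As a quick sanity check on the figures quoted above, one can back-calculate the market size that such a trajectory implies for 2025. The implied base below is our own arithmetic from the quoted CAGR and 2031 volume, not a figure taken from the cited report.

```python
# Back-calculate the implied 2025 market size from the figures quoted
# above (37.57% CAGR over 2025-2031, US$400bn by 2031). The implied
# base is our own arithmetic, not a number from the source.

def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Starting value that grows to final_value at the given CAGR."""
    return final_value / (1 + cagr) ** years

base_2025 = implied_base(400.0, 0.3757, 6)  # 2025 -> 2031 is 6 growth steps
print(f"Implied 2025 market size: ~US${base_2025:.0f} bn")
```

The quoted trajectory thus implies a 2025 base of roughly US$59bn, which gives a sense of the scale of growth the figure assumes.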
The Gap in Law and Institution
The IT Rules, 2021, did not clearly address synthetic content. Although the Information Technology Act, 2000 dealt with identity theft, impersonation and violation of privacy, intermediaries were under no explicit obligation with respect to synthetic media. This left a loophole in enforcement, particularly since AI-generated content could circumvent older moderation systems. These amendments bring India closer to international standards, including the EU AI Act, which requires transparency and labelling of AI-generated content, while adapting those requirements to India's constitutional and digital-ecosystem needs.
II. Explanation of the Amendments
The amendments of 2025 present five alternative changes in the current IT Rules framework, which address various areas of synthetic media regulation.
A. Definitional Clarification: Introducing “Synthetically Generated Information”
Rule 2(1)(wa) Amendment:
The amendments provide an all-inclusive definition of “synthetically generated information”: information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that it may reasonably be perceived to be genuine. This definition is intentionally broad: it is not limited to deepfakes in the strict sense but covers any synthetic media that has undergone algorithmic manipulation in order to appear authentic.
Expansion of Legal Scope:
Rule 2(1A) also makes it clear that any reference to information in the context of unlawful acts, including the categories listed in Rule 3(1)(b), Rule 3(1)(d), Rule 4(2), and Rule 4(4), should be understood to include synthetically generated information. This is a pivotal interpretative safeguard: intermediaries cannot argue that synthetic versions of illegal material fall outside the regulation merely because they are algorithmic creations rather than depictions of real events.
B. Safe Harbour Protection and Content Removal Requirements
Amendment to Rule 3(1)(b) - Safe Harbour Clarification:
The amendments add a proviso to Rule 3(1)(b) clarifying that removal of, or disabling of access to, synthetically generated information (or any information falling within the specified categories), done by an intermediary in good faith as part of reasonable efforts or on receipt of a complaint, shall not be considered a breach of Section 79(2)(a) or (b) of the Information Technology Act, 2000. This protection is especially relevant because it shields intermediaries from liability where they moderate synthetic content ahead of a court ruling or government notification.
C. Mandatory Labelling and Metadata Requirements for Intermediaries that Enable the Creation of Synthetic Content
The amendments establish a new due-diligence framework in Rule 3(3) for intermediaries that offer tools to create, generate, modify, or alter synthetically generated information. Two fundamental requirements are laid down:
- The generated information must be prominently labelled or embedded with a permanent, unique metadata or identifier. The label or metadata must:
  - be visibly displayed, or made audible, in a prominent manner on or within the synthetically generated information;
  - cover at least 10% of the surface of the visual display or, for audio content, play during the initial 10% of its duration; and
  - make it immediately identifiable that the information is synthetically generated information created, generated, modified, or altered using the computer resource of the intermediary.
- The intermediary must not enable the modification, suppression or removal of such label, permanent unique metadata or identifier, by whatever name called.
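The sizing arithmetic behind the 10% prominence rule can be sketched as follows. This is a minimal illustration of the requirement as described above, not an official compliance implementation; actual rendering with an image library is left out.

```python
# Sketch of the 10% prominence rule: a full-width banner whose height
# is chosen so it covers at least 10% of the visual surface, and, for
# audio, a label audible during the initial 10% of the duration.
import math

MIN_COVERAGE = 0.10  # "at least 10% of the surface of the visual display"

def banner_height(image_height: int) -> int:
    """Pixel height of a full-width banner covering >= 10% of the image.

    For a full-width banner the coverage ratio is simply
    banner_height / image_height, so the image width cancels out.
    """
    return math.ceil(MIN_COVERAGE * image_height)

def coverage(image_height: int) -> float:
    """Fraction of the visual surface the banner actually covers."""
    return banner_height(image_height) / image_height

def audio_label_seconds(duration_s: float) -> float:
    """Seconds of label audio: the initial 10% of the duration."""
    return MIN_COVERAGE * duration_s

print(banner_height(1080), audio_label_seconds(60.0))  # 108 6.0
```

Rounding up with `math.ceil` ensures the "at least 10%" threshold is met even when the exact fraction is not a whole number of pixels.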
D. Significant Social Media Intermediaries: Pre-Publication Verification Responsibilities
The amendments introduce a three-step verification mechanism under Rule 4(1A) for Significant Social Media Intermediaries (SSMIs): before any content is displayed, uploaded or published on their computer resource, three steps must be followed.
Step 1 - User Declaration: SSMIs must require users to declare whether the material they are posting is synthetically generated. This places the initial burden on users.
Step 2 - Technical Verification: To check whether the user's declaration is accurate, SSMIs must deploy reasonable technical means, such as automated tools or other mechanisms. This duty is contextual, depending on the nature, format and source of the content; it does not allow intermediaries to escape responsibility, while recognising that not every type of content can be verified to the same standard.
Step 3 - Prominent Labelling: Where synthetic origin is confirmed by user declaration or technical verification, SSMIs must display a notice or label prominently visible to users before publication.
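The three steps above can be sketched as a simple pipeline. The detector below is a stub standing in for whatever "reasonable technical means" a platform actually deploys; all names here are illustrative, not taken from the Rules.

```python
# Sketch of the Rule 4(1A) flow: user declaration, technical
# verification, prominent labelling. The detector is a stub; a real
# SSMI would plug in its own synthetic-media classifier.
from dataclasses import dataclass

@dataclass
class Upload:
    content: str
    user_declared_synthetic: bool

def detector_stub(content: str) -> bool:
    """Placeholder for an automated synthetic-media classifier."""
    return "synthetic" in content.lower()

def verify_before_publish(upload: Upload, detect=detector_stub) -> dict:
    # Step 1: user declaration.
    declared = upload.user_declared_synthetic
    # Step 2: technical verification ("reasonable technical means").
    detected = detect(upload.content)
    # Step 3: prominent label if either signal fires.
    synthetic = declared or detected
    return {
        "synthetic": synthetic,
        "label": "AI-generated content" if synthetic else None,
        "publish": True,  # publish with label; blocking is a policy choice
    }
```

Note that the label is applied when *either* signal fires, mirroring the rule that verification by declaration or by technical means each suffices.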
The amendments strengthen accountability by providing that an intermediary will be deemed to have failed its due diligence where it is established that it knowingly permitted, promoted or failed to act upon synthetically generated information in contravention of these requirements. This introduces a knowledge element, so that liability attaches to deliberate tolerance of violations rather than to every inadvertent failure.
An explanation clause makes it clear that SSMIs must also deploy reasonable and proportionate technical measures to verify user declarations and must not keep synthetic content published without an adequate declaration or label. This removes any confusion about the intermediaries' role in verifying declarations.
III. Attributes of The Amendment Framework
- Precision in Balancing Innovation and Accountability
The amendments commendably balance two extreme regulatory postures, neither prohibiting synthetic media outright nor allowing it to proliferate unchecked. They recognise the legitimate use of synthetic media in entertainment, education, research and artistic expression by adopting a transparency and traceability mandate that preserves innovation while ensuring accountability.
- Express Intermediary Liability and a Knowledge-Based (Scienter) Standard
Rule 4(1A) introduces a highly significant deeming rule: where an intermediary knowingly permits, or fails to act against, synthetic content in violation of the rules, it will be deemed to have failed to comply with the due diligence provisions. This closes the loophole of wilfully lax supervision, under which intermediaries could otherwise plead ignorance. The scienter standard encourages material investment in detection tools and moderation mechanisms, offering security to platforms that maintain sound systems even when those tools occasionally fail to catch violations.
- Clarity Through Definition and Interpretive Guidance
The careful definition of “synthetically generated information” and the interpretive guidance in Rule 2(1A) are an admirable attempt to resolve the ambiguity of the previous regulatory framework. Instead of forcing reliance on conflicting case law or regulatory direction, the amendments set specific definitional limits. The deliberately broad formulation (artificially or algorithmically created, generated, modified or altered) ensures the framework cannot be evaded through semantic games about what counts as truly synthetic content versus a slight algorithmic alteration.
- Safe Harbour Protection that Encourages Proactive Moderation
The safe harbour clarification in the Rule 3(1)(b) amendment clearly protects intermediaries that voluntarily remove synthetic content without waiting for a court order or government notification. This is an important incentive scheme that prompts platforms to implement sound self-regulation. Without such protection, platforms might rationally adopt a passive compliance posture, deleting content only under pressure from an external authority; with it, they can act early and be more effective in keeping users safe from harmful synthetic media.
IV. Conclusion
The 2025 amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules propose a structured, transparent, and accountable approach to curbing the rising problems of synthetic media and deepfakes. The amendments address long-standing regulatory and interpretative gaps around what counts as synthetically generated information, intermediary liability, and mandatory labelling and metadata requirements. Safe-harbour protection will encourage proactive moderation, and a scienter-based liability rule will not permit intermediaries to escape liability where they are aware of non-compliance yet tolerate it. The pre-publication verification requirement for Significant Social Media Intermediaries assigns responsibility to users and due diligence to platforms. Overall, the amendments strike a reasonable balance between innovation and regulation, make the process more transparent through clear definitions, promote responsible platform conduct, and position India among the leaders in synthetic media regulation. Together, they strengthen authenticity, user protection, and transparency across India's digital ecosystem.
V. References
- https://www.statista.com/outlook/tmo/artificial-intelligence/generative-ai/worldwide

A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
Cyber Peace Foundation’s research found the viral claim to be false. Our research revealed that the image of Abhishek Bachchan and Aishwarya Rai is not real, but AI-generated, and is being misleadingly shared as a genuine photograph.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated across social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, several visual inconsistencies raised suspicion about it being artificially generated. To confirm this, we scanned the image using the AI detection tool Sightengine. According to the tool’s analysis, the image was found to be 84 percent AI-generated.

Additionally, we scanned the same image using another AI detection tool, HIVE Moderation. The results showed an even stronger indication, classifying the image as 99 percent AI-generated.
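When combining several detector scores, as done above, a fact-checker might reduce them to a single verdict along these lines. The threshold and verdict labels below are illustrative assumptions of ours, not values used by Sightengine or HIVE Moderation.

```python
# Combine per-tool AI-detection scores (in [0, 1]) into a verdict.
# Threshold and labels are illustrative assumptions, not tool defaults.

AI_THRESHOLD = 0.80  # per-tool score treated as a strong AI signal (assumed)

def verdict(scores: dict[str, float]) -> str:
    """Label an image based on multiple AI-detection scores."""
    strong = [tool for tool, s in scores.items() if s >= AI_THRESHOLD]
    if len(strong) >= 2:
        return "likely AI-generated (corroborated)"
    if len(strong) == 1:
        return "possibly AI-generated (single tool)"
    return "no strong AI signal"

print(verdict({"sightengine": 0.84, "hive": 0.99}))
# -> likely AI-generated (corroborated)
```

Requiring two independent tools to agree, as in the fact-check above, reduces the chance of acting on a single detector's false positive.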

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.