#FactCheck - "Deep fake video falsely circulated as of a Syrian prisoner who saw sunlight for the first time in 13 years”
Executive Summary:
A viral online video claims to show a Syrian prisoner experiencing sunlight for the first time in 13 years. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to manipulate the prisoner’s facial expressions and surroundings. The original footage is unrelated to the claim that the prisoner has been held in solitary confinement for 13 years. The assertion that this video depicts a Syrian prisoner seeing sunlight for the first time is false and misleading.

Claim: A viral video shows a Syrian prisoner seeing sunlight for the first time in 13 years.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes from the video. The search led us to various legitimate sources featuring real reports about Syrian prisoners, but none of them included any mention of such an incident. The viral video exhibited several signs of digital manipulation, prompting further investigation.
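
For readers who want to replicate this first step, the sketch below shows one common way to extract keyframes from a video so they can be fed into a reverse image search such as Google Lens. It is a generic illustration using OpenCV, not the research team's actual tooling, and the filename is a hypothetical placeholder.

```python
# Illustrative sketch: extract keyframes from a video at fixed intervals so they
# can be run through a reverse image search (e.g. Google Lens). This is a generic
# example, not the exact tooling used by the research team.
import cv2  # OpenCV: pip install opencv-python


def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save one frame every `every_n_seconds` and return the saved file paths."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))

    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            out_path = f"keyframe_{index:06d}.jpg"
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        index += 1

    cap.release()
    return saved


if __name__ == "__main__":
    # The filename is hypothetical; replace it with the video under investigation.
    for path in extract_keyframes("viral_video.mp4"):
        print("Saved keyframe for reverse image search:", path)
```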

We used AI detection tools, such as TrueMedia, to analyze the video. The analysis confirmed with 97.0% confidence that the video was a deepfake. The tools identified “substantial evidence of manipulation,” particularly in the prisoner’s facial movements and the lighting conditions, both of which appeared artificially generated.


Additionally, a thorough review of news sources and official reports related to Syrian prisoners revealed no evidence of a prisoner being released from solitary confinement after 13 years, or experiencing sunlight for the first time in such a manner. No credible reports supported the viral video’s claim, further confirming its inauthenticity.
Conclusion:
The viral video claiming that a Syrian prisoner is seeing sunlight for the first time in 13 years is a deepfake. Investigations using AI detection tools, including Hive AI, confirm that the video was digitally manipulated using AI technology. Furthermore, there is no supporting information in any reliable sources. The CyberPeace Research Team confirms that the video was fabricated, and the claim is false and misleading.
- Claim: Syrian prisoner sees sunlight for the first time in 13 years, viral on social media.
- Claimed on: Facebook and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

The rapid innovation of technology and its resultant proliferation in India has integrated businesses that market technology-based products with commerce. Consumer habits have shifted from traditional to technology-based products, with many consumers opting for smart devices, online transactions and online services. This migration has increased the potential for data breaches, product defects, misleading advertisements and unfair trade practices.
The need to regulate the technology-based commercial industry arises against the backdrop of the various threats that technologies pose, particularly to data. Most devices track consumer behaviour without the consumer's authorisation. Additionally, products are often defective or complex to use, and the configuration process may prove lengthy, with vague warranty terms.
Consumers also face difficulties in the technology service sector, even while attempting to purchase a product. These include vendor lock-in (whereby a consumer finds it difficult to migrate from one vendor to another), dark patterns (deceptive strategies and design practices that mislead users and violate consumer rights), ethical concerns, and so on.
Against this backdrop, consumer laws are now playing catch-up to adequately cater to the new consumer rights that come with technology. Consumer laws must evolve to become complementary with other laws and legislation that govern and safeguard individual rights. This includes emphasising compliance with data privacy regulations, creating rules for ancillary activities such as advertising standards, and setting guidelines for both products and their sellers/manufacturers.
The Legal Framework in India
Currently, consumer laws in India, while not technology-specific, are somewhat adequate. The Consumer Protection Act 2019 (“Act”) protects the rights of consumers in India. It places liability on manufacturers, sellers and service providers for any harm caused to a consumer by faulty or defective products. As a result, manufacturers and sellers of ‘Internet & technology-based products’ are brought under the ambit of this Act. The Consumer Protection Act 2019 may also be viewed in light of the Digital Personal Data Protection Act 2023 (DPDP Act), which mandates the security of an individual's digital personal data. Provisions of the DPDP Act such as those pertaining to mandatory consent, purpose limitation, data minimisation, mandatory security measures by organisations, data localisation, accountability and compliance can be applied to information generated by and for consumers.
Multiple regulatory authorities and departments have also tasked themselves with issuing guidelines that embody the principle of caveat venditor. To this effect, the Networks & Technologies (NT) wing of the Department of Telecommunications (DoT), on 2 March 2023, issued the Advisory Guidelines to M2M/IoT stakeholders for securing consumer IoT (“Guidelines”), aiming for M2M/IoT (i.e. Machine-to-Machine/Internet of Things) compliance with safety and security standards in order to protect users and the networks that connect these devices. The comprehensive Guidelines suggest the removal of universal default passwords and usernames such as “admin” that come preprogrammed with new devices, and mandate that password resets be performed only after user authentication. Web services associated with the product are required to use multi-factor authentication and must not expose any unnecessary user information prior to authentication. Further, M2M/IoT stakeholders are required to provide a public point of contact for reporting vulnerabilities and security issues. Such stakeholders must also ensure that software components can be updated in a secure and timely manner. An end-of-life policy is to be published for end-point devices, stating the assured duration for which a device will receive software updates.
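
As a rough, hypothetical illustration of what the "no universal default passwords, reset only after authentication" requirements can look like in device software, the sketch below models a per-device random factory password and a reset flow that is only permitted after successful authentication. The class and method names are invented for illustration and are not taken from the Guidelines.

```python
# Minimal sketch of the "no universal default credentials" idea: a device ships
# with a unique random password and only allows a password reset after the user
# has authenticated. Names here are hypothetical, not from the DoT Guidelines.
import hashlib
import os
import secrets


def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


class DeviceAuth:
    def __init__(self) -> None:
        # Each unit ships with a unique random password (never "admin"/"admin").
        self.salt = os.urandom(16)
        self.factory_password = secrets.token_urlsafe(12)  # printed on the device label
        self.password_hash = hash_password(self.factory_password, self.salt)
        self.must_reset = True  # forces a reset on first authenticated login

    def login(self, password: str) -> bool:
        return secrets.compare_digest(self.password_hash, hash_password(password, self.salt))

    def change_password(self, old: str, new: str) -> None:
        # Password reset is only allowed *after* successful authentication.
        if not self.login(old):
            raise PermissionError("authentication required before reset")
        if len(new) < 12 or new == old:
            raise ValueError("new password too weak")
        self.salt = os.urandom(16)
        self.password_hash = hash_password(new, self.salt)
        self.must_reset = False


auth = DeviceAuth()
auth.change_password(auth.factory_password, "correct-horse-battery-staple")
print("Device ready:", not auth.must_reset)
```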
The involvement of regulatory authorities depends on the nature of the technology products; a single product or technical consumer threat may attract multiple guidelines. The Advertising Standards Council of India (ASCI) noted that cryptocurrency and related products were the category most prone to violations and fraud. In an attempt to protect consumer safety, it introduced guidelines to regulate the advertising and promotion of virtual digital asset (VDA) exchange and trading platforms and associated services as a necessary interim measure in February 2022. These mandate that all VDA ads carry the stipulated disclaimer “Crypto products and NFTs are unregulated and can be highly risky. There may be no regulatory recourse for any loss from such transactions.” in a prominent and unmissable manner.
Further, authorities such as the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI) also issue cautionary notes to consumers and investors against crypto trading and ancillary activities. Even bodies like the Bureau of Indian Standards (BIS) act as complementary authorities, emphasising product quality, including that of electronic products, by mandating compliance with prescribed standards.
It is worth noting that ASCI has proactively responded to new-age, technology-induced threats to consumers by attempting to tackle “dark patterns” through its existing Code on Misleading Ads (“Code”), which applies across media, including online advertising on websites and social media handles. ASCI noted that 29% of advertisements were disguised ads by influencers, which is a form of dark pattern. Although the existing Code addressed some issues, a need was felt to encompass other dark patterns.
Perhaps in response, the Central Consumer Protection Authority in November 2023 released guidelines addressing “dark patterns” under the Consumer Protection Act 2019 (“Guidelines”). The Guidelines define dark patterns as deceptive strategies and design practices that mislead users and violate consumer rights. These may include creating false urgency, scarcity or popularity around a product, basket sneaking (whereby additional services are added automatically on purchase of a product or service), confirm shaming (statements such as “I will stay unsecured” when opting out of travel insurance while booking transportation tickets), and so on. The Guidelines also cater to several data privacy considerations; for example, they bar platforms from nudging consumers into divulging more personal information than necessary while making purchases, such as through difficult language and complex privacy settings, thereby ensuring compliance of technology product sellers and e-commerce platforms/vendors with data privacy laws in India. It is to be noted that the Guidelines are applicable to all platforms that systematically offer goods and services in India, as well as to advertisers and sellers.
Conclusion
Consumer laws for technology-based products in India play a pivotal role in safeguarding the rights and interests of individuals in an era marked by rapid technological advancement. These legislative frameworks, spanning facets such as data protection, electronic transactions and product liability, establish a regulatory equilibrium that addresses the nuanced challenges of the digital age. As the digital landscape evolves, it is imperative for regulatory frameworks to adapt, ensuring that consumers are protected from the risks associated with emerging technologies. Striking a balance between innovation and consumer safety requires ongoing collaboration between policymakers, businesses and consumers. By staying attuned to the evolving needs of the digital age, Indian consumer laws can provide a robust foundation for secure and equitable relationships between consumers and technology-based products.
References:
- https://dot.gov.in/circulars/advisory-guidelines-m2miot-stakeholders-securing-consumer-iot
- https://www.mondaq.com/india/advertising-marketing--branding/1169236/asci-releases-guidelines-to-govern-ads-for-cryptocurrency
- https://www.ascionline.in/the-asci-code/#:~:text=Chapter%20I%20(4)%20of%20the,nor%20deceived%20by%20means%20of
- https://www.ascionline.in/wp-content/uploads/2022/11/dark-patterns.pdf

Introduction
In the real-time warfare scenarios of this modern age, where actions occur without delay, edge computing becomes paramount. By processing data close to the source on the battlefield, whether from a drone or from video imaging on a military vehicle or aircraft, edge computing allows the military to pinpoint targets faster and strike with accuracy. It also enables local processing of data that would otherwise be relayed to a central location, giving ground troops the intelligence inputs they need to act rapidly in critical mission scenarios.
As the global security landscape undergoes significant transformation across different corners of the world, it presents unprecedented challenges. In this article, we will try to understand how countries can maintain their military capabilities with the help of advanced technologies like edge computing.
Edge Computing in Modern Warfare
Edge computing involves the processing and storage of data at the point of collection on the battlefield, for example, through vehicles and drones, instead of relying on centralized data centers. This enables faster decision-making in real-time. This approach creates a resilient and secure network by reducing reliance on potentially compromised external connections, supporting autonomous systems, precision-based targeting, and data sharing among military personnel, drones, and command centers amidst a challenging environment.
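
The sketch below illustrates this pattern in the simplest possible terms, assuming a hypothetical drone that analyses frames on board and forwards only compact detections within a small bandwidth budget. The functions and message formats are invented for illustration.

```python
# Illustrative sketch of the edge-computing pattern described above: raw video
# frames are analysed on the collecting platform (e.g. a drone), and only small,
# structured detections are sent onward instead of the full video stream.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float
    lat: float
    lon: float


def detect_objects(frame: bytes) -> list[Detection]:
    """Placeholder for an on-board model; runs entirely on the edge device."""
    return [Detection("vehicle", 0.91, 34.52, 69.17)]  # dummy output


def process_frame_on_edge(frame: bytes, bandwidth_budget_bytes: int = 512) -> list[dict]:
    """Keep heavy data local; forward only compact detections within a bandwidth budget."""
    detections = detect_objects(frame)
    messages, used = [], 0
    for d in sorted(detections, key=lambda d: d.confidence, reverse=True):
        msg = {"label": d.label, "conf": round(d.confidence, 2), "lat": d.lat, "lon": d.lon}
        size = len(str(msg).encode())
        if used + size > bandwidth_budget_bytes:
            break  # stay within the constrained uplink
        messages.append(msg)
        used += size
    return messages


if __name__ == "__main__":
    print(process_frame_on_edge(b"\x00" * 1024))
```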
A report released by the US Department of Defense in March 2025 highlighted a crucial reality: much military hardware still relies on outdated, industrial-age processes in the digital era. For applications involving video, edge computing delivers significant advantages across a wide range of crucial military operations, including:
- Situational awareness with real-time data processing that provides improved battlefield visibility and proper threat detection.
- Autonomous warfare systems such as drones, which use tactical edge cloud computing to gain the capability to navigate faster.
- Developing a strong communication and networking capability to secure low-latency communication for troops to stay connected in challenging environments.
- Ensuring predictive maintenance with the help of effective sensors that detect wear and attrition at an early stage, thereby reducing equipment failures.
- Developing effective targeting and weapons systems, where faster processing enables precision-based targeting and response, alongside a strong logistics and supply chain that provides real-time tracking to improve delivery accuracy and resource management.
This report also highlighted that the DoD is rapidly updating its software and investing in AI enablers like datasets and MLOps tools. It also stresses breaking down integration barriers by enforcing MOSA (Modular Open Systems Approach), APIs (Application Programming Interfaces), and modular interfaces to ensure interoperability across platforms, sensors, and networks, making software-defined warfare an effective strategy.
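
The sketch below is a loose illustration of the modular-open-systems idea, assuming a shared sensor interface that different vendor modules implement; the interface and classes are invented for illustration and are not drawn from any MOSA specification.

```python
# Rough sketch of the modular-interface idea: platforms agree on a small,
# published interface, so sensors from different vendors can be swapped without
# changing the consuming software. Everything here is invented for illustration.
from abc import ABC, abstractmethod


class Sensor(ABC):
    """The agreed, open interface every compliant sensor module implements."""

    @abstractmethod
    def read(self) -> dict:
        """Return a reading as a plain dictionary with standard keys."""


class RadarModule(Sensor):
    def read(self) -> dict:
        return {"source": "radar", "range_m": 1200.0, "bearing_deg": 42.0}


class ElectroOpticalModule(Sensor):
    def read(self) -> dict:
        return {"source": "eo_camera", "track_id": 7, "confidence": 0.88}


def fuse(sensors: list[Sensor]) -> list[dict]:
    """Consumer code depends only on the interface, never on a vendor class."""
    return [s.read() for s in sensors]


if __name__ == "__main__":
    print(fuse([RadarModule(), ElectroOpticalModule()]))
```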
Developing Edge with Artificial Intelligence for Future Warfare
A significant insight from the work of the US Department of Defense is its emphasis on the importance of edge computing in shaping the future of warfare. In that context, the Annual Threat Assessment Report highlights a key limitation of traditional AI strategies that rely on centralised cloud computing: these might not be suitable for modern battlefields with congested networks and limited bandwidth. The need for real-time data processing requires distributed, edge-based AI solutions to address contemporary threats. The report also directly supports the deployment of effective edge AI in denied, disrupted, intermittent, and limited-bandwidth (DDIL) environments. When communication networks fail, servers at the edge of the network offer crucial advantages that cloud-dependent systems cannot. The ability to analyse data, make decisions without consistent connectivity, and operate with limited computational resources is a strategic necessity.
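
A minimal sketch of the DDIL pattern is shown below, assuming an edge node that keeps making local decisions while the uplink is down and flushes a backlog of results when connectivity returns; the connectivity check and model call are hypothetical stand-ins.

```python
# Hypothetical sketch of operating in a DDIL environment: the edge node keeps
# making local decisions while the uplink is down and forwards the backlog when
# connectivity returns. The connectivity check and "model" are stand-ins.
import random
from collections import deque


def link_available() -> bool:
    """Stand-in for a real connectivity check; here it flips randomly."""
    return random.random() > 0.5


def run_local_inference(observation: str) -> str:
    """Stand-in for an on-device model that works without the cloud."""
    return f"assessment({observation})"


def edge_loop(observations: list[str]) -> None:
    backlog: deque[str] = deque()
    for obs in observations:
        result = run_local_inference(obs)  # the decision is made locally either way
        backlog.append(result)
        if link_available():
            while backlog:
                print("sync ->", backlog.popleft())  # flush queued results upstream
        else:
            print("link down, queued results:", len(backlog))


if __name__ == "__main__":
    edge_loop([f"obs{i}" for i in range(6)])
```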
Warfare demands a strong strategic and tactical approach, and in the present times it increasingly plays out across digital platforms. Modern warfare patterns demand faster decision-making, and edge computing delivers it by shifting computing power from distant servers to the frontlines. The US military is already moving in the direction of deploying edge-enabled systems that allow sensors and networks to compute at the tactical edge and transform warfighting.
This can be understood with the example of sensor fusion in the skies with F-35s, which showcase the capability of edge computing by fusing sensor data over MADL (the Multifunction Advanced Data Link) to create a unified picture, making squadrons a force multiplier. One instance of this was when an F-35 relayed real-time tracking data, enabling a navy ship to neutralise a missile beyond the ship's own sensor range.
Conclusion: The Way Ahead
As warfare moves towards software-defined systems, with edge computing emerging as a key military technology, integration across all domains of warfighting becomes essential. At the same time, several imperatives emerge, such as:
- Developing an open architecture that enables both flexibility and innovation.
- Ensuring effective connectivity that integrates legacy systems into the network.
- Developing interoperability, so that systems function in synergy with all platforms and across all domains.
- Prioritising edge-native AI development, shifting away from adapting cloud-based AI models and towards solutions optimised from the ground up for edge deployment.
- Investing in edge infrastructure: establishing a robust edge computing infrastructure that enables rapid deployment, testing, and updating of AI capabilities across diverse hardware platforms, much as military training academies in India are building infrastructure to train officer cadets and personnel to handle drones and other emerging forms of advanced warfare.
- Fostering talent and expertise by embracing commercial solutions, building software talent across the enterprise with expertise in edge computing and AI. The commercial sector can help drive innovation in edge AI, and leveraging these advances through partnerships and collaborative efforts is the way to move in this direction.
Taking the example of ARPANET, which once seeded the modern internet, edge computing can likewise create a transformative network effect within the digital battlespace. In conclusion, future conflicts will be defined by the speed and accuracy provided by the edge, and nations that integrate AI with robust edge infrastructure will hold a strong advantage on multi-domain battlefields.
References
- https://www.idsa.in/mpidsanews/rk-narangs-article-what-the-regions-first-drone-warfare-taught-us-published-in-the-new-indian-express
- https://latentai.com/blog/software-defined-warfare-why-edge-ai-is-critical-to-americas-defense-future/
- https://www.boozallen.com/s/insight/blog/how-the-us-military-is-using-edge-computing.html
- https://capsindia.org/wp-content/uploads/2022/08/RK-Narang-3.pdf
- https://www.newindianexpress.com/opinions/2025/May/12/what-the-regions-first-drone-warfare-taught-us
- https://www.maris-tech.com/blog/edge-computing-in-the-military-challenges-and-solutions/#:~:text=In%20modern%20warfare%2C%20decisions%20need,enables%20precision%20targeting%20and%20response
- https://cassindia.com/digital-soldiers/

Introduction
The use of AI in content production, especially images and videos, is changing the foundations of evidence. AI-generated videos and images can mirror a person’s facial features, voice, or actions with a level of fidelity at which the average individual may not be able to distinguish real from fake. The ability to provide creative solutions is indeed a beneficial aspect of this technology. However, its misuse has escalated rapidly in recent years, threatening privacy and dignity and facilitating the creation of disinformation and misinformation. Its real-world consequences include the manipulation of elections, threats to national security, and the erosion of trust in society.
Why India Needs Deepfake Regulation
Deepfake regulation is urgently needed in India, as evidenced by the recent Rashmika Mandanna incident, in which a deepfake of the actress created a scandal throughout the country. It was among the first instances in India where an individual's face was superimposed on the body of another woman in a viral deepfake video that fooled many viewers and created outrage among those who were deceived by it. The incident even led law enforcement agencies to issue warnings to the public about the dangers of manipulated media.
This was not an isolated incident; many influencers, actors, leaders and ordinary people have fallen victim to deepfake pornography, deepfake speech scams, financial fraud, and other malicious uses of deepfake technology. The rapid proliferation of deepfake technology is outpacing lawmakers' efforts to regulate its use. In this regard, an individual MP introduced a Private Member's Bill in the Lok Sabha during its Winter Session. Even though such Bills have historically had a low rate of success in being passed into law, they do provide an opportunity for the government to take notice of and respond to emerging issues. In fact, Private Member's Bills have been the catalyst for government action on many important matters and have also provided an avenue for parliamentary discussion and future policy-making. The introduction of this Bill underscores the public concern surrounding digital impersonation and shows that Parliament acknowledges deepfakes to be a significant concern in need of a legislative framework to combat them.
Key Features Proposed by the New Deepfake Regulation Bill
The proposed legislation aims to create a strong legal structure around the creation, distribution and use of deepfake content in India. Its core proposals are:
1. Prior Consent Requirement: Individuals must give written approval before deepfake media containing their digital representation, including their face, image, likeness or voice, is produced or distributed. This aims to protect women, celebrities, minors, and everyday citizens against the use of their identities to harm their reputations or to harass them through deepfakes.
2. Penalties for Malicious Deepfakes: Serious criminal consequences are prescribed for creating or sharing deepfake media, particularly when it is intended to cause harm (to defame, harass, impersonate, deceive or manipulate another person). The Bill also addresses the financially fraudulent use of deepfakes, political misinformation, interference with elections, and other types of explicit AI-generated media.
3. Establishment of a Deepfake Task Force: A body to examine the potential impact of deepfakes on national security, elections and public order, as well as on public safety and privacy. This group will work with academic institutions, AI research labs and technology companies to create advanced tools for detecting deepfakes and establish best practices for the safe and responsible use of generative AI.
4. Creation of a Deepfake Detection and Awareness Fund: To assist with the development of tools for detecting deepfakes, increasing the capacity of law enforcement agencies to investigate cybercrime, promoting public awareness of deepfakes through national campaigns, and funding research on artificial intelligence safety and misinformation.
How Other Countries Are Handling Deepfakes
1. United States
Many states in the United States, including California and Texas, have enacted laws prohibiting the use of politically deceptive deepfakes during elections. Additionally, the federal government is currently developing regulations requiring that AI-generated content be clearly labelled. Social media platforms are also being encouraged to require users to disclose deepfakes.
2. United Kingdom
In the United Kingdom, it is illegal to create or distribute intimate deepfake images without consent; violators face jail time. The Online Safety Act emphasises the accountability of digital media providers by requiring them to identify, eliminate, and avert harmful synthetic content, which makes their role in curating safe environments all the more important.
3. European Union
The EU has enacted the EU AI Act, which governs the use of deepfakes by requiring an explicit label to be affixed to any AI-generated content. The absence of a label would subject an offending party to potentially severe regulatory consequences; therefore, any platform wishing to do business in the EU should evaluate the risks associated with deepfakes and adhere strictly to the EU's guidelines for transparency regarding manipulated media.
4. China
China has among the most rigorous regulations regarding deepfakes anywhere on the planet. All AI-manipulated media must be marked with a visible watermark, users must authenticate their identities before being allowed to use advanced AI tools, and online platforms have a legal requirement to take proactive measures to identify and remove synthetic materials from circulation; a sketch of what such visible labelling can look like follows below.
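
As a generic illustration of what visible labelling can look like in practice, the sketch below stamps an "AI-generated content" notice onto an image with Pillow. It is not an implementation of any specific country's rules, and the file names are placeholders.

```python
# Illustrative sketch of visible labelling: stamping an "AI-generated" notice
# onto an image with Pillow. Generic example only, not any jurisdiction's rules.
from PIL import Image, ImageDraw  # pip install Pillow


def add_ai_label(in_path: str, out_path: str, text: str = "AI-generated content") -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Simple banner in the bottom-left corner; 8 px/char is a rough width
    # estimate for Pillow's default bitmap font.
    x, y = 10, img.height - 30
    draw.rectangle([x - 5, y - 5, x + 8 * len(text), y + 20], fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    img.save(out_path)


if __name__ == "__main__":
    # File names are hypothetical placeholders.
    add_ai_label("generated.png", "generated_labelled.png")
```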
Conclusion
Deepfake technology has the potential to be one of the greatest (and most dangerous) innovations of AI. There is much to learn from incidents such as the one involving Rashmika Mandanna, as well as from the global proliferation of deepfake abuse, which demonstrates how easily truth can be altered in the digital realm. The new Private Member's Bill introduced in India seeks to provide a comprehensive framework to address these abuses, based on prior consent, penalties that actually work, technical preparedness, and public education and awareness. With other nations moving towards increased regulation of AI technology, proposals such as this give India a direction to become a leader in responsible digital governance.
References
- https://www.ndtv.com/india-news/lok-sabha-introduces-bill-to-regulate-deepfake-content-with-consent-rules-9761943
- https://m.economictimes.com/news/india/shiv-sena-mp-introduces-private-members-bill-to-regulate-deepfakes/articleshow/125802794.cms
- https://www.bbc.com/news/world-asia-india-67305557
- https://www.akingump.com/en/insights/blogs/ag-data-dive/california-deepfake-laws-first-in-country-to-take-effect
- https://codes.findlaw.com/tx/penal-code/penal-sect-21-165/
- https://www.mishcon.com/news/when-ai-impersonates-taking-action-against-deepfakes-in-the-uk#:~:text=As%20of%2031%20January%202024,of%20intimate%20deepfakes%20without%20consent.
- https://www.politico.eu/article/eu-tech-ai-deepfakes-labeling-rules-images-elections-iti-c2pa/
- https://www.reuters.com/article/technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VT/