# FactCheck - AI-Generated Flyover Collapse Video Shared With Misleading Claims
Executive Summary
A video showing a flyover collapse is going viral on social media. The clip shows a flyover with a road passing beneath it and vehicles moving normally. Suddenly, a portion of the flyover appears to collapse onto the road below, seemingly crushing some of the vehicles. The video has been widely shared by users online. However, research by CyberPeace found the viral claim to be false: the video is not real but was created using artificial intelligence.
Claim:
On X (formerly Twitter), a user shared the viral video on February 13, 2026, claiming it showed the reality of India’s infrastructure development and criticizing ongoing projects. The post quickly gained traction, with several users sharing it as a real incident. Similarly, another user shared the same video on Facebook on February 13, 2026, making a similar claim.

Fact Check:
To verify the claim, key frames from the viral video were extracted and searched using Google Lens. During the search, the video was traced to an account named “sphereofai” on Instagram, where it had been posted on February 9. The post included hashtags such as “AI Creator” and “AI Generated,” clearly indicating that the video was created using AI. Further examination of the account showed that the user identifies themselves as an AI content creator.
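The first step of this workflow, extracting key frames for reverse image search, usually means sampling the video at regular intervals rather than checking every frame. A minimal sketch of choosing which frame indices to sample (illustrative only; actual extraction would use a tool such as OpenCV or ffmpeg, and the interval chosen here is an assumption):

```python
def keyframe_indices(total_frames, fps, every_seconds=1.0):
    """Return the frame indices to extract for reverse image search,
    sampling one frame every `every_seconds` of video."""
    step = max(1, int(fps * every_seconds))
    return list(range(0, total_frames, step))

# Example: a 4-second clip at 25 fps, sampled once per second.
indices = keyframe_indices(total_frames=100, fps=25)
```

Each extracted frame can then be uploaded to a reverse image search tool such as Google Lens to trace the clip to its earliest source.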


To confirm the findings, the viral video was also analysed using Hive Moderation. The tool’s analysis suggested a 99 percent probability that the video was AI-generated.
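Detection tools of this kind typically score individual frames and then combine those scores into an overall probability. A minimal, hypothetical sketch of such aggregation (this is not Hive Moderation's actual API or model, whose internals are proprietary):

```python
def aggregate_ai_scores(frame_scores, threshold=0.9):
    """Combine per-frame AI-generation probabilities into one verdict.

    frame_scores: list of floats in [0, 1], one per sampled frame.
    Returns (overall_probability, is_likely_ai_generated).
    Hypothetical logic for illustration only.
    """
    if not frame_scores:
        raise ValueError("no frames were scored")
    # Use the mean score as the overall probability, so a single
    # clean-looking frame cannot outweigh many synthetic-looking ones.
    overall = sum(frame_scores) / len(frame_scores)
    return overall, overall >= threshold

# Example: most sampled frames score very high, as in the viral clip.
scores = [0.99, 0.98, 0.99, 0.97, 1.00]
prob, verdict = aggregate_ai_scores(scores)
```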

Conclusion:
The research established that the viral flyover collapse video is not authentic. It is an AI-generated clip being circulated online with misleading claims.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this purpose.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images, as the underlying algorithms have become more sophisticated and allow more seamless, realistic manipulations. At the same time, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals' privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
Collaboration with social media platforms is also needed. Platforms and technology companies can frame and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
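One common building block of the detection and flagging tools described above is matching uploads against a database of hashes of known abusive content. A minimal, illustrative sketch of perceptual-hash matching (a simple average hash; this is not any platform's actual system, which would use more robust hashes such as PDQ or PhotoDNA):

```python
def average_hash(pixels):
    """Compute a simple average hash of a grayscale image.

    pixels: 2-D list of brightness values (0-255), e.g. a small
    downscaled thumbnail. Each bit is 1 if the pixel is brighter
    than the image's mean brightness.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_content(candidate, known_hashes, max_distance=5):
    """Flag content whose hash is near any known-abusive hash."""
    h = average_hash(candidate)
    return any(hamming_distance(h, k) <= max_distance for k in known_hashes)

# A tiny 4x4 "thumbnail" with a dark left half and a bright right half.
thumb = [[10, 10, 200, 200]] * 4
known = [average_hash(thumb)]
# A slightly altered copy should still match; an unrelated image should not.
altered = [[30, 10, 200, 200]] + [[10, 10, 200, 200]] * 3
flat_grey = [[100] * 4] * 4
```

Because the hash reflects coarse brightness structure rather than exact bytes, small edits (re-encoding, minor crops, slight pixel changes) still land within the matching distance.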
Proactive measures to combat AI-driven sextortion
- Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
- Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
- Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
- Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
- International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
- Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to victims' reputations and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
The digital expanse of the metaverse has recently come under scrutiny following a gruesome incident. In a digital realm crafted for connection and exploration, a 16-year-old girl's avatar fell victim to an agonising assault that has kindled ethico-legal and societal discourse. The incident is a stark reminder that the cyberverse, while offering endless possibilities and experiences, also has glaring challenges that require serious consideration. In this case, the teenage girl was raped through her digital avatar by a group of users in the metaverse.
This incident has sparked a critical question: can virtual experiences inflict genuine psychological trauma? The case of the 16-year-old girl highlights the strong emotional repercussions of illicit virtual actions. While her physical body remains unharmed, the digital assault can leave permanent scars on her psyche. This raises critical questions about the ethical implications of virtual interactions and the responsibility of service providers to protect users' well-being on their platforms.
The Judicial Quagmire
The digital nature of these assaults gives rise to the complex jurisdictional questions that are endemic to cyber offences. We are still novices in navigating a digital labyrinth where avatars can transcend borders with the click of a mouse. The current legal structure is not equipped to tackle virtual crimes, calling for urgent reforms. Policymakers and legal professionals must first define virtual offences, with clear jurisdictional boundaries ensuring that justice is not hampered by geographical restrictions.
Meta’s Accountability
Meta, the platform on which this gruesome incident occurred, finds itself at the crossroads of an ethical dilemma. The company had implemented plenty of safeguards, yet they proved futile in preventing such a harrowing act. The incident has raised several questions about the broader role and responsibilities of tech juggernauts, chief among them how a company can strike a balance between innovation and the protection of its users.
The Tightrope of Ethics
The metaverse is the epitome of innovation, yet this harrowing incident highlights a fundamental ethical contention. The real challenge is to harness the power of virtual reality while addressing the risks of digital hostilities. As society grapples with this conundrum, stakeholders must work in tandem to formulate robust and effective legal structures that protect the rights and well-being of users. Balancing technological development against these ethical challenges will require collective effort.
Reflections of Society
Beyond legal and ethical considerations, this act calls for wider societal reflections. It emphasises the pressing need for a cultural shift fostering empathy, digital civility and respect. As we tread deeper into the virtual realm, we must strive to cultivate ethos upholding dignity in both the digital and real world. This shift is only possible through awareness campaigns, educational initiatives and strong community engagement to foster a culture of respect and responsibility.
Safer and Ethical Way Forward
A multidimensional approach is essential to address the complicated challenges cyber violence poses. Several measures can pave the way for safer cyberspace for netizens.
- Legislative Reforms - There’s an urgent need to revamp legislative frameworks to mitigate and effectively address the complexities of these new and emerging virtual offences. The tech companies must collaborate with the government on formulating best practices and help develop standard security measures prioritising user protection.
- Public Awareness and Engagement - Public awareness campaigns that educate users on crucial issues such as cyber resilience, ethics, digital detox and responsible online behaviour play a critical role in keeping netizens vigilant against cyber hostilities and able to help fellow netizens in distress. Civil society organisations and think tanks such as the CyberPeace Foundation are pioneers of cyber safety campaigns in the country, working in tandem with governments across the globe to curb the evil of cyber hostilities.
- Interdisciplinary Research: Policymakers should delve deeper into the ethical, psychological and societal ramifications of digital interactions. A multidisciplinary research approach is crucial for formulating evidence-based policy.
Conclusion
This digital gang rape is a wake-up call demanding bold measures to confront the intricate legal, societal and ethical pitfalls of the metaverse. As we navigate the digital labyrinth, our collective decisions will shape the metaverse's future. By nurturing a culture of empathy, responsibility and innovation, we can forge a path that honours the dignity of netizens, upholds ethical principles and fosters a vibrant and safe cyberverse. In this significant moment, ethical vigilance, diligence and active collaboration are indispensable.
References:
- https://www.thehindu.com/sci-tech/technology/virtual-gang-rape-reported-in-the-metaverse-probe-underway/article67705164.ece
- https://thesouthfirst.com/news/teen-uk-girl-virtually-gang-raped-in-metaverse-are-indian-laws-equipped-to-handle-similar-cases/

Introduction
The Information Technology (IT) Ministry has tested a new parental control app called ‘SafeNet’ that is intended to be pre-installed on all mobile phones, laptops and personal computers (PCs). The government's approach is collaborative, involving Internet service providers (ISPs), the Department of School Education, and technology manufacturers to address online safety concerns. Awareness campaigns and the proposed SafeNet application aim to educate parents about the resources available for protecting and safeguarding their children online.
The Need for SafeNet App
SafeNet is envisioned as an arsenal of tools, each crafted to empower guardians in the art of digital parenting. It intertwines content filtering with vigilant live-location monitoring, casting a protective net over children's online experiences, while the ability to oversee calls and messages adds another layer of security, akin to a watchful sentinel standing guard over the gates of communication. Some pointers regarding the parental control app that can be taken into consideration are as follows.
1. Easy to use and set up: The app should be useful, intuitive, and easy to use. The interface plays a significant role in achieving this goal. The setup process should be simple enough for parents to access the app without any technical issues. Parents should be able to modify settings and monitor their children's activity with ease.
2. Privacy and data protection: Considering the sensitive nature of children's data, strong privacy and data protection measures are paramount. From the app’s point of view, strict privacy standards include encryption protocols, secure data storage practices, and transparent data handling policies with the right of erasure to protect and safeguard the children's personal information from unauthorized access.
3. Features for Time Management: Effective parental control applications frequently include capabilities for regulating screen time and establishing use limitations. The app will evaluate if the software enables parents to set time limits for certain applications or devices, therefore promoting good digital habits and preventing excessive screen time.
4. Comprehensive Features of SafeNet: The app's commitment to addressing the multifaceted aspects of online safety is reflected in its robust features. It allows parents to set content filters with surgical precision, manage the time their children spend in the digital world, and block content that is deemed age-inappropriate. This reflects a deep understanding of the digital ecosystem's complexities and the varied threats that lurk within its shadows.
5. Adaptable to the needs of the family: In a stroke of ingenuity, SafeNet offers both parent and child versions of the app for shared devices. This adaptability to diverse family dynamics is not just a nod to inclusivity but a strategic move that enhances its usability and effectiveness in real-world scenarios. It acknowledges the unique tapestry of family structures and the need for tools that are as flexible and dynamic as the families they serve.
6. Strong Support From Government: The initiative enjoys a chorus of support from both government and industry stakeholders, a symphony of collaboration that underscores the collective commitment to the cause. Recommendations for the pre-installation of SafeNet on devices by an industry consortium resonate with the directives from the Prime Minister's Office (PMO), creating a harmonious blend of policy and practice. The involvement of major telecommunications players and Internet service providers underscores the industry's recognition of the importance of such initiatives, emphasising a collaborative approach towards deploying digital safeguarding measures at scale.
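The time-management feature described in point 3 amounts to tracking per-app usage against a parent-set daily allowance. A minimal sketch of that logic (hypothetical; the real SafeNet app's internals are not public):

```python
from datetime import timedelta

class ScreenTimeLimiter:
    """Track per-app usage against a parent-set daily limit.

    A hypothetical illustration of a screen-time feature, not a
    description of how SafeNet itself is implemented.
    """
    def __init__(self):
        self.limits = {}   # app name -> allowed seconds per day
        self.used = {}     # app name -> seconds used today

    def set_daily_limit(self, app, limit):
        self.limits[app] = limit.total_seconds()

    def record_usage(self, app, duration):
        self.used[app] = self.used.get(app, 0) + duration.total_seconds()

    def is_blocked(self, app):
        """True once today's usage meets or exceeds the limit."""
        limit = self.limits.get(app)
        if limit is None:
            return False  # no limit configured for this app
        return self.used.get(app, 0) >= limit

# Example: a 1-hour daily limit on a hypothetical video app.
limiter = ScreenTimeLimiter()
limiter.set_daily_limit("video_app", timedelta(hours=1))
limiter.record_usage("video_app", timedelta(minutes=45))
blocked_before = limiter.is_blocked("video_app")
limiter.record_usage("video_app", timedelta(minutes=20))
blocked_after = limiter.is_blocked("video_app")
```

In a real app this state would be persisted per child profile and reset daily, with enforcement (blocking the app) handled at the operating-system level.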
Recommendations
The government's efforts to implement parental controls are commendable, as they align with societal goals of child welfare and protection. These include providing parents with tools to manage and monitor their children's Internet usage, addressing concerns about inappropriate content and online risks. The following suggestions are made to further support the government's initiative:
1. The administration can consider creating a verification mechanism similar to how identities are verified when mobile SIM cards are issued. While this certainly makes for a longer process, it will help address concerns about the app being misused for stalking and surveillance if it is made available as a default on all digital devices.
2. Parental controls are available on several platforms and are designed to shield, not fetter. Finding the right balance between protection and allowing for creative exploration is thus crucial to ensuring children develop healthy digital habits while fostering their curiosity and learning potential. It might be helpful for the administration to establish updated policies that prioritise children's privacy and protection rights, so that there is a clear mandate on how and to what extent the app is to be used.
3. Policy reforms can be further supported through workshops, informational campaigns, and resources that educate parents and children about the proper use of the app, the concept of informed consent, and the importance of developing healthy, transparent communication between parents and children.
Conclusion
SafeNet is a significant step towards child protection and development. Children rely on adults for protection and often cannot identify or sidestep risk themselves. In this context, the United Nations Convention on the Rights of the Child emphasises that children have the "right to protection". A parental safety app can therefore help focus attention on children's general well-being and health, besides preventing harms such as drug misuse. On the whole, while technological solutions can be helpful, there must also be a focus on educating people about digital safety, responsible Internet use, and parental supervision.
References
- https://www.hindustantimes.com/india-news/itministry-tests-parental-control-app-progress-to-be-reviewed-today-101710702452265.html
- https://www.htsyndication.com/ht-mumbai/article/it-ministry-tests-parental-control-app%2C-progress-to-be-reviewed-today/80062127
- https://www.varindia.com/news/it-ministry-to-evaluate-parental-control-software
- https://www.medianama.com/2024/03/223-indian-government-to-incorporate-parental-controls-in-data-usage/