#FactCheck - Deepfake Video Falsely Claims to Show Visuals of a Massive Rally Held in Manipur
Executive Summary:
A viral online video claims to show visuals of a massive rally organised in Manipur to call for an end to the violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to generate a crowd that never existed. No original footage of any such protest could be found. The claim is therefore false and misleading.
Claims:
A viral post falsely claims that a massive rally was held in Manipur.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes extracted from the video. We could not locate any authentic source reporting such an event, either recent or past. The video also exhibited signs of digital manipulation, prompting a deeper investigation.
We then analysed the video with AI detection tools such as TrueMedia and the Hive AI Detection tool. The analysis flagged the video as a deepfake with 99.7% confidence. The tools identified "substantial evidence of manipulation", particularly in the crowd and the colour gradients, which were found to be artificially generated.
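To illustrate the first step of this verification workflow, the sketch below shows one way keyframes can be pulled out of a suspect clip so that they can be run through a reverse image search such as Google Lens. It is a minimal sketch using OpenCV; the file name and sampling interval are placeholders, not details from the actual investigation.

```python
# Minimal sketch: extract keyframes from a suspect video for reverse image search.
# Assumes OpenCV is installed (pip install opencv-python); "viral_clip.mp4" is a placeholder.
import cv2

def extract_keyframes(path: str, every_n_seconds: float = 2.0, out_prefix: str = "frame"):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))    # sample one frame every N seconds
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"{out_prefix}_{index:06d}.jpg"
            cv2.imwrite(name, frame)             # save frame for manual upload to Google Lens
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_keyframes("viral_clip.mp4"))
```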



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports of such a protest were found, further confirming the video's inauthenticity.
Conclusion:
The viral video claims to show a massive rally held in Manipur. Analysis using TrueMedia and other AI detection tools confirms that the footage was manipulated using AI technology, and no official source corroborates any such rally. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: A video of a massive rally held in Manipur against the ongoing violence is circulating on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

Introduction
In 2022, Oxfam's India Inequality Report revealed a worsening digital divide, highlighting that only 38% of households in the country are digitally literate. Further, only 31% of the rural population uses the internet, compared to 67% of the urban population. Over time, with increasing global awareness of the importance of digital privacy, the digital divide has come to encompass a digital privacy divide, whereby different levels of privacy are afforded to different sections of society. This deepens social inequalities and impedes access to fundamental rights.
Digital Privacy Divide: A by-product of the digital divide
The digital divide has evolved from its earlier interpretations into a multi-level issue: level I refers to the lack of physical access to technologies; level II to the lack of digital literacy and skills; and, more recently, level III to the unequal impacts of digital access. The Digital Privacy Divide (DPD) refers to the gaps in digital privacy protection afforded to users based on their socio-demographic profiles. It forms a subset of the digital divide, which concerns the uneven distribution, access, and usage of information and communication technologies (ICTs). Typically, DPD exists when ICT users receive distinct levels of digital privacy protection, making it part of the broader conversation on digital inequality.
Contrary to popular perception, DPD, though rooted in notions of privacy, is not always explained by ideas of individualism and collectivism, and may be shaped by both internal and external factors at the national level. A study on the impacts of DPD conducted in the U.S., India, Bangladesh, and Germany highlighted that respondents in Germany and Bangladesh expressed more concern about their privacy than respondents in the U.S. and India. This suggests that although the U.S. has a strong tradition of individualistic rights, reflected in domestic regulatory frameworks such as the Fourth Amendment, data privacy has not garnered sufficient interest from the population. Most individuals consider forgoing the right to privacy a necessary evil in order to access services and schemes and to keep up with technological advances. Research shows that 62-63% of Americans believe that data collection by companies and the government has become an inescapable necessary evil of modern life, 81% believe they have very little control over what data companies collect, and about 81% believe that the risks of data collection outweigh the benefits. Similarly, in Japan, data privacy is regarded as an adopted concept emerging from international pressure to regulate, rather than an ascribed right: collectivism and collective decision-making are more highly valued there, positioning privacy as subjective, expedient, and an idea imported from the West.
Regardless, inequality in privacy preservation often reinforces social inequality. Surveillance practices targeted at specific groups show that marginalised communities are likely to enjoy less data privacy. For example, migrants, labourers, persons with a conviction history, and marginalised racial groups are often subjected to extremely invasive surveillance on suspicion of posing threats, and are thereby forced to flee their place of birth or residence. This also highlights that the focus of DPD is not limited to those who lack data privacy but extends to those who, by design or by force, have an excess of it. At one extreme, excessive surveillance by both governments and private entities forces immigrants to wait in deportation centres while their cases are pending; at the other, a vast number of undocumented individuals avoid contact with the government for fear of deportation, despite experiencing high rates of crime victimisation.
DPD is also observed among groups with differing knowledge and skills in cybersecurity. In India, for example, data privacy laws mandate that information be disclosed on the order of a court or an enforcement agency. Individuals with knowledge of advanced encryption are adopting communication channels whose encryption the provider cannot control, and are consequently able to exercise their right to privacy more effectively, in contrast with individuals who know little about encryption; this implies a security as well as an intellectual divide. While several options for secure communication exist, such as Pretty Good Privacy (PGP) for encrypted email, they are complex and difficult to use, and some, like the Tor Browser, carry negative reputations. Cost is also a major factor propelling DPD, since users who cannot afford devices such as Apple's, which offer privacy by default, are forced to opt for devices with relatively weak built-in encryption.
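To make the technical side of this divide concrete, the sketch below illustrates what "encryption the provider cannot control" means in practice: the message is encrypted on the sender's device with the recipient's public key, so an intermediary relaying the ciphertext cannot read it. This is a generic, hypothetical illustration using the PyNaCl library, not a description of any particular messaging service.

```python
# Minimal sketch of end-to-end encryption: only the holder of the recipient's
# private key can decrypt, so the service relaying the message cannot read it.
# Assumes PyNaCl is installed (pip install pynacl); names are illustrative.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"meet at the library at 5 pm")

# The relay/provider only ever sees `ciphertext`.
# The recipient decrypts with their private key and the sender's public key.
recipient_box = Box(recipient_key, sender_key.public_key)
print(recipient_box.decrypt(ciphertext))  # b'meet at the library at 5 pm'
```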
Children remain the most vulnerable group. During the pandemic, only 24% of Indian households had an internet facility to access e-education, and many students reported needing to use free internet outside their homes. Such public networks are known for their lack of security and privacy: traffic can be monitored by the hotspot operator or by others on the network if proper encryption is not in place. Elsewhere, students without devices for remote learning have limited alternatives and often rely on Chromebooks and associated Google services. In response, Google provided free Chromebooks and mobile hotspots to students in need during the pandemic to help bridge the digital divide. However, in 2024, New Mexico reportedly sued Google for allegedly collecting children's data through the educational products provided to the state's schools, claiming that they track students' activities on their personal devices outside the classroom. The case illustrates the difficulty of ensuring the privacy of lower-income students while they access basic education.
Policy Recommendations
Digital literacy is a critical component in bridging the DPD: it equips individuals with the skills to recognise and address privacy violations. Studies show that low-income users remain less confident than high-income individuals in their ability to manage their privacy settings. Emphasis should therefore be placed not only on teaching technology usage but also on privacy practices, so that people improve their internet skills and take informed control of their digital identities.
In the U.S., scholars have noted the role of libraries and librarians in safeguarding intellectual privacy. The Library Freedom Project, for example, has sought to ensure that the skills and knowledge required to exercise internet freedoms are available to all. The Project channelled the core values of the library profession: intellectual freedom, literacy, equity of access to recorded knowledge and information, privacy, and democracy. It successfully conducted workshops on internet privacy for the public and openly objected to the Department of Homeland Security's attempts to shut down the use of encryption technologies in libraries. The International Federation of Library Associations adopted a Statement on Privacy in the Library Environment in 2015, which specifies that "when libraries and information services provide access to resources, services or technologies that may compromise users' privacy, libraries should encourage users to be aware of the implications and provide guidance in data protection and privacy." This should serve as an indicative case study for setting up similar protocols in inclusive public institutions in India, such as Anganwadis, local libraries, skill development centres, and non-government/non-profit organisations where free education is disseminated. The workshops conducted must address two critical aspects: first, enhancing the know-how of using public digital infrastructure and popular technologies (thereby de-alienating technology); and second, reframing privacy as a right that individuals hold rather than a commodity they own.
However, digital literacy should not be relied on wholly, since it shifts the responsibility of privacy protection onto individuals, who may be neither aware of the risks nor able to control them. Data literacy also does not address the larger issues of data brokers, consumer profiling, surveillance, and the like. Consequently, companies should be obliged to provide simplified privacy summaries and to build accessible, easy-to-use products and privacy tools. Most notable legislation addresses this problem by mandating notice and consent for the collection of users' personal data, although enforcement remains slow. The Digital Personal Data Protection Act, 2023 in India goes further in addressing DPD by not only mandating valid consent but also requiring that privacy notices be accessible in local languages, given the diversity of the population.
References
- https://idronline.org/article/inequality/indias-digital-divide-from-bad-to-worse/
- https://arxiv.org/pdf/2110.02669
- https://arxiv.org/pdf/2201.07936#:~:text=The%20DPD%20index%20is%20a,(33%20years%20and%20over).
- https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
- https://eprints.lse.ac.uk/67203/1/Internet%20freedom%20for%20all%20Public%20libraries%20have%20to%20get%20serious%20about%20tackling%20the%20digital%20privacy%20divi.pdf
- https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=6265&context=law_lawreview
- https://bosniaca.nub.ba/index.php/bosniaca/article/view/488/pdf
- https://www.hindustantimes.com/education/just-24-of-indian-households-have-internet-facility-to-access-e-education-unicef/story-a1g7DqjP6lJRSh6D6yLJjL.html
- https://www.forbes.com/councils/forbestechcouncil/2021/05/05/the-pandemic-has-unmasked-the-digital-privacy-divide/
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.isc.meiji.ac.jp/~ethicj/Privacy%20protection%20in%20Japan.pdf
- https://socialchangenyu.com/review/the-surveillance-gap-the-harms-of-extreme-privacy-and-data-marginalization/

Introduction
26 November 2024 marked a historic milestone for India as the Hyderabad-based space technology firm TakeMe2Space announced the forthcoming launch of MOI-TD (My Orbital Infrastructure - Technology Demonstrator), India's first AI lab in space. According to the company, the mission will demonstrate real-time data processing in orbit, making space research more affordable and accessible. The launch is scheduled for mid-December 2024 aboard ISRO's PSLV-C60 launch vehicle and represents a transformative phase for India's integration of AI and space technology.
The Vision Behind the Initiative
The AI Laboratory in orbit is designed to enable autonomous decision-making, revolutionising satellite exploration and advancing cutting-edge space research. It signifies a major step toward establishing space-based data centres, paving the way for computing capabilities that will support a variety of applications.
While space-based data centres currently cost 10-15 times more than terrestrial alternatives, this initiative leverages high-intensity solar power in orbit to reduce energy consumption significantly. Training AI models in space could lower energy costs by up to 95% and cut carbon emissions at least tenfold, even when launch emissions are factored in, positioning MOI-TD as an eco-friendly and cost-efficient solution.
Technological Innovations and Future Impact of AI in Space
The MOI-TD laboratory comprises control software and hardware components such as reaction wheels, magnetometers, an advanced onboard computer, and an AI accelerator. The satellite also features flexible solar cells that could power future satellites. It will enable real-time processing of space data, pattern recognition, and autonomous decision-making, addressing latency issues and ensuring faster, more efficient data analysis, while robust hardware design tackles the challenges posed by radiation and the extreme space environment. Advanced sensor integration will further enhance data collection, facilitating AI model training and validation.
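As a rough illustration of why processing data in orbit matters, the toy sketch below computes a simple cloud-cover estimate from an image onboard and returns only a few bytes of summary instead of the full frame. This is a hypothetical example using NumPy; the threshold, frame size, and metric are invented for illustration and are not drawn from TakeMe2Space's actual software.

```python
# Toy sketch: estimate cloud cover onboard and downlink only a compact summary,
# instead of transmitting the full raw image. Purely illustrative; the threshold
# and frame size are made up and do not reflect MOI-TD's real pipeline.
import numpy as np

def onboard_cloud_fraction(image: np.ndarray, brightness_threshold: float = 0.8) -> dict:
    """Return a compact summary derived from a full image frame."""
    grayscale = image.mean(axis=-1)                        # collapse RGB to brightness
    cloud_fraction = float((grayscale > brightness_threshold).mean())
    return {"cloud_fraction": round(cloud_fraction, 3),    # a few bytes to downlink
            "raw_frame_bytes": image.nbytes}               # what we avoided downlinking

if __name__ == "__main__":
    # Simulated RGB frame with pixel values in [0, 1].
    frame = np.random.rand(1024, 1024, 3)
    print(onboard_cloud_fraction(frame))
```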
These innovations drive key applications with transformative potential. Users will be able to access the satellite platform through OrbitLaw, a web-based console for uploading AI models that aid climate monitoring, disaster prediction, urban growth analysis, and custom Earth observation use cases. TakeMe2Space has already partnered with a leading Malaysian university and an Indian school (grades 9 and 10) to showcase the satellite's potential for democratising space research.
Future Prospects and India’s Global Leadership in AI and Space Research
As per Stanford HAI's Global AI Vibrancy rankings, India secured 4th place owing to its R&D leadership, vibrant AI ecosystem, and public engagement with AI. This AI laboratory is a further step in advancing India's role in developing regulatory frameworks for ethical AI use, fostering robust public-private partnerships, and promoting international cooperation to establish global standards for AI applications.
Cost-effectiveness and technological expertise are among India's unique strengths; they could position the country as a key player in the global AI and space research arena and draw favourable comparisons with initiatives by NASA, ESA, and private entities like SpaceX. By prioritising ethical and sustainable practices and fostering collaboration, India can lead in shaping the future of AI-driven space exploration.
Conclusion
India's first AI laboratory in space, MOI-TD, represents a transformative milestone in integrating AI with space technology. This ambitious project promises to advance autonomous decision-making, enhance satellite exploration, and democratise space research. At the same time, factors such as data security, international collaboration, and sustainability must be taken into account while pursuing such innovations. With these in place, India can establish itself as a leader in space research and AI innovation, setting new global standards while inspiring a future where technology expands humanity's frontiers and enriches life on Earth.
References
- https://www.ptinews.com/story/national/start-up-to-launch-ai-lab-in-space-in-december/2017534
- https://economictimes.indiatimes.com/tech/startups/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/articleshow/115701888.cms?from=mdr
- https://www.ibm.com/think/news/data-centers-space
- https://cio.economictimes.indiatimes.com/amp/news/next-gen-technologies/spacetech-startup-takeme2space-to-launch-ai-lab-in-space-in-december/115718230
Introduction
Autonomous transportation, smart cities, remote medical care, and immersive augmented reality are just a few of the revolutionary applications made possible by the global rollout of 5G technology. Along with this revolution in connectivity, however, has come a record rise in vulnerabilities and threats, driven by software-defined networks, growing attack surfaces, and increasingly complex architectures. As work on next-generation 6G networks accelerates, with commercialisation expected from 2030, security concerns are mounting, including those related to AI-driven networks, terahertz communications, and quantum computing attacks. For a nation like India, poised to become a global technological leader, securing next-generation networks is not merely a technical necessity but a strategic imperative. Recent initiatives such as the India-UK collaboration on telecom security underscore how international alliances have become essential to addressing these challenges.
Why Cybersecurity in 5G and 6G Networks is Crucial
With the launch of 5G services worldwide and the early development of 6G technologies, the telecom sector is undergoing a fundamental transformation. Beyond expanding connectivity, future networks are laying the building blocks for highly intelligent, interconnected environments. With ultra-high speeds of up to 10 Gbps, network slicing, and ultra-low latency, 5G provides capabilities well suited to mission-critical applications such as telemedicine, autonomous vehicles, and industrial IoT. Sixth-generation wireless technology, still in development, is expected to be roughly one hundred times faster than 5G. This evolution, however, also brings drawbacks and challenges:
- Decentralised Infrastructure (edge computing nodes): Increased number of entry points for attack.
- Virtual Network Functions (VNFs): Greater vulnerability to configuration issues and software exploitation.
- Massive IoT Connectivity: Billions of devices with differing security postures, forming networks that are more difficult to secure.
Although these challenges are unparalleled, the advancement in technology also creates new opportunities.
Understanding the Cyber Threat Landscape for 5G and 6G
The move to 5G and the eventual upgrade to 6G open great opportunities but also introduce new cybersecurity risks. Open RAN adoption offers flexibility and vendor choice but exposes the supply chain to untested third-party components and attacks. Vulnerabilities in the Service-Based Architecture (SBA) can be exploited to disrupt vital network services, resulting in outages or data breaches. Similarly, the widespread adoption of edge computing to reduce latency creates multiple entry points for attackers. Compounding the problem is the explosion of IoT devices connected over 5G, which, if compromised, can fuel botnets capable of conducting massive distributed denial-of-service (DDoS) attacks.
Challenges in 6G
- AI-Powered Cyberattacks: AI-native 6G networks are susceptible to adversarial machine learning and data/model poisoning attacks against the models used for both security and traffic optimisation (see the sketch after this list).
- Quantum Threats: Post-quantum cryptography may be required if quantum computing renders current encryption algorithms outdated.
- Privacy Concerns with Digital Twins: By offering real-time virtual replicas of the physical world, 6G may create enormous privacy and data protection issues.
- Cross-Border Data Flow Risks: Secure interoperability frameworks and standardised data sovereignty are essential for the worldwide rollout of 6G.
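To ground the first risk above, the sketch below demonstrates a toy label-poisoning attack on a simple classifier: flipping a fraction of the training labels measurably degrades test accuracy. It uses scikit-learn on synthetic data and is only a conceptual illustration of model poisoning, not an attack on any real 6G component.

```python
# Toy illustration of data/label poisoning: flipping a share of training labels
# degrades a simple classifier. Synthetic data only; not a real 6G attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # the attacker flips the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_with_poisoning(frac):.3f}")
```

Even this crude example shows why the integrity of the training data feeding AI-native network functions must be protected end to end.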
A Critical Step Toward Secure Telecom: The India-UK Partnership
India's recent engagement with the UK reflects its active role in shaping the future of telecom security. The major points of the UK-India Telecom Roundtable are:
- MoU between SONIC Labs and C-DOT: Dedicated to Open RAN and AI integration security in 4G/5G deployments. This will offer supply chain diversity without sacrificing resilience.
- Research Partnerships for 6G: Partnerships with UK institutions like CHEDDAR (Cloud & Distributed Computing Hub) and the University of Glasgow 6G Research Centre are focused on developing AI-driven network security solutions, green 6G, and quantum-resistant design.
- Telecom Cybersecurity Centres of Excellence: Establishing bilateral CoEs for telecom cybersecurity, ethical AI, and digital twin security models.
- Standardisation Efforts: Joint contributions to the ITU on IMT-2030 standards, ensuring that cybersecurity-by-design principles are integrated into worldwide 6G specifications.
- Future Initiatives:
- Application of privacy-enhancing technologies (PETs) for cross-sectoral data usage.
- Secure quantum communications to be used for satellite and submarine cable connections.
- Encouragement of native telecommunication stacks for strategic independence.
Global Policy and Regulatory Aspects
- India's Bharat 6G Vision: Through the Bharat 6G Alliance, India aims to lead the global standardisation process with a vision of inclusive, secure, and sustainable connectivity.
- International Harmonisation:
- 3GPP and ITU's joint effort towards standardisation of 6G security.
- Cross-border privacy and cybersecurity compliance system designs to enable secure flows of data.
- Cyber Diplomacy for Telecom Security: Cross-border information-sharing architectures, threat intelligence exchange, and coordinated incident response schemes are essential to global 6G security resilience.
Building a Secure and Resilient Future for 5G and 6G
Establishing a safe and future-proof 5G and 6G environment must be an end-to-end effort involving governments, industry, and technology vendors. Security should be integrated into the underlying architecture of networks rather than bolted on as an optional afterthought. Active engagement in international bodies is also needed to establish consistent security and privacy standards across geographies. Public-private partnerships, including collaborations with academia, will drive innovation and the creation of advanced protection mechanisms. At the same time, building a competent talent pool capable of handling AI-based threat analysis, quantum-resistant cryptography, and other next-generation cryptographic methods will be essential to counter the advanced threats facing new telecom technologies.
Conclusion
With 6G on the way and 5G already reshaping global connectivity, cybersecurity must remain a central focus. The India-UK partnership illustrates how the safe growth of tomorrow's networks depends on global collaboration, AI-driven security measures, and quantum preparedness. By combining security by design, support for international standards, and cooperative innovation, the world can unlock the transformative potential of 5G and 6G, resulting in a digital future that is not only fast and equitable but also resilient and trustworthy.
References:
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2105225
- https://www.itu.int/en/ITU-R/study-groups/rsg5/rwp5d/imt-2030/pages/default.aspx
- https://dot.gov.in/sites/default/files/Bharat%206G%20Vision%20Statement%20-%20full.pdf
- https://www.gsma.com/solutions-and-impact/technologies/security/wp-content/uploads/2024/07/FS.40-v3.0-002-19-July.pdf