#FactCheck - Deepfake Video Falsely Claims Visuals of a Massive Rally Held in Manipur
Executive Summary:
A viral online video claims to show a massive rally organised in Manipur calling for an end to the violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to fabricate a crowd that never existed. No original footage of any such protest could be found. The claim is therefore false and misleading.

Claims:
A viral post falsely claims that a massive rally was held in Manipur.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. We could not locate any authentic source mentioning such an event, either recent or past. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
We used AI detection tools, such as TrueMedia and the Hive AI Detection tool, to analyze the video. The analysis confirmed with 99.7% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation," particularly in the crowd and the colour gradients, which were found to be artificially generated.
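The team's exact workflow is not published, but the first step described above, pulling still keyframes out of a clip so they can be fed to a reverse image search or a detection tool, is straightforward to reproduce. Below is a minimal illustrative sketch using OpenCV; the file name and sampling interval are placeholder assumptions.

```python
# Illustrative sketch, not the team's actual pipeline: pull evenly spaced
# keyframes from a clip so they can be fed to a reverse image search
# (e.g., Google Lens) or an AI detection tool. The file name and the
# sampling interval are placeholder assumptions.
import cv2

def extract_keyframes(video_path: str, every_n_frames: int = 30) -> list:
    """Return every Nth frame of the video as an image array."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if index % every_n_frames == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

if __name__ == "__main__":
    for i, frame in enumerate(extract_keyframes("viral_rally_clip.mp4")):
        cv2.imwrite(f"keyframe_{i:03d}.png", frame)  # save for manual review
```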



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports of such a protest were found, further confirming the video's inauthenticity.
Conclusion:
The viral video claims to show a massive rally held in Manipur. Analysis using TrueMedia and other AI detection tools confirms that the video was manipulated using AI technology, and no official source mentions any such event. The CyberPeace Research Team therefore confirms that the video is AI-manipulated, making the claim false and misleading.
- Claim: Visuals of a massive rally held in Manipur against the ongoing violence are viral on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs
Introduction
The fast-paced development of technology and the widespread use of social media platforms have enabled misinformation to spread rapidly through these platforms, with wide diffusion, fast propagation, broad influence, and deep impact. Social media algorithms and their decisions are often perceived as a black box, making it impossible for users to understand and recognise how the decision-making process works.
Social media algorithms may unintentionally promote false narratives that garner more interactions, further reinforcing the misinformation cycle and making it harder to control its spread within vast, interconnected networks. Algorithms judge content primarily on one metric: user engagement. Engagement is what algorithms and search engines optimise for, so they surface the items you are most likely to enjoy. This process was originally designed to cut through clutter and deliver the most relevant information. However, combined with the viral nature of sharing and user interactions, it sometimes ends up amplifying misinformation.
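As a toy illustration of this dynamic (not any platform's actual formula), the sketch below ranks posts by a simple weighted engagement score; the weights and the example posts are invented. Because provocative content tends to collect more reactions and shares, a purely engagement-driven score surfaces it ahead of a sober fact-check.

```python
# Toy illustration, not any platform's real formula: rank posts by a
# weighted engagement score. The weights and example posts are invented.
def engagement_score(likes: int, shares: int, comments: int,
                     watch_time_s: float) -> float:
    # Hypothetical weights; real systems learn these from user behaviour.
    return 1.0 * likes + 3.0 * shares + 2.0 * comments + 0.1 * watch_time_s

posts = [
    {"id": "sober_fact_check", "likes": 120, "shares": 15,
     "comments": 30, "watch_time_s": 400.0},
    {"id": "outrage_clip", "likes": 900, "shares": 400,
     "comments": 650, "watch_time_s": 2500.0},
]
ranked = sorted(
    posts,
    key=lambda p: engagement_score(p["likes"], p["shares"],
                                   p["comments"], p["watch_time_s"]),
    reverse=True,
)
print([p["id"] for p in ranked])  # the high-engagement post ranks first
```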
Analysing the Algorithmic Architecture of Misinformation
Social media algorithms, designed to maximize user engagement, can inadvertently promote misinformation because emotionally charged content attracts the most interaction, creating echo chambers and filter bubbles. These algorithms prioritize content based on user behaviour, which favours emotionally charged misinformation, and they boost content with viral potential, allowing false or misleading material to spread faster than corrections or factual reporting.
Platforms further amplify popular content by presenting it to ever more users. Fact-checking struggles to keep pace: by the time false claims are reported or corrected, they may already have gained widespread acceptance. Algorithms also find it difficult to distinguish real people from organised networks of troll farms and bots that propagate false information. The result is a vicious loop in which users are repeatedly exposed to inaccurate or misleading material, which strengthens their convictions and pushes erroneous information further through their networks.
Algorithms primarily aim to enhance user engagement by curating content that aligns with a user's previous behaviour and preferences. This process can create "echo chambers," where individuals are exposed mainly to information that reaffirms their pre-existing beliefs, effectively silencing dissenting voices and opposing viewpoints. This curated experience reduces exposure to diverse opinions and amplifies biased and polarising content, making it arduous for users to discern credible information from misinformation. Algorithms also feed a feedback loop that continuously gathers data from users' activities across digital platforms, including websites, social media, and apps. This data is analysed to optimise user experiences, making platforms more attractive. While this process drives innovation and improves user satisfaction from a business standpoint, it also poses a danger in the context of misinformation: the repetitive reinforcement of user preferences entrenches false beliefs, as users are less likely to encounter fact-checks or corrective information.
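The feedback loop can be made concrete with a toy simulation: recommend proportionally to current preference weights, reinforce whatever gets clicked, and watch exposure collapse onto a single topic. The topic names, starting weights, and reinforcement factor below are all illustrative assumptions.

```python
# Toy simulation of the feedback loop described above: the recommender
# shows more of whatever the user already clicks, so exposure narrows
# over time. Topics, weights, and reinforcement factor are illustrative.
import random

random.seed(0)
preferences = {"topic_a": 1.0, "topic_b": 1.0}  # initially balanced

for _ in range(500):
    # The recommender samples proportionally to current preference weights.
    topic = random.choices(list(preferences),
                           weights=list(preferences.values()))[0]
    # Each click multiplicatively reinforces the shown topic.
    preferences[topic] *= 1.05

total = sum(preferences.values())
shares = {t: round(w / total, 3) for t, w in preferences.items()}
print(shares)  # the early leader captures nearly all future exposure
```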
Moreover, the sheer size and complexity of today's social networks exacerbate the issue. With billions of users participating in online spaces, misinformation spreads rapidly, and attempting to contain it, such as by inspecting messages or URLs for false information, can be computationally challenging and inefficient. The extensive amount of content shared daily means that misinformation propagates far faster than it can be fact-checked or debunked.
Understanding how algorithms influence user behaviour is important for tackling misinformation. The personalisation of content, feedback loops, the complexity of network structures, and the role of superspreaders together create an environment in which misinformation thrives, underscoring the need for robust countermeasures.
The Role of Regulations in Curbing Algorithmic Misinformation
The EU's Digital Services Act (DSA) is one regulation that aims to increase the responsibilities of tech companies and ensure that their algorithms do not promote harmful content. Such regulatory frameworks play an important role: they can establish mechanisms for users to appeal algorithmic decisions and ensure that these systems do not disproportionately suppress legitimate voices. Independent oversight and periodic audits can ensure that algorithms are not biased or used maliciously. Self-regulation and platform regulation are the first steps toward regulating misinformation. By fostering a more transparent and accountable ecosystem, regulations help mitigate the negative effects of algorithmic misinformation, thereby protecting the integrity of information shared online. In the Indian context, Rule 3(1)(b)(v) of the Intermediary Guidelines, 2023, explicitly prohibits the dissemination of misinformation on digital platforms. 'Intermediaries' are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or prohibited content. This rule aims to ensure platforms identify and swiftly remove misinformation and false or misleading content.
Cyberpeace Outlook
Understanding how algorithms prioritise content will enable users to critically evaluate the information they encounter and recognise potential biases. Such cognitive defences can empower individuals to question the sources of information and report misleading content effectively. Going forward, platforms should evolve toward more transparent, user-driven systems in which algorithms are optimised not just for engagement but for accuracy and fairness. Incorporating advanced AI moderation tools, coupled with human oversight, can improve the detection and reduction of harmful and misleading content. Collaboration between regulatory bodies, tech companies, and users will help shape the algorithmic landscape to promote a healthier, more informed digital environment.
References:
- https://www.advancedsciencenews.com/misformation-spreads-like-a-nuclear-reaction-on-the-internet/
- https://www.niemanlab.org/2024/09/want-to-fight-misinformation-teach-people-how-algorithms-work/
- Press Release: Press Information Bureau (pib.gov.in)
Introduction
The automobile industry is expanding fast, with vehicles becoming sophisticated, interconnected devices equipped with cutting-edge digital technology. This integration improves convenience, safety, and efficiency, while also exposing automobiles to a new set of cyber risks. Electric vehicles (EVs) are equipped with sophisticated computer systems that manage functions such as acceleration, braking, and steering. If these systems are compromised, the result could be hazardous, including remote control of the vehicle or unauthorised access to sensitive data. As the ecosystem of connected-car stakeholders grows, so does the attack surface available for hackers to exploit.
Why Automotive Cybersecurity Is Required
Cybersecurity threats to vehicles arise from vulnerabilities in hardware, software, and the overall system architecture. Additional concerns include broad privacy clauses that justify collecting and transferring data to "third-party vendors" without explicitly disclosing who those third parties are or how they process personal data. For example, infotainment platform data may reveal a user's favourite music and listening preferences, which the music industry could use to refine marketing strategies. It is less well known that behavioural tracking data, such as driving patterns, is also logged by the original equipment manufacturer (OEM).
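To make the privacy concern concrete, here is a hypothetical sketch of the kinds of records an OEM backend might log, based on the categories named above; every field name is invented for illustration and not taken from any real manufacturer's schema.

```python
# Hypothetical illustration of the record types named above; every field
# name is invented, not taken from any real manufacturer's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InfotainmentEvent:
    vehicle_id: str
    timestamp: datetime
    media_genre: str   # e.g. the user's favourite music genre
    app_used: str      # which infotainment app was active

@dataclass
class DrivingPatternEvent:
    vehicle_id: str
    timestamp: datetime
    avg_speed_kmh: float
    harsh_braking_count: int  # a behavioural signal OEMs commonly log
    route_hash: str  # even pseudonymised routes can help re-identify a driver

event = DrivingPatternEvent("VIN-XXXX", datetime.now(), 62.5, 3, "a1b2c3")
```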
Hacking is not limited to attackers gaining control of an electric vehicle; it also includes malicious actors compromising charging stations to manipulate their systems. In Russia, EV charging stations in Moscow were hacked to display pro-Ukraine and anti-Putin messages such as "Glory to Ukraine" and "Death to the enemy" against the backdrop of the Russia-Ukraine war. Other examples include incidents on the Isle of Wight, where hackers took over EV charger displays to show inappropriate content and high-voltage fault codes, leaving owners with empty batteries unable to charge their vehicles.
UN Economic Commission for Europe releases Regulation 155 for Automobiles
UN Economic Commission for Europe Regulation 155 lays down uniform provisions concerning the approval of vehicles with regard to cybersecurity and cybersecurity management systems (CSMS). It originated in the Commission's World Forum for Harmonization of Vehicle Regulations (WP.29), which aims to harmonise regulations for vehicles and vehicle equipment. Regulation 155 has a two-pronged objective: first, to ensure cybersecurity at the organisational level, and second, to ensure adequate design of the vehicle architecture. A critical aspect in this context is the implementation of a certified CSMS by all companies that bring vehicles to market. Notably, this requirement alters the perspective of manufacturers: their responsibilities no longer end at the start of production (SOP). Instead, manufacturers must continuously monitor and assess a vehicle's safety systems throughout its entire life cycle, making improvements where necessary.
This Regulation reflects the highly dynamic nature of software development and assurance. Moreover, the management system is designed to ensure compliance with safety requirements across the entire supply chain. This is a significant challenge, considering that suppliers currently account for over 70 per cent of the software volume.
The Regulation, which is binding on 64 member countries, came into force in 2021. UNECE countries were required to comply for all new vehicle types by July 2022, and from July 2024 the Regulation was set to apply to all new vehicles. It is expected to become a de facto global standard, since vehicles authorised in one country may not be brought into the market of any UNECE member country on the basis of any other authorisation. In such a scenario, OEMs in non-member countries may be required to provide a "self-declaration" of their equipment's conformity with cybersecurity standards.
Conclusion
To compete and earn trust, global carmakers must deliver a robust cybersecurity framework that meets evolving regulations. The UNECE regulation drives this direction by requiring automotive original equipment manufacturers (OEMs) to integrate vehicle cybersecurity throughout the entire value chain. This "security by design" approach aims to build connected cars that all stakeholders can trust. Automotive cybersecurity encompasses the measures and technologies that protect connected vehicles and their onboard systems from growing digital threats.
References:
- “Electric vehicle cyber security risks and best practices (2023)”, Cyber Talk, 1 August 2023. https://www.cybertalk.org/2023/08/01/electric-vehicle-cyber-security-risks-and-best-practices-2023/#:~:text=EVs%20are%20equipped%20with%20complex,unauthorized%20access%20to%20sensitive%20data.
- Gordon, Aaron, “Russian Electric Vehicle Chargers Hacked, Tell Users ‘PUTIN IS A D*******D’”, Vice, 28 February 2022. https://www.vice.com/en/article/russian-electric-vehicle-chargers-hacked-tell-users-putin-is-a-dickhead/
- “Isle of Wight: Council’s electric vehicle chargers hacked to show porn site”, BBC, 6 April 2022. https://www.bbc.com/news/uk-england-hampshire-61006816
- Sandler, Manuel, “UN Regulation No. 155: What You Need to Know about UN R155”, Cyres Consulting, 1 June 2022. https://www.cyres-consulting.com/un-regulation-no-155-requirements-what-you-need-to-know/?srsltid=AfmBOopV1pH1mg6M2Nn439N1-EyiU-gPwH2L4vq5tmP0Y2vUpQR-yfP7#A_short_overview_Background_knowledge_on_UN_Regulation_No_155
- https://unece.org/wp29-introduction

Amid the popularity of OpenAI’s ChatGPT and Google’s announcement of its own artificial intelligence chatbot, Bard, there has been much discussion over how such tools can impact India at a time when the country is aiming for an AI revolution.
During the Budget Session, Finance Minister Nirmala Sitharaman talked about AI, while her colleague, Minister of State (MoS) for Electronics and Information Technology Rajeev Chandrasekhar discussed it at the India Stack Developer Conference.
While Sitharaman stated that the government will establish three centres of excellence in AI in the country, Chandrasekhar mentioned at the event that India Stack, which includes digital solutions like Aadhaar, Digilocker and others, will become more sophisticated over time with the inclusion of AI.
As AI chatbots become the buzzword, News18 discusses with experts how such tech tools will impact India.
AI IN INDIA
Many experts believe that in a country like India, which is extremely diverse and has a sizeable population, the introduction of such technologies and their right adoption can bring about a massive digital revolution.
For example, Manoj Gupta, Cofounder of Plotch.ai, a full-stack AI-enabled SaaS product, told News18 that Bard is still experimental and not open to everyone to use while ChatGPT is available and can be used to build applications on top of it.
He said: “Conversational chatbots are interesting since they have the potential to automate customer support and assisted buying in e-commerce. Even simple banking applications can be built that can use ChatGPT AI models to answer queries like bank balance, service requests etc.”
According to him, such tools could be extremely useful for people who are currently excluded from the digital economy due to language barriers.
Ashwini Vaishnaw, Union Minister for Communications, Electronics & IT, has also talked about using such tools to reduce communication barriers. At the World Economic Forum in Davos, he said: “We integrated our Bhashini language AI tool, which translates from one Indian language to another Indian language in real-time, spoken and text everything. We integrated that with ChatGPT and are seeing very good results.”
‘DOUBLE-EDGED SWORD’
Sundar Balasubramanian, Managing Director, India & SAARC, at Check Point Software, told News18 that generative AI like ChatGPT is a “double-edged sword”.
According to him, used in the right way, it can help developers write and fix code quicker, enable better chat services for companies, or even be a replacement for search engines, revolutionising the way people search for information.
“On the flip side, hackers are also leveraging ChatGPT to accelerate their bad acts and we have already seen examples of such exploitations. ChatGPT has lowered the bar for novice hackers to enter the field as they are able to learn quicker and hack better through asking the AI tool for answers,” he added.
Balasubramanian also stated that Check Point Research (CPR) has seen the quality of phishing emails improve tremendously over the past three months, making it increasingly difficult to discern between legitimate sources and a targeted phishing scam.
“Despite the emergence of generative AI impacting cybercrime, Check Point is continually reminding organisations and individuals of the significance of staying vigilant. As ChatGPT and Codex become more mature, they can affect the threat landscape, for both good and bad,” he added.
While the real-life applications of ChatGPT include several things ranging from language translation to explaining tricky math problems, Balasubramanian said it can also be used for making the work of cyber researchers and developers more efficient.
“Generative AI or tools like ChatGPT can be used to detect potential threats by analysing large amounts of data and identifying patterns that may indicate malicious activity. This can help enterprises quickly identify and respond to a potential threat before it escalates to something more,” he added.
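As a rough illustration of the idea Balasubramanian describes, the sketch below trains scikit-learn's IsolationForest on made-up "normal" login telemetry and flags an anomalous event. The features, thresholds, and data are all assumptions for the example, not a production detection pipeline.

```python
# Minimal sketch of anomaly-based threat detection: fit an IsolationForest
# on synthetic "normal" login telemetry, then flag an outlier event.
# All features and values here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per login event: [failed_attempts, bytes_transferred_mb, hour_of_day]
normal = np.column_stack([
    rng.poisson(1, 500),       # a few failed attempts are normal
    rng.normal(20, 5, 500),    # typical transfer sizes in MB
    rng.integers(8, 19, 500),  # business hours
])
suspicious = np.array([[25, 900.0, 3]])  # many failures, huge transfer, 3 a.m.

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 => flagged as a potential threat
```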
POSITIVE FACTORS
Major Vineet Kumar, Founder and Global President of CyberPeace Foundation, believes that the deployment of AI chatbots has proven to be highly beneficial in India, where a booming economy and increasing demand for efficient customer service have led to a surge in their use. According to him, both ChatGPT and Bard have the potential to bring significant positive change to various industries and individuals in India.
“ChatGPT has already made an impact by revolutionising customer service, providing instant and accurate support, and reducing wait time. It has automated tedious and complicated tasks for businesses and educational institutions, freeing up valuable time for more significant activities. In the education sector, ChatGPT has also improved learning experiences by providing quick and reliable information to students and educators,” he added.
He also said there are several possible positive impacts that AI chatbots like ChatGPT and Bard could have in India, including improved customer experience, increased productivity, better access to information, improved healthcare, better access to education and better financial services.
Reference Link: https://www.news18.com/news/explainers/confused-about-chatgpt-bard-experts-tell-news18-how-openai-googles-ai-chatbots-may-impact-india-7026277.html