#FactCheck: Viral image shows the Maldives mocking India with a "SURRENDER" sign on a photo of Prime Minister Narendra Modi
Executive Summary:
A photo of a Maldivian building, purportedly displaying an oversized portrait of Indian Prime Minister Narendra Modi alongside the word "SURRENDER", went viral on social media, provoking fear, indignation, and anxiety. Our research, however, showed that the image was manipulated and not authentic.

Claim:
A viral image claims that the Maldives displayed a huge portrait of PM Narendra Modi on a building front, along with the phrase “SURRENDER,” implying an act of national humiliation or submission.

Fact Check:
A thorough examination of the viral post revealed that it had been altered. While the image shows the same building, the word "SURRENDER" displayed alongside Prime Minister Modi's portrait in the viral version does not appear in the original. We also ran the image through the Hive AI Detector, which rated it 99.9% likely to be fake, further confirming that the viral image had been digitally altered.

During our research, we also found several images from Prime Minister Modi’s visit, including one of the same building displaying his portrait, shared by the official X handle of the Maldives National Defence Force (MNDF). The post read: “His Excellency Prime Minister Shri @narendramodi was warmly welcomed by His Excellency President Dr.@MMuizzu at Republic Square, where he was honored with a Guard of Honor by #MNDF on his state visit to Maldives.” This image, captured from a different angle, also does not feature the word “SURRENDER”.


Conclusion:
The claim that the Maldives displayed a picture of PM Modi with a surrender message is incorrect and misleading. The image has been altered and is being spread to mislead people and stir up controversy. Users should verify the authenticity of photos before sharing them.
- Claim: Viral image shows the Maldives mocking India with a surrender sign
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
The recent cyber-attack on Jaguar Land Rover (JLR), one of the world's best-known carmakers, has exposed extensive weaknesses in the interconnected nature of international supply chains. The incident highlights the growing cybersecurity challenges facing industries undergoing digital transformation. With production halted at several UK factories, supply chains disrupted, and services delayed for customers worldwide, the attack shows how a single cyber event can ripple into operational, financial, and reputational risks for large businesses.
The Anatomy of a Breakdown
Jaguar Land Rover, a Tata Motors subsidiary, was forced to shut down its IT infrastructure following a cyber-attack over a weekend. The shutdown was an emergency measure taken to contain the damage, and the disruption to business was serious.
- Production Halted: The car plants at Halewood (Merseyside) and Solihull (West Midlands) and the engine plant at Wolverhampton were completely shut down.
- Sales and Distribution: Car sales were significantly impaired during September's high-volume registration period, although some transactions were still processed through manual procedures.
- Global Effect: The breakdown was not confined to the UK; dealers and repair specialists across the world, including in Australia, were left without access to parts databases.
JLR called the recovery process "extremely complex", as it involved a controlled restoration of systems and alternative workarounds for offline services. The attack had an immediate and massive impact on suppliers and customers, and it has raised larger questions about the sustainability of digital ecosystems in the automotive value chain.
The Human Impact: Beyond JLR's Factories
The implications of the cyber-attack have extended beyond the production lines of JLR:
- Independent Garages: Repair centres such as Nyewood Express in West Sussex reported that they could not access vital parts databases, bringing repair work to a standstill and leaving clients waiting indefinitely.
- Global Dealers: Land Rover specialists as far away as Tasmania reported complete system outages, highlighting the global dependency on centralised IT systems.
- Customer Frustration: Customers needing urgent repairs were left stranded because replacement parts could not be ordered from the original manufacturer.
This attack exemplifies the cascading effect of cyber disruptions across interconnected industries, where a single point of failure can paralyse an entire ecosystem.
The Culprit: The Hacker Collective
Responsibility for the attack has been claimed by a hacker collective calling itself "Scattered Lapsus$ Hunters." The group says it consists of young English-speaking hackers and has previously targeted blue-chip brands such as Marks & Spencer. While the attackers have not publicly stated whether they exfiltrated sensitive information or deployed ransomware, they posted screenshots of internal JLR documents, including troubleshooting guides and system logs, indicating unauthorised access to some of Jaguar Land Rover's core IT systems.
Jaguar Land Rover has stated, without offering supporting evidence, that it has seen no indication that customer data was accessed; even so, the very occurrence of the attack raises serious questions about insider threats, social engineering, and how effective cybersecurity governance frameworks really are.
Cybersecurity Weaknesses and Lessons Learned
The JLR attack exposes several weaknesses common to large-scale manufacturing organizations:
- Centralized IT Dependencies: Today's auto firms rely on worldwide IT systems for operations, logistics, and customer care; a single compromise can cause broad outages.
- Supply Chain Vulnerabilities: Tier-1 and Tier-2 suppliers depend on OEM systems to place and trace component orders; a disruption at the OEM level automatically halts their processes.
- Inadequate Incident Visibility: Several suppliers complained of receiving no clear information from JLR, which increased uncertainty and financial losses.
- Rise of Youth Hacking Groups: The involvement of young hacker groups highlights the need for active monitoring and community-level cybersecurity awareness initiatives.
Broader Industry Context
This incident sits within a broader pattern of escalating cyber-attacks on the automotive industry, a sector being rapidly digitalised through connected cars, IoT-based factories, and cloud-based operations. In 2023, JLR awarded an £800 million contract to Tata Consultancy Services (TCS) for services supporting the company's digital transformation and cybersecurity enhancement. This attack shows that, no matter how much is spent, a poorly conceptualised security programme cannot keep pace with ever-changing cyber threats.
What Can Organizations Do? – Cyberpeace Recommendations
To contain such risks and build resilience against similar events, organizations need to implement a multi-layered approach to cybersecurity:
- Adopt Zero Trust Architecture - Presume breach as the new normal: verify each user, device, and application before granting access, even inside the internal network (a minimal sketch of this idea follows after this list).
- Enhance Supply Chain Security - Perform routine, targeted assessments to identify and reduce risk factors in suppliers. Include rigorous cybersecurity provisions in supplier agreements, namely vulnerability disclosure requirements and agreed incident-response timelines.
- Durable Backups and Restoration - Keep backups isolated and encrypted so that operations can continue in the event of ransomware or any other system compromise.
- Periodic Red Team Exercises - Simulate cyber-attacks on IT and OT systems to uncover vulnerabilities and evaluate current incident-response measures.
- Employee Training and Insider Threat Monitoring - With social engineering at the forefront of attack vectors, continuous training and behavioural monitoring are needed to prevent credential compromise.
- Public-Private Partnership - Engage with government agencies and cybersecurity groups to share threat intelligence and adopt best practices aligned with ISO/IEC 27001 and the NIST Cybersecurity Framework.
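To make the Zero Trust recommendation concrete, below is a minimal Python sketch of per-request verification. It is illustrative only: the registries, roles, and resource names are hypothetical assumptions for this sketch, not JLR's actual architecture or any vendor's API.

```python
# Minimal sketch of Zero Trust-style per-request checks ("presume breach").
# All registries, roles, and resource names below are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    device_id: str
    mfa_verified: bool
    resource: str

# Hypothetical stand-ins; a real deployment would query an identity
# provider and a device-compliance service instead.
TRUSTED_DEVICES = {"laptop-001", "laptop-002"}
ROLE_OF_USER = {"alice": "engineer", "bob": "contractor"}
ENTITLEMENTS = {("engineer", "parts-db"), ("engineer", "build-server"),
                ("contractor", "parts-db")}

def authorize(req: Request) -> bool:
    """Verify user, device, and entitlement on every request,
    even when it originates inside the corporate network."""
    if not req.mfa_verified:                  # authenticate the user
        return False
    if req.device_id not in TRUSTED_DEVICES:  # check device posture
        return False
    role = ROLE_OF_USER.get(req.user_id)
    # Least privilege: the role must be explicitly entitled to the resource.
    return (role, req.resource) in ENTITLEMENTS

if __name__ == "__main__":
    print(authorize(Request("alice", "laptop-001", True, "build-server")))  # True
    print(authorize(Request("bob", "laptop-001", True, "build-server")))    # False
```

The design point is that no request is trusted by default: every access decision re-checks identity, device state, and entitlement, so a compromised internal account or machine cannot roam freely.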
Conclusion
The attack on Jaguar Land Rover is yet another reminder that cybersecurity can no longer be treated as a back-office function; it is a business-continuity concern at the very core of the organization. As digital transformation proceeds, the attack surface grows, making organizations prime targets for cybercriminals. Operational security demands proactive cybersecurity, resilient supply chains, and stakeholders working together. The JLR attack is not an isolated event; it is a warning to the entire automotive sector to maintain security at every level of digitalization.
References
- https://www.bbc.com/news/articles/c1jzl1lw4y1o
- https://www.theguardian.com/business/2025/sep/07/disruption-to-jaguar-land-rover-after-cyber-attack-may-last-until-october
- https://uk.finance.yahoo.com/news/jaguar-factory-workers-told-stay-073458122.html
The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology is defined by a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice and produce realistic images and videos of almost anyone doing almost anything. The proliferation of deepfake content in the media poses great challenges to the functioning of democracies, especially as such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are created using AI, which combines different technologies to produce synthetic content.
Understanding Deepfakes
Deepfakes are synthetically generated content created using artificial intelligence (AI). The technology uses advanced machine-learning algorithms to create hyper-realistic videos from a person’s face, voice, or likeness. The use and progression of deepfake technology holds vast potential, both benign and malicious.
One example: in 2019, the NGO Malaria No More used deepfake technology to sync David Beckham’s lip movements with voices in nine languages, amplifying its anti-malaria message.
Deepfakes have a dark side too. They have been used to spread false information, manipulate public opinion, and damage reputations. They can harm mental health and have significant social impacts. The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches.
India’s Legal Landscape Surrounding Deepfakes
India presently lacks a specific law dealing with deepfakes, but existing legal provisions offer some safeguards against the harm they cause.
- Deepfakes created with the intent of spreading misinformation or damaging someone’s reputation can be prosecuted under the Bharatiya Nyaya Sanhita, 2023, whose Section 356 governs defamation.
- The Information Technology Act, 2000, is the primary law regulating Indian cyberspace. Any unauthorised disclosure of personal information used to create deepfakes for harassment or voyeurism violates the Act.
- The unauthorised use of a person's likeness in a deepfake can violate their intellectual property rights and lead to copyright infringement.
- India’s privacy law, the Digital Personal Data Protection Act, regulates and limits the misuse of personal data. It has the potential to address deepfakes by ensuring that individuals’ likenesses are not used without their consent in digital contexts.
India, at present, needs legislation that specifically addresses the challenges deepfakes pose. The proposed Digital India Act aims to tackle various digital issues, including the misuse of deepfake technology and the spread of misinformation. Additionally, states like Maharashtra have proposed laws targeting deepfakes used for defamation or fraud, highlighting growing concerns about their impact on the digital landscape.
Policy Approaches to Regulation of Deepfakes
- Criminalising and penalising the creation and distribution of harmful deepfakes will act as a deterrent.
- Mandatory disclosure for synthetic media should be introduced, informing viewers that the content has been created using AI.
- Encouraging tech companies to implement stricter policies on deepfake content moderation can enhance accountability and reduce harmful misinformation.
- Public understanding of deepfakes should be promoted, especially via awareness campaigns that empower citizens to critically evaluate digital content and make informed decisions.
Deepfakes: A Global Overview
There has been increasing momentum to regulate deepfakes globally. In October 2023, US President Biden signed an executive order on AI risks instructing the US Commerce Department to develop labelling standards for AI-generated content. California and Texas have passed laws against the malicious distribution of deepfakes in electoral contexts, and Virginia has a law targeting the non-consensual distribution of deepfake pornography.
China has promulgated regulations requiring the explicit marking of doctored content. The European Union has tightened its Code of Practice on Disinformation, requiring social media platforms to flag deepfakes or risk hefty fines, and has proposed transparency mandates under the EU AI Act. These measures reflect a global recognition of the risks deepfakes pose and the need for a robust regulatory framework.
Conclusion
With deepfakes posing a significant risk to trust and democratic processes, a multi-pronged approach to regulation is in order. From enshrining measures against deepfake misuse in specific laws and penalising offenders, to mandating transparency and building public awareness, legislators have a challenge ahead of them. National and international efforts highlight the urgent need for a comprehensive framework that curbs misuse while promoting responsible innovation. Cooperation will be vital to safeguard truth and integrity in the digital age.
References
- https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
- https://www.thehindu.com/news/national/regulating-deepfakes-generative-ai-in-india-explained/article67591640.ece
- https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/
- https://thesecretariat.in/article/wake-up-call-for-law-making-on-deepfakes-and-misinformation
Introduction
The link between social media and misinformation is undeniable. Misinformation, particularly the kind that evokes emotion, spreads like wildfire on social media and has serious consequences: undermining democratic processes, discrediting science, and promulgating hateful discourses that may incite physical violence. If left unchecked, misinformation propagated through social media can incite social disorder, as seen in countless ethnic clashes worldwide. This is why social media platforms have come under growing pressure to combat misinformation and have developed models such as fact-checking services and community notes to check its spread. This article explores the pros and cons of these models and evaluates their broader implications for online information integrity.
How the Models Work
- Third-Party Fact-Checking Model (formerly used by Meta): Meta initiated this program in 2016 after claims of foreign election interference through dis/misinformation on its platforms. It entered partnerships with third-party organizations like AFP and specialist sites like Lead Stories and PolitiFact, which are certified by the International Fact-Checking Network (IFCN) for meeting neutrality, independence, and editorial quality standards. These fact-checkers identify misleading claims that go viral on the platforms and publish verified articles on their websites providing correct information. They also submit these to Meta through an interface, which may link the fact-checked article to the social media post containing the factually incorrect claims. The post then gets flagged for false or misleading content, and a link to the article appears under the post for users to refer to. Such content is demoted by the platform's algorithm, though not removed entirely unless it violates Community Standards (a minimal sketch of this flag-and-demote flow appears after this list). However, in January 2025, Meta announced it was scrapping this program and would begin testing X’s Community Notes model in the USA before rolling it out in the rest of the world. It alleges that the independent fact-checking model is riddled with personal biases, lacks transparency in decision-making, and has evolved into a censorship tool.
- Community Notes Model (used by X and being tested by Meta): This model relies on crowdsourced contributors who sign up for the program, write contextual notes on posts, and rate notes written by other users on X. The platform uses a bridging algorithm to publicly display those notes that receive cross-ideological consensus from raters across the political spectrum. It does this by boosting notes that receive support regardless of the political leaning of the raters, which it infers from their ratings of previous notes (a simplified sketch of such a bridging algorithm follows below). The benefit of this system is that biases are less likely to creep into the flagging mechanism. Further, the process is more transparent than independent fact-checking: all Community Notes contributions are publicly available for inspection, and the ranking algorithm is open, allowing anyone to evaluate the system externally.
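To make the first model's flag-and-demote flow concrete, here is a minimal Python sketch. The data model, function names, and demotion factor are assumptions for illustration; Meta's actual pipeline and parameters are not public.

```python
# Hypothetical sketch of the fact-check flag-and-demote flow described
# above. Meta's real pipeline and demotion parameters are not public.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    base_score: float                        # ranking score from the feed algorithm
    fact_checks: list[str] = field(default_factory=list)

def attach_fact_check(post: Post, article_url: str) -> None:
    """Link a fact-checker's verified article to the post, flagging it."""
    post.fact_checks.append(article_url)

def ranking_score(post: Post, demotion_factor: float = 0.2) -> float:
    """Flagged posts are demoted in the feed, not removed (removal is
    reserved for Community Standards violations)."""
    return post.base_score * (demotion_factor if post.fact_checks else 1.0)

post = Post("p1", base_score=10.0)
attach_fact_check(post, "https://factchecker.example/claim-review")
print(ranking_score(post))  # 2.0 -- demoted but still visible, with the article linked
```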
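The bridging idea behind Community Notes can also be sketched. X's published guide describes a matrix-factorization approach in which a note's intercept term captures the helpfulness that remains after rater viewpoint is factored out; the simplified sketch below follows that spirit, though the toy data, learning rate, regularisation, and threshold logic are assumptions rather than X's production values.

```python
# Simplified sketch of bridging-based note ranking, in the spirit of the
# published Community Notes algorithm. Toy data and hyperparameters are
# illustrative assumptions, not X's production values.
import numpy as np

# ratings[u, n]: 1.0 = "helpful", 0.0 = "not helpful", nan = no rating.
# Raters 0-1 and raters 2-3 disagree on notes 0 and 1 (viewpoint split),
# but both camps rate note 2 helpful -- the "bridging" signal.
ratings = np.array([
    [1.0,    0.0,    1.0],
    [1.0,    0.0,    np.nan],
    [np.nan, 1.0,    1.0],
    [0.0,    1.0,    1.0],
])
n_users, n_notes = ratings.shape

# Model: rating ~ mu + user_bias[u] + note_bias[n] + user_vec[u] * note_vec[n].
# The 1-D factors absorb viewpoint agreement; note_bias is the helpfulness
# that survives once viewpoint is factored out, so it serves as the score.
rng = np.random.default_rng(0)
mu = 0.0
user_bias, note_bias = np.zeros(n_users), np.zeros(n_notes)
user_vec = rng.normal(0, 0.1, n_users)
note_vec = rng.normal(0, 0.1, n_notes)
lr, reg = 0.05, 0.1

for _ in range(2000):  # plain SGD over the observed cells
    for u in range(n_users):
        for n in range(n_notes):
            r = ratings[u, n]
            if np.isnan(r):
                continue
            err = r - (mu + user_bias[u] + note_bias[n] + user_vec[u] * note_vec[n])
            mu += lr * err
            user_bias[u] += lr * (err - reg * user_bias[u])
            note_bias[n] += lr * (err - reg * note_bias[n])
            user_vec[u], note_vec[n] = (
                user_vec[u] + lr * (err * note_vec[n] - reg * user_vec[u]),
                note_vec[n] + lr * (err * user_vec[u] - reg * note_vec[n]),
            )

# Notes whose intercept clears a threshold would be shown publicly;
# the cross-camp note (note 2) should score highest here.
print("note helpfulness (intercepts):", np.round(note_bias, 2))
```

The key design choice is scoring notes by the intercept rather than raw approval counts, so a note endorsed only by one ideological camp gains little, while one endorsed across camps rises.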
CyberPeace Insights
Meta’s uptake of a crowdsourced model signals social media’s shift toward decentralized content moderation, giving users more influence over what gets flagged and why. However, the model’s reliance on diverse agreement can make it slow. A study (Wirtschafter & Majumder, 2023) shows that only about 12.5 per cent of all submitted notes are ever shown to the public, meaning most misleading content goes unchecked. Further, many notes on divisive issues like politics and elections may never see the light of day, since reaching consensus on such topics is hard. Many misleading posts may thus never be publicly flagged at all, hindering risk mitigation efforts. This casts doubt on the model’s ability to check the virality of posts that can have adverse societal impacts, especially on vulnerable communities. On the other hand, the fact-checking model suffers from a lack of transparency, which has damaged user trust and led to allegations of bias.
Since both models have their advantages and disadvantages, the future of misinformation control will likely require a hybrid approach. Information accuracy and polarization on social media are problems bigger than any single tool or model can handle effectively. Platforms can therefore combine expert validation with crowdsourced input to achieve accuracy, transparency, and scalability.
Conclusion
Meta’s shift to a crowdsourced model of fact-checking is likely to have major implications for public discourse, since social media platforms hold immense power over how their policies affect politics, the economy, and societal relations at large. The change comes against a backdrop of sweeping cost-cutting in the tech industry, political shifts in the USA and abroad, and increasing attempts to make Big Tech platforms more accountable in jurisdictions like the EU and Australia, which are known for their welfare-oriented policies. These concurrent contestations are likely to shape the direction that misinformation-countering tactics take. Until then, the crowdsourcing model remains in development, and its efficacy is yet to be seen, especially on polarizing topics.
References
- https://www.cyberpeace.org/resources/blogs/new-youtube-notes-feature-to-help-users-add-context-to-videos
- https://en-gb.facebook.com/business/help/315131736305613?id=673052479947730
- http://techxplore.com/news/2025-01-meta-fact.html
- https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
- https://communitynotes.x.com/guide/en/about/introduction
- https://blogs.lse.ac.uk/impactofsocialsciences/2025/01/14/do-community-notes-work/
- https://www.techpolicy.press/community-notes-and-its-narrow-understanding-of-disinformation/
- https://www.rstreet.org/commentary/metas-shift-to-community-notes-model-proves-that-we-can-fix-big-problems-without-big-government/
- https://tsjournal.org/index.php/jots/article/view/139/57