#FactCheck - "AI-Generated Image of UK Police Officers Bowing to Muslims Goes Viral"
Executive Summary:
A viral picture on social media showing UK police officers bowing to a group of Muslims has led to debates and discussions. The investigation by the CyberPeace Research team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
A reverse image search on the viral image did not lead to any credible news source or original post confirming its authenticity. Image analysis revealed a number of anomalies typical of AI-generated images, such as inconsistencies in the officers' uniforms and facial expressions. The shadows and reflections on the officers' uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.

We then analysed the image using an AI detection tool named True Media. The tool indicated that the image was highly likely to have been generated by AI.
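For readers who want to approximate this kind of check themselves, below is a minimal sketch of running an image through an open-source AI-image detector using the Hugging Face transformers library. The model name and file path are illustrative assumptions (this is not the True Media tool used in the investigation), and scores from such classifiers are probabilistic, so they should only supplement manual analysis and reverse image searches.

```python
# Minimal sketch: score an image with an off-the-shelf AI-image detector.
# The model name below is an assumption for illustration, not the tool
# referenced in this fact-check.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="umm-maybe/AI-image-detector",  # assumed publicly available detector
)

predictions = detector("viral_image.jpg")  # local path to the image under review
for prediction in predictions:
    print(f"{prediction['label']}: {prediction['score']:.2%}")
```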



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. CyberPeace Research Team confirms that the picture was artificially created, and the viral claim is misleading and false.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading
Introduction
Pagers were commonly utilized in the late 1990s and early 2000s, especially in fields that needed fast, reliable communication and swift alerts and information sharing. Pagers typically offer a broader coverage range, particularly in remote areas with limited cellular signals, which enhances their dependability. They are simple electronic devices with minimal features, making them easy to use and less prone to technical issues. The decline in their use has been caused by the rise of mobile phones and their extensive features, offering more advanced communication options like voice calls, text messages, and internet access. Despite this, pagers are still used in some specific industries.
A shocking incident occurred on 17th September 2024, when thousands of pager devices exploded within seconds across Lebanon in a synchronized attack targeting the US-designated terror group Hezbollah. The explosions killed at least 9 people and injured over 2,800 in a country already caught up in the Israel-Palestine tensions playing out in its backyard.
The Pager Bombs Incident
On Tuesday, 17th September 2024, hundreds of pagers carried by Hezbollah members in Lebanon exploded in an unprecedented attack, eclipsing the series of covert assassinations and cyber-attacks seen in the region over recent years. The Iran-backed militant group said the wireless devices began to explode around 3:30 p.m. local time in a targeted attack on Hezbollah operatives. The pagers that exploded were new and had been purchased by Hezbollah in recent months. Experts say the explosions underscore Hezbollah's vulnerability, as its communication network was compromised to deadly effect. Several areas of the country were affected, particularly Beirut's southern suburbs, a populous area that is a known Hezbollah stronghold. At least 9 people were killed, including a child, and about 2,800 people were wounded, overwhelming Lebanese hospitals.
Second Wave of Attack
As per the most recent reports, the next day, following the pager bombing incident, a second wave of blasts hit Beirut and multiple parts of Lebanon. Certain wireless devices such as walkie-talkies, solar equipment, and car batteries exploded, resulting in at least 9 people killed and 300 injured, according to the Lebanese Health Ministry. The attack is said to have embarrassed Hezbollah, incapacitated many of its members, and raised fears about a greater escalation of hostilities between the Iran-backed Lebanese armed group and Israel.
A New Kind of Threat - ‘Cyber-Physical’ Attacks
The incident raises serious concerns about physical tampering with daily-use electronic devices and the possibility of triggering a new age of warfare. It highlights the serious physical threat posed: even devices such as smartwatches, earbuds, and pacemakers could be vulnerable to tampering if an attacker gains physical access to them. We are potentially looking at a new age of ‘cyber-physical’ threats, where the boundaries between the digital and the physical are blurring rapidly. The attack raises questions about unauthorised access to, and manipulation of, the physical security of such electronic devices. There is cause for concern for global supply chains across sectors if even seemingly innocuous devices can be weaponised to such devastating effect. Attacks of this kind are capable of causing significant disruption and casualties, as demonstrated by the pager bombings in Lebanon, which resulted in numerous deaths and injuries. The incident also raises questions about regulatory mechanisms and oversight checks at every stage of the electronic device lifecycle, from component manufacturing to final assembly and shipment. This is a grave issue because adversaries who embed explosives or make other malicious modifications can turn such electronic devices into weapons.
CyberPeace Outlook
The pager bombing attack demonstrates a new era of threats and warfare tactics, revealing the advanced coordination and technical capabilities of adversaries who have weaponised daily-use electronic devices. By targeting the hardware of these devices, they have presented a serious new threat to hardware security. The threat is grave and has understandably raised widespread apprehension globally. Such gross weaponisation of daily-use devices, especially in a conflict context, also triggers concerns about violations of International Humanitarian Law principles. It further raises serious questions about the liability of the companies, suppliers, and manufacturers of such devices, who are subject to regulatory checks and responsible for ensuring the authenticity of their products.
The incident highlights the need for a more robust regulatory landscape, with stricter supply chain regulations, as we adjust to the realities of a possible new era of weaponisation and conflict. CyberPeace recommends incorporating stringent tracking and vetting processes into product supply chains, along with strengthening international cooperation mechanisms to ensure compliance with protocols on the responsible use of technology. These measures will go a long way towards establishing peace in global cyberspace and restoring trust and safety in everyday technologies.
References:
- https://indianexpress.com/article/what-is/what-is-a-pager-9573113/
- https://www.theguardian.com/world/2024/sep/18/hezbollah-pager-explosion-lebanon-israel-gold-apollo

Introduction
CyberPeace Chronicles is a one-stop destination for the latest news, updates, and findings from global cyberspace. As we step into the cyber age, it is pertinent that we incorporate cybersecurity practices into our day-to-day activities. From laptops to automated homes and cars, we are surrounded by technology in some form or another. With this increased dependency, we need to minimise the vulnerabilities and threats around us and create robust, sustainable safety mechanisms for ourselves and future generations.
What, When and How?
- WinRAR Update: CVE-2023-38831, a serious vulnerability identified in WinRAR versions prior to 6.23, was first exploited in the wild around April 2023. When users attempted to open seemingly harmless files inside ZIP archives, the flaw allowed attackers to run arbitrary code. Cybercriminals distributed malware families such as DarkMe, GuLoader, and Remcos RAT by taking advantage of this vulnerability. It is essential to update WinRAR to version 6.23 or later in order to protect your computer and your data. Follow these steps to secure your device (a minimal version-check sketch follows this list) -
- Checking Your Current WinRAR Version
- Downloading the Latest WinRAR Version
- Installing the Updated WinRAR
- Completing the Installation
- Verifying the Update
- Cleaning Up
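As a quick illustration of the first two steps, the sketch below checks the installed WinRAR version on Windows by scanning the standard uninstall registry entries and comparing it against 6.23. It is a minimal example assuming Python 3 on a default Windows installation; registry locations can vary, so treat it as a starting point rather than a definitive audit.

```python
# Minimal sketch (Windows, Python 3): locate the installed WinRAR version in the
# standard uninstall registry keys and flag it if it is older than 6.23.
import winreg

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def find_winrar_version():
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                entry = winreg.OpenKey(root, winreg.EnumKey(root, i))
                name, _ = winreg.QueryValueEx(entry, "DisplayName")
                if "winrar" in name.lower():
                    version, _ = winreg.QueryValueEx(entry, "DisplayVersion")
                    return version
            except OSError:
                continue
    return None

version = find_winrar_version()
if version is None:
    print("WinRAR does not appear to be installed.")
elif tuple(int(p) for p in version.split(".")[:2]) < (6, 23):
    print(f"WinRAR {version} is vulnerable - update to 6.23 or later.")
else:
    print(f"WinRAR {version} is already 6.23 or later.")
```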
- Indonesian Hacker Groups Target Indian Digital Infrastructure: As India geared up to host the G20 delegation for the Leadership Summit, various reports pointed to cyber attacks of varying forms and intensity against Indian organisations and digital infrastructure. Tech firms in India have traced the origin of the attacks to Indonesia, and hacker groups backed by anti-India elements are believed to have been targeting India's digital resources. Organisations and central agencies such as the Computer Emergency Response Team (CERT-In), the National Critical Information Infrastructure Protection Centre (NCIIPC), the Indian Cybercrime Coordination Centre (I4C), Delhi Police, the Intelligence Bureau (IB), the Research and Analysis Wing (R&AW), the National Investigation Agency (NIA), and the Central Bureau of Investigation (CBI) have constantly been working to keep India's digital interests safe and secure; with the G20 summit ongoing, it is pertinent to be mindful of prevailing threats and to prepare countermeasures.
- CL0P Ransomware: The CL0P ransomware is thought to have first surfaced in early 2019 and is attributed to a Russian-speaking cybercriminal organisation. It is frequently connected to the financially motivated threat actor FIN11 (also known as TA505 and Snakefly). CL0P has targeted businesses running vulnerable versions of the Accellion FTA file transfer appliance, and the associated vulnerabilities have reportedly been used to access victim data and, in some cases, pivot into victim networks. Numerous well-publicised CL0P attacks have affected organisations all over the world, and the group is particularly known for exploiting zero-day vulnerabilities in Managed File Transfer (MFT) programmes. The gang went after Accellion File Transfer Appliance (FTA) devices in 2020 and 2021, then Fortra/Linoma GoAnywhere MFT servers in early 2023, and later MOVEit Transfer deployments in June 2023. Up to 500 organisations are thought to have been harmed by this aggressive campaign. Some of the ways to mitigate the risk are as follows:
- Regular Software Updates: Keeping programmes and systems updated closes known security flaws that attackers frequently exploit.
- Employee Training: Employee training can significantly lower the likelihood of successful penetration by educating staff members about phishing scams and safe internet conduct.
- Network Segmentation: By separating networks and restricting lateral movement, a ransomware attack's potential effects can be reduced.
- Regular Data Backups: Regularly backing up data and storing it offsite can lessen the impact of encryption and reduce the pressure to pay a ransom.
- Security solutions: Putting in place effective cybersecurity measures like firewalls, intrusion detection systems, and cutting-edge endpoint protection can greatly improve an organisation's defences.
- Increased scrutiny for SIM card vendors: As phishing and smishing scams are on the rise in India, the Telecom Regulatory Authority of India (TRAI) has repeatedly issued notifications and consultation papers to address this growing concern. Earlier this year, TRAI notified that promotional calls may no longer be made from 10-digit personal numbers; companies must instead obtain authorised 9-digit numbers for promotional calls and SMSs. To reinforce this, TRAI has laid down that all SIM card vendors must be verified afresh, and any discrepancy found against a vendor will lead to blacklisting and penal action.
Conclusion
In conclusion, the digital landscape in 2023 is rife with both opportunities and challenges. The recent discovery of a critical vulnerability in WinRAR underscores the importance of regularly updating software to protect against malicious attacks. It is imperative for users to follow the provided steps to secure their devices and safeguard their data. Furthermore, the cyber threat landscape continues to evolve, with Indonesian hacker groups targeting Indian digital infrastructure, particularly during significant events like the G20 summit. Indian organisations and cybersecurity agencies are working diligently to defend against these threats and ensure the security of digital assets.

The emergence of ransomware attacks, exemplified by the CL0P ransomware, serves as a stark reminder of the need for robust cybersecurity measures. Regular software updates, employee training, network segmentation, data backups, and advanced security solutions are crucial components of a comprehensive defence strategy against ransomware and other cyber threats. Additionally, the Telecom Regulatory Authority of India's efforts to enhance security in the telecommunications sector, such as stricter verification of SIM card vendors, demonstrate a proactive approach to addressing the rising threat of phishing and smishing scams.

In this dynamic digital landscape, staying informed and implementing proactive cybersecurity measures is essential for individuals, organisations, and nations to protect their digital assets and maintain a secure online environment. Vigilance, collaboration, and ongoing adaptation are key to meeting the challenges posed by cyber threats in 2023 and beyond.
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation at scale. These technologies can create manipulative audio and video content, spread political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation involves expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing incorrect information.
AI's Role in Misinformation Creation
AI's capabilities to generate content have grown exponentially in recent years. Legitimate uses of AI often take a backseat, and the technology ends up being exploited to repurpose content that already exists on the internet. One of the main examples of misinformation flooding the internet is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track and verify what is true or false.
The netizens in India are greatly influenced by viral content on social media. AI-generated misinformation can have particularly negative consequences. Being literate in the traditional sense of the word does not automatically guarantee one the ability to parse through the nuances of social media content authenticity and impact. Literacy, be it social media literacy or internet literacy, is under attack and one of the main contributors to this is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation that can be found are related to elections, public health, and communal issues. These issues have one common factor that connects them, which is that they evoke strong emotions in people and as such can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability and even violence. Such developments lead to public mistrust in the authorities and institutions, which is dangerous in any economy, but even more so in a country like India which is home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
GAI has entered the realm of autonomous content production and language creation, which is closely linked to the issue of misinformation. It is often difficult to determine whether content originates from humans or machines, and whether we can trust what we read, see, or hear. This has left media users more confused about their relationship with media platforms and content, and has highlighted the need to revisit traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate content anonymously, and the massive volume of data produced, make it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds further layers to an already complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies therefore need to cater to this multilingual population.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a framework that specifically addresses AI-generated content. It should include stricter penalties for the origination and dissemination of fake content, proportionate to its consequences, and should establish clear, concise guidelines for social media platforms to ensure that proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in AI-driven tools for customised detection and flagging of misinformation in real time. This can help in identifying deepfakes, manipulated images, and other forms of AI-generated content (a minimal illustrative sketch follows this list).
- Encouraging collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
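To make the AI-driven detection point above more concrete, the following sketch shows one simple building block: a zero-shot text classifier that scores a claim against misinformation-related labels. It assumes the Hugging Face transformers library and the publicly available facebook/bart-large-mnli model; the example claim, labels, and any thresholds are illustrative, and a production system would combine such signals with fact-check databases, image forensics, and human review.

```python
# Minimal sketch: score a social media claim against misinformation-related
# labels with an off-the-shelf zero-shot classifier. Labels are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "UK police officers bowed to a group of people during a street protest."
labels = ["unverified claim", "factual report", "satire"]

result = classifier(claim, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2%}")

# In practice, low-confidence or 'unverified claim' outputs would be routed to
# human fact-checkers rather than acted on automatically.
```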
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks are growing in step with the nation's rapid technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that focus not only on regulation and technological innovation but also on public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defence frameworks, and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62