#FactCheck - Edited Video Falsely Claims an Attack on PM Netanyahu in the Israeli Senate
Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate. However, the CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that splice two unrelated clips into one and present them under a false claim. The original footage has no connection to any attack on Mr. Netanyahu. The claim is therefore false and misleading.

Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search on keyframes of the video. The search led us to several legitimate sources covering an attack on an ethnic Turkish party leader in Bulgaria; none of them involved Prime Minister Benjamin Netanyahu or any attack on him.
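For readers who want to replicate this step, a reverse image search on a video usually starts by pulling out a handful of representative keyframes and searching each one individually. The Python sketch below (assuming OpenCV is installed; the file names and sampling interval are illustrative choices, not our actual workflow) shows one simple way to extract such keyframes.

```python
# Minimal sketch: save one frame every few seconds so the stills can be fed
# to a reverse image search engine. Assumes the opencv-python package.
import cv2

def extract_keyframes(video_path: str, every_n_seconds: int = 5) -> list[str]:
    """Save one frame every `every_n_seconds` seconds and return the file paths."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    step = int(fps * every_n_seconds)
    saved, frame_index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % step == 0:
            out_path = f"keyframe_{frame_index}.jpg"
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        frame_index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    # "viral_video.mp4" is a placeholder file name.
    print(extract_keyframes("viral_video.mp4"))
```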

We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis indicated, with 68.0% confidence, that the video had been edited. The tools identified "substantial evidence of manipulation," particularly a change in the visual quality of the footage partway through and a break in continuity where the overall background environment changes abruptly.
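The kind of continuity break these tools flag can also be probed directly: a sudden jump in the colour statistics of consecutive frames often marks a splice point. The sketch below (assuming OpenCV; the threshold is an arbitrary illustrative value, and this is not how TrueMedia.org works internally) shows the basic idea.

```python
# Minimal sketch: flag abrupt changes between consecutive frames, the kind of
# continuity break that can indicate two clips spliced together.
import cv2

def find_abrupt_cuts(video_path: str, threshold: float = 0.5) -> list[int]:
    """Return frame indices where the colour histogram changes sharply."""
    capture = cv2.VideoCapture(video_path)
    cuts, previous_hist, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if previous_hist is not None:
            similarity = cv2.compareHist(previous_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:  # low correlation = sudden scene change
                cuts.append(index)
        previous_hist = hist
        index += 1
    capture.release()
    return cuts
```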



Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident. No credible news reports link the Israeli Prime Minister to such an attack, further confirming the video's inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is older footage that has been edited. Analysis with several AI detection tools confirms that the video was manipulated by splicing together unrelated clips, and no official source records any such incident. The CyberPeace Research Team therefore confirms that the video was created with video editing technology and that the claim is false and misleading.
- Claim: Attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading

Executive Summary:
A viral picture on social media showing UK police officers bowing to a group of Muslims has led to debates and discussions. The investigation by the CyberPeace Research Team found that the image is AI-generated. The viral claim is false and misleading.

Claims:
A viral image on social media depicts UK police officers bowing to a group of Muslim people on the street.


Fact Check:
We conducted a reverse image search on the viral image. It did not lead to any credible news source or original post confirming the image's authenticity. Our image analysis found a number of anomalies typical of AI-generated images, such as inconsistencies in the officers' uniforms and facial expressions. The shadows and reflections on the officers' uniforms did not match the lighting of the scene, and the facial features of the individuals appeared unnaturally smooth, lacking the detail expected in real photographs.
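One common first-pass forensic check that can corroborate such visual anomalies is error level analysis (ELA), which highlights regions whose compression behaviour differs from the rest of the image. The sketch below (assuming the Pillow library; the JPEG quality setting and file names are illustrative assumptions) shows a minimal version of that check. ELA on its own does not prove AI generation; it only flags areas worth a closer look.

```python
# Minimal sketch: error level analysis (ELA) with Pillow. The image is
# re-saved as JPEG and the per-pixel difference from the original is
# amplified, so regions with unusual compression stand out.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, out_path: str = "ela.png", quality: int = 90) -> None:
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)  # re-compress the image
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)         # per-pixel error levels
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    ImageEnhance.Brightness(diff).enhance(scale).save(out_path)

if __name__ == "__main__":
    error_level_analysis("viral_image.jpg")  # placeholder file name
```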

We then analysed the image using the AI detection tool TrueMedia.org. The tool indicated that the image was highly likely to have been generated by AI.



We also checked official UK police channels and news outlets for any records or reports of such an event. No credible sources reported or documented any instance of UK police officers bowing to a group of Muslims, further confirming that the image is not based on a real event.
Conclusion:
The viral image of UK police officers bowing to a group of Muslims is AI-generated. The CyberPeace Research Team confirms that the picture was artificially created and that the viral claim is false and misleading.
- Claim: UK police officers were photographed bowing to a group of Muslims.
- Claimed on: X, Website
- Fact Check: Fake & Misleading

Introduction
CyberPeace Chronicles is a one-stop source for the latest news, updates, and findings in global cyberspace. As we step further into the cyber age, it is pertinent that we incorporate cybersecurity practices into our day-to-day activities. From laptops to automated homes and cars, we are surrounded by technology in some form or another. With this increased dependency, we need to reduce the vulnerabilities and threats around us and create robust, sustainable safety mechanisms for ourselves and for future generations.
What, When and How?
- WinRAR Update: CVE-2023-38831, a serious vulnerability affecting WinRAR versions prior to 6.23, was exploited in the wild from April 2023. When users attempted to open seemingly harmless files inside ZIP archives, the flaw allowed attackers to run arbitrary code, and cybercriminals used it to deliver malware families such as DarkMe, GuLoader, and Remcos RAT. It is essential to update WinRAR to version 6.23 or later to protect your computer and your data. Follow these steps to secure your device (a quick version-check sketch follows this list):
- Checking Your Current WinRAR Version
- Downloading the Latest WinRAR Version
- Installing the Updated WinRAR
- Completing the Installation
- Verifying the Update
- Cleaning Up
- Indonesian Hacker Groups Target Indian Digital Infrastructure: As India geared up to host the G20 Leadership Summit, various reports pointed to cyber attacks of differing forms and intensity against Indian organisations and digital infrastructure. Tech firms in India have traced the origin of the attacks to Indonesia, and hacker groups backed by anti-India elements are believed to be targeting the country's digital resources. Organisations and central agencies such as the Computer Emergency Response Team (CERT-In), the National Critical Information Infrastructure Protection Centre (NCIIPC), the Indian Cybercrime Coordination Centre (I4C), Delhi Police, the Intelligence Bureau (IB), the Research and Analysis Wing (R&AW), the National Investigation Agency (NIA) and the Central Bureau of Investigation (CBI) have been working constantly to keep India's digital interests safe and secure. With the G20 summit under way, it is pertinent to stay mindful of prevailing threats and prepare counter-tactics against them.
- CL0P Ransomware: The CL0P ransomware is thought to have first surfaced in 2019, developed by a Russian-speaking cybercriminal organisation. It is frequently connected to the financially motivated threat actor FIN11 (also known as TA505 and Snakefly). CL0P has targeted businesses running vulnerable versions of the Accellion File Transfer Appliance (FTA), and the associated vulnerabilities have been used to access victim data and, in some cases, pivot into victim networks. Numerous well-publicised attacks carried out by CL0P have affected organisations all over the world. The CL0P operators are especially known for developing zero-day exploits against Managed File Transfer (MFT) programmes: the gang went after Accellion FTA devices in 2020 and 2021, Fortra/Linoma GoAnywhere MFT servers in early 2023, and MOVEit Transfer deployments in June 2023. Up to 500 organisations are thought to have been harmed by this aggressive operation. Some of the ways to mitigate the risk are as follows:
- Regular Software Updates: Keeping programmes and systems up to date patches the known security flaws that criminals frequently exploit.
- Employee Training: Educating staff about phishing scams and safe internet conduct can significantly lower the likelihood of a successful intrusion.
- Network Segmentation: By separating networks and restricting lateral movement, a ransomware attack's potential effects can be reduced.
- Regular Data Backups: Regularly backing up data and storing it offsite lessens the impact of encryption and removes the pressure to pay a ransom.
- Security solutions: Putting in place effective cybersecurity measures like firewalls, intrusion detection systems, and cutting-edge endpoint protection can greatly improve an organisation's defences.
- Increased Scrutiny for SIM Card Vendors: As phishing and smishing scams rise in India, the Telecom Regulatory Authority of India (TRAI) has repeatedly issued notifications and consultation papers to address the concern. Earlier this year, TRAI notified that promotional calling will no longer be permitted from 10-digit personal numbers; companies will instead have to obtain authorised 9-digit numbers for promotional calls and SMSs. To make this more effective, TRAI has also laid down that all SIM card vendors will have to be re-verified, and any discrepancy found against a vendor will lead to blacklisting and penal action.
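Returning to the WinRAR advisory above: the first step, checking your current WinRAR version, can be automated on Windows by scanning the standard uninstall registry hive. The sketch below assumes a typical installation that registers a "DisplayVersion" value; treat it as a convenience check, not an authoritative inventory tool.

```python
# Minimal sketch: look up the installed WinRAR version from the Windows
# uninstall registry hive. Anything below 6.23 should be updated from the
# official WinRAR site.
import winreg

UNINSTALL_HIVES = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def find_winrar_version() -> str | None:
    for root in UNINSTALL_HIVES:
        try:
            hive = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, root)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(hive)[0]):  # iterate over sub-keys
            try:
                sub = winreg.OpenKey(hive, winreg.EnumKey(hive, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                if "winrar" in str(name).lower():
                    version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                    return str(version)
            except OSError:
                continue
    return None

if __name__ == "__main__":
    version = find_winrar_version()
    print(f"Installed WinRAR version: {version}" if version else "WinRAR not found")
```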
Conclusion
In conclusion, the digital landscape in 2023 is rife with both opportunities and challenges. The recent discovery of a critical vulnerability in WinRAR underscores the importance of regularly updating software to protect against malicious attacks. It is imperative for users to follow the provided steps to secure their devices and safeguard their data. Furthermore, the cyber threat landscape continues to evolve, with Indonesian hacker groups targeting Indian digital infrastructure, particularly during significant events like the G20 summit. Indian organisations and cybersecurity agencies are working diligently to defend against these threats and ensure the security of digital assets. The emergence of ransomware attacks, exemplified by the CL0P ransomware, serves as a stark reminder of the need for robust cybersecurity measures. Regular software updates, employee training, network segmentation, data backups, and advanced security solutions are crucial components of a comprehensive defence strategy against ransomware and other cyber threats. Additionally, the Telecom Regulatory Authority of India's efforts to enhance security in the telecommunications sector, such as stricter verification of SIM card vendors, demonstrate a proactive approach to addressing the rising threat of phishing and smishing scams. In this dynamic digital landscape, staying informed and implementing proactive cybersecurity measures is essential for individuals, organisations, and nations to protect their digital assets and maintain a secure online environment. Vigilance, collaboration, and ongoing adaptation are key to meeting the challenges posed by cyber threats in 2023 and beyond.

Introduction
The use of digital information and communication technologies for healthcare access has been on the rise in recent times. Mental health care is increasingly being provided through online platforms by remote practitioners, and even by AI-powered chatbots, which use natural language processing (NLP) and machine learning (ML) processes to simulate conversations between the platform and a user. Thus, AI chatbots can provide mental health support from the comfort of the home, at any time of the day, via a mobile phone. While this has great potential to enhance the mental health care ecosystem, such chatbots can present technical and ethical challenges as well.
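To make the idea concrete, the conversational loop behind such a chatbot can be pictured as "classify the user's message, then choose a response". The sketch below is a deliberately simplified, keyword-based stand-in for the trained NLP/ML models that real platforms use; the intents and responses are invented for illustration only.

```python
# Minimal illustrative sketch of a chatbot's classify-then-respond loop.
# Real mental health chatbots use trained language models, not keyword rules.
INTENT_KEYWORDS = {
    "anxiety": ["anxious", "panic", "worried"],
    "low_mood": ["sad", "hopeless", "down"],
    "greeting": ["hello", "hi", "hey"],
}

RESPONSES = {
    "anxiety": "It sounds like you're feeling anxious. Would a short breathing exercise help?",
    "low_mood": "I'm sorry you're feeling low. Would you like to talk about what's on your mind?",
    "greeting": "Hello! How are you feeling today?",
    "unknown": "I'm not sure I understood. Could you tell me a little more?",
}

def classify_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "unknown"

def reply(message: str) -> str:
    return RESPONSES[classify_intent(message)]

if __name__ == "__main__":
    print(reply("I've been feeling really anxious lately"))
```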
Background
According to the WHO's World Mental Health Report of 2022, 1 in 8 people globally is estimated to be living with some form of mental health disorder. The need for mental health services worldwide is high, but the care ecosystem is inadequate in both availability and quality. In India, it is estimated that there are only 0.75 psychiatrists per 100,000 people and that only 30% of mental health patients get help. With social stigma around mental health slowly thawing, especially among younger demographics, and support services largely confined to urban Indian centres, demand in the telehealth market is only projected to grow. This paves the way for, among other tools, AI-powered chatbots to fill the gap by providing quick, relatively inexpensive, and easy access to mental health counseling services.
Challenges
Users who seek mental health support are already vulnerable, and AI-induced oversight can exacerbate distress due to some of the following reasons:
- Inaccuracy: Apart from AI’s tendency to hallucinate data, chatbots may simply provide incorrect or harmful advice since they may be trained on data that is not representative of the specific physiological and psychological propensities of various demographics.
- Non-Contextual Learning: The efficacy of mental health counseling often relies on rapport-building between the service provider and client, relying on circumstantial and contextual factors. Machine learning models may struggle with understanding interpersonal or social cues, making their responses over-generalised.
- Reinforcement of Unhelpful Behaviors: In some cases, AI chatbots, if poorly designed, have the potential to reinforce unhealthy thought patterns. This is especially true for complex conditions such as OCD, treatment for which requires highly specific therapeutic interventions.
- False Reassurance: Relying solely on chatbots for counseling may create a false sense of safety, discouraging users from approaching professional mental health support services. This could reinforce unhelpful behaviours and exacerbate the condition.
- Sensitive Data Vulnerabilities: Health data is sensitive personal information. Chatbot service providers will need to clarify how health data is stored, processed, shared, and used. Without strong data protection and transparency standards, users are exposed to further risks to their well-being.
Way Forward
- Addressing Therapeutic Misconception: A poor understanding of what such chatbots can and cannot offer, in terms of care expectations and treatment, can jeopardize user health. Platforms providing these services should be mandated to display clear, easy-to-understand disclaimers about the limitations of the therapeutic relationship between the platform and its users.
- Improved Algorithm Design: These models must undergo regular updates and audits of their training data to enhance accuracy, incorporate contextual socio-cultural factors into profile analysis, and make use of feedback loops from users and mental health professionals.
- Human Oversight: Models of therapy in which AI chatbots supplement treatment rather than replace human intervention can be explored. Such platforms must also provide escalation mechanisms for cases where human intervention is sought or required (a simplified sketch of such an escalation check follows this list).
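A minimal sketch of what such an escalation mechanism could look like is given below: before the chatbot answers, the message is scanned for high-risk phrases and, if any are found, the conversation is handed off to a human responder. The phrase list, response text and handoff function are hypothetical placeholders, not a clinically validated protocol.

```python
# Minimal sketch of a human-escalation check for a support chatbot.
HIGH_RISK_PHRASES = ["hurt myself", "end my life", "suicide", "can't go on"]

def needs_human(message: str) -> bool:
    """Return True if the message contains any high-risk phrase."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def handle_message(message: str, chatbot_reply, escalate_to_human) -> str:
    if needs_human(message):
        escalate_to_human(message)  # hand off to a trained human responder
        return ("I'm connecting you with a trained human counsellor right now. "
                "If you are in immediate danger, please contact local emergency services.")
    return chatbot_reply(message)
```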
Conclusion
It is important to recognize that so far, there is no substitute for professional mental health services. Chatbots can help users gain awareness of their mental health condition and play an educational role in this regard, nudging them in the right direction, and provide assistance to both the practitioner and the client/patient. However, relying on this option to fill gaps in mental health services is not enough. Addressing this growing —and arguably already critical— global health crisis requires dedicated public funding to ensure comprehensive mental health support for all.
Sources
- https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care
- https://health.economictimes.indiatimes.com/news/industry/mental-healthcare-in-india-building-a-strong-ecosystem-for-a-sound-mind/105395767#:~:text=Indian%20mental%20health%20market%20is,access%20to%20better%20quality%20services.
- https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2023.1278186/full