#FactCheck - AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image is falsely claimed to show President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has determined that the photo was produced with generative AI and is not real; multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image purporting to show US President Joe Biden in a military uniform during a meeting with military officials was created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.
Fact Check:
The CyberPeace Research Team found that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest that this is an AI-generated image.
Firstly, President Biden's eyes appear fully black; secondly, the military official's face is blended; thirdly, the phone appears to stand upright without any support.
We then ran the image through an AI image detection tool.
The tool predicted the image to be 4% human and 96% AI, indicating that it is very likely AI-generated content.
We then checked the image with another tool, Hive Detector.
Hive Detector classified the image as 100% AI-generated, again indicating that it is likely deepfake content.
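For readers who want to script this kind of check rather than use a web interface, the snippet below is a minimal sketch of submitting an image to an AI-image-detection service over HTTP. The endpoint URL, API-key header, and response fields are hypothetical placeholders, not the actual API of Hive or any specific vendor; consult your chosen detector's documentation for the real interface.

```python
import requests

# Hypothetical AI-image-detection endpoint and API key -- placeholders only,
# not the real API of Hive or any other vendor.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"

def check_image(path: str) -> None:
    """Submit an image and print the reported human vs AI likelihood."""
    with open(path, "rb") as fh:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": fh},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"human": 0.04, "ai": 0.96}
    print(f"human: {result['human']:.0%}  ai: {result['ai']:.0%}")
    if result["ai"] > 0.9:
        print("Verdict: very likely AI-generated")

if __name__ == "__main__":
    check_image("viral_image.jpg")  # placeholder file name
```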
Conclusion:
Thus, the growth of AI-produced content is a challenge in determining fact from fiction, particularly in the sphere of social media. In the case of the fake photo supposedly showing President Joe Biden, the need for critical thinking and verification of information online is emphasized. With technology constantly evolving, it is of great importance that people be watchful and use verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-produced content should be undertaken in order to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake
Executive Summary:
In late 2024, an Indian healthcare provider experienced a severe cybersecurity incident that demonstrated how potent AI-powered ransomware has become. This blog discusses the background to the attack, how it unfolded, its medical and financial impact, how the organisation responded, and the final outcome, highlighting the dangers facing a healthcare industry that lacks adequate cybersecurity measures. The incident disrupted normal operations and illustrated the economic and reputational losses that cyber threats can inflict. The technical findings of the study provide further evidence and analysis of advanced AI-powered malware and of best practices for defending against it.
1. Introduction
The integration of artificial intelligence (AI) in cybersecurity has revolutionised both defence mechanisms and the strategies employed by cybercriminals. AI-powered attacks, particularly ransomware, have become increasingly sophisticated, posing significant threats to various sectors, including healthcare. This report delves into a case study of an AI-powered ransomware attack on a prominent Indian healthcare provider in 2024, analysing the attack's execution, impact, and the subsequent response, along with key technical findings.
2. Background
In late 2024, a leading healthcare organisation in India, itself involved in the research and development of AI techniques, fell prey to an AI-driven ransomware attack designed to maximise damage. Because modern businesses depend heavily on data, and healthcare in particular requires real-time operations, the sector has become a favourite target of cybercriminals. AI-assisted attackers were able to mount a far more precise and damaging attack that severely affected the provider's operations while jeopardising the security of patient information.
3. Attack Execution
The attack began with a phishing email targeting a hospital administrator. The administrator received an email with an infected attachment which, when opened, injected AI-enabled ransomware into the hospital's network. Unlike traditional ransomware, which spreads indiscriminately, this AI-powered variant first studied the hospital's IT network. It then focused its encryption on the most important systems, such as the electronic health records and the billing systems.
The malware's AI component allowed it to learn and adjust how it propagated through the network and to prioritise encryption of the most valuable data. This precision not only increased the leverage behind the ransom demand but also reduced the risk of early discovery.
4. Impact
The consequences of the attack were immediate and severe:
- Operational Disruption: The encryption of critical systems brought much of the hospital's functioning to a halt. Surgeries, routine medical procedures, and patient admissions were delayed, and in some cases patients had to be referred to other hospitals.
- Data Security: Electronic patient records and associated billing data became inaccessible, putting patient confidentiality at risk. There was a real danger that the data would be permanently lost, much to the concern of both the healthcare provider and its patients.
- Financial Loss: The attackers demanded 100 crore Indian rupees (approximately USD 12 million) for the decryption key. Although the hospital did not pay the ransom, it still suffered significant losses: operational losses from downtime, compensation to affected patients, the cost of incident response, and reputational damage.
5. Response
As soon as the hospital's management was informed of the ransomware, its IT department joined forces with cybersecurity professionals and local police. The team decided not to pay the ransom and instead to recover the systems from backups. Although this was an ethically and strategically sound decision, it was not without challenges: restoration was gradual, and some elements of the patient records were permanently lost.
To avoid similar attacks in the future, the healthcare provider put in place several organisational and technical measures, such as network isolation and strengthened cybersecurity controls. Even so, the attack revealed serious gaps in the provider's IT security measures and protocols.
6. Outcome
The attack had far-reaching consequences:
- Financial Impact: The healthcare provider absorbed substantial costs from the service disruption, from bolstering its cybersecurity, and from compensating affected patients.
- Reputational Damage: The exposure of data risked a serious loss of confidence among patients and the public, damaging the provider's reputation. This in turn affected patient care and had long-term effects on revenue and patient retention.
- Industry Awareness: The breach fuelled discussions across the country on how to improve cybersecurity provisions in the healthcare industry and prompted other care providers to review and strengthen their cyber defences.
7. Technical Findings
The AI-powered ransomware attack on the healthcare provider revealed several technical vulnerabilities and provided insights into the sophisticated mechanisms employed by the attackers. These findings highlight the evolving threat landscape and the importance of advanced cybersecurity measures.
7.1 Phishing Vector and Initial Penetration
- Sophisticated Phishing Tactics: The phishing email was crafted with precision, utilising AI to mimic the communication style of trusted contacts within the organisation. The email bypassed standard email filters, indicating a high level of customization and adaptation, likely due to AI-driven analysis of previous successful phishing attempts.
- Exploitation of Human Error: The phishing email targeted an administrative user with access to critical systems, exploiting the lack of stringent access controls and user awareness. The successful penetration into the network highlighted the need for multi-factor authentication (MFA) and continuous training on identifying phishing attempts.
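To illustrate the multi-factor authentication point above, here is a minimal sketch of time-based one-time password (TOTP) verification using the third-party pyotp library. It shows only the general mechanism, not the healthcare provider's actual setup; a production deployment would rely on the identity provider's built-in MFA rather than hand-rolled code.

```python
import pyotp

# Enrolment: generate a per-user secret once and store it server-side;
# the user loads the same secret into an authenticator app (e.g. via QR code).
secret = pyotp.random_base32()
provisioning_uri = pyotp.TOTP(secret).provisioning_uri(
    name="admin@hospital.example", issuer_name="Hospital-EHR"
)
print("Provisioning URI for authenticator app:", provisioning_uri)

def verify_second_factor(user_secret: str, submitted_code: str) -> bool:
    """After the password check, require the 6-digit code from the app."""
    totp = pyotp.TOTP(user_secret)
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(submitted_code, valid_window=1)

# Example check (in real use, submitted_code comes from the login form)
current_code = pyotp.TOTP(secret).now()
print("Second factor accepted:", verify_second_factor(secret, current_code))
```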
7.2 AI-Driven Malware Behavior
- Dynamic Network Mapping: Once inside the network, the AI-powered malware executed a sophisticated mapping of the hospital's IT infrastructure. Using machine learning algorithms, the malware identified the most critical systems—such as Electronic Health Records (EHR) and the billing system—prioritising them for encryption. This dynamic mapping capability allowed the malware to maximise damage while minimising its footprint, delaying detection.
- Adaptive Encryption Techniques: The malware employed adaptive encryption techniques, adjusting its encryption strategy based on the system's response. For instance, if it detected attempts to isolate the network or initiate backup protocols, it accelerated the encryption process or targeted backup systems directly, demonstrating an ability to anticipate and counteract defensive measures.
- Evasive Tactics: The ransomware utilised advanced evasion tactics, such as polymorphic code and anti-forensic features, to avoid detection by traditional antivirus software and security monitoring tools. The AI component allowed the malware to alter its code and behaviour in real time, making signature-based detection methods ineffective.
7.3 Vulnerability Exploitation
- Weaknesses in Network Segmentation: The hospital’s network was insufficiently segmented, allowing the ransomware to spread rapidly across various departments. The malware exploited this lack of segmentation to access critical systems that should have been isolated from each other, indicating the need for stronger network architecture and micro-segmentation (a minimal reachability-audit sketch follows this subsection).
- Inadequate Patch Management: The attackers exploited unpatched vulnerabilities in the hospital’s IT infrastructure, particularly within outdated software used for managing patient records and billing. The failure to apply timely patches allowed the ransomware to penetrate and escalate privileges within the network, underlining the importance of rigorous patch management policies.
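The following is a minimal sketch of the kind of segmentation audit referred to above: a script, run from an ordinary workstation subnet, that simply checks whether critical servers are reachable on sensitive ports. The hostnames and ports are illustrative placeholders; any host that accepts a connection here is evidence that segmentation or firewall rules need tightening.

```python
import socket

# Illustrative placeholders: critical systems that a general-purpose
# workstation VLAN should NOT be able to reach directly.
CRITICAL_TARGETS = [
    ("ehr-db.internal.example", 5432),      # EHR database
    ("billing-db.internal.example", 1433),  # billing database
    ("backup-01.internal.example", 22),     # backup server SSH
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CRITICAL_TARGETS:
        status = "REACHABLE - review segmentation" if is_reachable(host, port) else "blocked"
        print(f"{host}:{port} -> {status}")
```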
7.4 Data Recovery and Backup Failures
- Inaccessible Backups: The malware specifically targeted backup servers, encrypting them alongside primary systems. This revealed weaknesses in the backup strategy, including the lack of offline or immutable backups that could have been used for recovery. The healthcare provider’s reliance on connected backups left them vulnerable to such targeted attacks.
- Slow Recovery Process: The restoration of systems from backups was hindered by the sheer volume of encrypted data and the complexity of the hospital’s IT environment. The investigation found that the backups were not regularly tested for integrity and completeness, resulting in partial data loss and extended downtime during recovery.
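As a minimal illustration of the routine backup testing that was missing here, the sketch below builds a SHA-256 manifest for a backup directory and later re-verifies it, flagging files that changed or disappeared. The paths are placeholders; a real programme would also run periodic full restore drills, which a checksum pass alone cannot replace.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash one file in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(backup_dir: Path, manifest_file: Path) -> None:
    """Record a checksum for every file in the backup set."""
    manifest = {
        str(p.relative_to(backup_dir)): sha256_of(p)
        for p in backup_dir.rglob("*") if p.is_file()
    }
    manifest_file.write_text(json.dumps(manifest, indent=2))

def verify_manifest(backup_dir: Path, manifest_file: Path) -> bool:
    """Re-hash the backup set and report missing or altered files."""
    manifest = json.loads(manifest_file.read_text())
    ok = True
    for rel_path, expected in manifest.items():
        p = backup_dir / rel_path
        if not p.exists():
            print(f"MISSING: {rel_path}")
            ok = False
        elif sha256_of(p) != expected:
            print(f"ALTERED: {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    backups = Path("/mnt/backups/ehr")                  # placeholder path
    manifest = Path("/mnt/backups/ehr.manifest.json")   # placeholder path
    if not manifest.exists():
        build_manifest(backups, manifest)
    print("Backup set intact:", verify_manifest(backups, manifest))
```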
7.5 Incident Response and Containment
- Delayed Detection and Response: The initial response was delayed due to the sophisticated nature of the attack, with traditional security measures failing to identify the ransomware until significant damage had occurred. The AI-powered malware’s ability to adapt and camouflage its activities contributed to this delay, highlighting the need for AI-enhanced detection and response tools.
- Forensic Analysis Challenges: The anti-forensic capabilities of the malware, including log wiping and data obfuscation, complicated the post-incident forensic analysis. Investigators had to rely on advanced techniques, such as memory forensics and machine learning-based anomaly detection, to trace the malware’s activities and identify the attack vector.
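As a minimal sketch of the machine-learning-based anomaly detection mentioned above, the snippet below trains an Isolation Forest (scikit-learn) on simple per-host features such as outbound connections per minute and files modified per minute, then flags outliers. The feature values here are synthetic stand-ins; a real deployment would derive features from actual log and endpoint telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: [outbound connections/min, files modified/min]
normal = rng.normal(loc=[20, 5], scale=[5, 2], size=(500, 2))

# Synthetic ransomware-like behaviour: heavy scanning and mass file writes
suspicious = rng.normal(loc=[200, 400], scale=[30, 50], size=(5, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

observations = np.vstack([normal[:3], suspicious])
labels = model.predict(observations)             # +1 = normal, -1 = anomaly
scores = model.decision_function(observations)   # lower = more anomalous

for row, label, score in zip(observations, labels, scores):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"conn/min={row[0]:6.1f} files/min={row[1]:6.1f} "
          f"score={score:+.3f} -> {verdict}")
```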
8. Recommendations Based on Technical Findings
To prevent similar incidents, the following measures are recommended:
- AI-Powered Threat Detection: Implement AI-driven threat detection systems capable of identifying and responding to AI-powered attacks in real time. These systems should include behavioural analysis, anomaly detection, and machine learning models trained on diverse datasets.
- Enhanced Backup Strategies: Develop a more resilient backup strategy that includes offline, air-gapped, or immutable backups. Regularly test backup systems to ensure they can be restored quickly and effectively in the event of a ransomware attack.
- Strengthened Network Segmentation: Re-architect the network with robust segmentation and micro-segmentation to limit the spread of malware. Critical systems should be isolated, and access should be tightly controlled and monitored.
- Regular Vulnerability Assessments: Conduct frequent vulnerability assessments and patch management audits to ensure all systems are up to date. Implement automated patch management tools where possible to reduce the window of exposure to known vulnerabilities.
- Advanced Phishing Defences: Deploy AI-powered anti-phishing tools that can detect and block sophisticated phishing attempts. Train staff regularly on the latest phishing tactics, including how to recognize AI-generated phishing emails.
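To complement the phishing-defence recommendation above, the following is a minimal sketch of rule-based email triage using only the Python standard library: it reads a saved .eml file, checks the Authentication-Results header for SPF/DKIM/DMARC failures, and flags a Reply-To domain that differs from the From domain. It is a teaching aid with a placeholder file name, not a substitute for a commercial or AI-powered anti-phishing gateway.

```python
import re
from email import policy
from email.parser import BytesParser

def domain_of(address_header: str) -> str:
    """Extract the domain part of an address header like 'Name <a@b.com>'."""
    match = re.search(r"@([\w.-]+)", address_header or "")
    return match.group(1).lower() if match else ""

def triage(eml_path: str) -> list[str]:
    with open(eml_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    findings = []
    auth_results = (msg.get("Authentication-Results") or "").lower()
    for mechanism in ("spf", "dkim", "dmarc"):
        if f"{mechanism}=fail" in auth_results or f"{mechanism}=softfail" in auth_results:
            findings.append(f"{mechanism.upper()} check failed")

    from_domain = domain_of(msg.get("From", ""))
    reply_domain = domain_of(msg.get("Reply-To", ""))
    if reply_domain and reply_domain != from_domain:
        findings.append(f"Reply-To domain ({reply_domain}) differs from From domain ({from_domain})")

    return findings

if __name__ == "__main__":
    issues = triage("suspicious_message.eml")  # placeholder file name
    if issues:
        print("Potential phishing indicators:")
        for issue in issues:
            print(" -", issue)
    else:
        print("No basic red flags found (this does not guarantee the mail is safe).")
```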
9. Conclusion
The AI-powered ransomware attack on the Indian healthcare provider in 2024 makes it clear that the threat of advanced cyberattacks on healthcare facilities has grown. The technical findings outline the methods used by the attackers and underline the importance of ongoing, proactive, and robust security. The incident is a stark reminder of the need not only to remain alert and invest strongly in cybersecurity, but also to prepare response measures that limit the harm of such incidents. Cybercriminals are now using AI to increase the effectiveness of their attacks, and it is high time all healthcare organisations ensured that their critical systems and data are well protected against them.
Introduction
The Union Minister of Information and Broadcasting, Ashwini Vaishnaw, addressed the Press Council of India on the occasion of National Press Day regarding emergent concerns in the digital media and technology landscape. He identified four major challenges facing news media in India: fake news, algorithmic bias, artificial intelligence, and fair compensation. He emphasized the need for greater accountability and fairness from Big Tech, arguing that platforms do not verify information posted online, which leads to the spread of false and misleading content, and he called on online platforms and Big Tech to combat misinformation and protect democracy.
Key Concerns Highlighted by Union Minister Ashwini Vaishnaw
- Misinformation: Due to India's unique sensitivities, digital platforms should adopt country-specific responsibilities and metrics. The Minister also questioned the safe harbour principle, which shields platforms from liability for user-generated content.
- Algorithmic Biases: The prioritisation of viral content, which is often divisive, by social media algorithms can have serious implications on societal peace.
- Impact of AI on Intellectual Property: The training of AI on pre-existing datasets presents the ethical challenge of depriving original creators of their rights to their intellectual property.
- Fair compensation: Traditional news media is increasingly facing financial strain since news consumption is shifting rapidly to social media platforms, creating uneven compensation dynamics.
Cyberpeace Insights
- Misinformation: Marked by routine upheavals and moral panics, Indian society is vulnerable to the severe impacts of fake news, including mob violence, political propaganda, health misinformation and more. Inspired by the EU's Digital Services Act, 2022, and other related legislation that addresses hate speech and misinformation, the Indian Minister has called for revisiting the safe harbour protection under Section 79 of the IT Act, 2000. However, any legislation on misinformation must strike a balance between protecting the fundamental rights to freedom of speech and privacy and safeguarding citizens from its harmful effects.
- Algorithmic Biases: Social media algorithms are designed to boost user engagement, since this increases advertising revenue. This leads to the creation of filter bubbles (exposure only to personalized information online) and echo chambers (interaction only with users whose opinions align with one's worldview). These phenomena radicalize views, increase intolerance, fuel polarization in public discourse, and accelerate the spread of misinformation. Tackling this requires changes to algorithmic design, such as disincentivizing sensationalism, labelling content, and funding fact-checking networks, to improve transparency.
- Impact of AI on Intellectual Property: AI models are trained on data that may contain copyrighted material. It can lead to a loss of revenue for primary content creators, while tech companies owning AI models may financially benefit disproportionately by re-rendering their original works. Large-scale uptake of AI models will significantly impact fields such as advertising, journalism, entertainment, etc by disrupting their market. Managing this requires a push for Ethical AI regulations and the protection of original content creators.
Conclusion: Charting a Balanced Path
The socio-cultural and economic fabric of the Indian subcontinent is not only distinct from the rest of the world but has cross-cutting internal diversities, too. Its digital landscape stands at a crossroads as rapid global technological advancements present increasing opportunities and challenges. In light of growing incidents of misinformation on social media platforms, it is also crucial that regulators consider framing rules that encourage and mandate content verification mechanisms for online platforms, incentivizing them to adopt advanced AI-driven fact-checking tools and other relevant measures. Additionally, establishing public-private partnerships to monitor misinformation trends is crucial to rapidly debunking viral falsehoods. However, ethical concerns and user privacy should be taken into consideration when taking such steps. Addressing misinformation requires a collaborative approach that balances platform accountability, technological innovation, and the protection of democratic values.
Sources
- https://www.indiatoday.in/india/story/news-media-4-challenges-ashwini-vaishnaw-national-press-day-speech-big-tech-fake-news-algorithm-ai-2634737-2024-11-17
- https://ec.europa.eu/commission/presscorner/detail/en/ip_24_881
- https://www.legaldive.com/news/digital-services-act-dsa-eu-misinformation-law-propaganda-compliance-facebook-gdpr/691657/
- https://www.fondationdescartes.org/en/2020/07/filter-bubbles-and-echo-chambers/
Introduction
Cybersecurity remains a crucial component in the modern digital era, considering the growing threat landscape caused by our increased reliance on technology and the internet. The Karnataka Government introduced a new ‘Cyber Security Policy 2024’ to address increasing cybercrimes and enhance protection measures for the State's digital infrastructure through awareness, skill development, public-private collaborations, and technology integration. Officials stated that the policy highlights various important aspects including raising awareness and providing education, developing skills, supporting the industry and start-ups, as well as forming partnerships and collaborations for enhancing capacity.
Key Highlights
- The policy consists of two components. The initial segment emphasizes creating a robust cyber security environment involving various sectors such as the public, academia, industry, start-ups, and government. The second aspect of the policy aims to enhance the cybersecurity status of the State's IT resources. Although the initial section will be accessible to the public, the second portion will be restricted to the state's IT teams and departments for their IT implementation.
- The Department of Electronics, IT, BT and S&T, the Department of Personnel and Administrative Reforms (e-Governance),and the Home Department, in collaboration with stakeholders from government and private sectors, have collectively formulated this policy. The Indian Institute of Science, the main institute for the state's K-tech Centre of Excellence for Cyber Security (CySecK), also examined the policy.
- Approximately ₹103.87 crore will be spent over five years to implement the policy, funded from the budget allocated to the Department of Information Technology, Biotechnology and Science & Technology. A total of ₹23.74 crore will be allocated for incentives and concessions.
- The policy focuses on key pillars of building awareness and skills, promoting research and innovation, promoting industry and start-ups, partnerships and collaborations for capacity building.
- Karnataka-based undergraduate and postgraduate interns will receive a monthly stipend of ₹10,000 to ₹15,000 for a maximum duration of three months under the internship programme. The goal is to support 600 interns at the undergraduate level and 120 interns at the postgraduate level within the policy timeframe.
- Karnataka-based start-ups collaborating with academic institutes can receive matching grants of up to 50% of the total R&D cost for cybersecurity projects, or a maximum of ₹50 lakh.
- Reimbursement of expenses up to a maximum of ₹1 lakh will be provided to start-ups registered with the Karnataka Start-up Cell that engage CERT-In empanelled service providers from Karnataka for a cybersecurity audit.
- The Karnataka government has partnered with Meta to raise awareness of cyber security. By reaching out to educational institutions, schools, and colleges, the pilot aims to train 1 lakh teachers and educate 1 million children on online safety.
CyberPeace Policy Wing Outlook
The Cyber Security Policy, 2024 launched by the Karnataka government is a testament to the state government's commitment to strengthening its cyber security posture and establishing cyber resilience. By promoting and supporting research and development projects, supporting start-ups, providing skill-training internships, and building capacity at scale, the policy is a positive step toward countering growing cyber threats and establishing a peaceful digital environment for all. The partnership and collaboration with tech companies will be instrumental in implementing the capacity-building initiatives aimed at building cognitive and skill defences for navigating the digital world. The policy can also inspire other state governments to frame their own initiatives for safe and secure cyber infrastructure, with strategies tailored to each state's specific needs and demands.
References:
- https://www.hindustantimes.com/cities/bengaluru-news/karnataka-govt-launches-new-cyber-security-policy-amid-frequent-scams-101722598078117.html
- https://ciso.economictimes.indiatimes.com/amp/news/grc/karnataka-govt-launches-new-cyber-security-policy/112214121
- https://cybermithra.in/2024/08/09/karnataka-cyber-security-policy/