What’s Your New Year's Resolution?
2025 is knocking firmly at our door, and we have promises to make and resolutions to keep. Time to make your list for the New Year and check it twice.
- Lifestyle targets 🡪 Check
- Family targets 🡪 Check
- Social targets 🡪 Check
Umm, so far so good, but what about your cybersecurity targets for the year? Hey, you look confused and concerned. Wait a minute, you do not have one, do you?
I get it. Though the digital world still puzzles, and sometimes outright scares, us, we are not yet in ‘Take-Charge-Of-Your-Digital-Safety’ mode. We prefer to depend on whatever security software we are using and keep our fingers crossed that the bad guys (read: threat actors) do not find us.
Let me illustrate why cybersecurity should be one of your top priorities. You know that stress is a major threat to our continued good health, right? However, if your devices, social media accounts, office e-mail or network, or God forbid, bank accounts become compromised, would that not cause stress? Think about the probable repercussions and you will see why I am harping on prioritising security.
Fret not. We will keep it brief as we well know you have 101 things to do in the next few days leading up to 01/01/2025. Just add cyber health to the list and put in motion the following:
- Install and activate comprehensive security software on ALL internet-enabled devices you have at home. Yes, including your smartphones.
- Set a date to change your passwords and create a separate, unique one for every account. Or use the password manager that comes with all reputed security software to make life simpler.
- Keep home Wi-Fi turned off at night
- Do not set social media accounts to auto-download photos/documents
- Activate parental controls on all the devices used by your children to monitor and mentor them. But keep them apprised.
- Do not blindly trust anyone or anything online – this includes videos, speeches, emails, voice calls, and video calls. Be aware of fakes.
- Be aware of the latest threats and talk about unsafe cyber practices and behaviour often at home.
Short and sweet, as promised.
We will be back, with more tips, and answers to your queries. Drop us a line anytime, and we will be happy to resolve your doubts.
Ciao!
Related Blogs

The World Economic Forum reported that AI-generated misinformation and disinformation are the second most likely threat to present a material crisis on a global scale in 2024, selected by 53% of respondents surveyed in September 2023. Artificial intelligence is automating the creation of fake news far faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about grave themes such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of Generative AI is its ability to flood the information ecosystem with misleading and factually incorrect content. As reported by Democracy Reporting International during the 2024 elections of the European Union, Google's Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s AI interface ‘Copilot’ were inaccurate one-third of the time when queried about election data. This underscores the need for an innovative regulatory approach, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.
What Is AI-driven Misinformation?
AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
The biggest challenge lies in detecting and managing AI-driven misinformation: as these technologies advance rapidly, it becomes increasingly difficult to distinguish AI-generated content from authentic content.
AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it is yet to achieve true acceptance or fulfil its positive potential, because there is widespread cynicism about the technology, and rightly so. General public sentiment about AI is laced with concern and doubt regarding the technology’s trustworthiness, mainly due to the absence of a regulatory framework maturing on par with the technological development.
Regulatory Sandboxes: An Overview
Regulatory sandboxes are regulatory tools that allow businesses to test and experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. They work by creating a controlled environment in which regulators allow businesses to test new technologies or business models under relaxed regulations.
Regulatory sandboxes have been used across many industries, most recently in sectors like fintech, as with the UK Financial Conduct Authority’s sandbox. These models have been known to encourage innovation while allowing regulators to understand emerging risks. Lessons from the fintech sector show that the benefits of regulatory sandboxes include facilitating firms’ financing and market entry and increasing speed-to-market by reducing administrative and transaction costs. For regulators, testing in sandboxes informs policy-making and regulatory processes. Given this success in the fintech industry, regulatory sandboxes could be adapted to AI, particularly for overseeing technologies that have the potential to generate or spread misinformation.
The Role of Regulatory Sandboxes in Addressing AI Misinformation
Regulatory sandboxes can be used to test AI tools designed to identify or flag misinformation without the risks associated with immediate, wide-scale implementation. Stakeholders like AI developers, social media platforms, and regulators work in collaboration within the sandbox to refine the detection algorithms and evaluate their effectiveness as content moderation tools.
These sandboxes can help balance the need for innovation in AI with the necessity of protecting the public from harmful misinformation. They allow the creation of a flexible and adaptive framework capable of evolving with technological advancements while fostering transparency between AI developers and regulators. This would lead to more informed policymaking and help build public trust in AI applications.
CyberPeace Policy Recommendations
Regulatory sandboxes offer a mechanism to test solutions that can help regulate the misinformation that AI technology creates. Some policy recommendations are as follows:
- Create guidelines for a global standard for regulatory sandboxes that can be adapted locally, ensuring consistency in tackling AI-driven misinformation.
- Regulators can offer incentives, such as tax breaks or grants, to companies that participate in sandboxes, encouraging innovation in anti-misinformation tools.
- Awareness campaigns can educate the public about the risks of AI-driven misinformation and the role of regulatory sandboxes, helping to manage public expectations.
- Sandbox frameworks should be reviewed and updated periodically to keep pace with advancements in AI technology and emerging forms of misinformation.
Conclusion and the Challenges for Regulatory Frameworks
Regulatory sandboxes offer a promising pathway to counter the challenges that AI-driven misinformation poses while fostering innovation. By providing a controlled environment for testing new AI tools, these sandboxes can help refine technologies aimed at detecting and mitigating false information. This approach ensures that AI development aligns with societal needs and regulatory standards, fostering greater trust and transparency. With the right support and ongoing adaptations, regulatory sandboxes can become vital in countering the spread of AI-generated misinformation, paving the way for a more secure and informed digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/on-the-importance-of-regulatory-sandboxes-in-artificial-intelligence/article68176084.ece
- https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html
- https://www.weforum.org/publications/global-risks-report-2024/
- https://democracy-reporting.org/en/office/global/publications/chatbot-audit#Conclusions

As Generative AI continues to make strides by creating content from user prompts, the increasing sophistication of language models widens the scope of the services they can deliver. However, they have their limitations. Recently, alerts generated by Apple Intelligence on the latest iPhones have come under fire for misrepresenting news from news agencies.
The new feature was introduced to group and summarise app notifications in a single alert on a user’s lock screen, enabling an easier scan for important details among a large number of notifications and doing away with overwhelming updates. It, however, resulted in the misrepresentation of news channels and the reporting of fake news, such as the arrest of Israeli Prime Minister Benjamin Netanyahu, Luke Littler winning the PDC World Darts Championship before the competition had concluded, and tennis player Rafael Nadal coming out as gay, among other alerts. Following these false alerts, the BBC complained about its journalism being misrepresented. In response, Apple proposed to clarify to users that the text summary displayed in notifications is a product of Apple Intelligence and not of the news agency. It also cited the complexity of compressing content into short summaries as the cause of the fallacious alerts. Further comments revealed that the AI alert feature was in beta and is continuously being improved based on user feedback. Owing to the backlash, Apple has suspended the service and announced that an improved version of the feature will be released in the near future; however, no dates have been set.
CyberPeace Insights
The rush to release new features often exacerbates the problem, especially when AI-generated alerts are responsible for summarising news reports. This can significantly damage the credibility and trust that brands have worked hard to build. The premature release of features that affect the dissemination, content, and public comprehension of information carries substantial risks, particularly in the current environment where misinformation is widespread. Timely action and software updates, which typically require weeks to implement, are crucial in mitigating these risks. The desire to stay ahead of the game and bring out competitive features must not absolve companies of the responsibility to provide services that are secure and reliable. This incident highlights the inherent nature of generative AI, which operates by analysing the data it was trained on to deliver the best possible responses to user prompts. However, these responses are not always accurate or reliable. When faced with prompts beyond its scope, an AI system often produces untrustworthy information, underlining the need for careful oversight and verification. A question to deliberate on is whether we require such services at all: in practice they do save us time, but at the risk of spreading falsehoods.
References
- https://www.theguardian.com/technology/2025/jan/07/apple-update-ai-inaccurate-news-alerts-bbc-apple-intelligence-iphone
- https://www.firstpost.com/tech/apple-intelligence-hallucinates-falsely-credits-bbc-for-fake-news-broadcaster-lodges-complaint-13845214.html
- https://www.cnbc.com/2025/01/08/apple-ai-fake-news-alerts-highlight-the-techs-misinformation-problem.html
- https://news.sky.com/story/apple-ai-feature-must-be-revoked-over-notifications-misleading-users-say-journalists-13288716
- https://www.hindustantimes.com/world-news/apple-to-pay-95-million-in-user-privacy-violation-lawsuit-on-siri-101735835058198.html
- https://www.hindustantimes.com/business/apple-denies-claims-of-siri-violating-user-privacy-after-95-million-class-action-suit-settlement-101736445941497.html#:~:text=Apple%20denies%20claims%20of%20Siri,action%20suit%20settlement%20%2D%20Hindustan%20Times
- https://www.fastcompany.com/91261727/apple-intelligence-news-summaries-mistakes
- https://timesofindia.indiatimes.com/technology/tech-news/siris-secret-listening-costs-apple-95m/articleshow/116906209.cms
- https://www.theguardian.com/technology/2025/jan/17/apple-suspends-ai-generated-news-alert-service-after-bbc-complaint

Executive Summary:
BrazenBamboo’s DEEPDATA malware represents a new wave of advanced cyber espionage tools, exploiting a zero-day vulnerability in Fortinet FortiClient to extract VPN credentials and sensitive data through fileless malware techniques and secure C2 communications. With its modular design, DEEPDATA targets browsers, messaging apps, and password stores, while leveraging reflective DLL injection and encrypted DNS to evade detection. Cross-platform compatibility with tools like DEEPPOST and LightSpy highlights a coordinated development effort, enhancing its espionage capabilities. To mitigate such threats, organizations must enforce network segmentation, deploy advanced monitoring tools, patch vulnerabilities promptly, and implement robust endpoint protection. Vendors are urged to adopt security-by-design practices and incentivize vulnerability reporting, as vigilance and proactive planning are critical to combating this sophisticated threat landscape.
Introduction
The increasing use of zero-day vulnerabilities by sophisticated threat actors underscores the need for stronger countermeasures. One such threat actor, BrazenBamboo, exploits a zero-day vulnerability in Fortinet FortiClient for Windows through the DEEPDATA advanced malware framework. This research explores the technical details of DEEPDATA, the techniques used in its operations, and its wider implications.
Technical Findings
1. Vulnerability Exploitation Mechanism
The vulnerability in Fortinet’s FortiClient lies in its failure to securely handle sensitive information in memory. DEEPDATA capitalises on this flaw via a specialised plugin, which:
- Accesses the VPN client’s process memory.
- Extracts unencrypted VPN credentials from memory, bypassing typical security protections.
- Transfers credentials to a remote C2 server via encrypted communication channels.
2. Modular Architecture
DEEPDATA exhibits a highly modular design, with its core components comprising:
- Loader Module (data.dll): Decrypts and executes other payloads.
- Orchestrator Module (frame.dll): Manages the execution of multiple plugins.
- FortiClient Plugin: Specifically designed to target Fortinet’s VPN client.
Each plugin operates independently, allowing flexibility in attack strategies depending on the target system.
3. Command-and-Control (C2) Communication
DEEPDATA establishes secure channels to its C2 infrastructure using WebSocket and HTTPS protocols, enabling stealthy exfiltration of harvested data. Technical analysis of network traffic revealed:
- Dynamic IP switching for C2 servers to evade detection.
- Use of Domain Fronting, hiding C2 communication within legitimate HTTPS traffic.
- Time-based communication intervals to minimise anomalies in network behaviour.
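Time-based beaconing of this kind can often be surfaced by looking for unnaturally regular connection intervals in egress logs. The sketch below is an illustrative heuristic, not part of the DEEPDATA analysis: the log format, the coefficient-of-variation metric, and the 0.1 threshold are all assumptions. Machine-driven check-ins tend to have near-constant gaps between connections, while human-driven traffic is far more irregular.

```python
from statistics import mean, stdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-arrival times for one
    src/dst pair. Values near 0 suggest fixed-interval beaconing."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2:
        return None  # not enough connections to judge
    mu = mean(deltas)
    return stdev(deltas) / mu if mu else None

def looks_like_beacon(timestamps, threshold=0.1):
    """Flag a destination when connection intervals are nearly constant."""
    score = beacon_score(timestamps)
    return score is not None and score < threshold

# Simulated connection times in seconds: one host checks in every ~300 s.
regular = [0, 300, 601, 899, 1200, 1502]
bursty = [0, 12, 480, 495, 1900, 2000]
print(looks_like_beacon(regular))  # True  (machine-like cadence)
print(looks_like_beacon(bursty))   # False (irregular, human-like)
```

In practice the jitter a real implant adds to its intervals means the threshold has to be tuned against baseline traffic, but the idea of scoring interval regularity carries over directly.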
4. Advanced Credential Harvesting Techniques
Beyond VPN credentials, DEEPDATA is capable of:
- Dumping password stores from popular browsers, such as Chrome, Firefox, and Edge.
- Extracting application-level credentials from messaging apps like WhatsApp, Telegram, and Skype.
- Intercepting credentials stored in local databases used by apps like KeePass and Microsoft Outlook.
5. Persistence Mechanisms
To maintain long-term access, DEEPDATA employs sophisticated persistence techniques:
- Registry-based persistence: Modifies Windows registry keys to reload itself upon system reboot.
- DLL Hijacking: Substitutes legitimate DLLs with malicious ones to execute during normal application operations.
- Scheduled Tasks and Services: Configures scheduled tasks to periodically execute the malware, ensuring continuous operation even if detected and partially removed.
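Registry-based persistence of the kind described above can be triaged by reviewing autorun entries. The following sketch runs on simulated data; a real collector would read the `HKCU`/`HKLM` `...\CurrentVersion\Run` keys via `winreg`. The directory heuristic and the sample entries are illustrative assumptions, not IOCs from this report.

```python
# Autorun entries launching from user-writable directories are a
# common sign of registry-based persistence worth a closer look.
SUSPICIOUS_DIRS = ("\\appdata\\", "\\temp\\", "\\downloads\\", "\\public\\")

def flag_run_entries(entries):
    """Return names of Run-key entries whose command launches
    from a user-writable directory."""
    flagged = []
    for name, command in entries.items():
        path = command.lower().strip('"')
        if any(d in path for d in SUSPICIOUS_DIRS):
            flagged.append(name)
    return flagged

# Simulated Run-key contents (name -> command line).
sample = {
    "OneDrive": r"C:\Program Files\Microsoft OneDrive\OneDrive.exe",
    "Updater": r"C:\Users\alice\AppData\Roaming\svc\loader.exe",
}
print(flag_run_entries(sample))  # ['Updater']
```

Legitimate software does sometimes auto-start from AppData, so hits from a heuristic like this are leads for analyst review rather than verdicts.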
Additional Tools in BrazenBamboo’s Arsenal
1. DEEPPOST
A complementary tool used for data exfiltration, DEEPPOST facilitates the transfer of sensitive files, including system logs, captured credentials, and recorded user activities, to remote endpoints.
2. LightSpy Variants
- The Windows variant includes a lightweight installer that downloads orchestrators and plugins, expanding espionage capabilities across platforms.
- Shellcode-based execution ensures that LightSpy’s payload operates entirely in memory, minimising artifacts on the disk.
3. Cross-Platform Overlaps
BrazenBamboo’s shared codebase across DEEPDATA, DEEPPOST, and LightSpy points to a centralised development effort, possibly linked to a Digital Quartermaster framework. This shared ecosystem enhances their ability to operate efficiently across macOS, iOS, and Windows systems.
Notable Attack Techniques
1. Memory Injection and Data Extraction
Using Reflective DLL Injection, DEEPDATA injects itself into legitimate processes, avoiding detection by traditional antivirus solutions.
- Memory Scraping: Captures credentials and sensitive information in real-time.
- Volatile Data Extraction: Extracts transient data that only exists in memory during specific application states.
2. Fileless Malware Techniques
DEEPDATA leverages fileless infection methods, where its payload operates exclusively in memory, leaving minimal traces on the system. This complicates post-incident forensic investigations.
3. Network Layer Evasion
By utilising encrypted DNS queries and certificate pinning, DEEPDATA ensures that network-level defenses like intrusion detection systems (IDS) and firewalls are ineffective in blocking its communications.
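Even when the DNS payload itself is encrypted, the destinations are not invisible: internal hosts connecting directly to well-known public DNS-over-HTTPS resolvers, instead of the organisation's own resolver, can still be flagged at the network edge. A minimal sketch, assuming a simple `(src, dst, port)` connection log and an illustrative resolver list (both assumptions, not from this analysis):

```python
# Well-known public DoH resolver addresses (illustrative subset).
KNOWN_DOH_RESOLVERS = {"1.1.1.1", "8.8.8.8", "9.9.9.9"}

def find_doh_bypass(conn_log):
    """Flag (src, dst) pairs making HTTPS connections straight to
    public DoH resolvers, bypassing the internal DNS server."""
    hits = []
    for src, dst, port in conn_log:
        if port == 443 and dst in KNOWN_DOH_RESOLVERS:
            hits.append((src, dst))
    return hits

# Simulated egress log entries.
log = [
    ("10.0.0.21", "93.184.216.34", 443),  # ordinary web traffic
    ("10.0.0.21", "1.1.1.1", 443),        # possible encrypted-DNS bypass
]
print(find_doh_bypass(log))  # [('10.0.0.21', '1.1.1.1')]
```

This does not decrypt anything; it simply restores some visibility by policy, which is why many organisations block direct egress to public resolvers altogether.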
Recommendations
1. For Organisations
- Apply Network Segmentation: Isolate VPN servers from critical assets.
- Enhance Monitoring Tools: Deploy behavioral analysis tools that detect anomalous processes and memory scraping activities.
- Regularly Update and Patch Software: Although Fortinet has yet to patch this vulnerability, organisations must remain vigilant and apply fixes as soon as they are released.
2. For Security Teams
- Harden Endpoint Protections: Implement tools like Memory Integrity Protection to prevent unauthorised memory access.
- Use Network Sandboxing: Monitor and analyse outgoing network traffic for unusual behaviours.
- Threat Hunting: Proactively search for indicators of compromise (IOCs) such as unauthorised DLLs (data.dll, frame.dll) or C2 communications over non-standard intervals.
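The DLL hunt above can be sketched as a scan over loaded-module inventories, for example as exported from an EDR. The module names `data.dll` and `frame.dll` come from the analysis in this report; the inventory format and the non-system-path heuristic are assumptions for illustration:

```python
# Module names identified in the DEEPDATA analysis above.
IOC_MODULES = {"data.dll", "frame.dll"}

def hunt_modules(inventory):
    """Return (host, path) pairs where an IOC-named DLL is loaded
    from outside the Windows directory."""
    findings = []
    for host, modules in inventory.items():
        for path in modules:
            name = path.rsplit("\\", 1)[-1].lower()
            if name in IOC_MODULES and not path.lower().startswith("c:\\windows"):
                findings.append((host, path))
    return findings

# Simulated per-host loaded-module export.
inventory = {
    "WS-014": [r"C:\Windows\System32\kernel32.dll",
               r"C:\Users\bob\AppData\Local\frame.dll"],
    "WS-022": [r"C:\Windows\System32\user32.dll"],
}
print(hunt_modules(inventory))
# [('WS-014', 'C:\\Users\\bob\\AppData\\Local\\frame.dll')]
```

Name-based IOCs are trivially evaded by renaming, so a hunt like this would normally be paired with hash- or behaviour-based indicators.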
3. For Vendors
- Implement Security by Design: Adopt advanced memory protection mechanisms to prevent credential leakage.
- Bug Bounty Programs: Encourage researchers to report vulnerabilities, accelerating patch development.
Conclusion
DEEPDATA represents the next generation of cyber espionage tools: more advanced, and finely tuned for stealth, modularity, and persistence. As BrazenBamboo continues to refine its strategies, organisations and vendors must remain watchful and be ready to respond to these techniques. Continuous updates, the ability to detect threats, and a proper incident response plan are crucial in combating such attacks.