CyberPeace Alert: Rise in Phishing and Malicious Activities Following CrowdStrike Outage
Research Wing
Innovation and Research
PUBLISHED ON
Jul 23, 2024
Overview:
After the global outage of CrowdStrike's services on July 19, 2024, cybercriminals began launching phishing attacks and distributing malware. These activities mainly target CrowdStrike customers, exploiting the confusion to extract information through fake support sites. Analysis by the Research Wing of CyberPeace and Autobot Infosec has identified several phishing links and malicious campaigns.
The Exploitation:
Cyber adversaries have registered domains that imitate CrowdStrike's brand and have opened fake accounts on social media platforms. These fake platforms are used to trick users into surrendering personal and sensitive details for use in further fraudulent activities.
In one case, a PDF file bearing CrowdStrike branding is being circulated with a 'Download The Updater' link that points to a ZIP archive. The archive contains a malicious executable. This is a clear sign that attackers are exploiting the current situation by disguising malware as an update.
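As an illustration of the kind of triage a defender might apply before opening such an archive, the sketch below lists ZIP members with executable-style extensions. This is a minimal Python example, not a substitute for antivirus scanning; the extension list is illustrative.

```python
import zipfile

# Extensions commonly used for Windows executable payloads (illustrative list).
SUSPICIOUS_EXTENSIONS = {".exe", ".dll", ".scr", ".bat", ".cmd", ".js", ".vbs"}

def suspicious_members(zip_source):
    """Return archive members whose extension suggests an executable payload."""
    flagged = []
    with zipfile.ZipFile(zip_source) as archive:
        for name in archive.namelist():
            lower = name.lower()
            if any(lower.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
                flagged.append(name)
    return flagged
```

A file that claims to be an "updater" but unpacks to a bare `.exe` from an untrusted source should be treated as hostile until proven otherwise.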
Image Source: AnyRun
Image Source: VirusTotal
In another case, a malicious Microsoft Word document is being shared that claims to explain how to deal with the CrowdStrike BSOD bug. But the document hides a risk: when users follow the instructions and enable the embedded macro, it downloads information-stealing malware from a remote host. This stealer is poorly detected by most security software. It also sends the stolen data back to the same remote host on a different port, which likely serves as the command-and-control (C2) server for the campaign.
File name: New_Recovery_Tool_to_help_with_CrowdStrike_issue_impacting_Windows[.]docm
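Because `.docm` files are ordinary OOXML ZIP archives whose VBA macros live in a `vbaProject.bin` part, an analyst can cheaply flag macro-enabled documents for closer inspection. The sketch below only detects the presence of macros; it does not prove malice, and real triage would also extract and examine the macro code.

```python
import zipfile

def has_vba_macros(docm_source):
    """Office OOXML documents are ZIP archives; VBA macros are stored in a
    vbaProject.bin part. Its presence is a signal to inspect further."""
    try:
        with zipfile.ZipFile(docm_source) as archive:
            return any(name.endswith("vbaProject.bin")
                       for name in archive.namelist())
    except zipfile.BadZipFile:
        # Not a valid OOXML container (could be a legacy .doc or junk data).
        return False
```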
On July 19, 2024, CrowdStrike experienced a global outage caused by an update to its Falcon Sensor security software. The outage affected many government organizations and companies across industries such as finance, media, and telecommunications. Users reported problems including the blue screen of death and system failures. CrowdStrike has acknowledged the problem and is working to fix it.
Preventive Measures:
Organize regular awareness sessions to educate employees about phishing techniques and how to recognise and avoid phishing scams, emails, links, and websites.
Use multi-factor authentication (MFA) for logins to sensitive accounts and systems to add a layer of security.
Keep all security applications, including antivirus and anti-malware tools, up to date to improve detection of phishing scams.
Put monitoring measures in place, such as alerts on unusual account activity or login patterns, to enable early detection of phishing attempts.
Encourage employees and users to report any suspected phishing attempt to the IT department immediately.
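The alerting measure above can be sketched as a simple rule: flag logins whose source IP or country has not been seen before for that account. This is an illustrative toy, not a production anomaly detector; real systems weigh many more signals (device, time of day, velocity between locations).

```python
def flag_unusual_logins(history, new_login):
    """Return reasons a login looks unusual for this account.
    `history` is a list of prior (ip, country) pairs; `new_login` is
    the (ip, country) pair of the attempt being evaluated."""
    seen_ips = {ip for ip, _ in history}
    seen_countries = {country for _, country in history}
    ip, country = new_login
    reasons = []
    if ip not in seen_ips:
        reasons.append("new IP address")
    if country not in seen_countries:
        reasons.append("new country")
    return reasons
```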
Conclusion:
The recent CrowdStrike outage is a textbook example of how cybercriminals exploit confusion and anxiety. Individuals and organizations can protect themselves from these threats and maintain the confidentiality of their information by staying cautious and following proper security practices. For current information on the BSOD problem and detailed remediation instructions, visit CrowdStrike's support center. Treat reported problems with caution and make regular backups to minimise the impact.
Netizens across the globe have been enjoying the fruits of technological advancement in the digital century. New technologies have deeply shaped our personal and professional lives. The previous year saw an exponential rise in blockchain integration and the applications of Web 3.0. There is no denying that the Covid-19 pandemic accelerated technology adoption and internet penetration across the globe, bringing the world closer in terms of connectivity and the exchange of ideas and knowledge. Tech advancements have made our lives easier, but they have also opened the door to new vulnerabilities and potential threats. As cyberspace expands, so do the vulnerabilities associated with it, and it is critical that we take note of such issues and create safeguards so that incidents are prevented before they occur. We need to create a sustainable and secure cyberspace for future generations.
Metaverse in 2023
The metaverse was introduced by Facebook (now Meta) in 2021 as a peek into the future of cyberspace. Since then, tech developers have been working to arm the metaverse with extraordinary innovations and applications. Netizens came across news of someone buying a house or a plot in the metaverse, someone buying a car in the metaverse, and so on; such news was taken as evidence of netizens' transition towards the new digital age seen in sci-fi movies. Today such news has become routine, and the metaverse is expanding faster than ever. Let us look at the latest developments and trends in the metaverse:
Avatar creation - Avatar creation in the metaverse will be a pivotal move, as the avatar will represent the user: essentially a digital version of the user, mirroring the user's personal and physical traits to maintain realism in the metaverse.
Architecture firms - The metaverse has its own set of architects who will work towards creating your dream home or property in the metaverse; these heavy, code-based services are now sold just as if they were in physical space.
Mining - The metaverse already has companies mining gold, silver, petroleum, and other resources for avatars in the metaverse; for instance, a car bought in the metaverse will still need fuel to run.
Security firms - These firms are the first line of defenders in the metaverse as they provide tech-based solutions and protocols to secure one’s avatar and belongings in the metaverse.
Metaverse Police - Interpol, along with its global partner organizations, has created a metaverse police force that will work towards a safe cyber ecosystem by maintaining compliance with digital laws and ethics.
Advancements beyond metaverse in 2023
Technology continues to be a critical force for change in the world. Technology breakthroughs give enterprises more possibilities to lift their productivity and invent offerings. And while it remains difficult to forecast how technology trends will play out, business leaders can plan ahead better by watching the development of new technologies, anticipating how companies could utilize them, and understanding the factors that impact innovation and adoption.
Applied observability
Applied observability advances the practice of pattern recognition: foreseeing and identifying abnormalities, and offering remedies, requires the capacity to delve deeply into complicated systems and continuous streams of data. Data fuels this aspect of future tech growth.
Digital Immune System
To ensure that all major systems operate round-the-clock and deliver uninterrupted services, the Digital Immune System will combine observability, AI-augmented testing, chaos engineering, site reliability engineering (SRE), and software supply chain security, taking system efficiency to a new level.
Super apps
These represent the upcoming shift in application usage, design, and development, where consumers will utilise a single app to manage most systems in an enterprise ecosystem. It is forecast that over 50% of the world's population will use super apps daily to fulfil their personal and professional needs.
AR/VR and BlockChain technology
Combining AR/VR, AI/ML, IoT, and blockchain will create better-interconnected, safe, and immersive virtual environments where people and businesses may recreate real-life scenarios, opening a new vertical of innovation around the key technologies of Web 3.0.
AAI
The next level of AI, Advanced Artificial Intelligence (AAI), will revolutionise machine learning, pattern recognition, and computing. It aims to fully automate processes without requiring manual input, reducing the issues of human error and bad-actor influence.
Corporate Metaverse
Aside from its power as a marketing tool, the metaverse promises platforms, tools, and entire virtual worlds where business can be done remotely, efficiently, and intelligently. We can expect the metaverse concept to merge with the idea of the "digital twin": virtual simulations of real-world products, processes, or operations used to test and prototype new ideas in the safe environment of the digital domain. From wind farms to Formula 1 cars, designers are recreating physical objects inside virtual worlds where their efficiency can be stress-tested under any conceivable condition, without the resource costs of testing them in the physical world.
Conclusion
In 2023, we will see more advanced use cases for technology such as motion capture, which will mean that as well as looking and sounding more like us, our avatars will adopt our own unique gestures and body language. We may even see further developments in autonomous avatars: avatars not under our direct control but enabled by AI to act as our representatives in the digital world while we get on with other, unrelated tasks. As we go deeper into cyberspace, we need to remember basic safety practices, apply them in cyberspace, and work towards creating strong policies and legislation to safeguard the digital rights and duties of netizens, creating a wholesome and interdependent cyber ecosystem.
India's telecom regulator, the Telecom Regulatory Authority of India (TRAI), has directed telcos to block all unverified headers and message templates within 30 and 60 days, respectively, according to a press release. The regulator observed that telemarketers were 'misusing' headers and message templates of registered parties and asked telcos to reverify all registered headers and message templates on the DLT (Distributed Ledger Technology) platform. All telecom service providers (TSPs) have to comply with these directions, issued under the Telecom Commercial Communications Customer Preference Regulations, 2018, within a month, TRAI said in its release. The directions were issued after TRAI held a meeting with telcos on February 17, 2023, to discuss quality of service (QoS) improvements, a review of QoS standards, the QoS of 5G services, and unsolicited commercial communications, as per its press release.
Why it matters?
The measure can ensure that all promotional messages are sent through registered telemarketers using only approved templates. It is no secret that the spam problem has been difficult to rein in, so the measure can restrict its proliferation and filter out telemarketers resorting to misuse.
Details about TRAI’s orders
The release said that telcos have to ensure that temporary headers are deactivated immediately after the period for which they were created. Telcos also have to ensure that the variable portions of message templates leave no room for inserting unwanted content. To avoid confusing message recipients, telcos must ensure that no lookalike headers are registered in the names of different senders.
Measures to check unregistered telemarketers
The release ordered telcos to bar telemarketers not registered on its DLT platform from accessing message templates and scrubbing them to deliver spam messages to recipients on the telco’s network. The telcos have been directed not to allow promotional messages to be sent by unregistered telemarketers or telemarketers using 10-digit telephone numbers. It added that telcos have to take action against erring telemarketers and share details of these telemarketers with other telcos, which will then be responsible for stopping these entities from sending commercial communications through their networks.
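A minimal sketch of how a lookalike-header check of this kind might work, assuming a hypothetical allowlist of registered headers: normalise easily confused characters (e.g. '0' vs 'O') before comparison. The confusable map and header names below are illustrative, not TRAI's actual matching rules.

```python
# Confusable-character map used to normalise headers before comparison
# (illustrative; real homoglyph detection covers far more cases).
CONFUSABLES = str.maketrans({"0": "O", "1": "I", "5": "S", "8": "B"})

def normalise(header):
    return header.upper().translate(CONFUSABLES)

def check_header(header, registered):
    """Classify an SMS header against a registered allowlist."""
    if header in registered:
        return "registered"
    if normalise(header) in {normalise(h) for h in registered}:
        return "lookalike"  # visually mimics a registered header
    return "unregistered"
```

Under the TRAI directions, traffic carrying headers that resolve to "lookalike" or "unregistered" would be the candidates for blocking.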
How big is the problem of spam?
A survey conducted by LocalCircles found that two out of every three people (66 per cent) in India get three or more spam calls daily. Not one respondent among thousands checked the 'no spam' box.
The platform said it was a national survey that gathered over 56,000 responses from Indians in 342 districts. It also found that 92% of respondents continue to receive spam despite opting for DND, the Do Not Disturb list on which a mobile subscriber can register their number to avoid unsolicited commercial communication (UCC).
Addressing the problem of spam
The regulatory body recently released a consultation paper that proposed the idea of providing the real name identity of callers to people receiving calls. The paper said that it would use a database containing each subscriber’s correct name to implement the caller name presentation (CNAP) service. The regulator wants to use details acquired by telecom service providers via customer acquisition forms (CAF).
TRAI formed a joint committee in 2022 to look at the issue of phishing and cyber fraud. It included officials from the Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI). The telecom watchdog had laid out a plan to combat SMS and call spam using blockchain technology (DLT), envisaging telecom companies and TRAI building an encrypted, distributed database that would record user consent to be included in SMS or call send-out lists.
The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI life cycle, whether at the design and coding level or in deployment.
Recent regulatory frameworks, such as the European Union's AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, recognise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.
The OECD conceptualises this development process as the AI system lifecycle. Each stage entails various technical and administrative procedures: choices made during these stages dictate the goals and limits of an AI system, and the quality and representativeness of training data strongly affect model behaviour after deployment.
Since this is an iterative and not a linear procedure, risks can be introduced at each stage of the AI lifecycle. New data can be retrained into different models, and systems are regularly updated once they have been deployed, to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
Where AI Risks Emerge
AI risks usually emerge earlier in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, the absence of transparency in automated decision-making, and risks to fundamental rights.
AI Governance Risk Landscape: Core Risk Categories Under International Frameworks
Risk categories jointly identified by the EU AI Act and UNESCO Recommendation on the Ethics of Artificial Intelligence
Outlining the risks throughout the AI lifecycle helps identify the areas where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after the development process, when generative AI systems are deployed at scale on digital platforms.
AI System Lifecycle: Key Risks at Each Stage
Risks identified per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to help mitigate AI risks across the lifecycle. These tools are meant to ensure that AI technologies meet standards of safety, accountability, and fairness before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union's Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks and introduces a risk-based regulatory strategy. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints across the lifecycle of AI systems.
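A risk-based regime of this kind can be sketched as a mapping from use cases to tiers and obligations. The tiers below follow the EU AI Act's broad structure (unacceptable / high / limited / minimal), but the use-case labels and obligation lists are simplified illustrations, not the Act's legal text.

```python
# Illustrative mapping of use cases to EU AI Act-style risk tiers.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "credit_scoring": "high",
    "hiring": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

# Simplified obligations per tier (a sketch of the Act's structure).
OBLIGATIONS = {
    "unacceptable": ["prohibited"],
    "high": ["risk management", "data governance", "documentation",
             "human oversight", "robustness and cybersecurity"],
    "limited": ["transparency disclosure"],
    "minimal": [],
}

def obligations_for(use_case):
    """Return the (tier, obligations) a use case would fall under."""
    tier = RISK_TIERS.get(use_case, "minimal")
    return tier, OBLIGATIONS[tier]
```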
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.
Governance Overlay: Policy Interventions Across the AI Lifecycle
Regulatory tools mapped at each stage of AI development per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Several policy tools are directed at the risks that occur in the pre-deployment stages. For example, algorithmic impact assessments have been applied in various jurisdictions to measure the possible societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, such as datasheets and model cards, aim to enhance accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
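Dataset documentation of this sort can be sketched as a structured record. The schema below is illustrative, loosely inspired by datasheet proposals; it is not a regulatory standard, and the field names are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    """Minimal datasheet-style record for training-data provenance."""
    name: str
    source: str
    collection_period: str
    known_gaps: list = field(default_factory=list)  # underrepresented groups
    personal_data: bool = False

    def flags(self):
        """Surface governance issues a reviewer should examine."""
        issues = []
        if self.known_gaps:
            issues.append("documented coverage gaps: review for bias")
        if self.personal_data:
            issues.append("contains personal data: check privacy obligations")
        return issues
```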
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle points to a critical structural gap in existing regulations. Many governance processes are activated only after AI systems are classified as "high risk" or deployed in the real world, but the most serious sources of harm have their roots in earlier stages of the development process.
For example, biased or unbalanced training data is almost inevitably a source of discriminatory outcomes in automated decision systems. When such models are applied in areas like hiring, credit scoring, or public service delivery, these biases can quickly spread to large populations and undermine fundamental rights. Likewise, a lack of transparency in model design can prevent regulators, and the individuals affected, from scrutinising the decision-making process. This reflects a broader timing gap in AI governance: risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase. Structural issues can be entrenched within AI systems before they are ever deployed, due to biased datasets, incomplete documentation of training data, and opaque model designs.
2. Data governance is a primary point of vulnerability: Most instances of algorithmic discrimination are associated with training data that underrepresents some population groups or encodes historical bias. Since machine learning models optimise for patterns in their datasets, these biases can be carried through the whole lifecycle and reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: Whereas many regulatory instruments target deployment and monitoring, fewer systematically tackle the risks posed by the earlier design and development phases.
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory systems should demand systematic risk evaluation at the beginning of AI development, especially at the problem-design and dataset-selection phases. This would help detect potentially harmful applications before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised dataset documentation frameworks can help regulators and auditors discover potential sources of bias or privacy risk.
3. Expand independent algorithmic auditing: Regular third-party audits can assess AI systems for fairness, robustness, and security weaknesses. Auditing mechanisms are especially relevant for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored continuously after deployment to identify model drift, unforeseen consequences, or abuse. Reporting mechanisms can help regulators see emerging risks and adapt governance frameworks accordingly.
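Post-deployment drift monitoring can be sketched with a simple statistic such as the population stability index (PSI), which compares the distribution of a model input or score between a baseline (e.g. training) and production. The 0.2 alert threshold mentioned in the comment is a common industry rule of thumb, not a formal standard.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two proportion distributions over the same bins.
    Rule of thumb (an industry convention): PSI > 0.2 suggests
    significant drift worth investigating."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi
```

Computed per feature (or on the model's score distribution) on a schedule, this gives regulators and operators a concrete, reportable drift signal.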
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.