Regulations on CDR
Introduction:
CDR stands for Call Detail Records, the records of users' call activity held by telecom companies. Because this data runs to enormous volumes, telecom companies retain it for a period of six months. CDRs play a significant role in investigations and court cases: they can serve as pivotal evidence in court proceedings to prove or disprove facts and circumstances. Access to call detail records is permitted only on reasonable grounds and only by authorities authorized under the law.
Admissibility of CDRs in Courts:
Call Detail Records (CDRs) can serve as effective evidence to assist the court in ascertaining the facts of a case and inquiring into the commission of an offence. Judicial pronouncements have made it clear that CDRs may be used as supporting or secondary evidence in court; however, they cannot be the sole basis of a conviction. Section 92 of the Code of Criminal Procedure, 1973 lays down the procedure and empowers certain authorities to apply for the intervention of a court or competent authority to obtain CDRs.
Legal provisions to obtain CDR:
CDRs can be obtained under the statutory provisions of Section 92 of the Code of Criminal Procedure, 1973, or under Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. In 2016, the Ministry of Home Affairs also issued guidelines for seeking Call Detail Records (CDRs).
How long are CDRs stored with telecom companies? (Data Retention)
Telecom companies retain call data for a period of six months. Because the data demands enormous storage, running to several petabytes per year, companies keep only the most recent six months of call detail data live and archive the rest to tape.
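To get a feel for why only a six-month window is kept live, here is a rough back-of-envelope storage estimate in Python. Every figure below (subscriber count, events per day, record size) is an illustrative assumption, not actual operator data:

```python
# Back-of-envelope estimate of annual CDR storage for a large operator.
# All figures are illustrative assumptions, not actual operator data.

subscribers = 400_000_000        # assumed subscriber base of a large operator
events_per_day = 15              # assumed calls/SMS/data sessions per subscriber
bytes_per_record = 1_000         # assumed size of one CDR row with metadata

daily_bytes = subscribers * events_per_day * bytes_per_record
yearly_bytes = daily_bytes * 365

print(f"Per day : {daily_bytes / 1e12:.1f} TB")   # ~6.0 TB
print(f"Per year: {yearly_bytes / 1e15:.2f} PB")  # ~2.19 PB -> 'several petabytes'
```

Even under these modest assumptions the yearly volume lands in the petabyte range, which is why older records are moved off to tape archives.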
The New Delhi ₹25-crore jewellery heist
Recently, a 25-crore jewellery theft was carried out at a jewellery shop in Delhi, planned and executed by a man from Chhattisgarh who returned home after committing the crime. As the police began their search and investigation, they turned to technology: using advanced software to analyse the roughly 5,000 mobile numbers active around the crime scene, Delhi Police identified a suspect number registered outside Delhi. Surveillance on that number revealed that the suspect had moved from Delhi to Madhya Pradesh and then on to Bhilai, Chhattisgarh, where police successfully arrested him. The incident highlights how call data and analytics can help law enforcement agencies trace and apprehend the real culprits.
Conclusion:
CDRs are call detail records retained by telecom companies for a period of six months; they can be obtained only through lawful procedure and by competent authorities. CDRs can assist courts and law enforcement agencies in ascertaining the facts of a case and in proving or disproving particular claims. It is important to reiterate that unauthorized access to CDRs is not permitted: a directive from a court or competent authority is required to obtain them from telecom companies.
References:
- https://indianlegalsystem.org/cdr-the-wonder-word/#:~:text=CDR%20is%20admissible%20as%20secondary,the%20Indian%20Evidence%20Act%2C%201872.
- https://timesofindia.indiatimes.com/city/delhi/needle-in-a-haystack-how-cops-scanned-5k-mobile-numbers-to-crack-rs-25cr-heist/articleshow/104055687.cms?from=mdr
- https://www.ndtv.com/delhi-news/just-one-man-planned-executed-rs-25-crore-delhi-heist-another-thief-did-him-in-4436494

Brief Overview of the EU AI Act
The EU AI Act, Regulation (EU) 2024/1689, was officially published in the EU Official Journal on 12 July 2024. This landmark legislation on Artificial Intelligence (AI) came into force 20 days after publication, setting harmonized rules across the EU and amending key regulations and directives to ensure a robust framework for AI technologies. The AI Act, a set of EU rules governing AI that had been in development for two years, entered into force across all 27 EU Member States on 1 August 2024, and enforcement of the majority of its provisions will commence on 2 August 2026. The law prohibits certain uses of AI tools, including those that threaten citizens' rights, such as biometric categorization and untargeted scraping of faces; systems that attempt to read emotions are banned in workplaces and schools, as are social scoring systems. It also prohibits the use of predictive policing tools in some instances. The law takes a phased approach to implementing the EU's AI rulebook, meaning various deadlines apply as different legal provisions come into effect.
The framework puts different obligations on AI developers depending on use case and perceived risk. The bulk of AI uses will not be regulated, as they are considered low-risk, while a small number of potential use cases are banned outright. High-risk use cases, such as biometric uses of AI or AI used in law enforcement, employment, education, and critical infrastructure, are allowed, but developers of such applications face obligations in areas like data quality and anti-bias safeguards. A third risk tier applies lighter transparency requirements to makers of tools like AI chatbots.
In case of failure to comply with the Act, companies providing, distributing, importing, or using AI systems and GPAI models in the EU are subject to fines of up to EUR 35 million or seven per cent of total worldwide annual turnover, whichever is higher.
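The penalty ceiling works as a simple maximum of two quantities. A minimal sketch of that rule, with a purely illustrative turnover figure:

```python
# Sketch of the EU AI Act maximum-penalty rule described above:
# up to EUR 35 million or 7% of total worldwide annual turnover,
# whichever is HIGHER. The turnover figure below is illustrative only.

def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 2 billion turnover, 7% (EUR 140M) exceeds the floor.
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```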
Key highlights of EU AI Act Provisions
- The AI Act classifies AI according to its risk. It prohibits unacceptable-risk applications such as social scoring systems and manipulative AI, and the bulk of the regulation addresses high-risk AI systems.
- Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots and deepfakes. The AI Act allows the free use of minimal-risk AI, which covers the majority of AI applications currently available in the EU single market, such as AI-enabled video games and spam filters, though this may change as generative AI advances. The majority of obligations fall on providers (developers) that intend to place high-risk AI systems on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country, and on third-country providers whose high-risk AI system's output is used in the EU.
- Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users. Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers). This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.
- General-purpose AI (GPAI) model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Providers of free and open-licence GPAI models only need to comply with the copyright requirement and publish the training-data summary, unless they present a systemic risk. All providers of GPAI models that present a systemic risk, open or closed, must additionally conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.
- The Codes of Practice will account for international approaches. They will cover, but are not necessarily limited to, the obligations above, particularly the relevant information to include in technical documentation for authorities and downstream providers, the identification of the type and nature of systemic risks and their sources, and the modalities of risk management, accounting for the specific challenges of addressing risks that may emerge and materialize throughout the value chain. The AI Office may invite GPAI model providers and relevant national competent authorities to participate in drawing up the codes, while civil society, industry, academia, downstream providers, and independent experts may support the process.
Application & Timeline of Act
The EU AI Act will be fully applicable 24 months after entry into force, but some parts apply sooner. For instance, the ban on AI systems posing unacceptable risks applies six months after entry into force, the Codes of Practice nine months after, and the rules on general-purpose AI systems that must comply with transparency requirements 12 months after. High-risk systems have more time to comply, as the obligations concerning them become applicable 36 months after entry into force. The expected timeline is:
- August 1st, 2024: The AI Act will enter into force.
- February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply, bringing the prohibition of certain AI systems into effect.
- August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general-purpose AI models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, except for Article 101 (fines for general-purpose AI providers); requirements for new GPAI models take effect.
- August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems);
- August 2027: Article 6(1) & corresponding obligations apply.
The AI Act sets out clear definitions for the different actors involved in AI, such as the providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, usage, import, distribution, or manufacturing of AI systems will be held accountable. Along with this, the AI Act also applies to providers and deployers of AI systems located outside of the EU, e.g., in Switzerland, if output produced by the system is intended to be used in the EU. The Act applies to any AI system within the EU that is on the market, in service, or in use, covering both AI providers (the companies selling AI systems) and AI deployers (the organizations using those systems).
In short, the AI Act will apply to different companies across the AI distribution chain, including providers, deployers, importers, and distributors (collectively referred to as "Operators"). The EU AI Act also has extraterritorial application: it can apply to companies not established in the EU, including providers outside the EU that make an AI system or GPAI model available on the EU market. Even if only the output generated by the AI system is used in the EU, the Act still applies to such providers and deployers.
CyberPeace Outlook
The EU AI Act, approved by EU lawmakers in 2024, is landmark legislation designed to protect citizens' health, safety, and fundamental rights from potential harm caused by AI systems. The Act applies to AI systems and GPAI models and adopts a risk-based approach to governance, creating a tiered categorization of potential risks, unacceptable, high, limited, and low, with stiff penalties for noncompliance. Violations involving banned systems carry the highest fine: EUR 35 million, or 7 per cent of global annual revenue. The Act establishes transparency requirements for general-purpose AI systems, provides specific rules for general-purpose AI (GPAI) models, and lays down more stringent requirements for GPAI models with 'high-impact capabilities' that could pose a systemic risk and significantly affect the internal market. For high-risk AI systems, the AI Act addresses fundamental rights impact assessments and data protection impact assessments.
The EU AI Act aims to enhance trust in AI technologies by establishing clear regulatory standards governing AI. We encourage regulatory frameworks that strive to balance the desire to foster innovation with the critical need to prevent unethical practices that may cause user harm. The legislation strengthens the EU's position as a global leader in AI innovation and in developing regulatory frameworks for emerging technologies, and it sets a global benchmark for regulating AI. Companies to which the Act applies will need to ensure their practices align with it, and the Act may inspire other nations to develop their own legislation, contributing to global AI governance. The world of AI is complex and challenging; implementing regulatory checks and securing compliance from the companies concerned pose real difficulties. In the end, however, balancing innovation with ethical considerations is paramount.
At the same time, the tech sector welcomes regulatory progress but warns that overly rigid regulations could stifle innovation; flexibility and adaptability are key to effective AI governance. The journey towards robust AI regulation has begun in major countries, and it is important to find the right balance between safety and innovation while taking industry reactions into consideration.
References:
- https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689
- https://www.theverge.com/2024/7/12/24197058/eu-ai-act-regulations-bans-deadline
- https://techcrunch.com/2024/07/12/eus-ai-act-gets-published-in-blocs-official-journal-starting-clock-on-legal-deadlines/
- https://www.wsgr.com/en/insights/eu-ai-act-to-enter-into-force-in-august.html
- https://www.techtarget.com/searchenterpriseai/tip/Is-your-business-ready-for-the-EU-AI-Act
- https://www.simmons-simmons.com/en/publications/clyimpowh000ouxgkw1oidakk/the-eu-ai-act-a-quick-guide

Introduction
In the dynamic intersection of pop culture and technology, an unexpected drama unfolded in the virtual world when searches for the iconic Taylor Swift were temporarily blocked on X. The incident sent a shockwave through the online community, sparking debate and speculation about the misuse of deepfake technology.
Searches for Taylor Swift on the social media platform X have been restored after a temporary blockage imposed amid outrage over explicit AI-generated images of her. The site, formerly known as Twitter, had restricted searches for Taylor Swift as a stopgap measure to address a flood of AI-generated deepfake images that went viral across X and other platforms.
X said it is actively removing the images and taking appropriate action against the accounts responsible for spreading them. While Swift has not spoken publicly about the fake images, a report stated that her team is "considering legal action" against the site that published the AI-generated images.
The Social Media Frenzy
As news of the temporary blockage spread like wildfire across social media platforms, users reacted in a frenzy. One fake picture was re-shared 24,000 times, with tens of thousands of users liking the post. This engagement supercharged the deepfake image of Taylor Swift, and by the time moderators acted it was too late: hundreds of accounts had begun reposting it, starting an online trend and pushing the AI-generated imagery to an even larger audience. The original source of the photograph was never clear. The revelations caused outrage, and American lawmakers from across party lines spoke out, describing themselves as astounded and shocked.
AI Deepfake Controversy
The deepfake controversy is not new. There have been several cases, such as those of Rashmika Mandanna, Sachin Tendulkar, and now Taylor Swift, in which people have fallen victim to such misuse of deepfake technology. The world faces growing concern about the misuse of AI and deepfakes; with no proactive measures in place, the threat will only worsen, deepening privacy concerns for individuals. The incident has opened a debate among users and industry experts on the ethical use of AI in the digital age and its privacy implications.
Why has the Incident raised privacy concerns?
The emergence of Taylor Swift's deepfake has raised privacy concerns for several reasons.
- Misuse of Personal Imagery: Deepfakes use AI algorithms to superimpose one person's face onto another person's body, refining the output iteratively until the desired result is obtained. For celebrities and other public figures, it is easy for crooks to gather source images and generate a deepfake, and in Taylor Swift's case her images were misused in exactly this way. Such misuse of imagery can have serious consequences for an individual's reputation and privacy.
- False Narratives and Manipulation: Deepfakes can spread false narratives that harm reputations and affect personal and professional lives. Such narratives may influence public opinion and cause damage that is extremely difficult for the person depicted to control.
- Invasion of Privacy: Creating a deepfake involves gathering a significant amount of information about the target without their consent. Using such personal information to generate AI content without permission raises serious privacy concerns.
- Difficulty in Differentiation: Advanced deepfake technology makes it difficult for people to distinguish between genuine and manipulated content.
- Potential for Exploitation: Deepfakes can be exploited for financial gain or other malicious motives. Such videos harm reputations, damage brand names and partnerships, and undermine the integrity of the platforms on which they are posted; they also raise questions about whether those platforms are enforcing their own zero-tolerance policies on non-consensual intimate images.
Is there any law that could safeguard Internet users?
Legislation concerning deepfakes differs by nation and ranges from requiring disclosure of deepfakes to forbidding harmful or destructive material. In the USA, ten states, including California, Texas, and Illinois, have passed criminal legislation prohibiting deepfakes, and lawmakers are advocating for comparable federal statutes. A Democrat from New York has introduced legislation requiring producers to digitally watermark deepfake content. At the federal level, the United States does not criminalise such deepfakes, though state and federal laws address privacy, fraud, and harassment.
In 2019, China enacted legislation requiring the disclosure of deepfake usage in films and media. Sharing deepfake pornography was outlawed in the United Kingdom in 2023 as part of the Online Safety Act.
To avoid abuse, South Korea implemented legislation in 2020 criminalising the dissemination of deepfakes that endanger the public interest, carrying penalties of up to five years in jail or fines of up to 50 million won ($43,000).
In 2023, the Indian government issued an advisory to social media and internet companies to protect against deepfakes that violate India's information technology laws. India is on its way to enacting dedicated legislation on the subject.
Looking at the present situation and considering the bigger picture, the world urgently needs strong legislation to combat the misuse of deepfake technology.
Lessons learned
The recent blockage of searches for Taylor Swift on Elon Musk's X has sparked debates on responsible technology use, privacy protection, and the symbiotic relationship between celebrities and the digital era. The incident highlights the importance of constant vigilance, ethical concern, and awareness of the potential dangers of AI in the digital landscape. Despite the challenges, the digital world continues to offer opportunities for growth and learning.
Conclusion
Such deepfake incidents highlight privacy concerns and, as technology grows more complex, demand a combination of technological solutions, legal frameworks, and public awareness to safeguard privacy and dignity in the digital world.
References:
- https://www.hindustantimes.com/world-news/us-news/taylor-swift-searches-restored-on-elon-musks-x-after-brief-blockage-over-ai-deepfakes-101706630104607.html
- https://readwrite.com/x-blocks-taylor-swift-searches-as-explicit-deepfakes-of-singer-go-viral/

Executive Summary:
New Linux malware has been discovered by the cybersecurity firm Volexity, and the new strain is being referred to as DISGOMOJI. A Pakistan-based threat actor, tracked as 'UTA0137', has been identified as having espionage aims, with a primary focus on Indian government entities. Like other common backdoors and botnets used in cyberattacks, DISGOMOJI allows the operator to issue commands that capture screenshots, search for files to steal, deploy additional payloads, and transfer files. DISGOMOJI uses Discord (a messaging service) for command and control (C2) and uses emojis for C2 communication. The malware targets Linux operating systems.
The DISGOMOJI Malware:
- The DISGOMOJI malware opens a dedicated channel in a Discord server for each new victim, so the attacker can communicate with each victim individually.
- The malware communicates with the attacker-controlled Discord server through an emoji-based relay protocol: the attacker sends unique emojis as instructions, and the malware responds with emojis to signal the status of each command (a minimal sketch of this flow follows this list).
- For instance, the 'camera with flash' emoji takes a screenshot of the victim's device, the 'fox' emoji collects all Firefox profiles, and the 'skull' emoji terminates the malware process.
- Conducting C2 communication through emojis makes it almost impossible for Discord to shut the malware down, since the malware can simply switch to new Discord account credentials once a malicious server is blocked.
- Beyond the emoji-based C2, the malware also has capabilities such as network probing, tunneling, and data theft that help the UTA0137 threat actor achieve its espionage goals.
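Below is a minimal, deliberately benign Python reconstruction of that relay flow using the discord.py library, purely to illustrate the protocol (the actual malware is not Python). It only creates a per-victim channel, logs the emoji it receives, and reacts with status emojis; the channel-naming scheme, the specific status emojis, and the placeholder token are assumptions for illustration, not the real malware code.

```python
# Benign illustration of a per-victim-channel, emoji-relay C2 pattern.
# It has no harmful capability: it just logs commands and reacts.
import socket
import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    # Mimic the reported behaviour: one dedicated channel per infected host.
    guild = client.guilds[0]
    name = f"victim-{socket.gethostname()}".lower()  # hypothetical naming scheme
    if discord.utils.get(guild.text_channels, name=name) is None:
        await guild.create_text_channel(name)

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    emoji = message.content.strip()
    print(f"operator command emoji received: {emoji!r}")
    await message.add_reaction("⏳")  # "working on it" status (assumed emoji)
    await message.add_reaction("✅")  # "command done" status (assumed emoji)

client.run("BOT-TOKEN-PLACEHOLDER")  # placeholder token, supply your own
```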
Specific emojis used for different commands by UTA0137 (a dispatch-table sketch follows this list):
- Camera with Flash (📸): Captures a screenshot of the target device and sends it to the command channel.
- Backhand Index Pointing Down (👇): Extracts files from the targeted device and sends them to the command channel as attachments.
- Backhand Index Pointing Right (👉): Sends a file from the victim's device to the web-hosted file storage service Oshi (oshi[.]at).
- Backhand Index Pointing Left (👈): Sends a file from the victim's device to transfer[.]sh, an online file-sharing service.
- Fire (🔥): Finds and transmits all files with certain extensions on the victim's device, such as *.txt, *.doc, *.xls, *.pdf, *.ppt, *.rtf, *.log, *.cfg, *.dat, *.db, *.mdb, *.odb, *.sql, *.json, *.xml, *.php, *.asp, *.pl, *.sh, *.py, *.ino, *.cpp, and *.java.
- Fox (🦊): Compresses all Firefox-related profiles on the affected device.
- Skull (💀): Terminates the malware process using os.Exit().
- Man Running (🏃‍♂️): Executes a command on the victim's device; the emoji is accompanied by an argument, the command to execute.
- Index Pointing Up (👆): Uploads a file to the victim's device; the file to upload is attached along with the emoji.
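The command set above reduces to a lookup from emoji to action. Here is a dispatch-table sketch with inert one-line descriptions standing in for real handlers; only the emoji-to-action mapping comes from the list above, everything else is illustrative:

```python
# Dispatch-table sketch of the reported emoji command set.
# Handlers are deliberately inert descriptions, not real actions.

COMMANDS = {
    "📸": "capture a screenshot and send it to the command channel",
    "👇": "upload a named file back to the Discord channel",
    "👉": "send a file to the oshi[.]at file-hosting service",
    "👈": "send a file to the transfer[.]sh file-sharing service",
    "🔥": "find and exfiltrate documents by extension (*.txt, *.pdf, ...)",
    "🦊": "compress all Firefox profiles on the host",
    "💀": "terminate the malware process",
    "🏃‍♂️": "execute a shell command supplied as an argument",
    "👆": "write an attached file onto the victim's device",
}

def dispatch(emoji: str, argument: str = "") -> str:
    """Resolve an operator emoji to its documented action (stub only)."""
    action = COMMANDS.get(emoji)
    if action is None:
        return "unknown command"
    return f"would now: {action}" + (f" [{argument}]" if argument else "")

print(dispatch("🦊"))
print(dispatch("🏃‍♂️", "uname -a"))
```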
Analysis:
The analysis was carried out on one of the indicators of compromise, the file with SHA-256 hash c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002.
Most vendors on VirusTotal flag the file as a trojan, and the relationship graph illustrates the malicious nature of the contacted domains and IPs.
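Such a lookup can be reproduced against VirusTotal's public v3 files endpoint. A minimal sketch, where the hash is the indicator quoted above and the API key is a placeholder you must supply yourself:

```python
# Query VirusTotal's v3 API for the detection stats of the quoted hash.
import requests

SHA256 = "c981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": "YOUR-VT-API-KEY"},  # placeholder, supply your own key
    timeout=30,
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```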


Discord & C2 Communication for UTA0137:
- Stealth: Discord is a well-known messaging platform used for many legitimate purposes, so messages and files sent through its servers do not attract suspicion. This stealth allows UTA0137 to remain dormant for long periods before launching an attack.
- Customization: UTA0137 creates a dedicated channel for each victim on its Discord server, allowing the attackers to communicate with each victim individually and run operations more precisely and efficiently.
- Emoji-based protocol: Using emojis for C2 communication complicates any attempt Discord might make to interfere with the malware's operations. If the malicious server is banned, the malware can easily recover by fetching fresh Discord credentials from its C2 infrastructure.
- Persistence: The malware persists on the compromised system and survives reboots, so it can continue to operate without being detected by the owner of the hacked system.
- Advanced capabilities: Beyond emoji-based C2, DISGOMOJI offers network mapping with the Nmap scanner, network tunneling through Chisel and Ligolo, and data exfiltration via file-sharing services. These capabilities further UTA0137's espionage goals.
- Social engineering: The malware can display pop-up windows and prompts, such as a fake Firefox update, to trick the user into entering their password.
- Dynamic credential fetching: The malware does not hardcode the credentials used to connect to the Discord server but fetches them at runtime, which also makes it harder for analysts to locate the C2 server.
- Bogus informational and error messages: The malware never displays genuine information or error messages, so its malicious behavior cannot easily be deciphered.
Recommendations to mitigate the risk of UTA0137:
- Regularly Update Software and Firmware: Keep all application software and device firmware up to date, particularly on routers, so attackers cannot exploit known, disclosed flaws such as CVE-2024-3080 and CVE-2024-3912 on ASUS routers.
- Implement Multi-Factor Authentication: Given how frequently user accounts are attacked, add multi-factor authentication as a further layer of account security.
- Deploy Advanced Malware Protection: Use robust anti-malware tooling that can recognize and block the execution of DISGOMOJI and similar threats.
- Enhance Network Segmentation: Apply strict network isolation to compartmentalize key systems and data from the rest of the network and minimize attack exposure.
- Monitor Network Activity: Continuously monitor the network to identify and respond to breaches, and watch for signs of the tooling this actor abuses, such as Nmap scans and Chisel or Ligolo tunnels (a minimal scan sketch follows this list).
- Utilize Threat Intelligence: Leverage threat intelligence to stay informed about known threats and vulnerabilities and to take informed action.
- Secure Communication Channels: Protect credentials and control how services such as Discord are used in your environment so they cannot be abused as an attack vector or C2 channel.
- Enforce Access Control: Regularly review and update authentication processes and adopt strict access controls so that only authorized personnel can reach sensitive systems and information.
- Conduct Regular Security Audits: Perform periodic security audits to uncover weaknesses in networks and systems.
- Implement an Incident Response Plan: Conduct a risk assessment and, based on it, design and maintain an incident response plan that enables early identification, isolation, and management of security breaches.
- Educate Users: Train users in cybersecurity hygiene and provide regular refreshers on threats like phishing and social engineering.
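For the network-monitoring recommendation above, here is a minimal scan sketch using the python-nmap wrapper (requires the nmap binary and `pip install python-nmap`). The subnet and port selection are assumptions to adapt to your own environment:

```python
# Periodic-scan sketch: enumerate hosts with open ports on an assumed subnet,
# as a baseline for spotting unexpected services or attacker tooling.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="192.168.1.0/24", arguments="-sT --top-ports 100")

for host in scanner.all_hosts():
    open_ports = [
        port
        for proto in scanner[host].all_protocols()
        for port, info in scanner[host][proto].items()
        if info["state"] == "open"
    ]
    if open_ports:
        print(f"{host}: open ports {sorted(open_ports)}")
```

Running such a scan on a schedule and diffing the results against a known-good baseline is a simple way to surface new listeners, such as tunnel endpoints, early.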
Conclusion:
Volexity discovered a new Pakistan-based threat actor, UTA0137, using the DISGOMOJI malware to attack Indian government institutions, issuing commands through emojis on the Discord app. The malware can exfiltrate data and aims to steal information from government entities, and UTA0137 has improved it over time to maintain persistent communication with victims. The case underlines the need for strong protection against malware and intrusions: secure, unique credentials, frequent software updates, and capable anti-malware tools. Organizations can blunt advanced threats like DISGOMOJI and protect sensitive data by improving network segmentation, continuously monitoring activity, and raising user awareness.
References:
- https://otx.alienvault.com/pulse/66712446e23b1d14e4f293eb
- https://thehackernews.com/2024/06/pakistani-hackers-use-disgomoji-malware.html?m=1
- https://cybernews.com/news/hackers-using-emojis-to-command-malware/
- https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/