TRAI issues guidelines to Access Service Providers to prevent misuse of messaging services
Introduction
The Telecom Regulatory Authority of India (TRAI) issued directives on 20th August 2024 requiring all Access Service Providers to adhere to specific guidelines aimed at protecting consumer interests and preventing fraudulent activities. These steps advance TRAI's efforts to promote a secure messaging ecosystem and eliminate fraudulent conduct.
Key Highlights of the TRAI’s Directives
- For improved monitoring and control, TRAI has directed Access Service Providers to move telemarketing calls beginning with the 140 series to an online DLT (Distributed Ledger Technology) platform by September 30, 2024, at the latest.
- Effective September 1, 2024, all Access Service Providers are barred from delivering messages containing URLs, APKs, OTT links, or callback numbers that the sender has not whitelisted.
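To make the whitelisting rule concrete, a minimal Python sketch of how a provider-side filter might reject messages containing non-whitelisted URLs. The domain list and matching logic here are purely hypothetical illustrations, not TRAI's prescribed implementation:

```python
import re

# Hypothetical whitelist of URL domains registered by a sender (illustrative only)
WHITELISTED_DOMAINS = {"example-bank.com", "offers.example-bank.com"}

# Captures the host portion of an http(s) URL
URL_PATTERN = re.compile(r"https?://([A-Za-z0-9.-]+)", re.IGNORECASE)

def is_message_allowed(body: str) -> bool:
    """Reject a message if any URL in it points to a non-whitelisted domain."""
    for domain in URL_PATTERN.findall(body):
        if domain.lower() not in WHITELISTED_DOMAINS:
            return False
    return True

print(is_message_allowed("Your statement: https://example-bank.com/stmt"))  # True
print(is_message_allowed("Claim prize: https://evil.example.net/win"))      # False
```

A production filter would of course also handle shortened URLs, embedded APK links, and callback numbers, which this sketch omits.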
- To improve message traceability, TRAI has mandated that, from November 1, 2024, all messages carry a traceable trail from sender to receiver. Any message with an undefined or mismatched telemarketer chain will be rejected.
- To deter the misuse of templates for promotional content, TRAI has introduced punitive actions for non-compliance. Content Templates registered in the wrong category will be blacklisted, and repeat offences will result in a one-month suspension of the Sender's services.
- To ensure compliance, all Headers and Content Templates registered on the DLT platform must conform to the prescribed requirements. Furthermore, a single Content Template cannot be linked to multiple headers.
- If misuse of headers or content templates by a sender is discovered, TRAI has directed an immediate ‘suspension of traffic’ from all of that sender's headers and content templates pending verification. The suspension can be revoked only after the Sender takes legal action against such misuse. Furthermore, Delivery Telemarketers must identify and report entities responsible for such misuse within two business days or risk similar consequences.
CyberPeace Policy Outlook
TRAI’s measures are aimed at curbing the misuse of messaging services, including spam. TRAI has mandated that headers and content templates follow defined requirements, and punitive actions such as blacklisting and service suspension have been introduced for non-compliance. These measures are expected to curb the rising incidence of phishing, spamming, and other fraudulent activities, ultimately protecting consumers' interests and fostering a cyber-safe environment in the messaging services ecosystem.
The official text of the TRAI directives is available on TRAI's official website (see References below).
References
- https://www.trai.gov.in/sites/default/files/Direction_20082024.pdf
- https://www.trai.gov.in/sites/default/files/PR_No.53of2024.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2046872
- https://legal.economictimes.indiatimes.com/news/regulators/trai-issues-directives-to-access-providers-to-curb-misuse-fraud-through-messaging/112669368

Executive Summary:
With AI technologies evolving at a fast pace, an AI-driven phishing attack on a large Indian financial institution in 2024 illustrated the scale of the threat. This case study documents the attack techniques, the ramifications for the institution, the interventions undertaken, and their outcomes. It also examines the challenges of building better protection against, and awareness of, automated threats.
Introduction
With the advancement of AI technology, its use in cybercrimes against financial institutions worldwide has become significant. This report analyses a serious incident from early 2024 in which a leading Indian bank was hit by a highly sophisticated AI-supported phishing operation. The attack exploited AI's strengths in data analysis and persuasion, leading to a severe compromise of the bank's internal systems.
Background
The financial institution in question, one of the largest banks in India, had a strong record of rigorous cybersecurity policies. However, AI-based methods posed new threats that earlier forms of security could not counter efficiently. The attackers concentrated on the bank's top managers, since compromising such individuals offers a route into internal systems and financial information.
Attack Execution
The attackers used AI to craft messages that were exact look-alikes of internal correspondence between employees. Drawing on Facebook and Twitter content, blog entries, LinkedIn connection histories, and the email tenor of the bank's executives, the AI generated highly specific emails. Many featured official formatting, internal terminology, and the CEO's writing style, which made them appear very realistic.
The phishing emails contained links that led users to a counterfeit internal portal designed to harvest login credentials. Owing to this sophistication, the targeted individuals believed the emails were genuine and readily entered their login details, granting the attackers access to the bank's network.
Impact
The breach had a significant impact on the bank across every dimension. Several executives surrendered their passwords to the fake emails, compromising multiple financial databases containing customer account and transaction information. The break-in allowed the criminals to disrupt a number of the bank's internet services, affecting its operations and its customers for several days.
The bank also suffered a devastating blow to customer trust, as the breach exposed its weakness against contemporary cyber threats. Beyond the immediate work of containing the breach, the institution faced a long-term reputational hit.
Technical Analysis and Findings
1. AI Techniques Used to Generate the Phishing Emails
- The attack used powerful NLP technology, most probably built on a large-scale transformer such as GPT (Generative Pre-trained Transformer). Because these models are trained on large data samples, the attackers could feed them conversation pieces from social networks, emails, and professional communications to create highly credible messages.
Key Technical Features:
- Contextual Understanding: The AI took into account the nature of prior interactions and wrote follow-up emails that were perfectly in line with earlier discourse.
- Style Mimicry: Given samples of the CEO's emails, the AI replicated the CEO's writing, extrapolating elements such as tone, language, and the format of the signature line.
- Adaptive Learning: The AI adapted from mistakes and feedback, tweaking the generated emails on subsequent attempts, which made detection difficult.
2. Sophisticated Spear-Phishing Techniques
Unlike ordinary phishing scams, this attack employed spear-phishing, with emails directly targeting specific individuals. The AI applied social engineering techniques, driven by machine learning algorithms, that significantly increased the chances of particular individuals responding to particular emails.
Key Technical Features:
- Targeted Data Harvesting: The attackers identified the organisation's employees and tailored messages to them using data scraped from public profiles and messaging platforms.
- Behavioural Analysis: The AI used recent behaviour patterns of users on social networking sites and other online platforms to forecast the actions end users were likely to take, such as clicking links or opening attachments.
- Real-Time Adjustments: When responses to a phishing email were observed, the AI adjusted the timing and content of subsequent emails accordingly.
3. Advanced Evasion Techniques
The attackers pulled off the attack by leveraging AI to evade conventional email filters. These techniques modified email content so that spam filters would not easily detect it while preserving the meaning of the message.
Key Technical Features:
- Dynamic Content Alteration: The AI slightly varied different aspects of the email message to produce multiple versions of the phishing email, each designed to slip past different filtering algorithms.
- Polymorphic Attacks: The phishing campaign used polymorphic code, meaning the actual payloads behind the links changed frequently, making it difficult for antivirus tools to recognise and block them as threats.
- Phantom Domains: The attackers also used AI to generate and deploy phantom domains: short-lived websites that appear legitimate but were created specifically for this phishing attack, further complicating detection.
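Defenders sometimes counter dynamically altered variants by fuzzy-matching incoming mail against known phishing templates rather than relying on exact signatures. A minimal Python sketch of the idea, using simple sequence similarity; the sample strings are invented, and real systems would use more robust similarity hashing over much larger corpora:

```python
from difflib import SequenceMatcher

def variant_similarity(known_phish: str, incoming: str) -> float:
    """Return a similarity ratio in [0, 1]; high values suggest a lightly
    mutated variant of a known phishing message."""
    return SequenceMatcher(None, known_phish.lower(), incoming.lower()).ratio()

known = "urgent: verify your account at our internal portal today"
variant = "Urgent - verify your account at our internal portal now"
unrelated = "team lunch is scheduled for friday at noon"

# A mutated variant scores much closer to the known template than unrelated mail
print(variant_similarity(known, variant))
print(variant_similarity(known, unrelated))
```

Even this crude measure shows why per-message mutation defeats exact-match filters while similarity-based ones still fire.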
4. Exploitation of Human Vulnerabilities
The attack's success rested not only on AI but also on human vulnerabilities: trust in familiar language and the tendency to obey authority.
Key Technical Features:
- Social Engineering: The AI identified the psychological principles, chiefly urgency and familiarity, that would maximise the chance of targeted recipients opening the phishing emails.
- Multi-Layered Deception: The AI employed a two-tiered approach: once a targeted individual opened the first email, a second followed under the pretext of being a follow-up from a genuine company or person.
Response
On discovering the breach, the bank's cybersecurity personnel sprang into action to limit the fallout. They reported the matter to the Indian Computer Emergency Response Team (CERT-In) to trace the origin of the attack and block further intrusions. The bank also immediately began strengthening its security, for instance by tightening email filtering and enhancing authentication procedures.
Recognising the risks, the bank concluded that it needed to raise its overall cybersecurity level and implement a new organisation-wide cybersecurity awareness programme. The programme focused on educating employees about AI-driven phishing and the necessity of verifying a sender's identity before responding.
Outcome
Although the bank regained functionality after the attack without critical damage to its operations, several issues emerged. The institution's reported losses included compensation for affected customers and the costs of measures to strengthen its cybersecurity. More critically, the incident led customers and shareholders to doubt the organisation's capacity to safeguard information in an era of advanced AI-driven cyber threats.
This case underscores the importance for financial firms of aligning their security plans against emerging threats. The attack is also a warning to other organisations that they are not immune to such AI-driven attacks and should take appropriate countermeasures.
Conclusion
The 2024 AI-phishing attack on an Indian bank is a clear indicator of modern attackers' capabilities. As AI technology continues to progress, so does the sophistication of cyberattacks. Financial institutions and other organisations can keep pace only by adopting adequate AI-aware cybersecurity solutions for their systems and data.
Moreover, this case highlights how important it is to train employees to be properly prepared against cyberattacks. An organisation's cybersecurity awareness, secure employee behaviours, and practices that enable staff to recognise and report likely AI-enabled offences all help minimise the risk of an AI attack.
Recommendations
- Enhanced AI-Based Defences: Financial institutions should deploy AI-driven detection and response products capable of mitigating AI-powered cyber threats in real time.
- Employee Training Programs: All employees should undergo frequent cybersecurity awareness training, including how to identify AI-generated phishing.
- Stricter Authentication Protocols: Multi-factor authentication and tighter identity verification procedures should be enforced for access to sensitive accounts and systems.
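One widely used way to harden authentication for sensitive accounts is time-based one-time passwords (TOTP, RFC 6238), which phished static passwords alone cannot bypass. A minimal standard-library sketch of the mechanism; the secret below is the RFC's published test key, shown purely for illustration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Compute an RFC 6238 TOTP code: HMAC-SHA1 over the current time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

Because the code changes every 30 seconds, a credential harvested through a fake portal expires almost immediately, blunting attacks like the one described above.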
- Collaboration with CERT-In: Continued engagement and coordination with authorities such as the Indian Computer Emergency Response Team (CERT-In) and its equivalents, to monitor new threats and obtain validated recommendations.
- Public Communication Strategies: Effective communication plans should be established to keep customers informed and maintain their trust even while an organisation is facing a cyber threat.
By implementing these measures, financial institutions can be better prepared for the new threats that AI-enabled cybercrime poses to essential financial assets in today's complex IT environments.
Introduction
India's National Commission for Protection of Child Rights (NCPCR) is set to approach the Ministry of Electronics and Information Technology (MeitY) to recommend mandating a KYC-based system for verifying children's age under the Digital Personal Data Protection (DPDP) Act. The decision to send these recommendations to MeitY was taken by NCPCR in a closed-door meeting held on August 13 with social media entities, where NCPCR emphasised proposing a KYC-based age verification mechanism. In this background, the Digital Personal Data Protection Act, 2023 treats anyone below the age of 18 as a child, and Section 9 mandates that verifiable parental consent be obtained before processing such children's personal data.
Requirement of Verifiable Consent Under Section 9 of DPDP Act
Regarding the processing of children's personal data, Section 9 of the DPDP Act, 2023, provides that for children below 18 years of age, consent from parents/legal guardians is required. The Data Fiduciary shall, before processing any personal data of a child or a person with a disability who has a lawful guardian, obtain verifiable consent from the parent or lawful guardian. Additionally, behavioural monitoring or targeted advertising directed at children is prohibited.
Ongoing debate on Method to obtain Verifiable Consent
Section 9 of the DPDP Act gives parents or lawful guardians more control over their children's data and privacy, and it empowers them to make decisions about how to manage their children's online activities/permissions. However, obtaining such verifiable consent from the parent or legal guardian presents a quandary. It was expected that the upcoming 'DPDP rules,' which have yet to be notified by the Central Government, would shed light on the procedure of obtaining such verifiable consent from a parent or lawful guardian.
However, in the meeting held on 18th July 2024 between MeitY and social media companies to discuss the upcoming Digital Personal Data Protection Rules (DPDP Rules), MeitY stated that it may not intend to prescribe a ‘specific mechanism’ for Data Fiduciaries to verify parental consent for minors using digital services. MeitY instead emphasised the obligations placed on Data Fiduciaries under Section 8(4) of the DPDP Act to implement “appropriate technical and organisational measures” to ensure effective observance of the provisions of the Act.
In a recent update, MeitY held a review meeting on DPDP rules, where they focused on a method for determining children's ages. It was reported that the ministry is making a few more revisions before releasing the guidelines for public input.
CyberPeace Policy Outlook
CyberPeace, in its policy recommendations paper published last month (available here), advised obtaining verifiable parental consent through methods such as government-issued ID, integration of parental consent at ‘entry points’ like app stores, consent forms, drawing insights from foreign laws such as the California privacy law and COPPA, and developing child-friendly SIMs for enhanced child privacy.
CyberPeace's policy paper also emphasised that, when deciding the method of obtaining verifiable consent, platforms must ensure that age verification is done without compromising user privacy. Balancing user privacy is a question of both technological capability and ethical consideration.
The DPDP Act is a new framework for protecting digital personal data that places obligations on Data Fiduciaries and grants rights to Data Principals. The upcoming ‘DPDP Rules’, expected to be notified soon, will define the detailed procedures for implementing the Act's provisions; MeitY is refining them before releasing them for public consultation. NCPCR's approach is aimed at ensuring child safety in the digital era. We hope MeitY arrives at a sound mechanism for obtaining verifiable consent from parents/lawful guardians after duly considering the recommendations put forth by various stakeholders, expert organisations, and concerned authorities such as NCPCR.
References
- https://www.moneycontrol.com/technology/dpdp-rules-ncpcr-to-recommend-meity-to-bring-in-kyc-based-age-verification-for-children-article-12801563.html
- https://pune.news/government/ncpcr-pushes-for-kyc-based-age-verification-in-digital-data-protection-a-new-era-for-child-safety-215989/#:~:text=During%20this%20meeting%2C%20NCPCR%20issued,consent%20before%20processing%20their%20data
- https://www.hindustantimes.com/india-news/ncpcr-likely-to-seek-clause-for-parents-consent-under-data-protection-rules-101724180521788.html
- https://www.drishtiias.com/daily-updates/daily-news-analysis/dpdp-act-2023-and-the-isssue-of-parental-consent

Introduction
The online lottery scam involves a scammer reaching out via email, phone, or SMS to inform you that you have won a significant amount of money in a lottery, instructing you to contact an ‘agent’ at a specific phone number or email address that actually belongs to the fraudster. Once contacted, the agent demands processing charges before the lottery reward can be claimed: payment is required upfront, whereas genuine rewards come at no cost. Additionally, such fraudulent ‘offers’ often carry phishing attacks, tricking users into clicking malicious links.
Modus Operandi
The common lottery fraud starts with a message stating that the receiver has won a large lottery prize. These messages are frequently crafted to imitate official correspondence from reputable institutions, sweepstakes, or foreign administrations. The scammers request the receiver to give personal information like name, address, and banking details, or to make a payment for taxes, processing fees, or legal procedures. After the victim sends the money or discloses their personal details, the scammers may vanish or persist in requesting more payments for different reasons.
Tactics and Psychological Manipulation
These fraudulent schemes rely largely on psychological manipulation. Fraudsters create a false sense of urgency, persuading victims that they must act quickly to claim the lottery prize. They also prey on people's hopes for a better life, convincing them that this unexpected windfall can change their destiny. Many people fall prey because the desire for wealth blinds them to the warning signs. Fraudsters also frequently use persuasive language and fabricated documentation that appears authentic, so users need to be extra cautious and learn to recognise the early signs of such online fraud.
Festive Season and Uptick in Deceptive Online Scams
As the festive season begins, there is a surge in deceptive online scams targeting unsuspecting internet users. Examples include free Navratri garba passes, quiz participation opportunities, coupons offering freebies, fake offers of cheap jewellery, counterfeit product sales, festival lotteries, fake lucky draws, and charity appeals. Most of these scams aim to lure victims for financial gain.
In 2023, CyberPeace released a research report on the Navratri festivities scam where we highlighted the ‘Tanishq iPhone 15 Gift’ scam which involved fraudsters posing as Tanishq, a well-known jewellery brand, and offering fake iPhone 15 as Navratri gifts. Victims were lured into clicking on malicious links. CyberPeace issued a detailed advisory within the report, highlighting that the public must exercise vigilance, scrutinise the legitimacy of such offers, and take precautionary measures to shield themselves from falling prey to such deceptive cyber schemes.
Preventive Measures for Lottery Scams
To avoid lottery scams, users should ignore messages or calls about supposed lottery wins, verify the source of any lottery and ask probing questions, keep sensitive personal details confidential, approach unexpected windfalls with scepticism, refuse upfront payment requests, and learn to recognise scammers' manipulative tactics. Users are also advised not to click on unsolicited lottery-prize links received in emails or messages, as such links can be phishing attempts. These practices help protect against scammers who pressure victims into acting quickly and falling prey to such scams.
Must-Know Tips to Prevent Lottery Scams
- It is advised to steer clear of any communication offering lotteries or giveaways; such offers are usually too good to be true.
- Do not transfer money to unknown individuals or entities without verifying their identity and credibility.
- If you have already given fraudsters your bank account details, alert your bank immediately.
- Report any such incident on the National Cyber Crime Reporting Portal at cybercrime.gov.in or via the Cyber Crime Helpline at 1930.