The Telecom Regulatory Authority of India (TRAI), on March 13 2023, published a new rule to regulate telemarketing firms. TRAI has demonstrated strictness when it comes to bombarding users with intrusive marketing pitches. In a report, TRAI stated that 10-digit mobile numbers could not be used for advertising; separate number series are, in fact, allotted for regular calls and for telemarketing calls. It is therefore an appropriate and much-needed move to suppress phishing scammers and secure the Indian cyber ecosystem at large.
What are the new rules?
The rules state that unregistered 10-digit mobile numbers used for promotional purposes will be shut down within the following five days. The directive banning promotional calls from unregistered mobile numbers was published on February 16, and promotional calling from such 10-digit numbers must end within five days of that date. The step comes roughly 6-8 months after the release of the Telecommunication Bill, 2022, which focuses on creating a stable Indian telecom market and reducing fraudulent calls and messages by bad actors, thereby curbing cyber crimes such as phishing. Routing promotional traffic through registered numbers makes it possible to distinguish legitimate calls from promotional ones. According to certain reports, some firms allegedly break the law by using 10-digit mobile numbers to make unwanted calls and send promotional messages. All telecom service providers must implement the requirements of the recent TRAI directive within five days.
How will the new rules help?
The promotional use of 10-digit mobile numbers had been permitted from the start. However, the latest NCRB report on cyber crimes, together with the rising incidence and reporting of frauds aimed at monetary gain by bad actors, points to the problem of unregulated promotional messaging. This move will be a critical step towards removing scammers from the cyber ecosystem. TRAI has been diligent in understanding the dynamics and shortcomings of regulating the telecom spectrum and networks in India and has shown keen interest in shutting down the technological channels used by scammers. The invention of a technology does not define its use; the policy around it does. It is therefore important to draft and enact policies that better regulate existing and emerging technologies.
What to avoid?
In pursuance of the rules enacted by TRAI, business owners running promotional services through 10-digit numbers must note the following:
It is against the law to utilise a 10-digit cellphone number for promotional calls.
You should stop doing so right now.
If you do not, your mobile number will be blocked within the next five days.
Employees of telemarketing firms are encouraged not to make promotional calls from their personal mobile numbers.
Promotional calls should be made only through the company’s registered mobile number.
Conclusion
Indian netizens were exposed to technology somewhat later than the western world. This changed drastically during the Covid-19 pandemic, as internet and technology penetration rates increased exponentially in just a couple of months. Bad actors have used this to their advantage, so it was imperative for the government and its institutions to take effective and efficient steps to safeguard people from financial fraud. Although these frauds occur in high numbers because of a lack of knowledge and awareness, we need to work on preventive solutions rather than merely precautionary steps, and the new rules by TRAI point towards a safe, secure and sustainable future for cyberspace in India.
Misinformation is rampant all over the world and is impacting people at large. In 2023, UNESCO commissioned a survey, conducted by IPSOS, on the impact of fake news. The survey covered 16 countries due to hold national elections in 2024, representing a combined 2.5 billion voters, and showed how pressing the need for effective regulation has become: 85% of respondents are apprehensive about the repercussions of online disinformation or misinformation. In light of these worries, UNESCO has introduced a plan to regulate social media platforms, which have become major sources of misinformation and hate speech online. This action plan is supported by the worldwide opinion survey, highlighting the urgent need for strong action. It outlines the fundamental principles that must be respected and concrete measures to be implemented by all stakeholders involved, i.e., governments, regulators, civil society and the platforms themselves.
The Key Areas in Focus of the Action Plan
The action plan focuses on protecting freedom of expression, access to information and other human rights in digital platform governance. It works on the basic premise that the impact on human rights should be the compass for all decision-making, at every stage and by every stakeholder. It calls for groups of independent regulators to work in close coordination as part of a wider network, so that digital companies cannot take advantage of disparities between national regulations, and for content moderation to be feasible and effective at the required scale, in all regions and all languages.
The algorithms of these online platforms, particularly social media platforms, are too often geared towards maximizing engagement rather than the reliability of information. Platforms are expected to take more initiative in educating and training users to be critical thinkers rather than passive consumers. Regulators and platforms are also in a position to take strong measures during particularly sensitive periods, ranging from elections to crises, when information overload is at its worst.
Key Principles of the Action Plan
Human Rights Due Diligence: Platforms are required to assess their impact on human rights, including gender and cultural dimensions, and to implement risk mitigation measures. This would ensure that the platforms are responsible for educating users about their rights.
Adherence to International Human Rights Standards: Platforms must align their design, content moderation, and curation with international human rights standards. This includes ensuring non-discrimination, supporting cultural diversity, and protecting human moderators.
Transparency and Openness: Platforms are expected to operate transparently, with clear, understandable, and auditable policies. This includes being open about the tools and algorithms used for content moderation and the results they produce.
User Access to Information: Platforms should provide accessible information that enables users to make informed decisions.
Accountability: Platforms must be accountable to their stakeholders, including users and the public, ensuring that redressal for content-related decisions is not compromised. This accountability extends to the implementation of their terms of service and content policies.
Enabling Environment for the application of the UNESCO Plan
The UNESCO Action Plan to counter misinformation aims to create an environment in which freedom of expression and access to information flourish, while ensuring safety and security for digital platform users and non-users alike. This endeavour calls for collective action: societies as a whole must work together, enabling all relevant stakeholders, from vulnerable groups to journalists and artists, to exercise their right to expression.
Conclusion
The UNESCO Action Plan is a response to the dilemma created by information overload, particularly because the distinction between information and misinformation has become so clouded. The IPSOS survey reveals the urgency of addressing these challenges for users who fear the repercussions of misinformation.
The UNESCO action plan provides a comprehensive framework that prioritises the protection of human rights, particularly freedom of expression, while underscoring the importance of transparency, accountability, and education in the governance of digital platforms. By advocating for independent regulators and encouraging platforms to align with international human rights standards, UNESCO is setting the stage for a more responsible and ethical digital ecosystem.
The recommendations include integrating regulators through collaboration and promoting global cooperation to harmonize regulations, expanding digital literacy campaigns to educate users about misinformation risks and online rights, ensuring inclusive access to diverse content in multiple languages and contexts, and monitoring and refining technological advancements and regulatory strategies as challenges evolve, with the ultimate aim of promoting a trustworthy online information landscape.
In recent years, the city of Hyderabad/Cyberabad has emerged as a technology hub. With a strong presence of multinational corporations, startups, and research institutions, Hyderabad has become a centre of innovation and technological advancement. However, this growing land of cyber opportunities has also become a hub for cybercriminals. In this blog post, we explore the reasons why professionals are being targeted and the effects of cyber fraud on techies. Through this investigation, we hope to raise awareness about the seriousness of the problem and give Cyberabad’s technology workers vital insights and techniques to defend themselves against cyber fraud. Together, we can make Cyberabad’s technology ecosystem safer and more secure.
Defining Cyber Fraud
In today’s interconnected digital world, cyber fraud cases are increasing daily. Cyber fraud encompasses a wide range of threats and techniques employed by bad actors, such as phishing, ransomware, identity theft, online scams, data breaches, and fake websites designed to deceive users. The sophistication of cyber fraud techniques is constantly evolving, making it challenging for individuals and organisations to stay ahead. Cybercriminals exploit software vulnerabilities, social engineering tactics, and flaws in cybersecurity defences to carry out their harmful operations. Individuals and organisations must grasp these dangers and tactics to protect themselves against cyber fraud.
Impact of Cyber Frauds
The consequences of falling victim to cyber fraud can be devastating, both personally and professionally. The emotional and financial toll on individuals can be severe. Identity theft may lead to damaged credit scores, fraudulent transactions, and years of recovery work to rehabilitate one’s reputation. Financial fraud can result in depleted bank accounts, unauthorised charges, and substantial monetary losses. Furthermore, being tricked and violated in the digital environment can generate anxiety, stress, and a loss of confidence.
The impact of cyber fraud goes beyond immediate financial losses and can have long-term consequences for the overall well-being and stability of individuals and organisations. As the threat environment evolves, it is critical for people and organisations to recognise the gravity of these repercussions and take proactive steps to protect themselves against cyber fraud.
Why are Cyberabad Tech Professionals Targeted?
Tech professionals in Cyberabad are particularly vulnerable to cyber fraud for various reasons. Firstly, their expertise and knowledge in technology make them attractive targets for cybercrooks: they possess valuable coding, software, and administration skills that cybercriminals seek to exploit.
Secondly, the nature of their work involves extensive use of technology, including constant internet connectivity, email exchanges, and access to private information. This expanded digital presence exposes them to potential cyber dangers and makes them more susceptible to fraudsters’ social engineering efforts. Furthermore, the fast-moving nature of the tech industry, with tight deadlines and pressure to deliver, creates distraction. Under such pressure, they may click on malicious links or share sensitive information unknowingly; all of these factors allow cybercriminals to exploit vulnerabilities.
Unveiling the Statistics
According to various reports, around 80% of cyber fraud victims in Hyderabad are techies; the rest are members of the general public targeted by cyber crooks. This striking figure emphasises the critical need to address the vulnerabilities and threats this specific segment of the IT community faces.
Digging further into the data gives insight into the many forms of cyber fraud targeting tech workers, the strategies used by cybercriminals, and the impact these incidents have on individuals and organisations. Examining specific features and patterns within the data can provide important information for developing effective prevention and protection methods.
Factors Contributing
Several factors contribute to the elevated risk of cyber fraud among ICT professionals in Cyberabad. Understanding these factors helps explain why this group is specifically targeted and may be more vulnerable to such attacks.
Technical Expertise: Tech workers frequently have specialised technical knowledge, but this knowledge does not always extend to cybersecurity. Their primary focus is writing software, designing systems, or implementing technologies, which may result in overlooking potential vulnerabilities or a lack of overall cybersecurity awareness.
Confidence in Technology: IT workers tend to place a higher level of confidence in technology because of their knowledge of, and dependence on, it. This trust can sometimes make them more vulnerable to sophisticated frauds or social engineering approaches that prey on their faith in the services they use.
Time Constraints and Pressure: Tech workers frequently operate under tight deadlines and tremendous pressure to reach project milestones. This may result in hurried decision-making or disregarding possible warning signals of cyber fraud, rendering them more exposed to assaults that prey on time-sensitive circumstances.
Access to Valuable Data: Cybercriminals know that technology workers hold valuable knowledge, trade secrets, and intellectual property that may be economically profitable. As a result, they are attractive targets for attacks aimed at stealing sensitive data or gaining unauthorised access to critical systems.
Tech professionals can safeguard their personal and professional data by following these simple best practices:
Strong Passwords: Create strong, unique passwords for all your online accounts and change them regularly.
MFA (Multi-Factor Authentication): Enable MFA wherever possible. This provides an extra degree of protection by requiring a second form of verification, such as a code sent to your mobile device in addition to your password.
Use Secured Wi-Fi: Use secure, encrypted Wi-Fi networks, especially while accessing sensitive information. Avoid connecting to public or unprotected networks, as they can be readily exploited.
Recognising Red Flags and Staying Ahead
Social Engineering: Be sceptical of unwanted solicitations or offers, both online and offline. Cybercriminals may try to persuade or fool you using social engineering tactics. Before revealing any personal or private information, think critically and confirm the veracity of the request.
Secure Web Browsing: Only browse trustworthy websites with valid SSL certificates (look for “https://” in the URL); a small sketch of such a check follows this list. Avoid clicking on suspicious links or downloading files from unknown sources, since they may contain malware or ransomware.
Report Suspicious Activity: If you encounter any suspicious or fraudulent activity, report it to the relevant authorities, such as the Cyber Crime Police or your organisation’s IT department. Reporting incidents can help avoid further harm and aid in identifying and apprehending the perpetrators.
Stay Current on Security Practices: Stay up to speed on the newest cybersecurity risks and best practices. Follow credible sources, participate in cybersecurity forums or seminars, and keep up with new threats and preventive measures.
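As a small illustration of the secure-browsing tip above, the following Python sketch checks that a URL uses HTTPS and that the server presents a TLS certificate that verifies and is not about to expire. It is a minimal sketch using only the standard library; the hostname shown is an example placeholder, and a real browser or security tool performs far more extensive checks.

```python
# Minimal sketch: confirm a URL uses HTTPS and that the server's TLS certificate
# verifies and is not close to expiry. Illustrative only, not a complete security tool.
import socket
import ssl
from datetime import datetime, timezone
from urllib.parse import urlparse

def check_url(url: str, warn_days: int = 14) -> None:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        print(f"WARNING: {url} does not use HTTPS")
        return
    host = parsed.hostname
    context = ssl.create_default_context()  # verifies the certificate chain and hostname
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
    except ssl.SSLCertVerificationError as err:
        print(f"WARNING: certificate for {host} failed verification: {err}")
        return
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{host}: certificate verified, expires in {days_left} days")
    if days_left < warn_days:
        print("WARNING: certificate expires soon")

check_url("https://www.example.com")  # example hostname; replace with the site you want to check
```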
Conclusion
The rise in cybercrime and fraud cases among tech experts in Cyberabad is a disturbing trend that requires prompt intervention. By adopting proactive measures, raising awareness, and encouraging cooperation, we can establish a safer tech cluster that thrives on creativity, trust, and resilience. Let us work together to prevent cybercrime and secure the future of Cyberabad’s IT ecosystem.
Generative AI models are significant consumers of the computational resources and energy required for training and running models. While AI is being hailed as a game-changer, beneath the shiny exterior are cracks that raise significant concerns about its environmental impact. The development, maintenance, and disposal of AI technology all come with a large carbon footprint. Large-scale language and image generation models in particular rely on data centers powered by electricity, often from non-renewable sources, which exacerbates environmental concerns and contributes to substantial carbon emissions.
As AI adoption grows, improving energy efficiency becomes essential. Optimising algorithms, reducing model complexity, and using more efficient hardware can lower the energy footprint of AI systems. Additionally, transitioning to renewable energy sources for data centers can help mitigate their environmental impact. There is a growing need for sustainable AI development, where environmental considerations are integral to model design and deployment.
A breakdown of how generative AI contributes to environmental risks and the pressing need for energy efficiency:
During the training phase, generative AI has high power consumption: vast amounts of computation, often running on extensive GPU clusters for weeks or even months, consume a substantial amount of electricity. After this phase comes the inference phase, in which the models are deployed to serve real-time requests; this too can be energy-intensive, especially when the millions of users of generative AI are taken into account.
The energy used for training and deploying AI models often comes from non-renewable sources, which contributes to the carbon footprint. The data centers where the computations for generative AI take place are a significant source of carbon emissions if they rely on fossil fuels to meet their energy needs for training and deployment. According to a study by MIT, training a single AI model can produce emissions equivalent to around 300 round-trip flights between New York and San Francisco. According to a report by Goldman Sachs, data centers will use 8% of US power by 2030, compared with 3% in 2022, as their energy demand grows by 160%.
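A rough back-of-envelope check of the Goldman Sachs figures, assuming (purely for illustration) that total US electricity demand stays roughly flat to 2030, shows that a 160% rise in data center demand from a 3% share is indeed consistent with a share of roughly 8%:

```python
# Back-of-envelope check of the cited Goldman Sachs figures (illustrative, not from the report).
# Assumption for this sketch: total US electricity demand stays roughly flat to 2030.
share_2022 = 0.03      # data centers' share of US power in 2022
demand_growth = 1.60   # projected 160% growth in data center energy demand by 2030

share_2030 = share_2022 * (1 + demand_growth)
print(f"Implied 2030 share: {share_2030:.1%}")  # ~7.8%, in line with the reported ~8%
```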
The production and disposal of hardware (GPUs, servers) necessary for AI contribute to environmental degradation. Mining for raw materials and disposing of electronic waste (e-waste) are additional environmental concerns. E-waste contains hazardous chemicals, including lead, mercury, and cadmium, that can contaminate soil and water supplies and endanger both human health and the environment.
Efforts by the Industry to reduce the environmental risk posed by Gen AI
There are several examples of companies making efforts to reduce their carbon footprint, cut energy consumption and, in the long run, become more environmentally friendly. Some of these efforts are outlined below:
Google’s Tensor Processing Units (TPUs) are designed specifically for machine learning tasks and offer a higher performance-per-watt ratio than traditional GPUs, leading to more efficient AI computations, particularly during short periods of peak demand.
Researchers at Microsoft, for instance, have developed a so-called “1-bit” architecture that can make LLMs 10 times more energy efficient than the current leading systems. This approach simplifies the models’ calculations by reducing weight values to 0 or 1, slashing power consumption without sacrificing performance.
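To illustrate the general idea behind such low-bit quantisation (this is only a toy numpy sketch, not Microsoft’s architecture), full-precision weights can be replaced by one-bit sign values plus a single scale factor, so the matrix multiply reduces to additions, subtractions and one rescale:

```python
# Toy sketch of low-bit weight quantisation (illustrative only, not Microsoft's "1-bit" system).
# Full-precision weights become sign values {-1, +1} plus one scale factor per matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)   # full-precision layer weights
x = rng.normal(size=256).astype(np.float32)          # an input activation vector

scale = np.mean(np.abs(W))            # one scalar kept from the original weights
W_bin = np.sign(W).astype(np.int8)    # 1-bit weights: -1 or +1

y_full = W @ x                                     # reference output
y_quant = scale * (W_bin.astype(np.float32) @ x)   # approximate output from 1-bit weights

rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(f"relative error of the 1-bit approximation: {rel_err:.2f}")
print(f"weight memory: {W.nbytes} bytes (float32) vs {W_bin.nbytes} bytes (int8 container)")
```

In practice such models are trained with these constraints so that accuracy is retained; the sketch only shows the storage and compute simplification.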
OpenAI has been working on optimising the efficiency of its models and exploring ways to reduce the environmental impact of AI, including using renewable energy as much as possible and researching more efficient training methods and model architectures.
Policy Recommendations
We advocate for a sustainable product development process and stress the need for energy efficiency in AI models to counter their environmental impact. These improvements would not only be better for the environment but would also contribute to the broader, sustainable development of generative AI. Some suggestions are as follows:
AI development needs to adopt a climate justice framework informed by diverse contexts and perspectives, working in tandem with the UN’s Sustainable Development Goals (SDGs).
Developing more efficient algorithms that require less computational power for both training and inference can reduce energy consumption. Designing more energy-efficient hardware, such as specialized AI accelerators and next-generation GPUs, can further mitigate the environmental impact.
Transitioning to renewable energy sources (solar, wind, hydro) can significantly reduce the carbon footprint associated with AI. Responsible hardware lifecycles also matter: the World Economic Forum (WEF) projects that by 2050 the total amount of e-waste generated will have surpassed 120 million metric tonnes.
Employing techniques like model compression, which reduces the size of AI models without sacrificing performance, can lead to less energy-intensive computations; a small sketch of one such technique appears after this list. Optimized models are faster and require less hardware, thus consuming less energy.
Implementing federated learning approaches, where models are trained across decentralized devices rather than centralized data centers, can distribute the energy load more evenly and reduce the overall environmental impact.
Enhancing the energy efficiency of data centers through better cooling systems, improved energy management practices, and the use of AI for optimizing data center operations can contribute to reduced energy consumption.
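As a concrete illustration of the model-compression recommendation above, here is a minimal numpy sketch of magnitude pruning, one common compression technique: the weights with the smallest absolute values are zeroed out so a sparse-aware runtime can skip them. It is only an illustration of the idea, not a production method.

```python
# Minimal sketch of magnitude pruning, one common model-compression technique.
# Weights with the smallest absolute values are zeroed; a sparse-aware runtime can then
# skip them, reducing memory traffic and arithmetic (and therefore energy).
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(42)
W = rng.normal(size=(512, 512)).astype(np.float32)

W_pruned = magnitude_prune(W, sparsity=0.8)          # keep only the largest 20% of weights
kept = np.count_nonzero(W_pruned) / W_pruned.size
print(f"fraction of weights kept: {kept:.2f}")       # ~0.20
# In practice the model is fine-tuned after pruning so that accuracy is recovered.
```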
Final Words
The UN Sustainable Development Goals (SDGs) are as crucial for the AI industry as for any other, because they guide responsible innovation. Aligning AI development with the SDGs will ensure ethical practices, promoting sustainability, equity, and inclusivity. This alignment fosters global trust in AI technologies, encourages investment, and drives solutions to pressing global challenges such as poverty, education, and climate change, ultimately creating a positive impact on society and the environment. At present, AI consumes enormous amounts of power without using that power efficiently. AI and its derivatives are stressing the environment in a manner that, if it continues, will strain clean water resources and other power generation sources, adding to the already large carbon footprint of the AI industry as a whole.