From Equitable Growth to Sustainability, AI and Digital Twins
The Equitable Growth Approach of AI and Digital Twins
Digital Twins can be described simply as virtual replicas of physical assets or systems, powered by real-time data and advanced simulations. Combined with AI, the technology becomes even more powerful, enabling real-time monitoring, predictive maintenance, optimised operations, and improved design processes. The greatest value of AI is its ability to make data actionable: paired with digital twins, data can be collated and analysed, inefficiencies removed, and better decisions taken to improve efficiency and quality.
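To make the idea concrete, here is a minimal, illustrative sketch of a digital twin paired with a simple predictive-maintenance rule. The pump asset, the vibration readings, and the alert threshold are all invented for illustration; a real deployment would stream sensor data and use far richer models.

```python
from statistics import mean

class PumpDigitalTwin:
    """Toy digital twin of a hypothetical pump.

    Mirrors the physical asset's state from streamed sensor readings and
    raises a predictive-maintenance flag when vibration drifts well above
    the learned baseline. All thresholds here are illustrative assumptions.
    """

    def __init__(self, baseline_window=5, alert_factor=1.5):
        self.readings = []                    # vibration readings (mm/s)
        self.baseline_window = baseline_window
        self.alert_factor = alert_factor

    def ingest(self, vibration_mm_s):
        """Update the virtual replica with a new real-time reading."""
        self.readings.append(vibration_mm_s)

    def needs_maintenance(self):
        """Simple rule: alert once the latest reading exceeds the baseline."""
        if len(self.readings) <= self.baseline_window:
            return False                      # not enough history yet
        baseline = mean(self.readings[:self.baseline_window])
        return self.readings[-1] > self.alert_factor * baseline

twin = PumpDigitalTwin()
for v in [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 3.6]:  # simulated sensor feed
    twin.ingest(v)
print(twin.needs_maintenance())  # prints True: the 3.6 mm/s spike exceeds 1.5x baseline
```

Even this toy version captures the core loop the blog describes: the twin continuously mirrors the asset, and an analytic layer on top turns the mirrored data into an actionable decision.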
This intersection between AI and Digital Twins holds immense potential for addressing key challenges, particularly in countries like India, which is rapidly embracing digital adoption to achieve its economic ambitions and sustainability goals. According to Salesforce’s most recent survey on generative AI use among the general population in the U.S., UK, Australia and India, 75% of generative AI users are looking to automate repetitive tasks and use generative AI for work communications. This blog discusses the intersection of equitable growth, sustainability, and AI-driven policies in India.
Sustainability and the Path Ahead: Digital Twin and AI-Driven Solutions
India faces sustainability challenges mainly associated with urban congestion, rising energy demand, climate change, and environmental degradation. AI and Digital Twins offer solutions through real-time simulation and predictive analysis. Examples include sustainable urban planning, as seen in smart cities like the Indore Smart City Initiative, and traffic optimisation; energy efficiency through AI-driven renewable energy projects and power grid optimisation; and water resource management through leak detection, equitable distribution, and conservation.
The need is to balance innovation with regulation, underscoring the importance of ethical and sustainable deployment of AI and digital twins and addressing data privacy alongside AI ethics. Recent developments in India’s evolving AI policy landscape include the National Strategy for Artificial Intelligence, with its focus on ‘AI for All’, and regulatory frameworks such as the DPDP Act, which address AI ethics, data privacy, and digital governance.
The need is to initiate targeted policies that promote research and development in AI and digital twin technologies, skill development, and partnerships with the private sector, think tanks, nonprofits and others. Collaboration at the global level would include aligning domestic policies with global AI and sustainability initiatives and leveraging international frameworks for climate tech and smart infrastructure.
Cyberpeace Outlook
As part of specific actions, policymakers need to engage in proactive governance to ensure the responsible use and development of AI. This includes enacting incentive schemes for sustainable AI projects and strengthening the enforcement of data privacy laws. Industry leaders must support equitable access to AI and digital twin technologies and develop tailored AI tools for resource-constrained settings, particularly in India. Finally, researchers need to drive innovation in alignment with sustainability goals, such as those related to agriculture and groundwater management.
References
- https://economictimes.indiatimes.com/tech/artificial-intelligence/technologies-like-ai-and-digital-twins-can-tackle-challenges-like-equitable-growth-to-sustainability-wef/articleshow/117121897.cms
- https://www.salesforce.com/news/stories/generative-ai-statistics/
- https://www.mdpi.com/2673-2688/4/3/38
- https://www.ibm.com/think/topics/generative-ai-for-digital-twin-energy-utilities
Related Blogs

Introduction
With technology ever-growing and cyber-crimes on the rise, a new cyber-attack is spreading, and it is not in your inbox or on your computer: it targets your phone, especially your smartphone. Cybercriminals are expanding their reach in India with a new text-messaging fraud targeting individuals. The Indian Computer Emergency Response Team (CERT-In) has warned against "smishing," or SMS phishing.
Understanding Smishing
Smishing is a combination of the terms "SMS" and "phishing." It entails sending false text messages that appear to be from reputable sources such as banks, government organizations, or well-known companies. These communications frequently generate a feeling of urgency in their readers, prompting them to click on harmful links, expose personal information, or conduct financial transactions.
When hackers "phish," they send out phony emails in the hope of tricking the receiver into clicking on a dangerous link. Smishing simply uses text messaging rather than email. In essence, these hackers are out to steal your personal information to commit fraud or other cybercrimes. This generally entails stealing money, usually your own, but occasionally also your employer's.
Cybercriminals typically use the following tactics to lure victims and steal their information:
- Malware: The crooks send a smishing URL that might trick you into downloading malicious software onto your phone. This SMS malware may masquerade as legitimate software, deceiving you into entering sensitive information and transmitting it to the criminals.
- Malicious website: The URL in the smishing message may direct you to a bogus website that seeks sensitive personal information. Cybercriminals employ custom-made rogue sites designed to look like legitimate ones, making it easier to steal your information.
Smishing text messages often appear to be from your bank, asking you to share sensitive personal information, ATM numbers, or account details. Mobile device cybercrime is increasing along with mobile device usage. Aside from the fact that texting is the most prevalent use of smartphones, a few additional aspects make this an especially pernicious security issue. Let's go over how smishing attacks operate.
Modus Operandi
The crooks commit the fraud via SMS. Because attackers assume the identity of someone trusted, they can use social engineering techniques to sway a victim's decision-making. Three factors drive this deception:
- Trust: By posing as a legitimate individual or organisation, cybercriminals naturally lower a person's defences against threats.
- Context: Using a circumstance relevant to the target helps an attacker build an effective disguise. The message feels personalised, which helps it overcome any assumption that it is spam.
- Emotion: The tone of the SMS is critical; it makes the victim believe the matter is urgent and requires rapid action. Using these tactics, attackers craft messages that compel the receiver to act.
Typically, attackers want the victim to click a URL in the text message, which leads to a phishing tool that asks for sensitive information. This tool frequently takes the form of a website or app that likewise assumes a phony identity.
How does Smishing Spread?
As noted earlier, smishing attacks are delivered through ordinary text messages and primarily appear to come from known sources. People are less careful while on their phones, and many believe their cell phones are more secure than their desktops. However, smartphone security has limits and cannot always guard against smishing directly.
Phones are the target: while Android smartphones dominate the market and are a prime target for malicious text messages, iOS devices are just as vulnerable. Although Apple's iOS has a strong reputation for security, no mobile operating system can protect you from phishing-style attacks on its own. A false feeling of security, regardless of platform, can leave users especially exposed.
Kinds of smishing attacks
Some common types of smishing attacks include:
- COVID-19 Smishing: In April 2020, the Better Business Bureau observed an increase in reports of US government impersonators sending text messages asking consumers to take an obligatory COVID-19 test via a linked website. Variations of these smishing attacks emerge readily, as preying on pandemic fears is an effective way of victimising the public.
- Gift Smishing: This kind of smishing offers free services or products, giveaways, or shopping rewards, supposedly from a reputable company. Attackers frame the offer as limited-time or exclusive, and the offers are so lucrative that recipients get excited and fall into the trap.
CERT Guidelines
CERT-In shared some steps to avoid falling victim to smishing.
- Never click on any suspicious link in SMS or social media chats or posts.
- Use online resources to validate shortened URLs.
- Always check the link before clicking.
- Use updated antivirus and antimalware tools.
- If you receive any suspicious message pretending to be from a bank or institution, immediately contact the bank or institution.
- Use a separate email account for personal online transactions.
- Enforce multi-factor authentication (MFA) for emails and bank accounts.
- Keep your operating system and software updated with the latest patches.
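CERT-In's advice to validate shortened URLs and check links before clicking can be illustrated with a toy heuristic checker. The shortener list and keyword patterns below are illustrative assumptions, not a real reputation feed; genuine protection should rely on up-to-date threat-intelligence services.

```python
import re
from urllib.parse import urlparse

# Hypothetical heuristics for illustration only.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}   # common examples
SUSPICIOUS = re.compile(r"(login|verify|update|kyc|otp)", re.IGNORECASE)

def smishing_risk_signals(url):
    """Return a list of simple red flags found in a URL from an SMS."""
    signals = []
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc.lower()
    if host in SHORTENERS:
        signals.append("shortened URL: expand and inspect before clicking")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host):
        signals.append("raw IP address instead of a domain name")
    if SUSPICIOUS.search(parsed.path + parsed.query):
        signals.append("credential-related keyword in path")
    if host.count("-") >= 2 or host.count(".") >= 3:
        signals.append("unusually complex hostname")
    return signals

print(smishing_risk_signals("http://bit.ly/x9z"))
print(smishing_risk_signals("http://192.168.4.7/verify-otp"))
```

A clean link such as `https://example.com/home` returns no signals, while the two examples above are flagged; any flagged link deserves the manual verification steps CERT-In recommends.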
Conclusion
Smishing uses fraudulent mobile text messages to trick people into downloading malware, sharing sensitive data, or paying money to cybercriminals. With the latest technological developments, it has become vital to stay vigilant in the digital era: protect not only your computers but also the devices that fit in the palm of your hand. CERT-In's warning plays a vital role here, and awareness and best practices remain pivotal in safeguarding yourself from evolving threats.
References
- https://www.ndtv.com/india-news/government-warns-of-smishing-attacks-heres-how-to-stay-safe-4709458
- https://zeenews.india.com/technology/govt-warns-citizens-about-smishing-scam-how-to-protect-against-this-online-threat-2654285.html
- https://www.the420.in/protect-against-smishing-scams-cert-in-advice-online-safety/
Introduction
In an era where digital connectivity drives employment, investment, and communication, the most potent weapon of cybercriminals is ‘gaining trust’ with their sophisticated tactics. Prayagraj has been a recent battleground in India's cybercrime landscape. Within a one-year crackdown, over 10,400 SIM cards, 612 mobile device IMEIs, and 59 bank accounts were blocked, exposing a sprawling international fraud network. These activities primarily targeted unsuspecting individuals through Telegram job postings, fake investment tips, and mobile app scams, highlighting the darker side of convenience in cyberspace. With India now experiencing a wave of scams enabled by technology, this crackdown establishes a precedent for concerted cyber policing and awareness among citizens.
Digital Deceit: How the Scams Operated
SIM cards issued through fake or stolen identities are increasingly being used by cybercriminals in Prayagraj and elsewhere. These SIMs were the initial weapon in a highly organised fraud system, allowing criminals to operate anonymously while abusing messaging services like WhatsApp and Telegram. The gangs involved in these scams, some of which have been linked by reports to Nepal, Pakistan, China, Dubai, and Myanmar, enticed their victims with high-yield stock market tips, remote employment offers, and weekend employment promises. Once a target was engaged, victims were slowly manipulated into sending money in the name of application fees, verification fees, or investment contributions.
API Abuse and OTP Interception
What is more alarming about these scams is their tech-savviness. According to Prayagraj's cybercrime squad, several syndicates employed API-based mobile applications to intercept OTPs (One-Time Passwords) sent to Indian numbers. Such apps, cleverly disguised as genuine services or work-from-home software, collected personal details like bank account credentials and payment card data, allowing wrongdoers to carry out unauthorised transactions in a matter of minutes. The pilfered funds were then quickly transferred through several mule accounts, rendering the money trail almost untraceable.
The Human Impact: How Citizens Were Trapped
Victims tended to come from job-hunting groups, students, or housewives seeking to earn additional income. Often, the scammers persuaded users to join Telegram channels providing free investment advice or job-referral-based schemes, creating an illusion of authenticity. Once on board, victims were sometimes even paid small commissions initially, creating a false sense of success. This tactic, known as “advance-fee confidence building,” made victims more likely to invest larger sums later, ultimately leading to complete financial loss.
Digital Arrest Threats and Bitcoin Ransom Scams
Aside from investment and job scam complaints, the cybercrime cell also saw several "digital arrest" scams, where victims were forced to send money under the threat of engaging in criminal activities. Bitcoin extortion schemes were also used in some cases, with perpetrators threatening exposure of victims' personal information or browsing history on the internet unless they were paid in cryptocurrency.
Law Enforcement’s Cyber Shield: Local Action, Global Impact
Identifying the extent of the threat, Prayagraj authorities implemented strategic measures to enable local policing. Cyber Units have been formed in each of the 43 police stations in the district, each made up of a sub-inspector, head constable, constable, lady constable, and computer operator. This decentralised model enables response in real-time, improved victim support, and quicker forensic analysis of hacked devices. The nodal officer for cyber operations said that this multi-level action is not punitive but preventive, meant to break syndicates before more harm is caused.
CyberPeace Recommendations: Prevention is Power
As cybercrime gets advanced, citizens will also have to keep pace with it. Prayagraj's experience highlights the importance of public awareness, digital literacy, and instant response processes. To assist in preventing people from falling victim to such scams, CyberPeace advises the following:
- Don't click on dubious APK links sent on WhatsApp or Telegram.
- Do not share OTPs or confidential details, even if the source appears to be familiar.
- Never download unfamiliar apps that demand access to SMS or financial information.
- Block your SIM card, payment cards, and bank accounts at once if your phone is stolen.
- Report all cyber frauds to cybercrime.gov.in or your local Cyber Cell.
- Never join investment or job groups on social sites without verification.
- Refuse video calls from unknown numbers; some scammers use this method of recording or blackmailing victims.
Conclusion
The Prayagraj crackdown uncovers both the magnitude and the adaptability of present-day cybercrime. From trans-border cartels to Telegram job scams, the cyber front is as intricate as ever. But this incident also illustrates what can be achieved when technology, law enforcement, and public awareness come together. To stay safe from cyber threats, a cyber-conscious citizenry is as important to India as an effective cyber cell. At CyberPeace, we know that defending cyberspace begins with cyber resilience, and the story of Prayagraj should encourage communities everywhere to take active digital precautions.
References
- https://www.hindustantimes.com/cities/lucknow-news/over-10k-sims-blocked-as-job-investment-frauds-rise-in-prayagraj-101753715061234.html
- https://consumer.ftc.gov/articles/how-recognize-and-avoid-phishing-scams
- https://faq.whatsapp.com/2286952358121083
- https://education.vikaspedia.in/viewcontent/education/digital-litercy/information-security/preventing-online-scams-cert-in-advisory?lgn=en
- https://cybercrime.gov.in/Accept.aspx
- https://www.linkedin.com/pulse/perils-advance-fee-fraud-protecting-yourself-from-scammers-sharma/

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can quickly transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals is able to rapidly amplify the spread of an infection across a huge population. The fact that a handful of accounts are able to impact and influence so many is attributed to a number of factors: large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, running from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gains. Given the exponential reach of these accounts, identifying, tracing and categorising such accounts as the sources of misinformation can be tricky. It can be equally difficult to actually recognise the content they spread for the misinformation that it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary cause of widespread misinformation about different topics. A study[1] by a team of social media analysts at Indiana University found that a significant portion of tweets spreading misinformation are sent by a small percentage of a given user base. The researchers collected 10 months of data, amounting to 2,397,388 tweets flagged as having low credibility, sent by 448,103 users on Twitter (now X), along with details of who was sending them. They found that approximately a third of the low-credibility tweets had been posted from just 10 accounts, and that just 1,000 accounts were responsible for approximately 70% of such tweets.[2] The study concludes that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an impact the researchers attribute to superspreaders.
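The headline finding, that a handful of accounts produce most low-credibility tweets, rests on a simple per-account count. A toy version of that measurement might look like the sketch below; the dataset here is fabricated for illustration and the function name is our own, not the study's code.

```python
from collections import Counter

def top_k_share(tweet_authors, k):
    """Fraction of flagged tweets contributed by the k most prolific accounts.

    `tweet_authors` has one entry per low-credibility tweet, naming the
    account that posted it, mirroring the per-account counting the
    Indiana University study describes.
    """
    counts = Counter(tweet_authors)                    # tweets per account
    top = sum(n for _, n in counts.most_common(k))     # top-k account total
    return top / len(tweet_authors)

# Toy dataset: two accounts dominate, plus many one-off posters.
authors = ["acct_a"] * 40 + ["acct_b"] * 25 + [f"acct_{i}" for i in range(35)]
print(round(top_k_share(authors, 2), 2))  # prints 0.65: two accounts post 65%
```

Run over the real dataset with k = 10 or k = 1000, the same calculation yields the roughly one-third and 70% shares the study reports.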
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of misinformation related to the election, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important when any grey areas or gaps in information can be manipulated so quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (Now X). Some prominent accounts or popular pages on platforms like Facebook and Twitter(now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
As per the Center for Countering Digital Hate (US), the "disinformation dozen," a group of 12 prominent anti-vaccine accounts[4], were found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in influencing public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that was not shared by the original source but merely propagated by amplifiers, drawing on other sources, websites, or YouTube videos that aid dissemination. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not have to be the ones creating or deliberately popularising the misinformation, but their broad reach exposes many more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause volatility in markets, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos, and fake endorsements, with multiple social media profiles providing unsanctioned investing advice and directing followers to particular channels. This leads investors into dangerous financial decisions. The issue intensifies when scammers employ deepfake videos of notable personalities to boost their reputation and can actually shape people’s financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can use current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can lead to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. There are reported incidents where bots were found to be the sharers of content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. Bots, by automating the distribution of misleading information, can make it impossible to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media as it might attempt to provoke strong reactions or to mold public opinions. Netizens must consider questioning the credibility of information, verifying its sources, and developing cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious social media accounts, misinformation superspreader accounts and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated or fake. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guidelines violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, detecting malevolent bots that spread misleading information, and implementing prebunking and debunking strategies. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, and report suspect content. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts and the government to combat misinformation warfare on the Internet.
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html#google_vignette [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html