India to come up with AI Regulation Framework
Introduction
India plans to draft its first AI regulation framework. Union Minister of Skill Development and Entrepreneurship Rajeev Chandrasekhar stated that the draft will be discussed and debated in June-July this year. The government aims to fully harness AI for economic growth, with a particular focus on healthcare, drug discovery, agriculture, and farmer productivity.
Government Approach to Regulating AI
Chandrasekhar stated that the government's approach to AI regulation involves establishing principles and a comprehensive list of harms and criminalities. The government prefers clear platform standards that address bias and misuse during model training, rather than regulating AI at specific stages of its development. He also highlighted the importance of legal compliance and the risks faced by entrepreneurs who disregard regulations in the digital economy, warning of "severe consequences" for non-compliance.
Addressing the opening session of the two-day Nasscom leadership summit in Mumbai, the Union minister added that the intention is to harness AI for economic growth and address potential risks and harms. Mr. Chandrasekhar stated that the government is committed to developing AI-skilled individuals. He also highlighted the importance of a global governance framework that deals with the safety and trust of AI.
Union Minister Chandrasekhar also said that 900 million Indians are already online and that 1.3 billion people will soon be connected to the global internet, giving India both an opportunity and a responsibility to collaborate on regulations that establish legal safeguards protecting consumers and citizens. He further added that the framework is being retrofitted to address the complexity and impact of AI on safety infrastructure. The goal is to ensure legal guardrails for AI, a kinetic enabler of the digital economy, along with safety, trust, and accountability for those using AI platforms.
Prioritizing Safety and Trust in AI Development
Union Minister Chandrasekhar announced that the framework will be discussed at the upcoming Global Partnership on Artificial Intelligence (GPAI) event, a multi-stakeholder initiative with 29 member countries that aims to bridge the gap between theory and practice on AI by supporting research on AI-related priorities. Chandrasekhar emphasised the importance of safety and trust in generative AI development, saying that every platform must be legally accountable for any harm it causes or enables and must not enable criminality. He advocated for safe and trustworthy AI.
Conclusion
India is drafting its first AI regulation framework, as highlighted by Union Minister Rajeev Chandrasekhar. This framework aims to harness the potential of AI while ensuring safety, trust, and accountability. It will focus on principles, comprehensive standards, and legal compliance to navigate the complexities of AI's impact on sectors like healthcare, agriculture, and the digital economy. India recognises the need for robust legal safeguards that protect citizens, foster innovation and economic growth, and build a culture of trustworthy AI development.
References:
- https://www.livemint.com/ai/artificial-intelligence/india-to-come-up-with-ai-regulations-framework-by-june-july-this-year-rajeev-chandrasekhar-msde-11708409300377.html
- https://timesofindia.indiatimes.com/business/india-business/india-to-develop-draft-ai-framework-by-june-july-chandrasekhar/articleshow/107865548.cms
- https://newsonair.gov.in/News?title=Government-to-come-out-with-draft-regulatory-framework-for-Artificial-Intelligence-by-July-2024&id=477637

Introduction
Phone farms refer to setups or systems that use multiple phones collectively, often for deceptive purposes: generating repeated actions in high volumes at speed. These can include faking popularity by inflating views, likes, comments, and follower counts, or creating the illusion of legitimate activity through actions like automated app downloads, ad views, clicks, registrations, installations, and in-app engagement.
A phone farm is a network in which cybercriminals exploit mobile incentive programs by using multiple phones to perform the same actions repeatedly, which can lead to misattribution and increased marketing spend. Phone farming involves exploiting paid-to-watch apps or other incentive-based programs across dozens of phones to increase the total amount earned. It can also describe operations that orchestrate dozens or hundreds of phones to engineer a particular outcome, such as improving restaurant ratings or App Store Optimization (ASO). Companies constantly update their platforms to combat phone farming, but it is nearly impossible to prevent people from exploiting such services for their own benefit.
How Do Phone Farms Work?
Phone farms are a collection of connected smartphones or mobile devices used for automated tasks, often remotely controlled by software programs. These devices are often used for advertising, monetization, and artificially inflating app ratings or social media engagement. The software used in phone farms is typically a bot or script that interacts with the operating system and installed apps. The phone farm operator connects the devices to the Internet via wired or wireless networks, VPNs, or other remote access software. Once the software is installed, the operator can use a web-based interface or command-line tool to schedule and monitor tasks, setting specific schedules or monitoring device status for proper operation.
Modus Operandi Behind Phone Farms
Phone farms have proliferated with the growing reach of the Internet and the availability of bots. Phone farmers use multiple phones simultaneously to perform illegitimate activity and simulate high engagement numbers. The applications range from ‘watching’ movie trailers and clicking on ads to posting fake ratings and creating false engagement. When phone farms drive up ‘engagement actions’ on social media through mass likes and post shares, they help perpetuate a false narrative. Through phone click farms, bad actors also earn money on each ad or video watched; phone farmers describe this as a side hustle, a means of making extra money. Click farms can operate as companies providing digital engagement services or as individual operators multiplying clicks for various objectives, and some run on a much larger scale, with thousands of workers and billions of daily clicks, impressions, and engagements.
The Legality of Phone Farms
The question of the legality of phone farms presents a conundrum. Notably, phone farms also have legitimate applications in software development and market research, enabling developers to test applications across various devices and operating systems simultaneously. However, they are typically employed for more dubious purposes, such as social media manipulation, generating fake clicks on online ads, spamming, spreading misinformation, and facilitating cyberattacks, and such use cases constitute illegal and unethical behaviour.
The use of the technology to misrepresent information for nefarious intents is illegitimate and unethical. Phone farms are famed for violating the terms of the apps they use to make money by simulating clicks, creating multiple fake accounts and other activities through multiple phones, which can be illegal.
Furthermore, should any entity misrepresent its image, products, or services through fake reviews or ratings obtained via bots and phone farms, creating deliberately false impressions for consumers, this is to be considered an unfair trade practice and may attract liability.
CyberPeace Policy Recommendations
CyberPeace advocates for truthful and responsible consumption of technology and the Internet. Businesses are encouraged to refrain from using such unethical methods to gain a business advantage or to simulate fake popularity online, and must be mindful to avoid any actions that may misrepresent information and/or cause injury to consumers, including online users. The ethical implications of phone farms cannot be ignored, as they erode public trust in digital platforms and contribute to a climate of online deception. Law enforcement agencies and regulators are encouraged to keep a check on any illegal use of mobile devices by cybercriminals to commit cybercrimes. Tech and social media platforms must implement monitoring and detection systems that analyse unusual behaviour on their platforms, looking for suspicious bot activity or phone-farming groups. To stay protected from sophisticated threats and to ensure a secure online experience, netizens are encouraged to follow cybersecurity best practices and verify all information from authentic sources.
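The monitoring and detection systems recommended above can start from simple behavioural heuristics. As an illustrative sketch only, not any platform's actual detection logic, the snippet below flags network origins that drive many accounts with machine-like timing, two common signals of coordinated phone-farm activity. The event fields and thresholds are assumptions made for the example.

```python
from collections import defaultdict
from statistics import pstdev

# Each event: (account_id, source_ip, unix_timestamp)
def flag_phone_farm_suspects(events, min_accounts=20, max_jitter=2.0):
    """Flag source IPs that drive many accounts with machine-like timing.

    min_accounts: distinct accounts sharing one IP before a farm is suspected.
    max_jitter:   max std-dev (seconds) of gaps between events; real humans
                  act far more irregularly than scripted devices.
    """
    by_ip = defaultdict(lambda: (set(), []))
    for account, ip, ts in events:
        accounts, times = by_ip[ip]
        accounts.add(account)
        times.append(ts)

    suspects = []
    for ip, (accounts, times) in by_ip.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(accounts) >= min_accounts and gaps and pstdev(gaps) <= max_jitter:
            suspects.append(ip)
    return suspects

# Simulated burst: 25 accounts behind one IP, acting every 5 seconds exactly.
farm = [(f"acct{i}", "203.0.113.7", 1_700_000_000 + i * 5) for i in range(25)]
# One ordinary user with irregular, human-paced activity.
human = [("alice", "198.51.100.2", 1_700_000_000 + t) for t in (3, 40, 95, 310)]
print(flag_phone_farm_suspects(farm + human))  # → ['203.0.113.7']
```

Production systems layer many more signals (device fingerprints, sensor data, behavioural biometrics) on top of such baselines, but even this simple volume-plus-regularity check illustrates why farms that blast identical actions at fixed intervals are detectable.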
Final Words
Phone farms have the ability to generate massive amounts of social media interactions, capable of performing repetitive tasks such as clicking, scrolling, downloading, and more in very high volumes in very short periods of time. The potential for misuse of phone farms is higher than the legitimate uses they can be put to. As technology continues to evolve, the challenge lies in finding a balance between innovation and ethical use, ensuring that technology is harnessed responsibly.
References
- https://www.branch.io/glossary/phone-farm/
- https://clickpatrol.com/phone-farms/
- https://www.airbridge.io/glossary/phone-farms#:~:text=A%20phone%20farm%20is%20a,monitor%20the%20tasks%20being%20performed
- https://innovation-village.com/phone-farms-exposed-the-sneaky-tech-behind-fake-likes-clicks-and-more/

Introduction
In the age of advanced technology, cyber threats continue to grow, and so do the cyber hubs. A new name has been added to the list: Purnia, a city in India, now faces a new and alarming menace, biometric cloning for financial crime. This emerging cyber threat involves replicating an individual’s biometric data, such as fingerprints or facial features, to gain unauthorised access to bank accounts and carry out fraudulent activities. In this blog, we look at the methods employed, the impact on individuals and institutions, and the necessary steps to mitigate the risk.
The Backdrop
Purnia, a bustling city in the state of Bihar, India, is known for its rich cultural heritage. However, beneath its bright appearance lies a hidden danger: a rising cyber threat with the potential to devastate its citizens’ financial security. In recent years, Purnia has seen the growth of a dangerous trend, biometric cloning for financial crimes, which came to light after several FIRs were registered with the Kasba and Amaur police stations. The police swung into action and started an investigation.
Modus Operandi unveiled
The modus operandi of these cybercriminals includes hacking into databases, intercepting data during transactions, or even physically obtaining fingerprints or facial images from objects or surfaces. Let’s understand how they gathered this data and why Bihar itself was not targeted.
The criminals operated across three states. They exploited open access to registry and agreement paperwork on official websites; since such records are not available online in Bihar, the scam was conducted in other states instead. The fraudsters downloaded the fingerprints, biometrics, and Aadhaar numbers of buyers and sellers from the property registration documents of Andhra Pradesh, Haryana, and Telangana.
After cloning the fingerprints, the fraudsters withdrew money from various bank accounts through the Aadhaar Enabled Payment System (AEPS). They stamped each fingerprint onto rubber trace paper and used a polymer stamp machine, heated to a specific temperature with a chemical, to produce duplicate fingerprints, which were then used for unlawful financial transactions from several consumers’ bank accounts.
Investigation Insight
After the breakthrough, police teams recovered a large number of smartphones, ATM cards, rubber stamps of fingerprints, Aadhaar numbers, scanners, stamp machines, laptops, and chemicals, and arrested 17 people.
During the investigation, it was found that the cybercriminals employ sophisticated money laundering techniques to obscure the illicit origins of the stolen funds, transferring money into multiple accounts or converting it into cryptocurrency. These tactics make it more challenging for authorities to trace and recover the money.
Impact of biometric Cloning scam
The biometric scam has far-reaching implications for society, individuals, and institutions. Such scams cause financial losses and emotional distress, including anger, anxiety, and a sense of violation, and they erode trust in digital systems.
It also seriously impacts institutions. Biometric cloning frauds may potentially cause severe reputational harm to financial institutions and organisations. When clients fall prey to such frauds, it erodes faith in the institution’s security procedures, potentially leading to customer loss and a tarnished reputation. Institutions may suffer legal and regulatory consequences, and they must invest money in investigating the incident, paying victims, and improving their security systems to prevent similar instances.
Raising Awareness
Empowering Purnia Residents to Protect Themselves from Biometric Fraud: As Purnia grapples with the growing issue of biometric fraud, its residents must be equipped with the knowledge and techniques to protect their personal information. By raising awareness of biometric fraud and encouraging recommended practices, individuals can avoid falling prey to these scams. Here are some tips to follow:
- Securing personal Biometric data: It is crucial to safeguard personal biometric data. Individuals should be urged to secure their fingerprints, face scans, and other biometric information in the same way that they protect their passwords or PINs. It is critical to ensure that biometric data is safely maintained and shared with only trustworthy organisations with strong security procedures in place.
- Verifying Service providers: Residents should be vigilant while submitting biometric data to service providers, particularly those providing financial services. Before disclosing any sensitive information, it is important to undertake due diligence and establish the validity and reliability of the organisation. Checking for relevant certificates, reading reviews, and getting recommendations can assist people in making educated judgments and avoiding unscrupulous companies.
- Personal Cybersecurity: Individuals should implement robust cybersecurity practices to reduce the danger of biometric fraud. This includes using difficult and unique passwords, activating two-factor authentication, upgrading software and programs on a regular basis, and being wary of phishing efforts. Individuals should also refrain from providing personal information or biometric data via unprotected networks or through untrustworthy sources.
- Educating the Elderly and Vulnerable Groups: Special attention should be given to educating the elderly and other vulnerable groups who may be more prone to scams. Awareness campaigns may be modified to their individual requirements, emphasising the significance of digital identities, recognising possible risks, and seeking help from reliable sources when in doubt. Empowering these populations with knowledge can help keep them safe from biometric fraud.
Measures to Stay Ahead
As biometric fraud is a growing concern, staying a step ahead is essential. The following measures can help:
- Multi-factor Authentication: MFA is one of the most effective security measures, adding an extra layer of protection against unauthorised access by combining, for example, a biometric scan with a password.
- Biometric Encryption: Biometric encryption securely stores and transmits biometric data. Rather than keeping raw biometric data, encryption methods transform it into mathematical templates that cannot be reverse-engineered. These templates are utilised for authentication, guaranteeing that the original biometric information is not compromised even if the encrypted data is.
- AI and Machine Learning (ML): AI and ML technologies are critical in detecting and combating biometric fraud. These systems can analyse massive volumes of data in real-time, discover trends, and detect abnormalities. Biometric systems may continually adapt and enhance accuracy by employing AI and ML algorithms, boosting their capacity to distinguish between legitimate users and fraudulent efforts.
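The anomaly-detection idea above can be made concrete with a toy example. The sketch below is a simplification, not a production fraud model: it flags withdrawals that deviate sharply from an account's historical pattern using a plain z-score, the kind of baseline signal on top of which real ML systems add many richer features. The thresholds and transaction data are invented for illustration.

```python
from statistics import mean, pstdev

def flag_anomalous_withdrawals(history, new_amounts, z_threshold=3.0):
    """Flag withdrawals far outside an account's historical pattern.

    history:     past withdrawal amounts for one account.
    new_amounts: incoming withdrawals to screen.
    Returns the subset of new_amounts whose z-score exceeds the threshold.
    """
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:               # identical history: any change is notable
        return [a for a in new_amounts if a != mu]
    return [a for a in new_amounts if abs(a - mu) / sigma > z_threshold]

# A customer who usually withdraws around 500-700 rupees via AEPS…
usual = [500, 550, 600, 650, 700, 500, 600]
# …suddenly shows a 9,500-rupee withdrawal alongside a normal one.
print(flag_anomalous_withdrawals(usual, [550, 9500]))  # → [9500]
```

A flagged transaction would not be blocked outright; it would typically trigger step-up verification (for instance, an additional OTP) so that legitimate but unusual withdrawals still go through.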
Conclusion
The biometric fraud problem calls for immediate attention to protect bank customers from its potential consequences. By creating awareness we can protect ourselves, and by working together we can create a safer digital environment. Biometric verification was introduced to strengthen authentication for account holders; however, bad actors have already begun to bypass the technology and wreak havoc on netizens by draining their accounts of hard-earned money. Banks and cyber cells nationwide need to work in synergy to increase awareness and safety mechanisms to prevent such cybercrimes and to create effective and efficient redressal mechanisms for citizens.

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first, it feels like a huge win for India’s digital generation. Students, professionals, and entrepreneurs today can tap into some of the world’s most powerful AI tools without paying a rupee. It feels like a digital revolution unfolding in real time. Yet, beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access isn’t without strings attached: it has implications for how India handles data privacy, for the fairness of competition, and for the pace of the homegrown AI innovation the country is focusing on.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples are Perplexity, ChatGPT Go, and Gemini, which have offered free subscriptions to Indian users.
- Gathering local data: Every interaction, every prompt, question, or language pattern, helps these models learn from larger datasets and improve their product offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product.” The same goes for these AI platforms: they monetise user data by analysing chats and user behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific rules governing how such data is stored, processed, or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives, even when they become paid, will be difficult. This approach mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers by democratising knowledge and making advanced technology available to everyone, from students and professionals to small businesses. It’s changing how people learn, think, and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection laws in action, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder — especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s Generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet this growth story may become uneven if global players dominate early.
Domestic AI startups face multiple hurdles — limited funding, high compute costs, and difficulty in accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies such as offering free or heavily subsidised access could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before using their data for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers such as Sarvam AI, the Indian company chosen by the government to build the country's first homegrown large language model (LLM) under the IndiaAI Mission.
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should build its own national data centres to train indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges, from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html#
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/#