# FactCheck - Afghan Cricket Team's Chant Misrepresented in Viral Video
Executive Summary:
Footage purportedly showing the Afghanistan cricket team singing ‘Vande Mataram’ after India’s triumph in the ICC T20 World Cup 2024 went viral online. The CyberPeace Research team conducted thorough research to uncover the truth behind the viral video. The original clip was posted on X by Afghan cricketer Mohammad Nabi on 23 October 2023 and shows the Afghan players chanting ‘Allah-hu Akbar’ after their ODI World Cup victory over Pakistan. This debunks the assertion made in the viral posts that the players were chanting ‘Vande Mataram’.

Claims:
Afghan cricket players chanted "Vande Mataram" to express support for India after India’s victory over Australia in the ICC T20 World Cup 2024.

Fact Check:
Upon receiving the posts, we analyzed the video and noticed inconsistencies, such as mismatched lip sync.
We ran the clip through an AI audio detection tool named “True Media”, which assessed the audio as 95% AI-generated, deepening our suspicion about the video’s authenticity.


For further verification, we divided the video into keyframes and reverse-searched one of the frames to locate credible sources. This led us to the X account of Afghan cricketer Mohammad Nabi, who had uploaded the same video on 23 October 2023 with the caption: “Congratulations! Our team emerged triumphant n an epic battle against ending a long-awaited victory drought. It was a true test of skills & teamwork. All showcased thr immense tlnt & unwavering dedication. Let's celebrate ds 2gether n d glory of our great team & people”.
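The keyframe step above can be sketched in miniature. The snippet below is a toy illustration of difference-based keyframe selection, not the exact tooling used in this investigation: a real workflow would run OpenCV or ffmpeg on the actual clip, whereas here a "frame" is simply a flat list of grayscale pixel values, and the `select_keyframes` helper and its threshold are hypothetical names chosen for illustration.

```python
# Toy sketch of difference-based keyframe selection (assumed technique;
# real fact-checking workflows use OpenCV/ffmpeg on the actual video file).
# A "frame" here is a flat list of grayscale pixel values.

def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def select_keyframes(frames, threshold=30):
    """Keep the first frame, then any frame that differs sharply
    from the most recently kept frame (a likely scene change)."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes

# Three synthetic 4-pixel "frames": a static scene, then a scene cut.
frames = [
    [10, 10, 10, 10],       # frame 0: kept (first frame)
    [12, 11, 10, 13],       # frame 1: nearly identical -> skipped
    [200, 190, 210, 205],   # frame 2: scene change -> kept
]
print(select_keyframes(frames))  # -> [0, 2]
```

The kept frames would then be fed to a reverse image search engine to trace the footage back to its original upload.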

The audio in the original clip differs from that in the viral video: the Afghan players can be heard chanting “Allah hu Akbar” after their victory against Pakistan. They were not chanting “Vande Mataram” after India’s victory over Australia in the T20 World Cup 2024.
Hence, given the lack of credible sources and the detection of AI voice alteration, the claim made in the viral posts is fake and misrepresents the actual context. We have previously debunked similar AI voice alteration videos. Netizens must exercise caution before believing such misleading information.
Conclusion:
The viral video claiming that Afghan cricket players chanted "Vande Mataram" in support of India is false. The viral clip was created from the original by manipulating the audio. The original video, of the Afghanistan players celebrating their victory over Pakistan by chanting "Allah-hu Akbar", was posted on the X account of Mohammad Nabi, an Afghan cricketer. Thus, the information is fake and misleading.
- Claim: Afghan cricket players chanted "Vande Mataram" to express support for India after the victory over Australia in the ICC T20 World Cup 2024.
- Claimed on: YouTube
- Fact Check: Fake & Misleading
Introduction
Digitalisation presents both opportunities and challenges for micro, small, and medium enterprises (MSMEs) in emerging markets. Digital tools can increase business efficiency and reach but also increase exposure to misinformation, fraud, and cyber attacks. Such cyber threats can lead to financial losses, reputational damage, loss of customer trust, and other challenges hindering MSMEs' ability and desire to participate in the digital economy.
The sheer volume of information in circulation today is a major driver of misinformation. Misinformation spreads or emerges from online sources, causing controversy and confusion in various fields, including politics, science, medicine, and business. One obvious adverse effect of misinformation is that MSMEs might lose trust in the digital market. Misinformation can even result in the devaluation of a product, sow mistrust among customers, and negatively impact companies’ revenue. The reach and speed with which misinformation can spread and ruin companies’ brands, as well as the overall difficulty businesses face in seeking recourse, may discourage MSMEs from fully embracing the digital ecosystem.
MSMEs are essential for innovation, job development, and economic growth. They contribute considerably to the GDP and account for a sizable share of enterprises. They serve as engines of economic resilience in many nations, including India. Hence, a developing economy’s prosperity and sustainability depend on the MSMEs' growth and such digital threats might hinder this process of growth.
There are widespread incidents of misinformation on social media, and these affect brand and product promotion. MSMEs also rely on online platforms for business activities, and threats such as misinformation and other digital risks can result in reputational damage and financial losses. A company's reputation being tarnished by inaccurate information, or a product or service being incorrectly represented, are just some examples, and such incidents can cause MSMEs to lose clients and revenue.
In the digital era, MSMEs need to be vigilant against false information in order to preserve their brand name, clientele, and financial standing. In today's interconnected world, these organisations must develop digital literacy and resilience against misinformation in order to succeed in the long run. Information resilience is crucial for protecting and preserving their reputation in the online market.
The Impact of Misinformation on MSMEs
Misinformation can have serious financial repercussions, such as lost sales, higher expenses, legal fees, harm to the company's reputation, diminished consumer trust, bad press, and a long-lasting unfavourable impact on image. A company's products may lose value as a result of rumours, which might affect both sales and client loyalty.
Inaccurate information can also result in operational mistakes, which can interrupt regular corporate operations and cost the enterprise a lot of money. When inaccurate information on a product's safety causes demand to decline and stockpiling problems to rise, supply chain disruptions may occur. Misinformation can also lead to operational and reputational issues, which can cause psychological stress and anxiety at work. The peace of the workplace and general productivity may suffer as a result. For MSMEs, false information has serious repercussions that impact their capacity to operate profitably, retain employees, and maintain a sustainable business. Companies need to make investments in cybersecurity defence, legal costs, and restoring consumer confidence and brand image in order to lessen the effects of false information and ensure smooth operations.
When we refer to the financial implications of misinformation spread in the market, be it about the product or the enterprise, the cost is two-fold in all scenarios: there is loss of revenue, and then the organisation has to contend with the costs of countering the impact of the misinformation. Stock price volatility is one financial consequence for publicly traded MSMEs, as misinformation can cause stock price fluctuations. Potential investors might also be discouraged by false negative information.
Further, the reputational damage consequences of misinformation on MSMEs is also a serious concern as a loss of their reputation can have long-term damages for a carefully-cultivated brand image.
There are also operational disruptions caused by misinformation: for instance, false product recalls can take place and supplier mistrust or false claims about supplier reliability can disrupt procurement leading to disruptions in the operations of MSMEs.
Misinformation can negatively impact employee morale and productivity through its psychological effects, leading to stress and workplace tensions. Staff confidence is also affected by misinformation about the brand. Internal operational stability is a core component of any organisation’s success.
Misinformation: Key Risk Areas for MSMEs
- Product and Service Misinformation
For MSMEs, misinformation about products and services poses a serious danger since it undermines their credibility and the confidence clients place in the enterprise and its products or services. Because this misleading material might mix in with everyday activities and newsfeeds, viewers may find it challenging to identify fraudulent content. For example, falsehoods and rumours about a company or its goods may travel quickly through social media, impacting the confidence and attitude of customers. Algorithms that favour sensational material have the potential to magnify disinformation, resulting in the broad distribution of erroneous information that can harm a company's brand.
- False Customer Reviews and Testimonials
False testimonies and evaluations pose a serious risk to MSMEs. These might be abused to damage a company's brand or lead to unfair competition. False testimonials, for instance, might mislead prospective customers about the calibre or quality of a company’s offerings, while phony reviews can cause consumers to mistrust a company's goods or services. These actions frequently form a part of larger plans by rival companies or bad individuals to weaken a company's position in the market.
- Misleading Information about Business Practices
False statements or distortions regarding a company's operations constitute misleading information about business practices. This might involve dishonest marketing, fabrications regarding the efficacy or legitimacy of goods, and inaccurate claims about a company's compliance with laws or moral principles. Such incorrect information can result in a decline in consumer confidence, harm to one's reputation, and even legal issues if consumers or rival businesses act upon it. For example, even before the truth is confirmed, allegations of wrongdoing or criminal activity pertaining to a business can inflict a great deal of harm, even if they are disproven later.
- Fake News Related to Industry and Market Conditions
By skewing consumer views and company actions, fake news about market and industry circumstances can have a significant effect on MSMEs. For instance, false information about market trends, regulations, or economic situations might make consumers lose faith in particular industries or force corporations to make poor strategic decisions. The rapid dissemination of misinformation on online platforms intensifies its effects on enterprises that significantly depend on digital engagement for their operations.
Factors Contributing to the Vulnerability of MSMEs
- Limited Resources for Verification
MSMEs have a small resource pool, and information verification is typically not a top priority for most. Given their limited resources, they tend to deploy them towards other, seemingly more critical functions. They are therefore more susceptible to misleading information, as they lack the capacity for thorough fact-checking or validating the authenticity of digital content. Technology tools, human capital, and financial resources, all essential for effective verification processes, are in short supply.
- Inadequate Digital Literacy
Digital literacy is required for effective day-to-day operations. Fake reviews, rumours, or fake images commonly used by malicious actors can result in increased scrutiny of or backlash against the targeted business. A lack of awareness, combined with limited resources, usually results in a weak redressal plan on the part of the affected MSME. Due to low digital literacy in this domain, a large number of MSMEs are more susceptible to false information and other online threats. Inadequate knowledge of and skills for using digital platforms securely and effectively can lead to poor decisions and greater vulnerability to fraud, deception, and online scams.
- Lack of Crisis Management Plans
MSMEs frequently function without clear-cut procedures for handling crises. They lack the strategic preparation necessary to deal with the fallout from disinformation and cyberattacks. Proactive crisis management plans usually incorporate procedures for detecting, addressing, and lessening the impact of digital harms, which are frequently absent from MSMEs.
- High Dependence on Social Media and Online Platforms
The marketing strategy of most MSMEs relies heavily on social media and online platforms. While a digital-first mode of operations reduces the capital needed to set up physical stores or outlets, it also heightens the need to stay relevant to the trends of the online community and make products attractive to the customer base. These platforms are greatly beneficial for marketing, customer interaction, and company operations, but they also expose organisations to a higher risk of false information and online fraud. Heavy reliance on them, coupled with the absence of proper security measures and awareness, can result in serious interruptions to operations and monetary losses.
CyberPeace Policy Recommendations to Enhance Information Resilience for MSMEs
CyberPeace advocates for establishing stronger legal frameworks to protect MSMEs from misinformation. Key recommendations include:
- Governments should establish regulations that build trust in online business activities and mitigate the risks of fraud and misinformation.
- Mandatory training programmes covering online safety and misinformation awareness should be implemented for MSMEs.
- Enhanced reporting mechanisms should be developed to address digital harm incidents promptly.
- Governments should establish strict penalties for deliberate spreaders of misinformation, similar to those for copyright or intellectual property violations.
- Community-based approaches should be encouraged to help MSMEs navigate digital challenges effectively.
- Donor communities and development agencies should invest in digital literacy and cybersecurity training for MSMEs, focusing on misinformation mitigation and safe online practices.
- Platform accountability should be increased, with social media and online platforms playing a more active role in removing content from known scam networks and responding to reports of fraudulent activity.
- Comprehensive digital literacy solutions for MSMEs should incorporate cyber hygiene and discernment skills to combat misinformation.
Conclusion
Misinformation poses a serious risk to MSMEs’ digital resilience, operational effectiveness, and financial stability. MSMEs are susceptible to false information because of limited technical resources, a lack of crisis management strategies, and insufficient digital literacy. They are also more vulnerable to false information and online fraud because of their heavy reliance on social media and other online platforms. To address these challenges, it is essential to strengthen their cyber hygiene and information resilience. Robust policy and regulatory frameworks, mandatory online safety training programmes, and improved reporting procedures are required to enhance the overall information landscape.
References:
- https://www.dai.com/uploads/digital-downsides.pdf
- https://www.indiacode.nic.in/bitstream/123456789/2013/3/A2006-27.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1946375
- https://dai-global-digital.com/digital-downsides-the-economic-impact-of-misinformation-and-other-digital-harms-on-msmes-in-kenya-india-and-cambodia.html

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand. This promising technological advancement has the potential to either enrich the nest of our society or destroy it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and perils. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candor and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make it clear to users what they are for and why they are automated. By being transparent, people are guaranteed to be aware that they are interacting with automated processes.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
There are also important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Another hazard to watch out for is unlawful data collecting. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political reasons.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn’t have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is constantly working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI, which include equality, inclusivity, safety, privacy, transparency, accountability, dependability, and the protection of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
Reference
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india

The race for global leadership in AI is in full force. As China and the US emerge as the world’s ‘AI superpowers’, the world grapples with questions of AI governance, ethics, regulation, and safety. Some are calling this an ‘AI arms race.’ Most applications of these AI systems are large language models for commercial use, or military applications. Countries like Germany, Japan, France, Singapore, and India are now participating in this race, no longer mere spectators.
The Government of India’s Ministry of Electronics and Information Technology (MeitY) has launched the IndiaAI Mission, an umbrella program for the use and development of AI technology. This MeitY initiative lays the groundwork for supporting an array of AI goals for the country. The government has allocated INR 10,300 crore for this endeavour. This mission includes pivotal initiatives like the IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
There are several challenges and opportunities that India will have to navigate and capitalize on to become a significant player in the global AI race. The various components of India’s ‘AI Stack’ will have to work well in tandem to create a robust ecosystem that yields globally competitive results. The IndiaAI mission focuses on building large language models in vernacular languages and developing compute infrastructure. There must be more focus on developing good datasets and research as well.
Resource Allocation and Infrastructure Development
The government is focusing on building the elementary foundation for AI competitiveness. This includes the procurement of AI chips and compute capacity, about 10,000 graphics processing units (GPUs), to support India’s start-ups, researchers, and academics. These GPUs have been strategically distributed, with 70% being high-end newer models and the remaining 30% comprising lower-end older-generation models. This approach ensures that a robust ecosystem is built, which includes everything from cutting-edge research to more routine applications. A major player in this initiative is Yotta Data Services, which holds the largest share of 9,216 GPUs, including 8,192 Nvidia H100s. Other significant contributors include Amazon AWS's managed service providers, Jio Platforms, and CtrlS Datacenters.
Policy Implications: Charting a Course for Tech Sovereignty and Self-reliance
With this government initiative, there is a concerted effort to develop indigenous AI models and reduce tech dependence on foreign players. There is a push to develop local Large Language Models and domain-specific foundational models, creating AI solutions that are truly Indian in nature and application. Much advanced chip manufacturing takes place in Taiwan, which faces a looming threat from China. India’s focus on chip procurement and GPUs speaks to a larger agenda of self-reliance and sovereignty, keeping the geopolitical calculus in mind. This focus is important; however, it must not come at the cost of developing technological ‘know-how’ and research.
Developing AI capabilities at home also has national security implications. When it comes to defence systems, control over AI infrastructure and data becomes extremely important. The IndiaAI Mission will focus on safe and trusted AI, including developing frameworks that fit the Indian context. It has to be ensured that AI applications align with India's security interests and can be confidently deployed in sensitive defence applications.
The big problem to solve here is the ‘data problem.’ There must be a focus on developing strategies to mitigate the data disadvantages of the Indian AI ecosystem. Some data problems are unique to India, such as generating data in local languages, while others appear in every AI ecosystem’s development lifecycle, namely sourcing publicly available and licensed data. India must strengthen its ‘Digital Public Infrastructure’ and data commons across sectors and domains.
India has proposed setting up the India Data Management Office to serve as India’s data regulator as part of its draft National Data Governance Framework Policy. The MeitY IndiaAI expert working group report also talked about operationalizing the India Datasets Platform and suggested the establishment of data management units within each ministry.
Economic Impact: Growth and Innovation
The government’s focus on technology and industry has far-reaching economic implications. There is a push to develop the AI startup ecosystem in the country. The IndiaAI mission heavily focuses on inviting ideas and projects under its ambit. The investments will strengthen the IndiaAI startup financing system, making it easier for nascent AI businesses to obtain capital and accelerate their development from product to market. Funding provisions for industry-led AI initiatives that promote social impact and stimulate innovation and entrepreneurship are also included in the plan. The government press release states, "The overarching aim of this financial outlay is to ensure a structured implementation of the IndiaAI Mission through a public-private partnership model aimed at nurturing India’s AI innovation ecosystem.”
The government also wants to establish India as a hub for sustainable AI innovation and attract top AI talent from across the globe. One crucial aspect that needs work here is fostering talent and skill development. India has a unique advantage: top-tier talent in STEM fields. Yet we suffer from a severe talent gap that needs to be addressed on a priority basis. Even though India is making strides in nurturing AI talent, out-migration of tech talent is still a reality. As the global AI economy shifts from the hardware-manufacturing “goods side” to service delivery, India will need to be ready to deploy its talent. Several structural and policy interfaces, like the New Education Policy and industry-academic partnership frameworks, allow India to capitalize on this opportunity.
India’s talent strategy must be robust and long-term, focusing heavily on multi-stakeholder engagement. The government has a pivotal role here by creating industry-academia interfaces and enabling tech hubs and innovation parks.
India's Position in the Global AI Race
India’s foreign policy and geopolitical standpoint have long favoured global cooperation, and this must not change when it comes to AI. Even though this has been dubbed the “AI arms race,” India should encourage worldwide collaboration on AI R&D through partnerships with other countries in order to strengthen its own capabilities. India must prioritise open-source AI development and work with the US, Europe, Australia, Japan, and other friendly countries to prevent the unethical use of AI and to contribute to a global consensus on the boundaries of AI development.
The IndiaAI Mission will have far-reaching implications for India’s diplomatic and economic relations. The unique proposition that India comes with is its ethos of inclusivity, ethics, regulation, and safety from the get-go. We should keep up the efforts to create a powerful voice for the Global South in AI. The IndiaAI Mission marks a pivotal moment in India's technological journey. Its success could not only elevate India's status as a tech leader but also serve as a model for other nations looking to harness the power of AI for national development and global competitiveness. In conclusion, the IndiaAI Mission seeks to strengthen India's position as a global leader in AI, promote technological independence, guarantee the ethical and responsible application of AI, and democratise the advantages of AI at all societal levels.
References
- Ashwini Vaishnaw to launch IndiaAI portal, 10 firms to provide 14,000 GPUs. (2025, February 17). Business Standard. Retrieved February 25, 2025, from https://www.business-standard.com/industry/news/indiaai-compute-portal-ashwini-vaishnaw-gpu-artificial-intelligence-jio-125021700245_1.html
- Global IndiaAI Summit 2024 being organized with a commitment to advance responsible development, deployment and adoption of AI in the country. (n.d.). https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2029841
- India to Launch AI Compute Portal, 10 Firms to Supply 14,000 GPUs. (2025, February 17). APAC News Network. https://apacnewsnetwork.com/2025/02/india-to-launch-ai-compute-portal-10-firms-to-supply-14000-gpus/
- INDIAai | Pillars. (n.d.). IndiaAI. https://indiaai.gov.in/
- IndiaAI Innovation Challenge 2024 | Software Technology Park of India | Ministry of Electronics & Information Technology Government of India. (n.d.). http://stpi.in/en/events/indiaai-innovation-challenge-2024
- IndiaAI Mission To Deploy 14,000 GPUs For Compute Capacity, Starts Subsidy Plan. (2025, February 17). BW Businessworld. Retrieved February 25, 2025, from https://www.businessworld.in/article/indiaai-mission-to-deploy-14000-gpus-for-compute-capacity-starts-subsidy-plan-548253
- India’s interesting AI initiatives in 2024: AI landscape in India. (n.d.). IndiaAI. https://indiaai.gov.in/article/india-s-interesting-ai-initiatives-in-2024-ai-landscape-in-india
- Mehra, P. (2025, February 17). Yotta joins India AI Mission to provide advanced GPU, AI cloud services. Techcircle. https://www.techcircle.in/2025/02/17/yotta-joins-india-ai-mission-to-provide-advanced-gpu-ai-cloud-services/
- IndiaAI 2023: Expert Group Report – First Edition. (n.d.). IndiaAI. https://indiaai.gov.in/news/indiaai-2023-expert-group-report-first-edition
- Satish, R., Mahindru, T., World Economic Forum, Microsoft, Butterfield, K. F., Sarkar, A., Roy, A., Kumar, R., Sethi, A., Ravindran, B., Marchant, G., Google, Havens, J., Srichandra (IEEE), Vatsa, M., Goenka, S., Anandan, P., Panicker, R., Srivatsa, R., . . . Kumar, R. (2021). Approach Document for India. In World Economic Forum Centre for the Fourth Industrial Revolution, Approach Document for India [Report]. https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
- Stratton, J. (2023, August 10). Those who solve the data dilemma will win the A.I. revolution. Fortune. https://fortune.com/2023/08/10/workday-data-ai-revolution/
- Suri, A. (n.d.). The missing pieces in India’s AI puzzle: talent, data, and R&D. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2025/02/the-missing-pieces-in-indias-ai-puzzle-talent-data-and-randd?lang=en
- The AI arms race. (2024, February 13). Financial Times. https://www.ft.com/content/21eb5996-89a3-11e8-bf9e-8771d5404543