#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media holds that a 3D model of the ancient philosopher Chanakya, supposedly created by "Magadha DS University", bears a striking resemblance to MS Dhoni. Fact-checking reveals that the image is in fact a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution named Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, viewers have noted a striking similarity between the figure in the image and Indian cricketer MS Dhoni.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri. The viral image appears there under the title "MS Dhoni likeness study", alongside several of his other character models.



Subsequently, we searched for the university named in the claim, "Magadha DS University", but found no institution by that name; the closest match is Magadh University, located in Bodhgaya, Bihar. A search for any such model produced by Magadh University returned nothing. We then examined the freelance character artist's profile further and found a dedicated Instagram account where he had posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a 3D likeness of cricketer MS Dhoni, created by artist Ankur Khatri, not by any university named "Magadha DS".
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a "Magadha DS University" exists. A similarly named institution, Magadh University, does exist in Bodh Gaya, Bihar, but we found no evidence linking it to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Related Blogs

The Expanding Governance Challenge of Artificial Intelligence
Artificial intelligence (AI) systems are increasingly embedded in economic and social infrastructure. They are being adopted in financial services, healthcare diagnostics, hiring systems, and public administration. But while these systems improve efficiency and decision-making, they also introduce new forms of technological risk.
Unlike conventional software, AI systems learn patterns from data and continue to evolve as they run. This poses governance challenges, since risks can arise throughout the AI life cycle, whether during development or in deployment.
Recent regulatory frameworks, such as the European Union's AI Act (EU AI Act) and the UNESCO Recommendation on the Ethics of Artificial Intelligence, emphasise that responsible AI governance depends on understanding where risks emerge across the development process.
This article maps the AI system lifecycle, identifies the risks that emerge at each stage, and evaluates the policy tools used to mitigate them, using the lifecycle framework developed by the Organisation for Economic Co-operation and Development (OECD).
The Lifecycle of an AI System
AI systems are developed through a structured process that includes problem definition, dataset collection and preparation, model development, testing and validation, deployment, and monitoring.

The OECD conceptualises this development process as the AI system lifecycle. Each stage involves distinct technical and administrative decisions that shape the goals and limits of the resulting system. In particular, the quality and representativeness of training data strongly influence a model's behaviour after deployment.
Since this is an iterative rather than a linear procedure, risks can be introduced at each stage of the AI lifecycle. Models are frequently retrained on new data, and deployed systems are regularly updated to address performance degradation, model errors, or unintended outputs. This iterative process means governance must address risks across the entire lifecycle, not just at deployment.
Where AI Risks Emerge
AI risks usually emerge earlier in the development process, especially in the phases when system objectives are formulated and training data are chosen. The EU AI Act and the UNESCO Recommendation on the Ethics of AI outline the following risks: bias and discrimination, privacy and data security violations, the absence of transparency in automated decision-making, and risks to fundamental rights.

AI Governance Risk Landscape: Core Risk Categories Under International Frameworks
Risk categories jointly identified by the EU AI Act and UNESCO Recommendation on the Ethics of Artificial Intelligence
Outlining the risks throughout the AI lifecycle helps identify where governance interventions are most necessary. For example, discriminatory outcomes often result from biased or unrepresentative training data, while safety failures are typically linked to inadequate testing before deployment. Risks such as misinformation arise after deployment, when generative AI systems operate at scale on digital platforms.
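The stage-to-risk mapping described above can be sketched as a simple lookup structure. The stage names and risk labels below are illustrative choices for the sake of the sketch, not taken verbatim from the EU AI Act or the OECD framework:

```python
# Illustrative mapping of OECD-style lifecycle stages to the risk
# categories discussed above. Stage and risk names are examples only.
LIFECYCLE_RISKS = {
    "problem_definition": ["misaligned objectives", "risks to fundamental rights"],
    "data_collection":    ["bias and discrimination", "privacy violations"],
    "model_development":  ["lack of transparency", "bias and discrimination"],
    "testing_validation": ["inadequate safety testing"],
    "deployment":         ["misinformation at scale", "safety failures"],
    "monitoring":         ["model drift", "unintended outputs"],
}

def risks_for(stage: str) -> list[str]:
    """Return the governance risks associated with a lifecycle stage."""
    return LIFECYCLE_RISKS.get(stage, [])

print(risks_for("data_collection"))
# ['bias and discrimination', 'privacy violations']
```

A structure like this makes the lifecycle argument concrete: most entries with rights-related risks sit in the pre-deployment stages.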

AI System Lifecycle: Key Risks at Each Stage
Risks identified per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Understanding where risks emerge across the lifecycle explains why governance frameworks classify AI systems by risk and apply oversight at multiple stages.
Policy Tools for Mitigating AI Risks
Governments and international organisations have developed regulatory tools to help mitigate AI risks across the lifecycle. These tools aim to ensure that AI systems meet standards of safety, accountability, and fairness both before and after deployment.
For example, the OECD AI Policy Observatory recommends that governments adopt policy instruments such as risk evaluations, algorithmic auditing requirements, regulatory sandboxes, and transparency requirements for AI systems. The European Union's Artificial Intelligence Act (AI Act) is one of the most comprehensive governance frameworks, introducing a risk-based regulatory approach. It mandates adherence to requirements concerning data governance, documentation, human oversight, robustness, and cybersecurity. Such requirements introduce regulatory checkpoints across the lifecycle of AI systems.
Mapping these policy tools across the lifecycle illustrates how governance mechanisms can intervene at different stages of AI development.

Governance Overlay: Policy Interventions Across the AI Lifecycle
Regulatory tools mapped at each stage of AI development per the EU AI Act and UNESCO Recommendation on the Ethics of AI
Several policy tools target risks that arise in the pre-deployment stages. Algorithmic impact assessments, for example, have been applied in various jurisdictions to evaluate the potential societal consequences of automated decision systems before implementation. Similarly, dataset documentation requirements, including dataset transparency standards and model cards, aim to strengthen accountability during the training and development stages of AI systems. Lifecycle-based policy design therefore allows regulators to intervene before harmful outcomes occur, rather than responding only after AI systems have caused damage in real-world environments.
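Dataset documentation of the kind mentioned above (datasheets, model cards) is often structured as a fixed set of provenance fields. The following is a minimal sketch; the class and field names are invented for illustration, loosely inspired by "datasheets for datasets" proposals, and do not follow any official template:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    # Hypothetical documentation record; field names are illustrative.
    name: str
    provenance: str               # where the data came from
    collection_period: str
    known_limitations: list[str] = field(default_factory=list)
    sensitive_attributes: list[str] = field(default_factory=list)

    def flags(self) -> list[str]:
        """Surface documentation gaps an auditor or regulator might query."""
        issues = []
        if not self.known_limitations:
            issues.append("no limitations documented")
        if self.sensitive_attributes:
            issues.append("contains sensitive attributes: review for bias")
        return issues

sheet = DatasetDatasheet(
    name="hiring-history-2015-2020",
    provenance="internal HR records",
    collection_period="2015-2020",
    sensitive_attributes=["gender", "age"],
)
print(sheet.flags())
```

The point of such a record is that accountability checks become mechanical: an empty limitations field or the presence of sensitive attributes is immediately visible at the development stage, before deployment.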
The Policy Gap in AI Governance
The misalignment between risks and governance tools across the AI lifecycle indicates a critical structural gap in existing regulations. Many governance processes are triggered only after AI systems are classified as "high risk" or after they are deployed in the real world, yet the most serious sources of harm originate in earlier stages of the development process.
For example, biased or unrepresentative training data almost inevitably produces discriminatory outcomes in automated decision systems. When such models are applied in areas like hiring, credit scoring, or public service delivery, these biases can quickly propagate to large populations and undermine fundamental rights. Similarly, a lack of transparency in model design can leave regulators and affected individuals unable to scrutinise the decision-making process. This reflects a broader timing gap in AI governance, where risks originate during design and development, but regulatory intervention typically occurs only after deployment.
Analysis
1. Key risks originate before deployment: As the lifecycle mapping shows, the data collection and model development phases present more significant governance risks than the deployment phase does. Structural issues, such as biased datasets, incomplete documentation of training data, and opaque model architectures, can become entrenched in AI systems before they are ever deployed.
2. Data governance is a primary point of vulnerability: Most documented instances of algorithmic discrimination stem from training data that under-represents certain population groups or encodes historical bias. Because machine learning models optimise over patterns present in their datasets, these biases can be carried through the entire lifecycle and reproduced after deployment.
3. Regulatory approaches remain mismatched across jurisdictions: Different countries adopt varying approaches to AI governance, ranging from risk-based frameworks such as the EU AI Act to more sector-specific or voluntary guidelines in other regions. This divergence creates inconsistencies in safety, accountability, and enforcement standards, allowing risks to persist across borders and potentially undermining the protection of users in globally deployed AI systems.
4. Governance interventions remain uneven across the lifecycle: While many regulatory instruments target deployment and monitoring, fewer systematically address the risks introduced during the earlier design and development phases.
Recommendations
1. Introduce mandatory lifecycle risk assessments: Regulatory frameworks should require systematic risk evaluation at the start of AI development, especially during the problem definition and dataset selection phases. This would help detect potentially harmful applications before systems are built and deployed.
2. Strengthen dataset governance standards: Training datasets should be accompanied by documentation of their provenance, composition, and limitations. Standardised documentation frameworks can help regulators and auditors identify potential sources of bias or privacy risk.
3. Expand independent algorithmic auditing: Regular third-party audits can assess AI systems for fairness, robustness, and security vulnerabilities. Such auditing is especially important for high-risk systems used in employment, finance, or public services.
4. Integrate continuous monitoring requirements: AI systems should be monitored continuously after deployment to detect model drift, unintended consequences, or misuse. Reporting mechanisms can help regulators track emerging risks and adapt governance frameworks accordingly.
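Post-deployment monitoring of the kind recommended above often begins with a simple statistical check for drift. The sketch below compares a live score distribution against a training-time baseline using a threshold on the shift in means, a deliberately simplified stand-in for production tests such as population stability index or Kolmogorov-Smirnov:

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], live: list[float],
                threshold: float = 2.0) -> bool:
    """Flag drift when the live mean moves more than `threshold`
    baseline standard deviations away from the baseline mean.
    A simplified illustration, not a production drift test."""
    base_mu, base_sigma = mean(baseline), stdev(baseline)
    if base_sigma == 0:
        return mean(live) != base_mu
    return abs(mean(live) - base_mu) / base_sigma > threshold

baseline_scores = [0.48, 0.51, 0.50, 0.49, 0.52, 0.50]
stable_scores   = [0.50, 0.49, 0.51, 0.50, 0.52, 0.48]
shifted_scores  = [0.70, 0.72, 0.69, 0.71, 0.73, 0.70]

print(drift_alert(baseline_scores, stable_scores))   # False
print(drift_alert(baseline_scores, shifted_scores))  # True
```

Even a check this crude captures the regulatory idea: monitoring compares live behaviour against the system's documented baseline, and an alert triggers review rather than silent continued operation.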
Conclusion - The Need for Global AI Governance
Despite growing regulatory attention, global AI governance remains fragmented. Different jurisdictions adopt varying approaches to risk classification, oversight, and enforcement, leading to inconsistencies in safety and accountability standards. Given that AI systems are often developed, deployed, and used across borders, this lack of coordination allows risks to persist beyond national regulatory frameworks.
Addressing these challenges requires a shift towards greater international cooperation and lifecycle-based governance. Developing shared standards, improving cross-border regulatory alignment, and embedding oversight across all stages of AI development will be essential to ensuring that AI systems are safe, transparent, and accountable in a globally interconnected environment.
References
- OECD AI lifecycle
- OECD AI system lifecycle description
- OECD AI governance lifecycle framework
- EU AI Act overview
- EU AI Act risk categories
- UNESCO Recommendation on the Ethics of AI
- AI governance lifecycle analysis
- OECD AI policy tools database

Introduction
Regulatory agencies throughout Europe have stepped up their monitoring of digital communication platforms because of the increased use of artificial intelligence in the digital domain. Messaging services have evolved beyond simple communication; they now serve as gateways to AI services, business tools, and digital marketplaces. In light of this evolution, Italy's competition authority has taken action against Meta Platforms, ordering it to cease practices on WhatsApp that are deemed to restrict other companies' ability to sell AI-based chatbots. The action highlights concerns about gatekeeping power, market foreclosure, and innovation suppression. The proceeding also raises questions about how competition law applies when dominant digital platforms leverage their own ecosystems to promote their AI products at the expense of competitors.
Background of the Case
In December 2025, Italy’s competition authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), ordered Meta Platforms to suspend certain contractual terms governing WhatsApp. These terms allegedly prevented or restricted the operation of third-party AI chatbots on WhatsApp’s platform.
The decision was issued as an interim measure during an ongoing antitrust investigation. According to the AGCM, the disputed terms risked excluding competing AI chatbot providers from accessing a critical digital channel, thereby distorting competition and harming consumer choice.
Why WhatsApp Matters as a Digital Gateway
WhatsApp occupies a unique position in the European digital landscape. With hundreds of millions of users across the European Union, it is an integral part of the communication infrastructure linking individual consumers, companies, and service providers. AI chatbot developers depend heavily on WhatsApp because it lets them reach consumers directly in real time, which is critical to the success of their business offerings.
In the Italian regulator's view, a corporation that controls access to such a popular platform wields tremendous influence over innovation in that market, since it effectively operates as a gatekeeper between companies creating innovative services and the consumers using them. If Meta is permitted to exclude competing AI chatbot developers while promoting its own offerings, those developers are unlikely to be able to market and distribute their products at sufficient scale to remain competitive.
Alleged Abuse of Dominant Position
Under EU and national competition law, companies holding a dominant market position bear a special responsibility not to distort competition. The AGCM’s concern is that Meta may have abused WhatsApp’s dominance by:
- Restricting market access for rival AI chatbot providers
- Limiting technical development by preventing interoperability
- Strengthening Meta’s own AI ecosystem at the expense of competitors
Such conduct, if proven, could amount to an abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). Importantly, the authority emphasised that even contractual terms—rather than explicit bans—can have exclusionary effects when imposed by a dominant platform.
Meta’s Response and Infrastructure Argument
Meta has openly condemned the Italian ruling as "fundamentally flawed," arguing that third-party AI chatbots place a major economic burden on its infrastructure and risk degrading the performance, safety, and user experience of WhatsApp.
Although infrastructure protection is a legitimate concern, competition authorities typically examine whether such restrictions are appropriate and non-discriminatory. A principal legal question is whether Meta's restrictions were applied uniformly or selectively in favour of its own AI services. If applied asymmetrically, they may be viewed as anti-competitive rather than as legitimate technical safeguards.
Link to the EU’s Digital Markets Framework
The Italian case fits into the EU's wider effort to regulate large technology companies through prior (ex-ante) regulation under the Digital Markets Act (DMA). The DMA imposes obligations on designated gatekeepers to provide third parties with access to their platforms on fair, non-discriminatory terms, in order to maintain equitable access and interoperability.
While the Italian case was brought under national competition law, its philosophy is consistent with the DMA: dominant digital platforms should not use control over their core products and services to prevent other companies from innovating. EU national regulators are increasingly willing to act swiftly through interim measures rather than wait years for final decisions.
Implications for AI Developers and Platforms
The Italian order signals to developers of AI-based chatbots that regulators regard competitive access to messaging services as an important factor. It also warns large incumbents integrating AI into their established messaging platforms that they will not be shielded from competition law.
Additionally, the case reflects a growing consensus among regulatory agencies about the role of competition in the development of AI. If a handful of large companies are allowed to control both the infrastructure and the AI technology operating on top of it, the likely result is closed ecosystems that eliminate or greatly reduce technological diversity.
Conclusion
Italy's move against Meta highlights a significant intersection between competition law and artificial intelligence. By targeting WhatsApp's restrictive terms, the Italian antitrust authority has reinforced the principle that digital gatekeepers cannot use contractual means to block competitors' access. As AI becomes a larger part of everyday digital services, regulators will likely continue to increase their scrutiny of platform behaviour. The outcome of this investigation will affect not just Meta's AI strategy but also set a baseline for how future European regulators balance innovation, competition, and consumer choice in an increasingly AI-driven digital marketplace.
References
- https://www.reuters.com/sustainability/boards-policy-regulation/italy-watchdog-orders-meta-halt-whatsapp-terms-barring-rival-ai-chatbots-2025-12-24/
- https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/
- https://www.communicationstoday.co.in/italy-watchdog-orders-meta-to-halt-whatsapp-terms-barring-rival-ai-chatbots/
- https://www.techinasia.com/news/italy-watchdog-orders-meta-halt-whatsapp-terms-ai-bot

Introduction
The simplest way to acquire cryptocurrency is to purchase it through one of the many large digital marketplaces designed for this purpose. These exchanges charge a fee on each transaction, based on the quantity of cryptocurrency bought and the amount paid. Once obtained, digital currency is stored in a digital wallet and can be used in the metaverse or converted to purchase goods and services in the real world. Blockchain technology keeps each exchange secure and decentralised.
Its worth and application are often compared to gold: when many investors choose the asset, its value rises, and vice versa. The same dynamic applies to cryptocurrencies, which helps explain their popularity in recent years. The metaverse, for its part, is an online space where users can interact with one another through virtual personas, and wherever people interact, money and commerce inevitably follow.
Web3 technologies are converging with the metaverse, making cryptocurrencies usable in an environment where conventional currency is impractical. Non-Fungible Tokens (NFTs) can be used to track ownership rights in the metaverse, while cryptocurrencies are used to pay for content and incentivise consumers. This write-up addresses what metaverse crypto is, and delves into its advantages, disadvantages, and applications.
Convergence of Metaverse and Cryptocurrency
As the main form of digital money in the Metaverse, cryptocurrencies can be used to transact and trade in the digital realm. The term "metaverse" describes a computer-generated simulation of reality where users can communicate with one another in real time. Within the Metaverse, cryptocurrency can enable the acquisition and exchange of virtual products, virtual possessions, and digital creative works.
Many digital currencies are built on blockchain technology, which can offer an accessible and safe way to confirm payments and manage digital currencies in the Metaverse. By rewarding users with tokens or other digital currencies for their accomplishments or contributions, cryptocurrency can also encourage engagement and participation in the Metaverse.
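The claim that blockchain offers a safe way to confirm payments rests on a simple mechanism: each block stores a hash of its predecessor, so altering any past transaction invalidates every later block. A minimal toy sketch of that chaining follows; the block format is invented for illustration and does not match any real cryptocurrency's structure:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically (keys sorted).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions: list[str]) -> list[dict]:
    """Build a toy chain where each block records its predecessor's hash."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain: list[dict]) -> bool:
    """Recompute every link; editing any block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob 5", "bob->carol 2"])
print(is_valid(chain))             # True for an untampered chain
chain[0]["tx"] = "alice->bob 500"  # tamper with transaction history
print(is_valid(chain))             # now False
```

This is why blockchain-based payments in a metaverse can be verified by anyone: validity is a property of the chain itself, not of a trusted administrator.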
In the Metaverse, cryptocurrency can also facilitate interoperability, enabling users to move assets and their value between different virtual environments and platforms.
The idea of decentralisation in the Metaverse, where participants have greater ownership and control over their virtual worlds, is consistent with the decentralised nature of cryptocurrencies.
Advantages of Metaverse Cryptocurrency
There are countless opportunities for creativity and discovery in the metaverse. Because blockchains are publicly accessible, immutable, and cryptographically secured, metaverse-focused cryptocurrencies offer greater safety and adaptability than cash. Crypto will be crucial to the evolution of the metaverse as it keeps growing and more individuals show interest in using it. Here are a few of the factors influencing the growth of this new virtual environment.
Safety
Your cryptocurrency wallet is closely linked to your personal information, progress, and metaverse possessions, so securing it is essential. If your wallet is compromised, especially when your account credentials are weak, publicly exposed, or tied to your real-world identity, cybercriminals may try to steal your money or personal data.
Adaptability
Digital assets can be accessed and exchanged worldwide because cryptocurrencies transcend national borders. By using a native cryptocurrency, many metaverse platforms streamline transactions and eliminate the need for frequent conversions between digital or fiat currencies. Smart contracts are a further advantage for metaverse cryptocurrencies: when users transact within the network, these self-executing applications remove the need for administrative intermediaries.
Objectivity
By recording interactions in a publicly accessible distributed ledger, blockchain improves accountability. Because transactions are public, it is harder for dishonest actors to inflate the cost of digital goods and land. Metaverse cryptocurrencies are also frequently used to govern project changes, and the outcomes of these governance votes are published via smart contracts.
NFT, Virtual worlds, and Digital currencies
NFTs are another way cryptocurrency is used for metaverse transactions. They are unique digital tokens that can carry significant value.
To display a digital artwork in the metaverse, a creator converts it into a virtual object or places it in a virtual world. Artists produce one-of-a-kind or serialised pieces, each minted as an NFT that can be purchased with cryptocurrency.
Applications of Metaverse Cryptocurrency
Web 2.0 metaverse projects use fiat money or proprietary virtual currencies like Robux to pay for goods, real estate, and services. While fiat can fund purchases and project development, it lacks the flexibility of cryptocurrencies with smart-contract capabilities. In-network virtual currencies serve all the functions of fiat money, and users can additionally stake them to participate in the governance of decentralised metaverses.
Banking operations
Digital currency can be borrowed to purchase metaverse land. Banks that have already made inroads into the metaverse include HSBC and JPMorgan, both of which own virtual real estate. "We are making our foray into the metaverse, allowing us to create innovative brand experiences for both new and existing customers," said Suresh Balaji, chief marketing officer for HSBC in Asia-Pacific.
Purchasing
Online commerce is an increasingly important aspect of the metaverse. Users can interact with real-world brands, tour simulated malls, and try on virtual apparel for their characters. Adidas, for instance, debuted an NFT line in 2021 that included customisable wearables for The Sandbox. NFT buyers crossed the line separating the virtual universe from the actual world by obtaining the tangible goods associated with their NFTs.
Governance
Metaverse projects are frequently governed through cryptocurrency. Decentraland, a well-known Ethereum-based metaverse with virtual reality components, allows users who hold its tokens to submit and vote on proposals.
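Token-gated governance of this kind can be sketched as a token-weighted tally, where voting power scales with holdings. The names, quorum, and threshold below are invented for illustration and are not Decentraland's actual mechanism:

```python
def tally(votes: dict[str, tuple[int, bool]], quorum: int = 100) -> str:
    """Token-weighted vote: each entry maps a voter to
    (token_balance, supports_proposal). Illustrative only."""
    total = sum(balance for balance, _ in votes.values())
    if total < quorum:
        return "no quorum"  # not enough tokens participated
    yes = sum(balance for balance, supports in votes.values() if supports)
    # Pass when tokens in favour form a strict majority of tokens cast.
    return "passed" if yes * 2 > total else "rejected"

votes = {
    "alice": (80, True),   # 80 tokens in favour
    "bob":   (50, False),  # 50 tokens against
    "carol": (30, True),   # 30 tokens in favour
}
print(tally(votes))  # 110 of 160 tokens in favour -> "passed"
```

Note the design consequence this makes visible: influence is proportional to holdings, so large token holders dominate outcomes, which is a recurring criticism of token-based governance.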
Conclusion
The combination of the virtual world and cryptocurrencies creates novel opportunities for trade, innovation, and communication. The benefits of the blockchain system include increased transparency, safety, and flexibility. By enabling exclusive ownership of digital assets, NFTs deepen metaverse immersion further. In the metaverse, cryptocurrencies are used in banking, shopping, and governance, forming a user-driven, autonomous digital world. Together, cryptocurrencies and the metaverse will reshape how we engage in online activities, creating a dynamic environment that presents both opportunities and difficulties.
References
- https://www.telefonica.com/en/communication-room/blog/metaverse-and-cryptocurrencies-what-is-their-relationship/
- https://hedera.com/learning/metaverse/metaverse-crypto
- https://www.linkedin.com/pulse/unleashing-power-connection-between-cryptocurrency-ai-amit-chandra/