#FactCheck - MS Dhoni Sculpture Falsely Portrayed as Chanakya 3D Recreation
Executive Summary:
A claim circulating widely on social media holds that a 3D model of Chanakya, supposedly made by a "Magadha DS University", matches the face of MS Dhoni. However, fact-checking reveals that it is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution called Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, titling it an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, viewers have noticed a striking resemblance to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there, titled by the artist as an “MS Dhoni likeness study”. We also found several other character models in his portfolio.



Subsequently, we searched for the university named in the claim, Magadha DS University, but found no university with that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. We searched the internet for any such model made by Magadh University and found nothing. We then analysed the freelance character artist's profile and found a dedicated Instagram channel where he posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a likeness of cricketer MS Dhoni, created by the artist Ankur Khatri and not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a "Magadha DS University" is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. A similarly named institution, Magadh University, is located in Bodh Gaya, Bihar, but we found no evidence linking it to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.
Related Blogs

Along with the loss of important files and information, data loss can result in downtime and lost revenue. Unexpected occurrences, including natural catastrophes, cyber-attacks, hardware malfunctions, and human mistakes, can result in the loss of crucial data. Recovery from these without a backup plan may be difficult, if not impossible.
The fact is that cyberattacks are the largest threat to the continuity of your organization today. Because of this, disaster recovery planning should be approached from a data security standpoint; otherwise, you risk leaving your vital systems exposed to a cyberattack. Cybercrime has become more frequent and aggressive over the past few years. In the past, major organizations and global businesses were the main targets of these attacks. Nowadays, businesses of all sizes need to be cautious of digital risks.
Many firms might suffer a financial hit even from a brief interruption to regular business operations. But imagine if a situation forced a company to close for a few days or perhaps weeks! The consequences would be disastrous.
One must have a comprehensive disaster recovery plan in place that is connected with the cybersecurity strategy, given the growing danger of cybercrime.
Let’s look at why having a solid data security plan and a dependable backup solution are essential for safeguarding a company from external digital threats.
1. Apply layered approaches
Specifically, use precautionary measures like antivirus software and firewalls, and implement strict access-control procedures to restrict who may access the network.
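As a concrete illustration of layering, here is a minimal Python sketch, purely hypothetical (the subnet allowlist and role table are invented for the example), in which a request must pass both a network-layer check and an application-layer permission check before it is allowed:

```python
# Illustrative only: a two-layer gate combining a network-level IP
# allowlist with an application-level role check. ALLOWED_SUBNETS and
# ROLE_PERMISSIONS are hypothetical example values.
import ipaddress

ALLOWED_SUBNETS = [ipaddress.ip_network("10.0.0.0/8"),
                   ipaddress.ip_network("192.168.1.0/24")]
ROLE_PERMISSIONS = {"admin": {"read", "write"}, "analyst": {"read"}}

def network_layer_allows(client_ip: str) -> bool:
    """Layer 1: only connections from known internal subnets pass."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_SUBNETS)

def access_layer_allows(role: str, action: str) -> bool:
    """Layer 2: the authenticated role must hold the requested permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

def request_allowed(client_ip: str, role: str, action: str) -> bool:
    # Both layers must pass; failure at either layer denies the request.
    return network_layer_allows(client_ip) and access_layer_allows(role, action)

print(request_allowed("192.168.1.42", "analyst", "read"))  # True
print(request_allowed("203.0.113.5", "admin", "write"))    # False: outside subnets
```

Failing either layer denies the request, so a gap in one control does not by itself expose the network.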
2. Understand the threat situation
If someone is unaware of the difficulties they should be prepared for, how can they possibly expect to develop a successful cybersecurity strategy? The simple answer is that they can't.
Without a solid understanding of the threat landscape, developing the plan will require far too much guesswork. With that approach, one can allocate resources poorly or miss a threat entirely.
Because of this, one should educate themselves on the many cyber risks that businesses now must contend with.
3. Adopt a proactive security stance
Every effective cybersecurity plan includes a number of reactive processes that aren’t activated until an attack occurs. Although these reactive strategies will always be useful in cybersecurity, the main focus of your plan should be proactiveness.
There are several ways to be proactive, but the most crucial one is to analyze your network for possible threats regularly. Having a SaaS Security Posture Management (SSPM) solution in place is beneficial for SaaS applications in particular.
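One simple form of regular self-assessment is checking your own hosts for unexpectedly open ports. The sketch below uses hypothetical host addresses and an invented expected-ports set to show the idea; run checks like this only against infrastructure you are authorized to scan:

```python
# Illustrative sketch of proactive, regular scanning: flag any port that
# accepts a TCP connection but is not on the expected list. HOSTS and
# EXPECTED_OPEN are hypothetical example values.
import socket

HOSTS = ["10.0.0.5", "10.0.0.6"]   # hypothetical internal hosts
EXPECTED_OPEN = {22, 443}          # ports we expect to be open

def open_ports(host: str, ports=range(1, 1025), timeout=0.5):
    """Return the set of TCP ports on `host` that accept a connection."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.add(port)
    return found

for host in HOSTS:
    unexpected = open_ports(host) - EXPECTED_OPEN
    if unexpected:
        print(f"ALERT {host}: unexpected open ports {sorted(unexpected)}")
```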
A preventive approach can lessen the effects of a data breach and aid in keeping data away from attackers.
4. Evaluate your ability to respond to incidents
Test your cybersecurity disaster recovery plan’s effectiveness by conducting exercises and evaluating the outcomes. Track pertinent data during the exercise to see if your plan is working as expected.
Meet with your team after each drill to evaluate what went well and what didn't. This approach enables you to continuously strengthen your plan and address weaknesses. The procedure can, and should, be repeated regularly.
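One simple way to "track pertinent data" during such an exercise is to record the timing of each drill phase and compare it against target objectives. The phase names and targets below are hypothetical examples, not prescribed values:

```python
# Illustrative sketch: compare recovery-drill timings against targets.
# All phase names and numbers are invented for the example.
DRILL_TARGETS_MIN = {"detection": 15, "containment": 60, "restore": 240}
drill_results_min = {"detection": 12, "containment": 75, "restore": 180}

for phase, target in DRILL_TARGETS_MIN.items():
    actual = drill_results_min[phase]
    status = "OK" if actual <= target else "MISSED"
    print(f"{phase:12s} target={target:4d} min  actual={actual:4d} min  {status}")
```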
You must include cybersecurity protections in your entire disaster recovery plan if you want to make sure that your business is resilient in the face of cyber threats. You may strengthen data security and recover from data loss and corruption by putting in place a plan that focuses on both the essential components of proactive data protection and automated data backup and recovery.
For instance, rather than storing each user's data on a single machine or set of machines, Google distributes all data across several computers in various locations. The data is chunked and duplicated across multiple systems to prevent a single point of failure. As an additional security safeguard, these data chunks are given random names that are unreadable to the human eye.[1]
Backup and recovery refers to the process of creating and storing copies of data that can be used to safeguard an organization against data loss. In the case of a primary data failure, the backup's goal is to provide a duplicate of the data that can be restored.
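To make the chunk-and-replicate idea above concrete, here is a minimal Python sketch under stated assumptions: the chunk size, replica directories, and manifest handling are invented for illustration, not a description of how Google actually implements it:

```python
# Illustrative sketch: split a file into fixed-size chunks, give each
# chunk a random opaque name, and copy every chunk to several backup
# locations. Paths and chunk size are hypothetical.
import secrets
from pathlib import Path

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (illustrative)
REPLICA_DIRS = [Path("backup_a"), Path("backup_b"), Path("backup_c")]

def backup_file(source: Path) -> list[str]:
    """Chunk `source`, replicate each chunk to every replica directory,
    and return the ordered list of random chunk names (the manifest)."""
    for d in REPLICA_DIRS:
        d.mkdir(exist_ok=True)
    manifest = []
    with source.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            name = secrets.token_hex(16)   # opaque, human-unreadable name
            manifest.append(name)
            for d in REPLICA_DIRS:         # replicas avoid a single point of failure
                (d / name).write_bytes(chunk)
    return manifest

def restore_file(manifest: list[str], dest: Path) -> None:
    """Rebuild the file from any replica that still holds each chunk."""
    with dest.open("wb") as out:
        for name in manifest:
            for d in REPLICA_DIRS:
                part = d / name
                if part.exists():
                    out.write(part.read_bytes())
                    break
```

Because every chunk exists in all three directories, losing any one location still leaves enough copies to restore the file.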
5. Adopt zero-trust principles
Zero trust is a new label for an old idea: don't presume that anything or anybody can be trusted. Check each device, user, service, or other entity's trustworthiness before granting it access, then periodically recheck that trustworthiness while access is allowed, to make sure the entity hasn't been compromised. Reduce the consequences of any breach of trust by granting each entity access to only the resources it requires. Applying zero-trust principles can reduce both the number of incidents and the severity of those that do occur.
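A minimal sketch of that verify-then-recheck loop might look like the following; the session fields, posture checks, and five-minute recheck interval are all hypothetical choices, not a prescribed design:

```python
# Illustrative zero-trust sketch: every request is evaluated on its own,
# access is scoped to the single resource requested, and trust is
# re-checked on an interval rather than granted permanently.
import time
from dataclasses import dataclass

RECHECK_INTERVAL_S = 300  # hypothetical: re-verify trust every 5 minutes

@dataclass
class Session:
    user: str
    device_healthy: bool       # e.g., patched OS, disk encryption on
    mfa_passed: bool
    allowed_resources: set
    last_verified: float = 0.0

def verify(session: Session) -> bool:
    """Re-establish trust: device posture plus user authentication."""
    session.last_verified = time.time()
    return session.device_healthy and session.mfa_passed

def authorize(session: Session, resource: str) -> bool:
    # Never assume prior trust: re-verify if the last check is stale.
    if time.time() - session.last_verified > RECHECK_INTERVAL_S:
        if not verify(session):
            return False
    # Least privilege: only explicitly granted resources are reachable.
    return resource in session.allowed_resources
```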
6. Understand the dangers posed by supply chains
A nation-state can effectively penetrate a single business, and that business may provide thousands of other businesses with tainted technological goods or services. These businesses will then become compromised, which might disclose their own customers’ data to the original attackers or result in compromised services being offered to customers. Millions of businesses and people might be harmed as a result of what began with one infiltrating corporation.
In conclusion, a defense-in-depth approach to cybersecurity won’t vanish. Organizations may never be able to totally eliminate the danger of a cyberattack, but having a variety of technologies and procedures in place can assist in guaranteeing that the risks are kept to a minimum.
References:

Introduction
The Telecom Regulatory Authority of India (TRAI), on 20 August 2024, issued directives requiring Access Service Providers to adhere to specific guidelines to protect consumer interests and prevent fraudulent activities. These steps advance TRAI's efforts to promote a secure messaging ecosystem.
Key Highlights of the TRAI’s Directives
- For improved monitoring and control, TRAI has directed that Access Service Providers move telemarketing calls beginning with the 140 series to an online Distributed Ledger Technology (DLT) platform by September 30, 2024, at the latest.
- All Access Service Providers are forbidden from delivering messages that contain URLs, APKs, OTT links, or callback numbers that the sender has not whitelisted; this rule takes effect from September 1, 2024 (a minimal filtering sketch follows this list).
- In an effort to improve message traceability, TRAI has made it mandatory for all messages, starting on November 1, 2024, to include a traceable trail from sender to receiver. Any message with an undefined or mismatched telemarketer chain will be rejected.
- To discourage the exploitation or misuse of templates for promotional content, TRAI has introduced punitive actions in case of non-compliance. Content Templates registered in the wrong category will be banned, and subsequent offences will result in a one-month suspension of the Sender's services.
- To assure compliance with rules, all Headers and Content Templates registered on DLT must follow the requirements. Furthermore, a single Content Template cannot be connected to numerous headers.
- If any misuse of headers or content templates by a sender is discovered, TRAI has directed an immediate suspension of traffic from all of that sender's headers and content templates pending verification. Such suspension can be revoked only after the Sender has taken legal action against such misuse. Furthermore, Delivery-Telemarketers must identify and disclose companies guilty of such misuse within two business days, or else risk comparable repercussions.
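To illustrate the whitelisting rule referenced above, here is a hedged Python sketch of how an operator-side filter might reject messages carrying unregistered URLs. The sender IDs, whitelist contents, and URL pattern are assumptions made for the example, not mechanics specified by TRAI:

```python
# Illustrative sketch: reject any message containing a URL that its
# sender has not whitelisted. All sender IDs and URLs are hypothetical.
import re

SENDER_WHITELIST = {
    "BANKXX": {"https://bank.example.com/offers"},
}

URL_PATTERN = re.compile(r"https?://\S+")

def message_allowed(sender: str, text: str) -> bool:
    """Allow a message only if every URL in it is whitelisted for the sender."""
    urls = URL_PATTERN.findall(text)
    return all(url in SENDER_WHITELIST.get(sender, set()) for url in urls)

print(message_allowed("BANKXX", "Offer: https://bank.example.com/offers"))  # True
print(message_allowed("BANKXX", "Click https://phish.example.net/win"))     # False
```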
CyberPeace Policy Outlook
TRAI’s measures are aimed at curbing the misuse of messaging services, including spam. TRAI has mandated that headers and content templates follow defined requirements, and punitive actions such as blacklisting and service suspension are introduced for non-compliance. These measures should help curb the rising rate of scams such as phishing, spamming, and other fraudulent activities, ultimately protecting consumers' interests and establishing a truly cyber-safe messaging ecosystem.
The official text of the TRAI directives is available on TRAI's official website (see the references below).
References
- https://www.trai.gov.in/sites/default/files/Direction_20082024.pdf
- https://www.trai.gov.in/sites/default/files/PR_No.53of2024.pdf
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2046872
- https://legal.economictimes.indiatimes.com/news/regulators/trai-issues-directives-to-access-providers-to-curb-misuse-fraud-through-messaging/112669368

In the rich history of humanity, the advent of artificial intelligence (AI) has added a new, delicate strand. This promising technological advancement has the potential to either enrich the nest of our society or unravel it entirely. The latest straw in this complex nest is generative AI, a frontier teeming with both potential and peril. It is a realm where the ethereal concepts of cyber peace and resilience are not just theoretical constructs but tangible necessities.
The spectre of generative AI looms large over the digital landscape, casting a long shadow on the sanctity of data privacy and the integrity of political processes. The seeds of this threat were sown in the fertile soil of the Cambridge Analytica scandal of 2018, a watershed moment that unveiled the extent to which personal data could be harvested and utilized to influence electoral outcomes. However, despite the indignation, the scandal resulted in only meagre alterations to the modus operandi of digital platforms.
Fast forward to the present day, and the spectre has only grown more ominous. A recent report by Human Rights Watch has shed light on the continued exploitation of data-driven campaigning in Hungary's re-election of Viktor Orbán. The report paints a chilling picture of political parties leveraging voter databases for targeted social media advertising, with the ruling Fidesz party even resorting to the unethical use of public service data to bolster its voter database.
The Looming Threat of Disinformation
As we stand on the precipice of 2024, a year that will witness over 50 countries holding elections, the advancements in generative AI could exponentially amplify the ability of political campaigns to manipulate electoral outcomes. This is particularly concerning in countries where information disparities are stark, providing fertile ground for the seeds of disinformation to take root and flourish.
The media, the traditional watchdog of democracy, has already begun to sound the alarm about the potential threats posed by deepfakes and manipulative content in the upcoming elections. The limited use of generative AI in disinformation campaigns has raised concerns about the enforcement of policies against generating targeted political materials, such as those designed to sway specific demographic groups towards a particular candidate.
Yet, while the threat of bad actors using AI to generate and disseminate disinformation is real and present, there is another dimension that has largely remained unexplored: the intimate interactions with chatbots. These digital interlocutors, when armed with advanced generative AI, have the potential to manipulate individuals without any intermediaries. The more data they have about a person, the better they can tailor their manipulations.
Root of the Cause
To fully grasp the potential risks, we must journey back 30 years to the birth of online banner ads. The success of the first-ever banner ad for AT&T, which boasted an astounding 44% click rate, birthed a new era of digital advertising. This was followed by the advent of mobile advertising in the early 2000s. Since then, companies have been engaged in a perpetual quest to harness technology for manipulation, blurring the lines between commercial and political advertising in cyberspace.
Regrettably, the safeguards currently in place are woefully inadequate to prevent the rise of manipulative chatbots. Consider the case of Snapchat's My AI generative chatbot, which ostensibly assists users with trivia questions and gift suggestions. Unbeknownst to most users, their interactions with the chatbot are algorithmically harvested for targeted advertising. While this may not seem harmful in its current form, the profit motive could drive it towards more manipulative purposes.
If companies deploying chatbots like My AI face pressure to increase profitability, they may be tempted to subtly steer conversations to extract more user information, providing more fuel for advertising and higher earnings. This kind of nudging is not clearly illegal in the U.S. or the EU, even after the AI Act comes into effect. The market size of AI in India is projected to touch US$4.11bn in 2023.
Taking this further, chatbots may be inclined to guide users towards purchasing specific products or even influencing significant life decisions, such as religious conversions or voting choices. The legal boundaries here remain unclear, especially when manipulation is not detectable by the user.
The Crucial Dos and Don'ts
It is crucial to set rules and safeguards in order to manage the possible threats related to manipulative chatbots in the context of the general election in 2024.
First and foremost, candour and transparency are essential. Chatbots, particularly when employed for political or electoral matters, ought to make clear to users that they are automated and what purpose they serve. Such transparency ensures that people know they are interacting with an automated system.
Second, getting user consent is crucial. Before collecting user data for any reason, including advertising or political profiling, users should be asked for their informed consent. Giving consumers easy ways to opt-in and opt-out gives them control over their data.
Furthermore, moral use is essential. It's crucial to create an ethics code for chatbot interactions that forbids manipulation, disseminating false information, and trying to sway users' political opinions. This guarantees that chatbots follow moral guidelines.
In order to preserve transparency and accountability, independent audits need to be carried out. Users might feel more confident knowing that chatbot behavior and data collecting procedures are regularly audited by impartial third parties to ensure compliance with legal and ethical norms.
Important "don'ts" to take into account. Coercion and manipulation ought to be outlawed completely. Chatbots should refrain from using misleading or manipulative approaches to sway users' political opinions or religious convictions.
Unlawful data collection is another hazard to watch out for. Businesses must obtain consumers' express agreement before collecting personal information, and they must not sell or share this information for political purposes.
At all costs, one should steer clear of fake identities. Impersonating people or political figures is not something chatbots should do because it can result in manipulation and false information.
It is essential to be impartial. Bots shouldn't advocate for or take part in political activities that give preference to one political party over another. In encounters, impartiality and equity are crucial.
Finally, one should refrain from using invasive advertising techniques. Chatbots should ensure that advertising tactics comply with legal norms by refraining from displaying political advertisements or messaging without explicit user agreement.
Present Scenario
As we approach the critical 2024 elections and generative AI tools proliferate faster than regulatory measures can keep pace, companies must take an active role in building user trust, transparency, and accountability. This includes comprehensive disclosure about a chatbot's programmed business goals in conversations, ensuring users are fully aware of the chatbot's intended purposes.
To address the regulatory gap, stronger laws are needed. Both the EU AI Act and analogous laws across jurisdictions should be expanded to address the potential for manipulation in various forms. This effort should be driven by public demand, as the interests of lawmakers have been influenced by intensive Big Tech lobbying campaigns.
At present, India doesn't have any specific laws pertaining to AI regulation. The Ministry of Electronics and Information Technology (MeitY) is the executive body responsible for AI strategies and is working towards a policy framework for AI. NITI Aayog has presented seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.
Conclusion
We are at a pivotal juncture in history. As generative AI gains more power, we must proactively establish effective strategies to protect our privacy, rights and democracy. The public's waning confidence in Big Tech and the lessons learned from the techlash underscore the need for stronger regulations that hold tech companies accountable. Let's ensure that the power of generative AI is harnessed for the betterment of society and not exploited for manipulation.
References
McCallum, B. S. (2022, December 23). Meta settles Cambridge Analytica scandal case for $725m. BBC News. https://www.bbc.com/news/technology-64075067
Hungary: Data misused for political campaigns. (2022, December 1). Human Rights Watch. https://www.hrw.org/news/2022/12/01/hungary-data-misused-political-campaigns
Statista. (n.d.). Artificial Intelligence - India | Statista Market forecast. https://www.statista.com/outlook/tmo/artificial-intelligence/india