#FactCheck - Debunking the AI-Generated Image of an Alleged Israeli Army Dog Attack
Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical anomalies and the watermark ("IN.VISUALART"). Although several news channels have reported on the real incident, the viral image was not taken during the actual event. This emphasizes the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analyzed the image and found discrepancies commonly seen in AI-generated images. The watermark “IN.VISUALART” is clearly visible, and the elderly woman’s hand looks anatomically odd.

We then ran the image through two AI-image detection tools, TrueMedia and the Content at Scale AI detector. Both found potential AI manipulation in the image.



We then ran a keyword search for news related to the viral photo. Although we found relevant news reports, none of them traced the image to a credible source.

Since the photograph shared around the internet has no credible source, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news channels, the incident itself did occur, but the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading
Related Blogs

What Is a VPN and its Significance
A Virtual Private Network (VPN) creates a secure and reliable network connection between a device and the internet. It hides your IP address by routing your traffic through the VPN provider’s servers. For example, if you connect to a US server, you appear to be browsing from the US, even if you’re in India. It also encrypts the data being transferred in real time so that it cannot be deciphered by third parties such as ad companies, the government, or cyber criminals.
All online activity leaves a digital footprint that is tracked for data collection and surveillance, increasingly jeopardizing user privacy. VPNs are thus a powerful tool for enhancing the privacy and security of users, businesses, governments and critical sectors. By affording a degree of anonymity, they also help protect users on public Wi-Fi networks (for example, at airports and hotels), journalists, activists and whistleblowers, remote workers and businesses, citizens in high-surveillance states, and researchers.
What VPNs Do and Don’t
- What VPNs Can Do:
- Mask your IP address to enhance privacy.
- Encrypt data to protect against hackers, especially on public Wi-Fi.
- Bypass geo-restrictions (e.g., access streaming content blocked in India).
- What VPNs Cannot Do:
- Make you completely anonymous and protect your identity (websites can still track you via cookies, browser fingerprinting, etc.).
- Protect against malware or phishing.
- Prevent law enforcement from tracing you if they have access to VPN logs.
- Note: free VPNs often log user activity and may even share those logs with third parties.
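To make the "encrypt data in transit" idea concrete, the sketch below shows, at a purely conceptual level, why an eavesdropper who intercepts encrypted traffic sees only gibberish without the key. This is a toy XOR scheme for illustration only; real VPN protocols such as WireGuard or OpenVPN use authenticated ciphers like ChaCha20-Poly1305 or AES-GCM, and the demo key here is a hypothetical stand-in for a securely negotiated session key.

```python
def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher': XOR each byte of data with the repeating key.
    Illustrative only -- NOT secure, unlike the ciphers real VPNs use."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical shared secret; real VPNs negotiate keys via a secure handshake.
key = b"0123456789abcdef0123456789abcdef"

plaintext = b"GET /private-page HTTP/1.1"
ciphertext = xor_stream(plaintext, key)   # what an eavesdropper on the network sees
recovered = xor_stream(ciphertext, key)   # what the VPN endpoint recovers with the key

print(ciphertext != plaintext)  # True: the traffic is unreadable in transit
print(recovered == plaintext)   # True: only the key holder can decrypt it
```

The same symmetry (encrypting twice with the same key restores the original) is why the sketch needs no separate decrypt function; production ciphers are far more elaborate but serve the same end: traffic between you and the VPN server is opaque to anyone in between.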
VPNs in the Context of India’s Privacy Policy Landscape
In April 2022, CERT-In (the Indian Computer Emergency Response Team) released Directions under Section 70B(6) of the Information Technology (“IT”) Act, 2000, mandating VPN service providers to store customer data such as “validated names of subscribers/customers hiring the services, period of hire including dates, IPs allotted to / being used by the members, email address and IP address and time stamp used at the time of registration/onboarding, the purpose for hiring services, validated address and contact numbers, and the ownership pattern of the subscribers/customers hiring services” collected as part of their KYC (Know Your Customer) requirements, for a period of five years, even after the subscription has been cancelled. While this directive was issued to aid cybersecurity investigations, it undermines the core purpose of VPNs: anonymity and privacy. It also gave operators very little time to comply.
Following this, operators such as NordVPN, ExpressVPN, ProtonVPN, and others pulled their physical servers out of India and now use virtual servers hosted abroad (e.g., in Singapore) with Indian IP addresses. While the CERT-In Directions have extra-territorial applicability, virtual servers are able to bypass them since they physically operate from a foreign jurisdiction. This means the operators are effectively not liable to provide user information to Indian investigative agencies, defeating the whole purpose of the directive. To counter this, the Indian government could potentially block non-compliant VPN services in the future. Further, there are concerns about overreach, since the Directions are unclear about how long CERT-In can retain the data it acquires from VPN operators, how that data will be used and safeguarded, and the procedure for holding VPN operators responsible for compliance.
Conclusion: The Need for a Privacy-Conscious Framework
The CERT-In Directions reflect a governance model which, by prioritizing security over privacy, compromises on safeguards like independent oversight or judicial review that could balance the two. The policy design creates a lose-lose situation: virtual VPN services remain available, while the government loses oversight. If anything, this can make it harder for the government to track suspicious activity. It also violates the principle of proportionality established in the landmark privacy judgment Puttaswamy v. Union of India (II) by giving government agencies the power to collect excessive VPN data on any user. These issues underscore the need for a national-level, privacy-conscious cybersecurity framework that informs other policies on data protection and cybercrime investigations. In the meantime, VPN users are advised to choose reputable providers, ensure strong encryption, and follow best practices to maintain online privacy and security.
References
- https://www.kaspersky.com/resource-center/definitions/what-is-a-vpn
- https://internetfreedom.in/top-secret-one-year-on-cert-in-refuses-to-reveal-information-about-compliance-notices-issued-under-its-2022-directions-on-cybersecurity/
- https://www.wired.com/story/vpn-firms-flee-india-data-collection-law/

Introduction:
The G7 Summit is an international forum comprising France, the United States, the United Kingdom, Germany, Japan, Italy, Canada and the European Union (EU). The annual G7 meeting was hosted by Japan this year, in May 2023, in Hiroshima. Artificial Intelligence (AI) was the major theme of the summit. Key takeaways highlight that leaders jointly focused on escalating the adoption of AI for beneficial use cases across the economy and government, and on improving governance structures to mitigate the potential risks of AI.
Need for fair and responsible use of AI:
The G7 recognises that its members need to work together to ensure the responsible and fair use of AI and to help establish technical standards for it. Members agreed to adopt an open and enabling environment for the development of AI technologies. They also emphasized that AI regulations should be based on democratic values. The summit called for the responsible use of AI; ministers discussed the risks involved in AI programs like ChatGPT and came up with an action plan for promoting the responsible use of AI, with human beings leading the efforts.
Further, ministers from the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK, the US, and the EU) met virtually on 7 September 2023 and committed to creating ‘international guiding principles applicable for all AI actors’ and a code of conduct for organisations developing ‘advanced’ AI systems.
What is the Hiroshima AI Process (HAP)?
The Hiroshima AI Process (HAP) is the G7’s effort to determine a way forward on regulating AI and to establish trustworthy AI technical standards at the international level. The G7 agreed to create a ministerial forum to promote the fair use of AI. The HAP provides a forum for international discussions on inclusive AI governance and interoperability, working towards a common vision and goal of trustworthy AI at the global level.
The HAP will be operating in close connection with organisations including the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
Initiated at the annual G7 Summit held in Hiroshima, Japan, the HAP is a significant step towards regulating AI and is expected to conclude by December 2023.
G7 leaders emphasized fostering an environment where trustworthy AI systems are designed, developed and deployed for the common good worldwide. They advocated for international standards and interoperable tools for trustworthy AI that enable innovation by creating a comprehensive policy framework, including overall guiding principles for all AI actors in the AI ecosystem.
Stressing upon fair use of advanced technologies:
The G7 leaders also discussed the impact and misuse of generative AI, stressing the risks of misinformation and disinformation posed by generative AI models, which are capable of creating synthetic content such as deepfakes. In particular, they noted that the next generation of interactive generative media will leverage targeted influence content that is highly personalized, localized, and conversational.
In the digital landscape, technologies such as generative Artificial Intelligence (AI), deepfakes and machine learning are advancing rapidly. Such technologies offer convenience to users in performing several tasks and are capable of assisting individuals and business entities. Since these technologies are easily accessible, cyber-criminals also leverage AI tools for malicious activities; regulatory mechanisms at the global level would therefore help ensure and advocate for the ethical, reasonable and fair use of such advanced technologies.
Conclusion:
The G7 summit held in May 2023 focused on advancing international discussions on inclusive AI governance and interoperability to achieve a common vision and goal of trustworthy AI, in line with shared democratic values. AI governance has become a global issue, and countries around the world are coming forward to advocate for the responsible and fair use of AI and to influence global AI governance and standards. It is important to establish a regulatory framework that defines AI capabilities, identifies areas prone to misuse, and sets forth reasonable technical standards while also fostering innovation, overall prioritizing data privacy, integrity, and security amid the evolving nature of advanced technologies.
References:
- https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
- https://www.g7hiroshima.go.jp/en/summit/about/
- https://www.drishtiias.com/daily-updates/daily-news-analysis/the-hiroshima-ai-process-for-global-ai-governance
- https://www.businesstoday.in/technology/news/story/hiroshima-ai-process-g7-calls-for-adoption-of-international-technical-standards-for-ai-382121-2023-05-20
Executive Summary:
A claim circulating widely on social media holds that a 3D model of Chanakya, supposedly made by ‘Magadha DS University’, matches MS Dhoni. However, fact-checking reveals that it is a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and no institution named Magadha DS University appears to exist. Khatri uploaded the model to ArtStation, calling it an ‘MS Dhoni likeness study’.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, viewers have noticed a striking similarity to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image. It led us to the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there, under the title “MS Dhoni likeness study”, along with some of his other character models.



Subsequently, we searched for the university mentioned, ‘Magadha DS University’, but found no university of that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. We searched the internet for any such model made by Magadh University but found nothing. We then analysed the freelance character artist’s profile and found that he runs a dedicated Instagram channel, where he posted a detailed video of the creative process behind the MS Dhoni character model.

We conclude that the viral image is not a reconstruction of the Indian philosopher Chanakya but a 3D model of cricketer MS Dhoni, created by the artist Ankur Khatri, not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a ‘Magadha DS University’ is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists. A similarly named Magadh University is located in Bodh Gaya, Bihar, but we found no evidence linking it to the model’s creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.