#FactCheck - AI-generated image of Virat Kohli falsely claimed to be a child's sand art
Executive Summary:
A picture of a boy creating sand art of Indian cricketer Virat Kohli is circulating on social media, but the claim attached to it is false. The image is not a real sand sculpture: analysis with AI-detection tools such as 'Hive' and 'Content at Scale AI Detection' confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without realising that they are computer-generated synthetic media.

Claims:
A collage of pictures shows a young boy creating sand art of Indian cricketer Virat Kohli.




Fact Check:
When we examined the posts, we found anomalies in each photo that are common in AI-generated images.

The anomalies include the abnormal shape of the child's feet, a logo blended into the sand colour in the second image, and the misspelling 'spoot' instead of 'sport'. The cricket bat is perfectly straight, which would be odd for a portrait made of sand. In one photo a tattoo appears on the child's left hand, while in the other photos the left hand bears no tattoo. Additionally, the boy's face in the second image does not match his face in the other images. These inconsistencies made us suspect that the images were synthetic media.
We then ran the images through an AI-generated-image detection tool called 'Hive', which rated them 99.99% likely to be AI-generated. For confirmation, we checked them with another detection tool, 'Content at Scale'.


Hence, we conclude that the viral collage of images is AI-generated and not sand art made by a child. The claim is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. AI-detection tools and close analysis of the photos indicate that they were created by an AI image-generation tool rather than by a real sand artist. The images therefore do not represent the claimed artwork or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading
Related Blogs
Introduction
The ongoing armed conflict between Israel and Hamas is in the news all across the world. The latest escalation was triggered by unprecedented attacks against Israel by Hamas militants on October 7, which killed thousands of people; Israel has since launched a massive counter-offensive against the Islamic militant group. Amid the war, with bad information and propaganda spreading on various social media platforms, tech researchers have detected a network of 67 accounts that posted false content about the war and received millions of views. The European Commission has sent a letter to Elon Musk directing the platform to remove illegal content and disinformation, failing which penalties can be imposed, and has formally requested information from several social media giants on their handling of content related to the Israel-Hamas war. This widespread disinformation inflames the conflict, affects the wider world and erodes the goodwill of citizens. Bad actors weaponise information in this way, fuelling online hate, terrorism and extremism, and flooding social media with hateful, polarising content. Online misinformation about the war is inciting extremism, violence, hate and various propaganda-based ideologies, and the information environment surrounding the conflict is awash with fake narratives and videos that amplify the war's effects.
Response of social media platforms
The proliferation of online misinformation and violent content surrounding the war raises questions for social media companies about content moderation and other policy shifts. Notably, Instagram, Facebook and X (formerly Twitter) all have features that give users the ability to decide what content they want to view, and allow potentially sensitive content to be limited in search results.
Experts say it is of paramount importance to establish some measure of control here and to define what is permissible online and what is not. This requires expertise to assess each situation and, most importantly, robust content moderation policies.
During wartime, people who are aggrieved or provoked are often targeted by internet disinformation that blends ideological beliefs with conspiracy theories and hatred. This is not a new phenomenon: disinformation-spreading groups routinely emerge and become active during wars and emergencies, pushing propaganda-based ideologies and influencing society at large through misrepresented facts and planted stories. Social media has made it easier than ever to post user-generated content without proper moderation. Countering disinformation and misinformation is therefore a shared responsibility: tech companies, users and government guidelines and policies must collectively define and follow mechanisms to fight it.
Digital Services Act (DSA)
The newly enacted EU law, the Digital Services Act (DSA), pushes the larger online platforms to prevent posts containing illegal content and puts limits on targeted advertising. The DSA enables the tackling of illegal online content, imposes requirements to counter misinformation and disinformation, and ensures more transparency over what users see on platforms. Rules under the DSA cover everything from content moderation and user privacy to transparency in operations. A landmark piece of EU legislation moderating online platforms, the DSA subjects large tech platforms to content-related regulation and requires them to prevent the spread of misinformation and disinformation, ensuring a safer online environment overall.
Indian Scenario
The Indian government introduced the Intermediary Guidelines and Digital Media Ethics Code Rules, amended in 2023, which provide for the establishment of a "fact check unit" to identify false or misleading online content. The Digital Personal Data Protection Act, 2023, which aims to protect personal data, has also been enacted. The proposed Digital India Bill, expected to be tabled in Parliament, will replace the current Information Technology Act, 2000. The Bill can be seen as future-ready legislation to strengthen India's cybersecurity posture: it is expected to deal comprehensively with privacy, data protection and the fight against growing cybercrime in the evolving digital landscape, ensuring a safe digital environment. Other entities, including civil society organisations, are also actively engaged in fighting misinformation and spreading awareness about safe and responsible use of the Internet.
Conclusion:
The widespread disinformation and misinformation amid the Israel-Hamas war shows how user-generated content on social media can create an illusion of reality. Misleading posts are rampant, and the misuse of advanced AI technologies makes it even easier for bad actors to create synthetic media. At the same time, social media has connected us like never before: with billions of active users around the globe, it offers genuine conveniences and opportunities to individuals and businesses, and only certain aspects require our collective attention to prevent its misuse. Social media platforms and regulatory authorities need to be vigilant and active in defining and improving policies for content regulation and for the safe, responsible use of social media, so as to effectively curtail bad actors. Users, for their part, have a responsibility to use social media responsibly, to flag and report misinformative or misleading content, and to verify information with authentic sources. With the increasing penetration of social media and the internet, misinformation remains a global issue that must be addressed through strict policies and best practices, thereby creating a safer Internet environment for everyone.
References:
- https://abcnews.go.com/US/experts-fear-hate-extremism-social-media-israel-hamas-war/story?id=104221215
- https://edition.cnn.com/2023/10/14/tech/social-media-misinformation-israel-hamas/index.html
- https://www.nytimes.com/2023/10/13/business/israel-hamas-misinformation-social-media-x.html
- https://www.africanews.com/2023/10/24/fact-check-misinformation-about-the-israel-hamas-war-is-flooding-social-media-here-are-the//
- https://www.theverge.com/23845672/eu-digital-services-act-explained
Overview:
WazirX, an India-based cryptocurrency exchange, was hacked, losing more than $230 million in cryptocurrency. The incident involved an unauthorised transaction from a multisignature (multisig) wallet managed through Liminal, a digital asset management platform. The attack has raised fresh questions about the security of cryptocurrency exchanges and the adequacy of existing policies and laws.
Wallet Configuration and Security Measures
The breached wallet used a multisig configuration, meaning that more than one signature was needed to authorise a transaction. Specifically, it had six signatories: five from WazirX and one from Liminal. Every transaction needed the approval of at least three WazirX signatories, each of whom used Ledger hardware wallets for added security, followed by the approval of Liminal's signatory.
To further increase transaction security, a whitelisting policy was in place: only a limited set of addresses was authorised to receive funds. Despite this, the attackers exploited a discrepancy between the information shown on Liminal's interface and the actual content of the transaction, seizing unauthorised control of the wallet and carrying out the theft.
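To make the described setup concrete, here is a minimal, purely illustrative sketch of a threshold-signature check combined with a whitelist. All names and addresses are hypothetical, and the sketch simplifies the real arrangement (for instance, it ignores the separate Liminal approval step and performs no cryptographic signature verification):

```python
# Illustrative sketch: 3-of-N multisig approval plus a destination whitelist.
# Hypothetical names; not WazirX's or Liminal's actual implementation.

WHITELIST = {"0xSafeTreasury", "0xColdStorage"}  # pre-approved destination addresses
REQUIRED_SIGNATURES = 3                          # approval threshold among signatories

def can_execute(transaction: dict, signatories: list) -> bool:
    """Allow a transaction only if its destination is whitelisted and
    at least REQUIRED_SIGNATURES distinct signatories approved it."""
    if transaction["to"] not in WHITELIST:
        return False  # whitelisting policy blocks unknown destinations
    return len(set(signatories)) >= REQUIRED_SIGNATURES

# A whitelisted destination with three distinct approvals passes:
print(can_execute({"to": "0xSafeTreasury", "amount": 100},
                  ["alice", "bob", "carol"]))
# An unknown destination is rejected regardless of signatures:
print(can_execute({"to": "0xAttackerWallet", "amount": 100},
                  ["alice", "bob", "carol"]))
```

The point of the sketch is that both controls must hold at once; as the incident shows, they can still be undermined if signers approve a payload that differs from what their interface displays.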
Modus Operandi: Attack Mechanics
The cyber attack appears to have been carefully carried out, with preliminary investigations suggesting the following tactics:
- Payload Manipulation: The attackers apparently substituted the transaction's payload during signing, allowing them to reroute the funds to an unrelated wallet under their control.
- Chain Hopping: To make their movements much harder to track, the attackers split large sums across multiple blockchains and broke them into thousands of transactions involving different cryptocurrencies, greatly complicating tracing.
- Zero Balance Transactions: In some instances, wallets holding no Ethereum (ETH) were also used, further anonymising the flow of transactions.
Analysis of the blockchain data suggests the attackers had been preparing for several days before striking, indicating a high degree of planning.
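The payload-manipulation tactic above hinges on signers approving data that differs from what their interface displays. One common mitigation, sketched here under assumed names, is for each signer to independently hash the raw payload and compare digests before signing; a tampered payload then produces a visible mismatch:

```python
# Illustrative defence against payload substitution: compare digests of the
# raw payload rather than trusting an interface's rendering of it.
import hashlib
import json

def payload_digest(payload: dict) -> str:
    """Deterministic SHA-256 digest of a transaction payload,
    computed over a canonical (key-sorted) JSON serialisation."""
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# What the signing interface displayed (hypothetical values):
displayed = {"to": "0xSafeTreasury", "amount": 100}
# What was actually submitted for signing after tampering:
submitted = {"to": "0xAttackerWallet", "amount": 100}

# Independent digest comparison exposes the substitution:
print(payload_digest(displayed) == payload_digest(submitted))  # False
```

This is a sketch of the general principle only; hardware wallets and signing protocols implement the equivalent check at the device level, where the signed bytes themselves are shown for verification.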
Actions taken by WazirX:
Following the attack, WazirX implemented a series of immediate actions:
- User Notifications: Users were immediately notified of the breach and the possible risks it posed to them.
- Law Enforcement Engagement: The matter was reported to the National Cyber Crime Reporting Portal and to the relevant authorities, including the Financial Intelligence Unit (FIU) and the Indian Computer Emergency Response Team (CERT-In).
- Service Suspension: WazirX suspended all trading operations as well as user deposits and withdrawals, to prevent further losses and allow investigation.
- Global Outreach: The exchange contacted more than 500 cryptocurrency exchanges, requesting that they blacklist the wallet addresses linked to the theft.
- Bounty Program: A bounty of up to $23 million was announced to encourage people to share information that could help the authorities recover the stolen funds.
Further Investigations
WazirX stated that it has engaged cybersecurity professionals to help identify the perpetrators and recover the losses. The exchange is still analysing the forensic data and working with law enforcement to track the stolen assets. Nevertheless, the prospect of full recovery remains doubtful, primarily because of the complexity of the attack and the methods used by the attackers.
Precautionary measures:
The WazirX cyber attack clearly demonstrates the need to improve security and regulation across the cryptocurrency industry. As exchanges become increasingly targeted by hackers, there is a pressing need for:
- Stricter Security Protocols: Commitment to technical safeguards such as multi-factor authentication (MFA) and constant monitoring of activity in users' wallets.
- Regulatory Oversight: Formal laws that require adequate security from cryptocurrency exchange platforms, safeguarding users and their investments.
- Community Awareness: Studying emerging attack techniques and spreading awareness, particularly about the scams and phishing attempts that tend to follow such breaches.
Conclusion:
The cyber attack on WazirX exposes weaknesses in the cryptocurrency market and provides valuable lessons for enhancing security. It highlights critical vulnerabilities in cryptocurrency exchanges even when they employ advanced measures such as multisignature wallets and whitelisting policies. The attack's complexity, involving payload manipulation, chain hopping and zero-balance transactions, underscores the attackers' meticulous planning and the difficulty of tracing stolen assets. The case sends a strong message about the necessity of robust security measures and constant vigilance in the rapidly growing world of digital assets. It also highlights the importance of community awareness and education on emerging threats such as scams and phishing attempts, which usually follow such breaches. By fostering a culture of vigilance and knowledge, the cryptocurrency community can better defend against future attacks.
Reference:
https://wazirx.com/blog/important-update-cyber-attack-incident-and-measures-to-protect-your-assets/
https://www.linkedin.com/pulse/wazirx-cyberattack-in-depth-analysis-jyqxf

INTRODUCTION:
The Ministry of Defence has recently designated the Additional Directorate General of Strategic Communication in the Indian Army as the nodal officer authorised to send removal requests and notices to social media intermediaries regarding posts containing unlawful content concerning the Army. Earlier, such requests were routed through the Ministry of Electronics and Information Technology (MeitY). The new designation gives the Army the autonomy to bypass that process and send notices directly (as deemed appropriate by the government and its agency). Let us look at the legal framework that allows it to do so, and at the policy implications.
BACKGROUND AND LEGAL FRAMEWORK:
Section 69 of the IT Act, 2000 gives the government the power to issue directions for the interception, monitoring or decryption of any data/information through any computer resource. This power may be exercised on six grounds:
- Upholding the sovereignty or integrity of India
- Security of the state
- Defence of India
- Friendly relations with foreign states
- Public order or for preventing incitement of any cognisable offence
- Investigations of offences related to the aforementioned reasons
Section 79(3)(b) of the Information Technology Act, 2000 is another provision relevant to the removal of content on notification. Section 79 grants all intermediaries (including internet service providers and social media platforms) safe harbour from liability for content posted by third parties/users on their platforms. This protection applies, however, only so long as the intermediary, upon receiving actual knowledge or a notification from the appropriate government or its agency that data on its platform is being used for unlawful acts, promptly removes that data without tampering with the evidence.
PLAUSIBLE REASONS FOR POLICY DECISION:
Cases related to the Indian Army are sensitive for a number of reasons, rooted in the fact that they directly concern the nation's security, integrity and sovereignty. The impact of misinformation and disinformation is almost instantaneous and the stakes are always high, but exceptionally so when it comes to the Armed Forces and national security. A mechanism for handling cases of this sensitivity should allow quick action by the authorities. Since the Army can now notify platforms directly rather than through another ministry, it can deal with such concerns promptly as they arise. One immediate benefit is that the forces can quickly respond when foreign states or malicious actors put out information that could harm the nation's interests, image and integrity.
This step helps the forces counter misinformation, safeguard national security and address online propaganda. An example of sensitive content about the Army leading to legal intervention is the case of the Delhi-based magazine The Caravan. The Defence Ministry, along with the Intelligence Bureau and the Jammu and Kashmir Police, ordered the publication to remove an article alleging the murder and torture of civilians by the Indian Army in Jammu and Kashmir, citing the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The magazine challenged the instruction in court.
CONCLUSION:
This move brings potential benefits along with risks, and the focus should always be on maintaining a balanced approach. Transparency and accountability are imperative, and checks on the related guidelines, preventing misuse while simultaneously protecting national security, should be at the centre of the policy's objectives. Misinformation in and about the armed forces must be dealt with immediately.
REFERENCES:
- https://www.hindustantimes.com/india-news/army-can-now-directly-issue-notices-to-remove-online-posts-101730313177838.html
- https://www.hindustantimes.com/india-news/inside-79-3-b-the-content-blocking-provision-with-many-legal-grey-areas-101706987924882.html
- https://www.thehindu.com/news/national/govt-orders-magazine-to-take-down-article-on-army-torture-and-murder-in-jammu/article67840790.ece
- https://myind.net/Home/viewArticle/army-gains-authority-to-directly-issue-notice-to-take-down-online-posts