#FactCheck - AI-generated image of Virat Kohli falsely claimed to be sand art by a child
Executive Summary:
A picture of a boy making sand art of Indian cricketer Virat Kohli is spreading on social media, but the claim behind it is false. The image does not show real sand art. Analysis using AI detection tools such as 'Hive' and 'Content at Scale AI Detection' confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without knowing that they are computer-generated synthetic media.

Claims:
The collage of beautiful pictures displays a young boy creating sand art of Indian Cricketer Virat Kohli.




Fact Check:
When we checked on the posts, we found some anomalies in each photo. Those anomalies are common in AI-generated images.

The anomalies include the abnormal shape of the child's feet, a logo blended into the sand colour in the second image, and the misspelling 'spoot' instead of 'sport'. The cricket bat is perfectly straight, which would be odd in a portrait made of sand. The child's left hand bears a tattoo in one photo, while in the other photos the left hand has no tattoo. Additionally, the face of the boy in the second image does not match his face in the other images. These inconsistencies made us more suspicious that the images are synthetic media.
We then ran the images through an AI-generated image detection tool named 'Hive', which rated them as 99.99% likely to be AI-generated. We also checked with another detection tool, 'Content at Scale'.
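For readers who want to automate a similar first-pass check, the sketch below shows the general pattern of submitting an image to an AI-image detection service over HTTP. The endpoint URL, authentication scheme, and response field names are placeholders for illustration only; they are not Hive's or Content at Scale's actual APIs.

```python
# Illustrative only: a generic pattern for querying an AI-generated-image
# detection service over HTTP. The endpoint, headers, and response field
# below are hypothetical placeholders, NOT the real Hive or Content at Scale APIs.
import requests

DETECTOR_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def ai_likelihood(image_path: str) -> float:
    """Upload an image and return the service's 'likely AI-generated' score (0-1)."""
    with open(image_path, "rb") as fh:
        resp = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": fh},
            timeout=30,
        )
    resp.raise_for_status()
    # Hypothetical field name; real services expose their own response schema.
    return resp.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = ai_likelihood("kohli_sand_art.jpg")
    print(f"Probability the image is AI-generated: {score:.2%}")
```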


Hence, we conclude that the viral collage of images is AI-generated and is not sand art made by any child. The claim is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. Based on AI detection tools and analysis of the photos, the images appear to have been created by an AI image-generation tool rather than by a real sand artist. Therefore, the images do not accurately represent the alleged claim or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading
Related Blogs

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to witness it. The internet as we know it is rapidly changing into a treasure trove of hyper-optimised material over which vast bot armies battle to the death, thanks to the remarkable advancements in artificial intelligence. All of that advancement, however, has a price, primarily in human lives. It turns out that releasing highly personalised chatbots on a populace already struggling with economic stagnation, terminal loneliness, and the ongoing destruction of our planet is not exactly a formula for improved mental health. This is the reality for the estimated 75% of children and teenagers who have had conversations with chatbot-generated fictional characters. AI chatbots are becoming more and more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation into the internal policies of a major international tech firm last year exposed alarming gaps: AI chatbots were permitted to produce content involving romantic roleplay with children, racially discriminatory reasoning, and spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety and the overall increase in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major advancement in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can use supervised accounts on the Family Link app to access this version of Gemini AI Kids.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces particular safeguards. According to Section 9, before processing the data of children, who are defined as people under the age of 18, Data Fiduciaries, entities that decide the goals and methods of processing personal data, must get verified consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural surveillance and child-targeted advertising. According to court interpretations, a child's well-being includes not just medical care but also their moral, ethical, and emotional growth.
While the DPDP Act is a big step in the right direction, there are still important lacunae in how it addresses AI and child safety. Age-gating systems, thorough risk rating, and limitations specific to AI-driven platforms are absent from the Act, which largely concentrates on consent and harm prevention in data protection. Furthermore, it ignores threats to children's emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. Even when not explicitly sexual, such interactions can result in grooming, psychological trauma, and desensitisation to inappropriate behaviour. Child protection experts hold that illicit or sexual conversations with children in cyberspace are unacceptable, and that permitting even "flirtatious" conversation could normalise risky boundaries.
- International Standards and Best Practices - The concept of "safety by design" is central to child online safety guidance around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Bill. This mandate, that platforms and developers proactively design out risks rather than react to harms after the fact, is the bare minimum standard, and AI guidelines that leave loopholes for child-directed roleplay fall short of it.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create fictional narratives with disclaimers. For example, chatbots were able to write articles promulgating false health claims or smears against public officials, as long as they were labelled as "untrue." While disclaimers might give thin legal cover, such outputs still add to the proliferation of misleading information. Indeed, misinformation tends to spread widely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even when requested. Though scholarly research into prejudice and bias may necessitate such examples, unregulated generation has the potential to normalise damaging stereotypes. Researchers warn that such practice turns platforms from passive hosts of offensive speech into active generators of discriminatory content. It is a difference that matters, as it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training choices, policy decisions, and system engineering. This fact requires a greater level of accountability. Although companies can update guidelines following public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability. Few, though, specify clear guidelines for content risks such as child roleplay or hate narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until challenged.
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to generate knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory narratives, not merely tag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
These contentious guidelines are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes more and more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe issues that require quick resolution. Corporate regulation is only one aspect of the way forward; other elements include multi-stakeholder participation, stronger global systems, and ethical standards. In the end, trust in artificial intelligence will rest not on corporate interests but on its ability to preserve the truth, protect the vulnerable, and reflect universal human values.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction
The Central Board of Secondary Education (CBSE) has issued a warning to students about fake social media accounts that spread false information about the board. It has warned students not to trust information coming from these accounts and has released a list of 30 fake accounts. The board has expressed concern that these handles are misleading students and parents by spreading fake information under the CBSE's name and logo. It has also clarified that it is not responsible for the information being spread by these fake accounts.
The Central Board of Secondary Education (CBSE), a venerable institution in the realm of Indian education, has found itself ensnared in the web of cyber duplicity. Impersonation attacks, a sinister facet of cybercrime, have burgeoned, prompting the Board to adopt a vigilant stance against the proliferation of counterfeit social media handles that masquerade under its esteemed name and emblem.
The CBSE has revealed a list of approximately 30 spurious handles that have been sowing seeds of disinformation across the social media landscape. These digital doppelgängers, cloaked in the Board's identity, have been identified and exposed. The Board's official beacon in this murky sea of falsehoods is the verified handle '@cbseindia29', a lighthouse guiding the public to the shores of authentic information.
This unfolding narrative signifies the Board's unwavering commitment to tackle the scourge of misinformation and to fortify the bulwarks safeguarding the sanctity of its official communications. By spotlighting the rampant growth of fake social media personas, the CBSE endeavors to shield the public from the detrimental effects of misleading information and to preserve the trust vested in its official channels.
CBSE Impersonator Accounts
The list of identified malefactors, parading under the CBSE banner, serves as a stark admonition to the public to exercise discernment while navigating the treacherous waters of social media platforms. The CBSE has initiated appropriate legal manoeuvres against these unauthorised entities to stymie their dissemination of fallacious narratives.
The Board has previously unfurled comprehensive details concerning the impending board examinations for both Class 10 and Class 12 in 2024. These assessments are slated to be held from February 15 to April 2, 2024, with a uniform start time of 10:30 AM (IST) across all designated dates.
The CBSE has made it unequivocally clear that there are nefarious entities lurking in the shadows of social media, masquerading in the guise of the CBSE. It has implored students and the general public not to be ensnared by the siren songs emanating from these fraudulent accounts and has also unfurled a list of these imposters. The Board's warning is a beacon of caution, illuminating the path for students as they navigate the digital expanse with the impending commencement of the CBSE Class X and XII exams.
Sounding The Alarm
The Central Board of Secondary Education (CBSE) has sounded the alarm, issuing an advisory to schools, students, and their guardians about the existence of fake social media platform handles that brandish the board’s logo and mislead the academic community. The board has identified about 30 such accounts on the microblogging site 'X' (formerly known as Twitter) that misuse the CBSE logo and acronym, sowing confusion and disarray.
The board is in the process of taking appropriate action against these deceptive entities. CBSE has also stated that it bears no responsibility for any information disseminated by any other source that unlawfully appropriates its name and logo on social media platforms.
Sources reveal that these impostors post false information on various updates, including admissions and exam schedules. After receiving complaints about such accounts on 'X', the CBSE issued the advisory and has initiated action against those operating these accounts, sources said.
The Brute Nature of Impersonation
In the contemporary digital epoch, cybersecurity has ascended to a position of critical importance. It is the bulwark that ensures the sanctity of computer networks is maintained and that computer systems are not marked as prey by cyber predators. Cyberattacks are insidious stratagems executed with the intent of expropriating, manipulating, or annihilating authenticated user or organizational data. It is imperative that cyberattacks be mitigated at their roots so that users and organizations utilizing internet services can navigate the digital domain with a sense of safety and security. Knowledge about cyberattacks thus plays a pivotal role in educating cyber users about the diverse types of cyber threats and the preventive measures to counteract them.
Impersonation Attacks are a vicious form of cyberattack, characterised by the malicious intent to extract confidential information. These attacks revolve around a process where cyber attackers eschew the use of malware or bots to perpetrate their crimes, instead wielding the potent tactic of social engineering. The attacker meticulously researches and harvests information about the legitimate user through platforms such as social media and then exploits this information to impersonate or masquerade as the original, legitimate user.
The threats posed by Impersonation Attacks are particularly insidious because they demand immediate action, pressuring the victim to act without discerning between the authenticated user and the impersonated one. The very nature of an Impersonation Attack is a perilous form of cyber assault, as the original user who is impersonated holds rights to private information. These attacks can be executed by exploiting a resemblance to the original user's identity, such as email IDs. Email IDs with minute differences from the legitimate user are employed in this form of attack, setting it apart from the phishing cyber mechanism. The email addresses are so similar and close to each other that, without paying heed or attention to them, the differences can be easily overlooked. Moreover, the email addresses appear to be correct, as they generally do not contain spelling errors.
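To make those "minute differences" concrete, the short sketch below flags sender domains that are close to, but not identical to, a trusted domain. It is a minimal illustration using only Python's standard library, not any organisation's actual mail-filtering tooling; the allow-list and similarity threshold are assumptions chosen for the example.

```python
# Minimal illustration of flagging lookalike sender addresses: compare the
# sender's domain against a small allow-list of trusted domains and warn on
# near-but-not-exact matches, the classic impersonation pattern described above.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["cbse.gov.in"]  # hypothetical allow-list for this example

def lookalike_score(domain: str, trusted: str) -> float:
    """Return a 0..1 similarity score between two domain strings."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def check_sender(email_address: str, threshold: float = 0.8) -> str:
    """Classify a sender address as trusted, a suspicious lookalike, or unknown."""
    domain = email_address.rsplit("@", 1)[-1].lower()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return "trusted"
        if lookalike_score(domain, trusted) >= threshold:
            # Close to a trusted domain but not identical: likely impersonation.
            return f"suspicious lookalike of {trusted}"
    return "unknown"

print(check_sender("exams@cbse.gov.in"))    # trusted
print(check_sender("exams@cbsse.gov.in"))   # suspicious lookalike of cbse.gov.in
print(check_sender("friend@example.com"))   # unknown
```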
Strategies to Prevent
To prevent Impersonation Attacks, the following strategies can be employed:
- Proper security mechanisms help identify malicious emails and thereby filter spamming email addresses on a regular basis.
- Double-checking sensitive information is crucial, especially when important data or funds need to be transferred. It is vital to ensure that the data is transferred to a legitimate user by cross-verifying the email address.
- Ensuring organizational-level security is paramount. Organizations should have specific domain names assigned to them, which can help employees and users distinguish their identity from that of cyber attackers.
- Protection of User Identity is essential. Employees must not publicly share their private identities, which can be exploited by attackers to impersonate their presence within the organization.
Conclusion
The CBSE's struggle against the masquerade of misinformation is a reminder of the vigilance required to safeguard the legitimacy of our digital interactions. As we navigate the complex and uncharted terrain of the internet, let us arm ourselves with the knowledge and discernment necessary to unmask these digital charlatans and uphold the sanctity of truth.
References
- https://timesofindia.indiatimes.com/city/ahmedabad/cbse-warns-against-misuse-of-its-name-by-fake-social-media-handles/articleshow/107644422.cms
- https://www.timesnownews.com/education/cbse-releases-list-of-fake-social-media-handles-asks-not-to-follow-article-107632266
- https://www.etvbharat.com/en/!bharat/cbse-public-advisory-enn24021205856

Introduction
“an intermediary, on whose computer resource the information is stored, hosted or published, upon receiving actual knowledge in the form of an order by a court of competent jurisdiction or on being notified by the Appropriate Government or its agency under clause (b) of sub-section (3) of section 79 of the Act, shall not host, store or publish any unlawful information, which is prohibited under any law for the time being in force in relation to the interest of the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation; incitement to an offence relating to the above, or any information which is prohibited under any law for the time being in force”
Law grows by confronting its absences; it heals itself through its own gaps. The most recent notification from MeitY, G.S.R. 775(E) dated October 22, 2025, is an illustration of that self-correction. On November 15, 2025, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, will come into effect. They accomplish two crucial things: they restrict who can invoke "actual knowledge" to trigger a takedown, and they require senior-level scrutiny of those directives. In doing so, they preserve genuine security requirements while steering India's content governance system towards more transparent due process.
When Regulation Learns Restraint
To understand the jurisprudence of revision, one must first recognise that regulation, in its truest form, must know when to pause. The 2025 amendment marks that rare moment when the government chooses precision over power, when regulation learns restraint. The amendment revises Rule 3(1)(d) of the 2021 Rules. Social media sites, hosting companies, and other digital intermediaries are still required to act within 36 hours of receiving "actual knowledge" that a piece of content is illegal (for example, because it threatens public order, sovereignty, decency, or morality). However, "actual knowledge" now arises only in the following situations:
(i) a court order from a court of competent jurisdiction, or
(ii) a reasoned written intimation from a duly authorised government officer not below Joint Secretary rank (or equivalent)
In matters involving the police, the authorised officer "must not be below the rank of Deputy Inspector General of Police (DIG)". This creates a well-defined, senior-accountable channel in place of a diffuse trigger.
There are two further structural guardrails. First, the Rules establish a monthly review of all takedown notifications by a Secretary-level officer of the relevant government, testing necessity, proportionality, and compliance with India's safe harbour provision under Section 79(3) of the IT Act. Second, so that platforms act precisely rather than expansively, takedown requests must be accompanied by a legal justification, a description of the unlawful act, and precise URLs or identifiers. The cumulative result of these guardrails is that every removal has a proportionality check and a paper trail.
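As a rough illustration of what these required elements could look like from an intermediary's side, the sketch below checks an incoming notice for the components the amendment describes. The field names, rank labels, and data structure are assumptions made for this example; the Rules themselves do not prescribe any particular format.

```python
# Illustrative sketch of an intermediary-side check that an incoming takedown
# notice carries the elements the 2025 amendment requires: issuance at or above
# the authorised rank, a legal basis, a description of the unlawful act, and
# precise URLs/identifiers. Field names and rank labels are assumptions only.
from dataclasses import dataclass, field

AUTHORISED_RANKS = {"Joint Secretary", "Additional Secretary", "Secretary", "DIG"}

@dataclass
class TakedownNotice:
    issuing_officer_rank: str
    legal_basis: str          # e.g. the statutory provision relied on
    act_description: str      # what unlawful act the content is said to constitute
    urls: list[str] = field(default_factory=list)

def validate(notice: TakedownNotice) -> list[str]:
    """Return a list of deficiencies; an empty list means the notice is facially complete."""
    problems = []
    if notice.issuing_officer_rank not in AUTHORISED_RANKS:
        problems.append("notice not issued at or above the authorised rank")
    if not notice.legal_basis.strip():
        problems.append("no legal basis cited")
    if not notice.act_description.strip():
        problems.append("no description of the unlawful act")
    if not notice.urls:
        problems.append("no specific URLs or identifiers provided")
    return problems

notice = TakedownNotice(
    "Joint Secretary",
    "Rule 3(1)(d), IT Rules 2021",
    "defamatory post",
    ["https://example.com/post/123"],
)
print(validate(notice) or "notice is facially complete")
```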
Due Process as the Law’s Conscience
Indian jurisprudence has been debating what constitutes “actual knowledge” for over a decade. The Supreme Court in Shreya Singhal (2015) connected an intermediary’s removal obligation to notifications from official channels or court orders rather than vague notice. But over time, that line became hazy due to enforcement practices and some court rulings, raising concerns about over-removal and safe-harbour loss under Section 79(3). Even while more recent decisions questioned the “reasonable efforts” of intermediaries, the 2025 amendment institutionally pays homage to Shreya Singhal’s ethos by refocusing “actual knowledge” on formal reviewable communications from senior state actors or judges.
The amendment also introduces an internal constitutionalism to executive orders by mandating monthly audits at the Secretary level. The state is required to re-justify its own orders on a rolling basis, evaluating them against proportionality and necessity, which are criteria that Indian courts are increasingly requesting for speech restrictions. Clearer triggers, better logs, and less vague “please remove” communications that previously left compliance teams in legal limbo are the results for intermediaries.
The Court’s Echo in the Amendment
The essence of this amendment is echoed in the Karnataka High Court's ruling that the Sahyog Portal, a government portal used to coordinate takedown orders under Section 79(3)(b), is constitutional. The High Court rejected X's (formerly Twitter's) petition contesting the legitimacy of the portal in September. The company had claimed that, by giving nodal officers the authority to issue takedown orders without court review, the portal permitted arbitrary content removals. The court disagreed, holding that the officers' acts were in accordance with Section 79(3)(b) and that they were "not dropping from the air but emanating from statutes." The amendment turns compliance into conscience by conforming to the Sahyog Portal verdict, reiterating that due process is the moral grammar of governance rather than just a formality.
Conclusion: The Necessary Restlessness of Law
Law cannot afford stillness; it survives through self-doubt and reinvention. The 2025 amendment, too, is not a destination; it is a pause before the next question, a reminder that justice breathes through revision. As befits a constitutional democracy, India's path to content governance has been combative and iterative. The next rule-making cycle has been sharpened by the stays, split judgments, and strike-downs that have resulted from strategic litigation centred on the IT Rules, safe harbour, government fact-checking, and blocking orders. The lessons learnt are reflected in the 2025 amendment: review triumphs over opacity, specificity triumphs over vagueness, and due process triumphs over discretion. A digital republic balances freedom and force in this way.
Sources
- https://pressnews.in/law-and-justice/government-notifies-amendments-to-it-rules-2025-strengthening-intermediary-obligations/
- https://www.meity.gov.in/static/uploads/2025/10/90dedea70a3fdfe6d58efb55b95b4109.pdf
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719
- https://www.scobserver.in/journal/x-relies-on-shreya-singhal-in-arbitrary-content-blocking-case-in-karnataka-hc/
- https://www.medianama.com/2025/10/223-content-takedown-rules-online-platforms-36-hr-deadline-officer-rank/#:~:text=It%20specifies%20that%20government%20officers,Deputy%20Inspector%20General%20of%20Police%E2%80%9D.