#FactCheck - AI-generated image of Virat Kohli falsely claimed to be sand art by a child
Executive Summary:
A picture circulating on social media purportedly shows a boy creating sand art of Indian cricketer Virat Kohli. Our analysis reveals that the image is not genuine sand art. AI detection tools such as 'Hive' and 'Content at Scale AI Detection' confirm that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without realising that they are computer-generated using AI techniques.

Claims:
A collage of striking pictures shows a young boy creating sand art of Indian cricketer Virat Kohli.




Fact Check:
When we examined the posts, we found anomalies in each photo that are common in AI-generated images.

These anomalies include the abnormal shape of the child's feet, a logo blended into the sand colour in the second image, and the misspelling 'spoot' instead of 'sport'. The cricket bat is perfectly straight, which would be odd in a portrait made of sand. A tattoo appears on the child's left hand in one photo but is absent in the others, and the boy's face in the second image does not match his face in the other images. These inconsistencies made us more suspicious that the images were synthetic media.
We then checked the images with an AI-generated image detection tool named 'Hive', which rated them 99.99% likely to be AI-generated. We cross-checked with another detection tool, 'Content at Scale', which also confirmed that the images were AI-generated.


Hence, we conclude that the viral collage of images is AI-generated and not sand art by any child. The claim made is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. AI detection tools and analysis of the photos indicate that they were created by an AI image-generation tool rather than by a real sand artist. The images therefore do not accurately represent the alleged claim or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading

Executive Summary:
BrazenBamboo’s DEEPDATA malware represents a new wave of advanced cyber espionage tools, exploiting a zero-day vulnerability in Fortinet FortiClient to extract VPN credentials and sensitive data through fileless malware techniques and secure C2 communications. With its modular design, DEEPDATA targets browsers, messaging apps, and password stores, while leveraging reflective DLL injection and encrypted DNS to evade detection. Cross-platform compatibility with tools like DEEPPOST and LightSpy highlights a coordinated development effort, enhancing its espionage capabilities. To mitigate such threats, organizations must enforce network segmentation, deploy advanced monitoring tools, patch vulnerabilities promptly, and implement robust endpoint protection. Vendors are urged to adopt security-by-design practices and incentivize vulnerability reporting, as vigilance and proactive planning are critical to combating this sophisticated threat landscape.
Introduction
The increasing use of zero-day vulnerabilities by sophisticated threat actors underscores the need for stronger countermeasures. One such threat actor, BrazenBamboo, exploits a zero-day vulnerability in Fortinet FortiClient for Windows through its advanced DEEPDATA malware framework. This research explores the technical details of DEEPDATA, its operational tradecraft, and its broader effects.
Technical Findings
1. Vulnerability Exploitation Mechanism
The vulnerability in Fortinet’s FortiClient lies in its failure to securely handle sensitive information in memory. DEEPDATA capitalises on this flaw via a specialised plugin, which:
- Accesses the VPN client’s process memory.
- Extracts unencrypted VPN credentials from memory, bypassing typical security protections.
- Transfers credentials to a remote C2 server via encrypted communication channels.
2. Modular Architecture
DEEPDATA exhibits a highly modular design, with its core components comprising:
- Loader Module (data.dll): Decrypts and executes other payloads.
- Orchestrator Module (frame.dll): Manages the execution of multiple plugins.
- FortiClient Plugin: Specifically designed to target Fortinet’s VPN client.
Each plugin operates independently, allowing flexibility in attack strategies depending on the target system.
3. Command-and-Control (C2) Communication
DEEPDATA establishes secure channels to its C2 infrastructure using WebSocket and HTTPS protocols, enabling stealthy exfiltration of harvested data. Technical analysis of network traffic revealed:
- Dynamic IP switching for C2 servers to evade detection.
- Use of Domain Fronting, hiding C2 communication within legitimate HTTPS traffic.
- Time-based communication intervals to minimise anomalies in network behavior.
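The time-based check-in behaviour described above suggests a simple defensive heuristic: connections whose inter-arrival times are unusually regular are more likely automated C2 beacons than human-driven traffic. Below is a minimal, illustrative Python sketch of such a check; the `looks_like_beaconing` helper and the `max_cv` threshold are our own assumptions for illustration, not part of any DEEPDATA analysis tooling.

```python
from statistics import mean, stdev

def looks_like_beaconing(timestamps, max_cv=0.1, min_events=5):
    """Flag a series of connection timestamps (seconds) whose gaps are
    suspiciously regular. A low coefficient of variation (stdev/mean)
    of the inter-arrival times suggests an automated check-in loop."""
    if len(timestamps) < min_events:
        return False  # too few events to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    return (stdev(gaps) / avg) <= max_cv

# A host checking in every ~60s is far more regular than human browsing.
beacon = looks_like_beaconing([0, 60, 121, 180, 241, 300])   # regular gaps
human = looks_like_beaconing([0, 5, 47, 300, 312, 900])      # irregular gaps
```

In practice this heuristic should be combined with destination reputation and data-volume analysis, since legitimate software updaters also poll on fixed intervals.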
4. Advanced Credential Harvesting Techniques
Beyond VPN credentials, DEEPDATA is capable of:
- Dumping password stores from popular browsers, such as Chrome, Firefox, and Edge.
- Extracting application-level credentials from messaging apps like WhatsApp, Telegram, and Skype.
- Intercepting credentials stored in local databases used by apps like KeePass and Microsoft Outlook.
5. Persistence Mechanisms
To maintain long-term access, DEEPDATA employs sophisticated persistence techniques:
- Registry-based persistence: Modifies Windows registry keys to reload itself upon system reboot.
- DLL Hijacking: Substitutes legitimate DLLs with malicious ones to execute during normal application operations.
- Scheduled Tasks and Services: Configures scheduled tasks to periodically execute the malware, ensuring continuous operation even if detected and partially removed.
Additional Tools in BrazenBamboo’s Arsenal
1. DEEPPOST
A complementary tool used for data exfiltration, DEEPPOST facilitates the transfer of sensitive files, including system logs, captured credentials, and recorded user activities, to remote endpoints.
2. LightSpy Variants
- The Windows variant includes a lightweight installer that downloads orchestrators and plugins, expanding espionage capabilities across platforms.
- Shellcode-based execution ensures that LightSpy’s payload operates entirely in memory, minimising artifacts on the disk.
3. Cross-Platform Overlaps
BrazenBamboo’s shared codebase across DEEPDATA, DEEPPOST, and LightSpy points to a centralised development effort, possibly linked to a Digital Quartermaster framework. This shared ecosystem enhances their ability to operate efficiently across macOS, iOS, and Windows systems.
Notable Attack Techniques
1. Memory Injection and Data Extraction
Using Reflective DLL Injection, DEEPDATA injects itself into legitimate processes, avoiding detection by traditional antivirus solutions.
- Memory Scraping: Captures credentials and sensitive information in real-time.
- Volatile Data Extraction: Extracts transient data that only exists in memory during specific application states.
2. Fileless Malware Techniques
DEEPDATA leverages fileless infection methods, where its payload operates exclusively in memory, leaving minimal traces on the system. This complicates post-incident forensic investigations.
3. Network Layer Evasion
By utilising encrypted DNS queries and certificate pinning, DEEPDATA ensures that network-level defenses like intrusion detection systems (IDS) and firewalls are ineffective in blocking its communications.
Recommendations
1. For Organisations
- Apply Network Segmentation: Isolate VPN servers from critical assets.
- Enhance Monitoring Tools: Deploy behavioral analysis tools that detect anomalous processes and memory scraping activities.
- Regularly Update and Patch Software: Although Fortinet has yet to patch this vulnerability, organisations must remain vigilant and apply fixes as soon as they are released.
2. For Security Teams
- Harden Endpoint Protections: Implement tools like Memory Integrity Protection to prevent unauthorised memory access.
- Use Network Sandboxing: Monitor and analyse outgoing network traffic for unusual behaviors.
- Threat Hunting: Proactively search for indicators of compromise (IOCs) such as unauthorised DLLs (data.dll, frame.dll) or C2 communications over non-standard intervals.
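The DLL names called out above (data.dll, frame.dll) can be matched against collected file inventories during a hunt. A minimal sketch of filename-based IOC matching follows; note that filename matching alone is a weak signal and should be corroborated with file hashes or signatures.

```python
from pathlib import PureWindowsPath

# Hunt helper: the two DLL names come from the DEEPDATA analysis above;
# everything else here is our own illustrative scaffolding.
IOC_DLLS = {"data.dll", "frame.dll"}

def match_iocs(paths, iocs=IOC_DLLS):
    """Return the paths whose base file name matches a known IOC DLL."""
    return [p for p in paths if PureWindowsPath(p).name.lower() in iocs]

hits = match_iocs([
    r"C:\Windows\System32\kernel32.dll",
    r"C:\Users\Public\frame.dll",
])
```

`PureWindowsPath` parses Windows-style paths correctly even when the hunt script runs on a Linux analysis host.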
3. For Vendors
- Implement Security by Design: Adopt advanced memory protection mechanisms to prevent credential leakage.
- Bug Bounty Programs: Encourage researchers to report vulnerabilities, accelerating patch development.
Conclusion
DEEPDATA represents the next generation of cyber-espionage tools: more advanced and tuned for stealth, modularity, and persistence. As BrazenBamboo continues to refine its tradecraft, organisations and vendors must remain vigilant and ready to respond. Continuous patching, strong threat detection, and a well-rehearsed incident-response plan are crucial to combating these attacks.

There has been a struggle to create legal frameworks that can define where free speech ends and harmful misinformation begins, specifically in democratic societies where the right to free expression is a fundamental value. Platforms like YouTube, Wikipedia, and Facebook have gained a huge consumer base by focusing on hosting user-generated content. This content includes anything a visitor puts on a website or social media pages.
The legal and ethical landscape surrounding misinformation is dependent on creating a fine balance between freedom of speech and expression while protecting public interests, such as truthfulness and social stability. This blog is focused on examining the legal risks of misinformation, specifically user-generated content, and the accountability of platforms in moderating and addressing it.
The Rise of Misinformation and Platform Dynamics
Misinformation is amplified by algorithmic recommendations and social sharing mechanisms. The intent of spreading false information is closely interwoven with the assessment of user data to identify the target groups needed for targeted political advertising. Disseminators of fake news benefit from the reach of social networks and from technology that enables faster distribution, making it harder to distinguish fake news from legitimate reporting.
Multiple challenges emerge that are unique to social media platforms regulating misinformation while balancing freedom of speech and expression and user engagement. The scale at which content is created and published, the different regulatory standards, and moderating misinformation without infringing on freedom of expression complicate moderation policies and practices.
The social, political, and economic consequences of misinformation, which influence public opinion, electoral outcomes, and market behaviours, underscore the urgent need for effective regulation; the consequences of inaction can be profound and far-reaching.
Legal Frameworks and Evolving Accountability Standards
Safe harbour principles allow for the functioning of a free, open and borderless internet. This principle is embodied under the US Communications Decency Act and the Information Technology Act in Sections 230 and 79 respectively. They play a pivotal role in facilitating the growth and development of the Internet. The legal framework governing misinformation around the world is still in nascent stages. Section 230 of the CDA protects platforms from legal liability relating to harmful content posted on their sites by third parties. It further allows platforms to police their sites for harmful content and protects them from liability if they choose not to.
By granting exemptions to intermediaries, these safe harbour provisions help nurture an online environment that fosters free speech and enables users to freely express themselves without arbitrary intrusions.
A shift in regulations has been observed in recent times. An example is the enactment of the Digital Services Act of 2022 in the European Union. The Act requires companies having at least 45 million monthly users to create systems to control the spread of misinformation, hate speech and terrorist propaganda, among other things. If not followed through, they risk penalties of up to 6% of the global annual revenue or even a ban in EU countries.
Challenges and Risks for Platforms
There are multiple challenges and risks faced by platforms that surround user-generated misinformation.
- Moderating user-generated misinformation is a major challenge, primarily because of the sheer volume of data and the speed at which it is generated. This exposes platforms to legal liability, operational costs, and reputational risk.
- Platforms face potential backlash in cases of both over-moderation and under-moderation: over-moderation can be perceived as censorship and overreach, while under-moderation can be seen as insufficient governance that fails to protect users' rights.
- A further challenge is technical: AI and algorithmic moderation have limitations in detecting nuanced misinformation, which highlights the need for human oversight, especially for misinformation produced with AI-generated content.
Policy Approaches: Tackling Misinformation through Accountability and Future Outlook
Regulatory approaches to misinformation each present distinct strengths and weaknesses. Government-led regulation establishes clear standards but may risk censorship, while self-regulation offers flexibility yet often lacks accountability. The Indian framework, including the IT Act and the Digital Personal Data Protection Act of 2023, aims to enhance data-sharing oversight and strengthen accountability. Establishing clear definitions of misinformation and fostering collaborative oversight involving government and independent bodies can balance platform autonomy with transparency. Additionally, promoting international collaborations and innovative AI moderation solutions is essential for effectively addressing misinformation, especially given its cross-border nature and the evolving expectations of users in today’s digital landscape.
Conclusion
A balance between protecting free speech and safeguarding the public interest is needed to navigate the legal risks that user-generated misinformation poses. As digital platforms like YouTube, Facebook, and Wikipedia continue to host vast amounts of user content, accountability measures are essential to mitigate the harms of misinformation. Establishing clear definitions and collaborative oversight can enhance transparency and build public trust. Furthermore, embracing innovative moderation technologies and fostering international partnerships will be vital in addressing this cross-border challenge. As we advance, the commitment to creating a responsible digital environment must remain a priority to ensure the integrity of information in our increasingly interconnected world.
References
- https://www.thehindu.com/opinion/op-ed/should-digital-platform-owners-be-held-liable-for-user-generated-content/article68609693.ece
- https://hbr.org/2021/08/its-time-to-update-section-230
- https://www.cnbctv18.com/information-technology/deepfakes-digital-india-act-safe-harbour-protection-information-technology-act-sajan-poovayya-19255261.htm

Introduction
AI has transformed the way we look at advanced technologies, but its evolution also raises concerns about AI-based deepfake scams, in which scammers use AI to create deepfake videos, images, and audio to deceive people and commit crimes. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of the victim's friend using AI-based deepfake technology. Awareness and vigilance are needed to safeguard ourselves from such incidents.
Unveiling the Kerala Deepfake Video Call Scam
The man in Kerala received a WhatsApp video call from a person claiming to be his former colleague from Andhra Pradesh; in reality, the caller was a scammer. He asked the victim for 40,000 rupees via Google Pay. To gain his trust, the scammer even mentioned mutual friends, and he claimed to be at the Dubai airport, urgently needing the money for his sister's medical emergency.
AI can analyse and process data such as facial images, videos, and audio to create realistic deepfakes that closely resemble the real thing. In the Kerala deepfake video call scam, the scammer made a video call featuring a facial appearance and voice convincingly similar to those of the victim's colleague. Believing he was genuinely communicating with his colleague, the Kerala man transferred the money without hesitation. When he later called his former colleague on the number saved in his contact list, the colleague said he had made no such call. The man then realised he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish between authentic and fabricated audio, videos, and images. Deepfake technology can create entirely fictional photos and videos from scratch; even audio can be deepfaked to create "voice clones" of anyone.
However, there are some red flags which can indicate the authenticity of the content:
- Video quality: Deepfake videos often have compromised or poor video quality and unusual blurring, which can call their genuineness into question.
- Looping videos: Deepfake videos often loop, freeze unusually, or repeat footage, indicating that the content might be fabricated.
- Verify separately: Whenever you receive a request for financial help, verify the situation by contacting the person directly through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, giving the victim no time to think and pressuring them into a quick decision. Be cautious when a sudden emergency demands urgent financial support from you.
- Report suspicious activity: If you encounter such activities on your social media accounts or through such calls report it to the platform or to the relevant authority.
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combating AI-based cybercrime. The Kerala man's loss of Rs 40,000 to an AI-based deepfake video call, in which a scammer posed as his former colleague, is an alarming reminder of the need to remain vigilant and cautious in the digital age. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from falling victim to AI-based cybercrimes and from malicious scammers who exploit these technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.