#FactCheck - Indian Men’s 4x400m Relay Team’s Record-Breaking Achievement in August 2023 Misrepresented as Recent Event
Executive Summary:
A video circulating on social media claims that the Indian men’s 4x400m relay team recently broke the Asian record and qualified for the finals of the World Athletics Championships. The fact check reveals that this is not a recent event; the video is from the World Athletics Championships held in Budapest, Hungary, in August 2023. The Indian team, comprising Muhammed Anas Yahiya, Amoj Jacob, Muhammed Ajmal Variyathodi, and Rajesh Ramesh, clocked 2 minutes 59.05 seconds in the heats, finishing second behind the USA and breaking the Asian record. Although they performed very well in the heats, they finished only fifth in the final. The video is being reuploaded with false claims that it shows a recent record.

Claims:
The Indian men’s 4x400m relay team recently set the Asian record and qualified for the World Athletics Championships final.




Fact Check:
A video of the Indian men’s 4x400m relay team setting a new Asian record has recently gone viral on various social media platforms. Many believe it shows a recent achievement of the Indian team. Upon receiving the posts, we performed keyword searches based on the content and found related posts across social media, as well as an article published by ‘The Hindu’ on August 27, 2023.

According to the article, the Indian team competed in the World Athletics Championships held in Budapest, Hungary, where it put in a very strong performance. The team, consisting of Muhammed Anas Yahiya, Amoj Jacob, Muhammed Ajmal Variyathodi, and Rajesh Ramesh, completed the race in 2 minutes 59.05 seconds, coming second after the USA in the heat.
The earlier Asian record of 3:00.25 was set in 2021.

This was a new Asian record, making it a historic moment for India. Although the achievement was real, the video is being reshared with captions that imply it is a recent event, which has caused confusion. We found various social media posts from August 26, 2023, including the same video posted on the official X account of Prime Minister Narendra Modi, whose caption reads, “Incredible teamwork at the World Athletics Championships!
Anas, Amoj, Rajesh Ramesh, and Muhammed Ajmal sprinted into the finals, setting a new Asian Record in the M 4X400m Relay.
This will be remembered as a triumphant comeback, truly historical for Indian athletics.”

This confirms that the video is not recent but is from the World Athletics Championships held in Budapest, Hungary, in August 2023.
Conclusion:
The claim that the viral video shows a recent achievement by the Indian men’s 4x400m relay team is not true. The video is from the World Athletics Championships held in Budapest in August 2023. The Indian team broke the Asian record with a time of 2 minutes 59.05 seconds, finishing second, while the US team finished first with a time of 2 minutes 58.47 seconds. The video circulating as a recent event is therefore misleading and false.
- Claim: The Indian men's 4x400m relay team recently broke the Asian record and qualified for the World finals.
- Claimed on: X, LinkedIn, Instagram
- Fact Check: Fake & Misleading

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers but were generated by AI. More than a million people turned to the "Nano Banana" AI tool of Google Gemini, uploading their ordinary selfies and watching them transform into Bollywood-style, cinematic, 1990s posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts regarding risks of privacy infringement, unauthorised data sharing, and threats related to deepfake misuse.
What is the Trend?
This AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the specified description. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, contributing to the viral nature of the trend.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participation in such trends. IPS Officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to many scams and frauds.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put the user at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms regarding how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, acts which are punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass popularity of AI photo trends has several severe consequences for individual users and society as a whole. Identity fraud and theft are the main issues, as uploaded biometric information can be used by hackers to generate imitated identities, evading security measures or committing financial fraud. The facial recognition information shared by means of these trends remains a digital asset that could be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. The uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputation, relationships, and career prospects.
Financial exploitation also occurs when fake applications, posing as genuine AI tools, strip users of their personal data and financial details. Such malicious platforms tend to imitate well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may subject users to several layers of privacy threats that go far beyond the instant gratification of taking pleasing images. Harvesting of biometric data is the most critical issue since facial recognition information posted on these sites becomes inextricably linked with user identities. Under Google's privacy policy for Gemini tools, uploaded images might be stored temporarily for processing and may be kept for longer periods if used for feedback purposes or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A Mozilla Foundation study in 2023 discovered that 80% of popular AI apps had either non-transparent data policies or obscured the ability of users to opt out of data gathering. This opens up opportunities for personal photographs to be shared with anonymous entities for commercial use. Exploitation of training data includes the use of personal photos uploaded to enhance AI models without notifying or compensating users. Although Google provides users with options to turn off data sharing within privacy settings, most users are ignorant of these capabilities. Integration of cross-platform data increases privacy threats when AI applications use data from interlinked social media profiles, providing detailed user profiles that can be taken advantage of for purposeful manipulation or fraud. Inadequacy of informed consent continues to be a major problem, with users engaging in trends unaware of the entire context of sharing information. Studies show that 68% of individuals show concern regarding the misuse of AI app data, but 42% use these apps without going through the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those trained for facial recognition. Instead, a person can experiment with stock images or non-identifiable pictures that satisfy the program's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit access to one's data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies regarding employees' use of AI tools, particularly concerning the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product, to ensure that its privacy and security standards meet the company's requirements. Training should also instruct employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. Tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI allow real-time detection with accuracy higher than 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it much harder to pass off deepfake content as original.
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparency in the data policies of AI applications and provide users with rights and choices regarding biometric and other data. Indigenous AI development addressing India-centric privacy concerns should be promoted, ensuring that AI models are created in a secure, transparent, and accountable manner. On cross-border AI security, international cooperation is needed to set common standards for the ethical design, production, and use of AI. The viral spread of AI phenomena such as the saree editing trend illustrates both the potential and the hazards of the present generation of artificial intelligence. While such tools offer new opportunities, they also pose grave privacy and security concerns that users, organisations, and policymakers must take seriously. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/
Digitisation in Agriculture
The traditional way of doing agriculture has undergone massive digitisation in recent years, with several agricultural processes now linked to the Internet. This globally prevalent transformation, driven by smart technology, encompasses the use of sensors, IoT devices, and data analytics to optimise and automate labour-intensive farming practices. Farmers in the country and abroad now leverage real-time data to monitor soil conditions, weather patterns, and crop health, enabling precise resource management and improved yields. The integration of smart technology in agriculture not only enhances productivity but also promotes sustainable practices by reducing waste and conserving resources. As a result, the agricultural sector is becoming more efficient, resilient, and capable of meeting the growing global demand for food.
Digitisation of Food Supply Chains
There has also been an increase in the digitisation of food supply chains across the globe, since it enables both suppliers and consumers to keep track of each stage of food processing from farm to table and ensures the authenticity of the food product. The latest generation of agricultural robots is being tested to minimise human intervention. AI-run processes are thought to mitigate labour shortages, improve warehousing and storage, and make transportation more efficient by running continuous evaluations and adjusting conditions in real time while increasing yield. The company Muddy Machines is currently trialling an autonomous asparagus-harvesting robot called Sprout that not only addresses labour shortages but also selectively harvests green asparagus, which traditionally requires careful picking. However, Chris Chavasse, co-founder of Muddy Machines, highlights that hackers and malicious actors could potentially hack into the robot's servers and prevent it from operating, or drive it into a ditch or a hedge, thereby impeding core crop activities like seeding and harvesting. Hacking agricultural machinery can also damage a farmer’s produce and, in turn, profitability for the season.
Case Study: Muddy Machines and Cybersecurity Risks
A cyber attack on digitised agricultural processes has a cascading impact on online food supply chains. The risks are non-exhaustive and spill over into poor protection of cargo in transit, increased manufacturing of counterfeit products, manipulation of data, poor warehousing facilities, and product-specific fraud, amongst others. Suppliers are also affected in cases where they have delivered food products but fail to receive their payments. These cyber threats may include malware (primarily ransomware), which accounts for 38% of attacks; Internet of Things (IoT) attacks, which comprise 29%; Distributed Denial of Service (DDoS) attacks; SQL injections; phishing attacks; and more.
Prominent Cyber Attacks and Their Impacts
Ransomware attacks are the most common form of cyber threat to food supply chains and may involve malicious contamination, deliberate damage, and destruction of tangible assets (like infrastructure) or intangible assets (like reputation and brand). In 2017, the NotPetya malware disrupted the world’s largest logistics giant, Maersk, and destroyed all end-user devices in more than 60 countries. Interestingly, NotPetya was also linked to the malfunction of freezers connected to control systems. The attack compromised these control systems, resulting in freezer failures and potential spoilage of food, highlighting the vulnerability of industrial control systems to cyber threats.
Further Case Studies
NotPetya also impacted Mondelez, the maker of Oreos, disrupting its email systems, file access, and logistics for weeks. Mondelez’s insurance claim was also denied, since the NotPetya malware was described as a “war-like” action falling outside the purview of the insurance coverage. In April 2021, over the Easter weekend, Bakker Logistiek, a logistics company based in the Netherlands that offers air-conditioned warehousing and food transportation for Dutch supermarkets, experienced a ransomware attack. This incident disrupted their supply chain for several days, resulting in empty shelves at Albert Heijn supermarkets, particularly for products such as packed and grated cheese. Despite the severity of the attack, the company successfully restored its operations within a week by utilising backups. JBS, one of the world’s biggest meat processing companies, had to pay $11 million in ransom via Bitcoin to resolve a cyber attack in the same year, whereby computer networks at JBS were hacked, temporarily shutting down operations and endangering consumer data. The disruption threatened food supplies and risked higher food prices for consumers. Additional cascading impacts include reduced food security and hindrances in processing payments at retail stores.
Credible Threat Agents and Their Targets
Any cyber attack is usually carried out by credible threat agents that can be classified as either internal or external. Internal threat agents may include contractors, visitors to business sites, former or current employees, and individuals who work for suppliers. External threat agents may include activists, cyber criminals, terror cells, etc. These threat agents target large organisations owing to their larger ransom-paying capacity, but may also target small companies due to their vulnerability and low experience, especially when such companies are migrating from analogue methods to digitised processes.
The Federal Bureau of Investigation warns that food and agricultural systems are most vulnerable to cybersecurity threats during critical planting and harvesting seasons. It noted an increase in cyber attacks against six agricultural cooperatives in 2021, with ancillary core functions such as food supply and distribution being impacted. As a result, cyber attacks may lead to a mass shortage of food, not only for human consumption but also for animals.
Policy Recommendations
To safeguard digital food supply chains, food defence emerges as one of the top countermeasures to prevent and mitigate the effects of intentional incidents and threats to the food chain. While earlier food defence vulnerability assessments focused on product adulteration and food fraud, vulnerability assessments of agricultural technology are now at least as relevant.
Food supply organisations must prioritise regular backups of data using air-gapped and password-protected offline copies, and ensure critical data copies are not modifiable or deletable from the main system. For this, blockchain-based food supply chain solutions may be deployed, which are not only resilient to hacking, but also allow suppliers and even consumers to track produce. Companies like Ripe.io, Walmart Global Tech, Nestle and Wholechain deploy blockchain for food supply management since it provides overall process transparency, improves trust issues in the transactions, enables traceable and tamper-resistant records and allows accessibility and visibility of data provenance. Extensive recovery plans with multiple copies of essential data and servers in secure, physically separated locations, such as hard drives, storage devices, cloud or distributed ledgers should be adopted in addition to deploying operations plans for critical functions in case of system outages. For core processes which are not labour-intensive, including manual operation methods may be used to reduce digital dependence. Network segmentation, updates or patches for operating systems, software, and firmware are additional steps which can be taken to secure smart agricultural technologies.
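The "traceable and tamper-resistant records" that blockchain-based supply chain platforms provide can be illustrated with a minimal sketch. This is not any vendor's actual system; the event fields and batch names below are hypothetical. The core idea is a hash-chained log: each entry stores the hash of the previous entry, so altering any past record breaks the chain and is detectable on verification.

```python
import hashlib
import json

def _entry_hash(event: dict, prev_hash: str) -> str:
    # Serialise deterministically so the same entry always hashes the same.
    payload = json.dumps({"event": event, "prev_hash": prev_hash},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_event(chain: list, event: dict) -> None:
    # Link the new entry to the hash of the previous one (or a zero genesis).
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"event": event,
                  "prev_hash": prev_hash,
                  "hash": _entry_hash(event, prev_hash)})

def verify_chain(chain: list) -> bool:
    # Recompute every hash; any edited record invalidates all later links.
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["event"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_event(chain, {"stage": "farm", "batch": "A-17", "temp_c": 4})
append_event(chain, {"stage": "warehouse", "batch": "A-17", "temp_c": 5})
append_event(chain, {"stage": "retail", "batch": "A-17", "temp_c": 6})

print(verify_chain(chain))          # True: records are intact
chain[1]["event"]["temp_c"] = 25    # simulate tampering with a cold-chain record
print(verify_chain(chain))          # False: tampering is detected
```

Real deployments add distributed consensus and digital signatures on top of this hashing idea, which is what makes the records hard for a single compromised party to rewrite.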
References
- Muddy Machines website, Accessed 26 July 2024. https://www.muddymachines.com/
- “Meat giant JBS pays $11m in ransom to resolve cyber-attack”, BBC, 10 June 2021. https://www.bbc.com/news/business-57423008
- Marshall, Claire & Prior, Malcolm, “Cyber security: Global food supply chain at risk from malicious hackers.”, BBC, 20 May 2022. https://www.bbc.com/news/science-environment-61336659
- “Ransomware Attacks on Agricultural Cooperatives Potentially Timed to Critical Seasons.”, Private Industry Notification, Federal Bureau of Investigation, 20 April https://www.ic3.gov/Media/News/2022/220420-2.pdf.
- Manning, Louise & Kowalska, Aleksandra. (2023). “The threat of ransomware in the food supply chain: a challenge for food defence”, Trends in Organized Crime. https://doi.org/10.1007/s12117-023-09516-y
- “NotPetya: the cyberattack that shook the world”, Economic Times, 5 March 2022. https://economictimes.indiatimes.com/tech/newsletters/ettech-unwrapped/notpetya-the-cyberattack-that-shook-the-world/articleshow/89997076.cms?from=mdr
- Abrams, Lawrence, “Dutch supermarkets run out of cheese after ransomware attack.”, Bleeping Computer, 12 April 2021. https://www.bleepingcomputer.com/news/security/dutch-supermarkets-run-out-of-cheese-after-ransomware-attack/
- Pandey, Shipra; Gunasekaran, Angappa; Kumar Singh, Rajesh & Kaushik, Anjali, “Cyber security risks in globalised supply chains: conceptual framework”, Journal of Global Operations and Strategic Sourcing, January 2020. https://www.researchgate.net/profile/Shipra-Pandey/publication/338668641_Cyber_security_risks_in_globalized_supply_chains_conceptual_framework/links/5e2678ae92851c89c9b5ac66/Cyber-security-risks-in-globalized-supply-chains-conceptual-framework.pdf
- Daley, Sam, “Blockchain for Food: 10 examples to know”, Built In, 22 March 2023. https://builtin.com/blockchain/food-safety-supply-chain
Introduction
The recent events in Mira Road, a bustling suburb on the outskirts of Mumbai, India, unfold like a modern-day parable, cautioning us against the perils of unverified digital content. The Mira Road incident, a communal clash that erupted into the physical realm, has been mirrored and magnified through the prism of social media. The Maharashtra Police, in a concerted effort to quell the spread of discord, issued stern warnings against the dissemination of rumours and fake messages. These digital phantoms, they stressed, have the potential to ignite law and order conflagrations, threatening the delicate tapestry of peace.
The police's clarion call came in the wake of a video, mischievously edited, that falsely claimed anti-social elements had set the Mira Road railway station ablaze. This digital doppelgänger of reality swiftly went viral, its tendrils reaching into the ubiquitous realm of WhatsApp, ensnaring the unsuspecting in its web of deceit.
In this age of information overload, where the line between fact and fabrication blurs, the police urged citizens to exercise discernment. The note they issued was not merely an advisory but a plea for vigilance, a reminder that the act of sharing unauthenticated messages is not a passive one; it is an act that can disturb the peace and unravel the fabric of society.
The Police Response
The police's response to this crisis was multifaceted. Administrators and members of social media groups found to be the harbingers of such falsehoods would face legal repercussions. The Thane District, a mosaic of cultural and religious significance, has been marred by a series of violent incidents, casting a shadow over its storied history. The police, in their role as guardians of order, have detained individuals, scoured social media for inauthentic posts, and maintained a vigilant presence in the region.
The Maharashtra cyber cell, a digital sentinel, has unearthed approximately 15 posts laden with videos and messages designed to sow discord among the masses. These findings were shared with the Mira-Bhayandar, Vasai-Virar (MBVV) police, who stand ready to take appropriate action. Inspector General Yashasvi Yadav of the Maharashtra cyber cell issued an appeal to the public, urging them to refrain from circulating such unverified messages, reinforcing the notion that the propagation of inauthentic information is, in itself, a crime.
The MBVV police, in their zero-tolerance stance, have formed a team dedicated to scrutinizing social media posts. The message is clear: fake news will be met with strict action. The right to free speech on social media comes with the responsibility not to share information that could incite mischief. The Indian Penal Code and Information Technology Act serve as the bulwarks against such transgressions.
The Aftermath
In the aftermath of the clashes, the police have worked tirelessly to restore calm. A young man, whose video replete with harsh and obscene language went viral, was apprehended and has since apologised for his actions. The MBVV police have also taken to social media to reassure the public that the situation is under control, urging them to avoid circulating messages that could exacerbate tensions.
The Thane district has witnessed acts of vandalism targeting shops, further escalating tensions. In response, the police have apprehended individuals linked to these acts, hoping that such measures will expedite the return of peace. Advisories have been issued, warning against the dissemination of provocative messages and rumours.
In total, 19 individuals have been taken into custody in relation to numerous incidents of violence. The Mira-Bhayandar and Vasai-Virar police have underscored their commitment to legal action against those who spread rumours through fake messages. The authorities have also highlighted the importance of brotherhood and unity, reminding citizens that above all, they are Indians first.
Conclusion
In a world where old videos, stripped of context, can fuel tensions, the police have issued a note referring to the aforementioned fake video message. They urge citizens to exercise caution, and to neither believe nor circulate such messages. Police authorities have assured that no one involved in the violence will be spared, and peace committees are being convened to restore harmony. The Mira Road incident serves as a reminder of the power of information and the responsibility that comes with it. In the digital age, where the ephemeral and the eternal collide, we must navigate the waters of truth with care. Ultimately, it is not just the image of a locality that is at stake, but the essence of our collective humanity.
References
- https://youtu.be/gK2Ac1qP-nE?feature=shared
- https://www.mid-day.com/mumbai/mumbai-crime-news/article/mira-road-communal-clash-those-spreading-fake-messages-to-face-strict-action-say-mira-bhayandar-vasai-virar-cops-23331572
- https://www.mid-day.com/mumbai/mumbai-news/article/mira-road-communal-clash-cybercops-on-alert-for-fake-clips-23331653
- https://www.theweek.in/wire-updates/national/2024/01/24/bom43-mh-shops-3rdld-vandalism.html