#FactCheck - Misleading Video Allegedly Depicting Trampling of Indian Tri-colour in Kerala or Tamil Nadu Circulates on Social Media
Executive Summary:
A video allegedly showing cars driving over an Indian flag while Pakistani flags fly overhead, supposedly in an Indian state, went viral on social media but has been established to be misleading. The video is neither from Kerala nor Tamil Nadu as claimed; it is from Karachi, Pakistan. Specific details, including a shop's name, the Pakistani flags, a car's number plate and geolocation analysis, pinpoint where the video was filmed. The false information underscores the importance of verifying content before sharing it.


Claims:
A video circulating on social media shows cars trampling the Indian Tricolour painted on a road, as Pakistani flags are raised in pride, with the incident allegedly taking place in Tamil Nadu or Kerala.


Fact Check:
Upon receiving the post, we closely watched the video and found several signs indicating that it was filmed in Pakistan, not anywhere in India.
We divided the video into keyframes and spotted a shop sign near the road.
We enhanced the image quality to read the shop's name clearly.
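The enhancement step can be sketched in Python. This is a minimal illustration, assuming the Pillow library; the file names and the scale and sharpening parameters are hypothetical, not the exact settings used in the investigation.

```python
from PIL import Image, ImageFilter

def enhance_frame(img: Image.Image, scale: int = 4) -> Image.Image:
    """Upscale a video still and sharpen it so small text
    (a shop sign, a number plate) becomes easier to read."""
    # Lanczos resampling preserves edges better than nearest-neighbour
    upscaled = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
    # An unsharp mask boosts local contrast around edges such as lettering
    return upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

# Usage (hypothetical file names):
#   frame = Image.open("keyframe.png")        # a still exported from the video
#   enhance_frame(frame).save("enhanced.png")
```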


The sign reads ‘Sanam’, and Pakistani flags can be seen waving along the road. Taking a cue from this, we ran keyword searches on the shop name. Among the results, a shop named ‘Sanam Boutique’ in Karachi, Pakistan matched the one in the video when analysed using geospatial techniques.



While geolocating the place shown in the viral video, we also found that the structure of the building matched.


The car’s number plate visible in the keyframes of the video provides additional confirmation of the location.

We found a website showing that the number plate is registered in Karachi, Pakistan.

Upon thorough investigation, it was found that the viral video was filmed in Karachi, Pakistan, not in Kerala or Tamil Nadu as claimed by various users on social media. Hence, the claim is false and misleading.
Conclusion:
The video circulating on social media, claiming to show cars trampling the Indian Tricolour on a road while Pakistani flags are waved, does not depict an incident in Kerala or Tamil Nadu. Fact-checking has confirmed that the location in the video is actually Karachi, Pakistan. The misrepresentation shows the importance of verifying the source of any information before sharing it on social media, to prevent the spread of false narratives.
- Claim: A video shows cars trampling the Indian Tricolour painted on a road, as Pakistani flags are raised in pride, taking place in Tamil Nadu or Kerala.
- Claimed on: X (formerly Twitter)
- Fact Check: Fake & Misleading
Related Blogs

In the pulsating heart of the digitized era, our world is rapidly morphing into a tightly knit network of interconnections. Concurrently, the vast expanse of the cyber realm continues to broaden at an unparalleled pace. As we, denizens of the Information Revolution, pioneer this challenging new frontier, a novel notion is steadily gaining traction as an essential instrument for tackling the multifaceted predicaments and hazards emanating from our escalating dependency on digital technology. This novel notion is cyber diplomacy.
Recently, a riveting discourse unraveling the continually evolving topography of cyber diplomacy unfolded on the podcast 'Patching the System.' Two distinguished personalities graced the conversation - Benedikt Wechsler, Switzerland's Ambassador for Digitization, and Kaja Ciglic, Senior Director of Digital Diplomacy at Microsoft. This thought-provoking dialogue provides a mesmerizing peek into the intricate maze of this freshly minted diplomatic domain - a landscape still in the process of carving out its rules against an ever-escalating high stakes backdrop.
Call for Robust International Norms
During their enlightening exchange, Wechsler and Ciglic shed light on the dire need for robust international norms and regulations in dynamic cyberspace. They drew comparisons with the well-established norms governing maritime and airspace activities, suggesting a similar framework to navigate the intricacies of the digital realm. The necessity of this mammoth task is accentuated by swift technological development and the unique nature of the internet, where participation is diverse.
Their discourse also underscores the critical argument that cyberspace cannot be commoditised: it has evolved into critical infrastructure that demands collective supervision. Wechsler also advocated collaboration, stressing the importance of a united front of big tech giants and governments working in tandem to create a resilient and secure digital landscape.
Dual Edged Sword
Their conversation courageously plunged into the more sinister depths of the digital world and dissected the rising tide of cyberspace militarisation. As an illustrative case in point, recent cyber operations in Ukraine starkly underscore how malevolent elements have exploited digital tools to disastrous effect. Ciglic astutely pointed out the inherent dual nature of this scenario: while malignant entities will persistently manipulate technologies like AI, these identical tools can simultaneously serve as critical allies in reinforcing cyber defences.
Finally, the dialogue unspools a potent call to arms. Both Wechsler and Ciglic fervently endorse the inception of a permanent body under the United Nations' purview specifically designed to tackle cyber-related quandaries. They also emphasised the significance of an inclusive engagement process involving diverse stakeholders across sectors: private entities, academia and civil society.
In India, this strategy is very practical. India has been making proactive investments in cybersecurity and digital resilience due to its rapidly developing digital ecosystem and strong IT industry. The country's government, business executives and academic institutions understand how strategically important it is to protect vital digital infrastructure and data. For example, India has seen a number of high-profile attacks on its critical infrastructure, such as the Mumbai power outage in 2020, which emphasise the necessity of extensive cybersecurity protections. The Indian government's "Digital India" programme, which aims to promote digital inclusion, has given top priority to the security components of the digital ecosystem. The programme has improved cybersecurity while also making great progress toward closing the nation's digital gap, especially in rural areas.
India's growing influence on global affairs and its prowess in the digital realm highlight how important it is to incorporate Indian viewpoints into the larger strategy. Doing so guarantees a thorough, all-encompassing approach that negotiates the intricacies of both the Indian and the global digital ecosystems. It enhances cybersecurity at the national level and establishes India as a key global partner in the endeavour to make the internet a safer and more secure place for everyone. The wider community can benefit greatly from India's experience in combating cyber dangers and enhancing resilience in an increasingly interconnected world.
Conclusion
As we meticulously chart our trajectory across the cyber wilderness, the wisdom disseminated by Wechsler and Ciglic emerges as a priceless navigational aid. They inspire us to remember that while the gauntlet we face may be daunting, the opportunities unfurling before us are equally, if not more, monumental in their potential. By embracing a multi-faceted, synergistic approach, we set the stage for a shared journey towards a safer, resilient digital habitat.
The timeless words of Albert Einstein echo these sentiments: 'Technology advances could have made human life carefree and happy if the development of the organizing power of men [and women] had been able to keep pace with its technical advances.' As we grapple with the perplexities of the digital age, let these words guide our collective endeavour as we strive to balance our organising prowess with our rapid technological advancements.

Introduction
With the development of technology, voice cloning scams are one issue that has recently come to light. Scammers are keeping pace with AI, and their methods and plans for deceiving people have altered accordingly. Deepfake technology creates realistic imitations of a person's voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one can protect oneself from them.
What is Deepfake?
“Deepfake” refers to artificial intelligence (AI) that can produce fake or altered audio, video and film that pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something they never did in audio or visual form; a well-known example is deepfake audio impersonating the American President. Deep voice impersonation technology can be used maliciously, such as in deep voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice frauds use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. With this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake speech frauds are increasing in frequency as deepfake technology becomes more widely available, more sophisticated and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to impersonate people or entities and thereby mislead users into providing private information, money or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials or bank employees, and use them to trick victims into taking actions that benefit the criminals: sending money, disclosing login credentials or revealing sensitive information. In phishing attacks, fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; these recordings can trick people into downloading malware, clicking on dangerous links or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, so every organisation and the general public must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. Here are some guidelines to help you spot such scams and keep away from them:
- Steer clear of telemarketing calls
- Making unsolicited phone calls is one of the most common tactics of deepfake voice con artists, who often pretend to be bank personnel or government officials.
- Listen closely to the voice
- Pay special attention to the voice of anyone who phones you claiming to be someone else. Are there any peculiar pauses or inflections in their speech? Anything that doesn’t seem right could be deep voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title and employer, then do some research to be sure they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information like your Aadhaar number, bank account details or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it’s a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deep voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. It can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, but it can also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. Users must be aware of the hazards and take the necessary precautions to protect themselves as the technology develops, making deepfake schemes harder to detect and prevent. Ongoing research is needed to develop efficient techniques for identifying and controlling the risks of this technology. We must deploy AI responsibly and ethically to ensure that AI voice deepfake technology benefits society rather than harming or deceiving it.
Introduction
The Indian Cybercrime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. The MHA approved the scheme for the establishment of the I4C in October 2018, and it was inaugurated by Home Minister Amit Shah in January 2020. I4C is envisaged to act as the nodal point for curbing cybercrime in the country. Recently, on 13th March 2024, the Centre designated the I4C as an agency of the MHA to perform functions under the Information Technology Act, 2000 and to notify instances of unlawful cyber activities.
The gazetted notification dated 13th March 2024 read as follows:
“In exercise of the powers conferred by clause (b) of sub-section (3) of section 79 of the Information Technology Act 2000, Central Government being the appropriate government hereby designate the Indian Cybercrime Coordination Centre (I4C), to be the agency of the Ministry of Home Affairs to perform the functions under clause (b) of sub-section (3) of section 79 of Information Technology Act, 2000 and to notify the instances of information, data or communication link residing in or connected to a computer resource controlled by the intermediary being used to commit the unlawful act.”
Impact
Now, the Indian Cyber Crime Coordination Centre (I4C) is empowered to issue direct takedown orders under Section 79(3)(b) of the IT Act, 2000. Any information, data or communication link residing in or connected to a computer resource controlled by an intermediary and being used to commit unlawful acts can be notified by the I4C to that intermediary. If the intermediary fails to expeditiously remove or disable access to the material after being notified, it loses its eligibility for protection under Section 79 of the IT Act, 2000.
Safe Harbour Provision
Section 79 of the IT Act also serves as a safe harbour provision for intermediaries. It states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, this legal immunity is not available if the intermediary "fails to expeditiously" take down a post or remove particular content after the government or its agencies flag that the information is being used to commit something unlawful. Furthermore, intermediaries are obliged to perform due diligence on their platforms, comply with the applicable rules and regulations, and maintain and promote a safe digital environment on their respective platforms.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the government has also mandated that a ‘significant social media intermediary’ must appoint a Chief Compliance Officer (CCO), a Resident Grievance Officer (RGO) and a Nodal Contact Person, and publish a periodic compliance report every month mentioning the details of complaints received and the action taken thereon.
I4C's Role in Safeguarding Cyberspace
The Indian Cyber Crime Coordination Centre (I4C) is actively working on initiatives to combat emerging threats in cyberspace. As a crucial extension of the Ministry of Home Affairs, Government of India, it works extensively to combat cybercrime and ensure the overall safety of netizens. The ‘National Cyber Crime Reporting Portal’, equipped with the 24x7 helpline number 1930, is one of the key components of the I4C.
Components Of The I4C
- National Cyber Crime Threat Analytics Unit
- National Cyber Crime Reporting Portal
- National Cyber Crime Training Centre
- Cyber Crime Ecosystem Management Unit
- National Cyber Crime Research and Innovation Centre
- National Cyber Crime Forensic Laboratory Ecosystem
- Platform for Joint Cyber Crime Investigation Team
Conclusion
I4C, through its initiatives and collaborative efforts, plays a pivotal role in safeguarding cyberspace and ensuring the safety of netizens. I4C reinforces India's commitment to combatting cybercrime and promoting a secure digital environment. The recent development by designating the I4C as an agency to notify the instances of unlawful activities in cyberspace serves as a significant step to counter cybercrime and promote an ethical and safe digital environment for netizens.
References
- https://www.deccanherald.com/india/centre-designates-i4c-as-agency-of-mha-to-notify-unlawful-activities-in-cyber-world-2936976
- https://www.business-standard.com/india-news/home-ministry-authorises-i4c-to-issue-takedown-notices-under-it-act-124031500844_1.html
- https://www.hindustantimes.com/india-news/it-ministry-empowers-i4c-to-notify-instances-of-cybercrime-101710443217873.html
- https://i4c.mha.gov.in/about.aspx