#FactCheck - Virat Kohli's Ganesh Chaturthi Video Falsely Linked to Ram Mandir Inauguration
Executive Summary:
A video of Indian cricketer Virat Kohli attending a Ganesh Chaturthi celebration in September 2023 has been circulated with the false claim that it shows him at the Ram Mandir consecration ceremony in Ayodhya on January 22, 2024. The Hindi newspaper Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar also carried the now-viral video in their editions of January 23, 2024, amplifying the false claim. After thorough investigation, we found that the video is old footage of the cricketer attending a Ganesh Chaturthi celebration.
Claims:
Many social media posts, including those from news outlets such as Dainik Bhaskar and the Gujarati newspaper Divya Bhaskar, claimed the video shows Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22. Investigation revealed that the video actually shows him attending a Ganesh Chaturthi celebration in September 2023.
The caption of the Dainik Bhaskar e-paper reads, “क्रिकेटर विराट कोहली भी नजर आए” (“Cricketer Virat Kohli was also spotted”).
Fact Check:
The CyberPeace Research Team ran a reverse image search on frames from the video. Several earlier posts showed Kohli in the same black outfit; among them, a Bollywood entertainment Instagram profile named Bollywood Society had shared the same video on 20 September 2023 with the caption, “Virat Kohli snapped for Ganapaati Darshan”.
Taking a cue from this, we ran a keyword search using the information at hand and found an article by the Free Press Journal. The article reports that Virat Kohli visited the residence of Shiv Sena leader Rahul Kanal to seek the blessings of Lord Ganpati. The viral video and the claim made by the news outlets are therefore false and misleading.
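Reverse image search of this kind works by matching compact perceptual fingerprints of images rather than exact files, so a re-encoded upload of old footage still matches the original. As a purely illustrative sketch (the hash size, synthetic frames, and thresholds here are assumptions for demonstration; real investigations use dedicated search services), a minimal "average hash" comparison looks like this:

```python
# Illustrative sketch: a tiny "average hash" (aHash) for matching near-identical
# video frames, the same idea that underlies reverse image search. The "frames"
# here are synthetic grayscale matrices, not real video data.

def average_hash(pixels, size=8):
    """Downscale a grayscale image (list of rows) to size x size by block
    averaging, then emit one bit per cell: 1 if above the global mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    cells = []
    for r in range(size):
        for c in range(size):
            block = [pixels[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return tuple(1 if v > mean else 0 for v in cells)

def hamming(h1, h2):
    """Number of differing bits; a small distance means near-duplicate images."""
    return sum(a != b for a, b in zip(h1, h2))

# A 16x16 synthetic "frame": bright left half, dark right half.
frame = [[200 if x < 8 else 30 for x in range(16)] for y in range(16)]
# A lightly brightened copy (like a re-encoded upload of the same footage)...
near_dup = [[min(255, p + 10) for p in row] for row in frame]
# ...and an unrelated frame with the halves swapped.
different = [[30 if x < 8 else 200 for x in range(16)] for y in range(16)]

h0, h1, h2 = (average_hash(f) for f in (frame, near_dup, different))
print(hamming(h0, h1))  # 0  -> same footage despite the brightness shift
print(hamming(h0, h2))  # 64 -> clearly different content
```

Because the fingerprint survives re-encoding and brightness changes, an earlier post of the same footage (here, the September 2023 upload) can be found even when the viral file itself differs byte-for-byte.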
Conclusion:
The viral video shared by social media posts and news outlets is old footage of Virat Kohli attending a Ganesh Chaturthi celebration in 2023, not the recent Ram Mandir Pran Pratishtha ceremony. Notably, we found no confirmation that Virat Kohli attended the event in Ayodhya on January 22 at all. Hence, we found this claim to be fake.
- Claim: Virat Kohli attending the Ram Mandir consecration ceremony in Ayodhya on January 22
- Claimed on: YouTube, X
- Fact Check: Fake
Introduction
The Indian Cybercrime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. The MHA approved the scheme for establishing the I4C in October 2018, and it was inaugurated by Home Minister Amit Shah in January 2020. I4C is envisaged as the nodal point for curbing cybercrime in the country. Recently, on 13th March 2024, the Centre designated the I4C as the agency of the MHA to perform functions under the Information Technology Act, 2000, and to notify instances of unlawful cyber activities.
The gazetted notification dated 13th March 2024 read as follows:
“In exercise of the powers conferred by clause (b) of sub-section (3) of section 79 of the Information Technology Act 2000, Central Government being the appropriate government hereby designate the Indian Cybercrime Coordination Centre (I4C), to be the agency of the Ministry of Home Affairs to perform the functions under clause (b) of sub-section (3) of section 79 of Information Technology Act, 2000 and to notify the instances of information, data or communication link residing in or connected to a computer resource controlled by the intermediary being used to commit the unlawful act.”
Impact
Now, the Indian Cyber Crime Coordination Centre (I4C) is empowered to issue direct takedown orders under Section 79(3)(b) of the IT Act, 2000. Any information, data or communication link residing in or connected to a computer resource controlled by any intermediary and being used to commit unlawful acts can be notified by the I4C to the intermediary. If an intermediary fails to expeditiously remove or disable access to such material after being notified, it loses its eligibility for protection under Section 79 of the IT Act, 2000.
Safe Harbour Provision
Section 79 of the IT Act also serves as a safe harbour provision for intermediaries. It states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, this legal immunity is not available if the intermediary "fails to expeditiously" take down a post or remove particular content after the government or its agencies flag that the information is being used to commit something unlawful. Furthermore, intermediaries are obliged to perform due diligence on their platforms, comply with the applicable rules and regulations, and maintain and promote a safe digital environment on their respective platforms.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the government has also mandated that a ‘significant social media intermediary’ appoint a Chief Compliance Officer (CCO), a Resident Grievance Officer (RGO), and a Nodal Contact Person, and publish a monthly compliance report detailing the complaints received and the action taken on them.
I4C's Role in Safeguarding Cyberspace
The Indian Cyber Crime Coordination Centre (I4C) is actively working on initiatives to combat emerging threats in cyberspace. I4C is a crucial extension of the Ministry of Home Affairs, Government of India, working extensively to combat cybercrime and ensure the overall safety of netizens. The ‘National Cyber Crime Reporting Portal’, equipped with the 24x7 helpline number 1930, is one of the key components of the I4C.
Components Of The I4C
- National Cyber Crime Threat Analytics Unit
- National Cyber Crime Reporting Portal
- National Cyber Crime Training Centre
- Cyber Crime Ecosystem Management Unit
- National Cyber Crime Research and Innovation Centre
- National Cyber Crime Forensic Laboratory Ecosystem
- Platform for Joint Cyber Crime Investigation Team
Conclusion
I4C, through its initiatives and collaborative efforts, plays a pivotal role in safeguarding cyberspace and ensuring the safety of netizens. I4C reinforces India's commitment to combating cybercrime and promoting a secure digital environment. The recent designation of the I4C as the agency to notify instances of unlawful activities in cyberspace is a significant step towards countering cybercrime and promoting an ethical and safe digital environment for netizens.
References
- https://www.deccanherald.com/india/centre-designates-i4c-as-agency-of-mha-to-notify-unlawful-activities-in-cyber-world-2936976
- https://www.business-standard.com/india-news/home-ministry-authorises-i4c-to-issue-takedown-notices-under-it-act-124031500844_1.html
- https://www.hindustantimes.com/india-news/it-ministry-empowers-i4c-to-notify-instances-of-cybercrime-101710443217873.html
- https://i4c.mha.gov.in/about.aspx#:~:text=Objectives%20of%20I4C,identifying%20Cybercrime%20trends%20and%20patterns
Introduction
The emergence of deepfake technology has become a significant problem in an era driven by technological growth. Concerns about the exploitation of this technology, given its extraordinary realism in manipulating information, have prompted a proactive government response. As the digital world changes, the central government is in the vanguard of defending national interests, public trust, and security. On 26th December 2023, it issued an advisory to intermediaries, highlighting how urgent it is to confront this growing threat.
The advisory aims to directly address the growing concerns around deepfakes, or misinformation driven by AI. It represents the outcome of talks that Union Minister Shri Rajeev Chandrasekhar held with intermediaries during a month-long Digital India dialogue. Its main aim is to accurately and clearly inform users about prohibited content, especially the categories listed under Rule 3(1)(b) of the IT Rules.
Advisory
The Ministry of Electronics and Information Technology (MeitY) has sent a formal recommendation to all intermediaries, requesting adherence to current IT regulations and emphasizing the need to address issues with misinformation, specifically those driven by artificial intelligence (AI), such as Deepfakes. Union Minister Rajeev Chandrasekhar released the recommendation, which highlights the necessity of communicating forbidden information in a clear and understandable manner, particularly in light of Rule 3(1)(b) of the IT Rules.
Advice on Prohibited Content Communication
According to MeitY's advice, intermediaries must transmit content that is prohibited by Rule 3(1)(b) of the IT Rules in a clear and accurate manner. This involves giving users precise details during enrollment, login, and content sharing/uploading on the website, as well as including such information in customer contracts and terms of service.
Ensuring Users Are Aware of the Rules
Digital platform providers are required to inform their users of the laws that apply to them. This covers provisions found in the IT Act of 2000 and the Indian Penal Code (IPC). Corporations should inform users of the potential consequences of breaking the restrictions outlined in Rule 3(1)(b) and should also urge users to report any illegal activity to law enforcement.
Talks Concerning Deepfakes
For more than a month, Union Minister Rajeev Chandrasekhar held significant discussions with various platforms addressing the issue of "deepfakes", or computer-generated fake videos. The discussions emphasized how crucial it is that everyone abides by the laws and regulations in effect, particularly the IT Rules, to prevent deepfakes from spreading.
Addressing the Danger of Disinformation
Minister Chandrasekhar underlined the grave issue of disinformation, particularly in the context of deepfakes, which are false pieces of content produced using the latest developments such as artificial intelligence. He emphasized the dangers this deceptive data posed to internet users' security and confidence. The Minister emphasized the efficiency of the IT regulations in addressing this issue and cited the Prime Minister's caution about the risks of deepfakes.
Rule Against Spreading False Information
The Minister referred particularly to Rule 3(1)(b)(v), which states unequivocally that it is forbidden to disseminate false information, even when doing so involves cutting-edge technology like deepfakes. He called on intermediaries—the businesses that offer digital platforms—to take prompt action to take such content down from their systems. Additionally, he ensured that everyone is aware that breaking such rules has legal implications.
Analysis
The Central Government's latest advisory on deepfake technology demonstrates a proactive strategy to deal with new issues. It also highlights the necessity of comprehensive legislation to directly regulate AI material, particularly with regard to user interests.
There is a wider regulatory vacuum for content produced by artificial intelligence, even though the current guideline concentrates on the precision and lucidity of information distribution. While some limitations are mentioned in the existing laws, there are no clear guidelines for controlling or differentiating AI-generated content.
Positively, it is laudable that the government has recognized the dangers posed by deepfakes and is making appropriate efforts to counter them. As AI technology develops, there is a chance to create thorough laws that not only solve problems but also create a supportive environment for the creation of ethical AI content. User protection, accountability, openness, and moral AI use would all benefit from such laws. This offers an opportunity for regulatory development to guarantee the successful and advantageous incorporation of AI into our digital environment.
Conclusion
The Central Government's preemptive advice on deepfake technology shows a great dedication to tackling new risks in the digital sphere. The advice highlights the urgent need to combat deepfakes, but it also highlights the necessity for extensive legislation on content produced by artificial intelligence. The lack of clear norms offers a chance for constructive regulatory development to protect the interests of users. The advancement of AI technology necessitates the adoption of rules that promote the creation of ethical AI content, guaranteeing user protection, accountability, and transparency. This is a turning point in the evolution of regulations, making it easier to responsibly incorporate AI into our changing digital landscape.
References
- https://economictimes.indiatimes.com/tech/technology/deepfake-menace-govt-issues-advisory-to-intermediaries-to-comply-with-existing-it-rules/articleshow/106297813.cms
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542#:~:text=Ministry%20of%20Electronics%20and%20Information,misinformation%20powered%20by%20AI%20%E2%80%93%20Deepfakes.
- https://www.timesnownews.com/india/centres-deepfake-warning-to-it-firms-ensure-users-dont-violate-content-rules-article-106298282#:~:text=The%20Union%20government%20on%20Tuesday,actors%2C%20businesspersons%20and%20other%20celebrities
Introduction
Conversations surrounding the scourge of misinformation online typically focus on the risks to social order, political stability, economic safety and personal security. An oft-overlooked aspect of this phenomenon is the fact that it also takes a very real emotional and mental toll on people. Even as we grapple with the big picture questions about financial fraud or political rumors or inaccurate medical information online, we must also appreciate the fact that being exposed to misinformation and becoming aware of one’s own vulnerability are both significant sources of mental stress in today’s digital ecosystem.
Inaccurate information causes confusion and worry, which has negative consequences for mental health. Misinformation may also impair people's sense of well-being by undermining their trust in institutions, authority figures, and their own judgment. The constant bombardment of misinformation can lead to information overload, wherein people are unable to discriminate between legitimate sources and misleading content, resulting in mental exhaustion and a sense of being overwhelmed by the sheer volume of information available. Vulnerable groups such as children, the elderly, and those with pre-existing health conditions are more sensitive or susceptible to the negative effects of misinformation.
How Does Misinformation Endanger Mental Health?
Misinformation on social media platforms is a matter of public health because it has the potential to confuse people, lead to poor decision-making and result in cognitive dissonance, anxiety and unwanted behavioural changes.
Unconstrained misinformation can also lead to social disorder and the prevalence of negative emotions amongst larger numbers, ultimately causing a huge impact on society. Therefore, understanding the spread and diffusion characteristics of misinformation on Internet platforms is crucial.
The spread of misinformation can elicit different emotions of the public, and the emotions also change with the spread of misinformation. Factors such as user engagement, number of comments, and time of discussion all have an impact on the change of emotions in misinformation. Active users tend to make more comments, engage longer in discussions, and display more dominant negative emotions when triggered by misinformation. Understanding the evolution pattern of emotions triggered by misinformation is also important in view of the public’s emotional fluctuations under the influence of misinformation, and social media often magnifies the impact of emotions and makes emotions spread rapidly in social networks. For example, the sentiment of misinformation increases when there are sensitive topics such as political elections, viral trending topics, health-related information, communal and local information, information about natural disasters and more. Active misinformation on the Internet not only affects the public's psychology, mental health and behavior, but also has an impact on the stability of social order and the maintenance of social security.
Prebunking and Debunking To Build Mental Guards Against Misinformation
As the spread of misinformation and disinformation rises, so do the techniques aimed at tackling it. Prebunking, or attitudinal inoculation, is a technique for training individuals to recognise and resist deceptive communications before they can take root. Prebunking is a psychological method for mitigating the effects of misinformation, strengthening resilience and creating cognitive defences against future misinformation. Debunking provides individuals with accurate information to counter false claims and myths, correcting misconceptions and preventing the spread of misinformation. By presenting evidence-based refutations, debunking helps individuals distinguish fact from fiction.
What do health experts say about online misinformation?
“In the 21st century, mental health is crucial due to the overwhelming amount of information available online. The COVID-19 pandemic-related misinformation was a prime example of this, with misinformation spreading online, leading to increased anxiety, panic buying, fear of leaving home, and mistrust in health measures. To protect our mental health, it is essential to cultivate a discerning mindset, question sources, and verify information before consumption. Fostering a supportive community that encourages open dialogue and fact-checking can help navigate the digital information landscape with confidence and emotional support. Prioritising self-care routines, mindfulness practices, and seeking professional guidance are also crucial for safeguarding mental health in the digital information era.”
~ Aishwarya Menon, Dubai-based psychologist (BA in Psychology and Criminology, University of Western Ontario, London; MA in Mental Health and Addictions, Humber College, University of Guelph, Toronto), in conversation with CyberPeace.
CyberPeace Policy Recommendations:
1) Countering misinformation is everyone's shared responsibility. To mitigate the negative effects of infodemics online, we must look at developing strong legal policies, creating and promoting awareness campaigns, relying on authenticated content on mass media, and increasing people's digital literacy.
2) Expert organisations that actively verify information through various strategies, including prebunking and debunking efforts, are among those best placed to refute misinformation and direct users to evidence-based information sources. It is recommended that platforms strengthen countermeasures for users with evidence-based data and accurate information.
3) The role of social media platforms is crucial in the misinformation crisis; hence it is recommended that social media platforms actively counter the production of misinformation on their platforms. Local, national, and international efforts, along with additional research, are required to implement robust misinformation counterstrategies.
4) Netizens are advised to follow official sources to check the reliability of any news or information. They must learn to recognise red flags such as questionable facts, poorly written text, surprising or upsetting news, fake social media accounts, and fake websites designed to look like legitimate ones. Netizens are also encouraged to develop the cognitive skills to discern fact from fiction, and to approach information with a healthy dose of skepticism and curiosity.
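The red flags listed above can be sketched as a naive heuristic scorer. This is purely illustrative: the keyword lists, caps-ratio threshold, and example post are assumptions for demonstration, and no automated score substitutes for verifying a claim against trusted sources.

```python
# Illustrative only: a naive red-flag scorer for the warning signs listed above.
# The keyword lists and thresholds are made-up assumptions for demonstration;
# real fact-checking relies on human verification and trusted sources.

SENSATIONAL = ("shocking", "you won't believe", "miracle", "exposed")
URGENCY = ("share before deleted", "forward to everyone", "act now")

def red_flag_score(text: str, source_verified: bool) -> int:
    """Count simple warning signs in a post: sensational wording, pressure
    to spread it fast, shouting punctuation/caps, and an unverified source."""
    t = text.lower()
    score = 0
    score += sum(kw in t for kw in SENSATIONAL)       # sensational claims
    score += sum(kw in t for kw in URGENCY)           # pressure to spread fast
    if "!!" in text:
        score += 1                                    # shouting punctuation
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
        score += 1                                    # mostly ALL-CAPS text
    if not source_verified:
        score += 1                                    # no trusted source
    return score

post = "SHOCKING!! Miracle cure EXPOSED, share before deleted!"
print(red_flag_score(post, source_verified=False))  # 6 -> treat with suspicion
```

The higher the count, the more reason to pause and verify before sharing, which is exactly the habit the recommendation above asks netizens to build.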
Final Words:
It is crucial to protect mental health amid the escalating and disturbing rise of misinformation incidents on various subjects. Safeguarding our minds requires cognitive skills, media literacy, and verifying information through trusted sources, as well as prioritising mental health through self-care practices and staying connected with supportive, authenticated networks. Promoting prebunking and debunking initiatives is necessary. Netizens can protect themselves against the negative effects of misinformation and cultivate a resilient mindset in the digital information age.
References:
- https://www.hindawi.com/journals/scn/2021/7999760/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/