#FactCheck: Old Photo Misused to Claim Israeli Helicopter Downed in Lebanon!
Executive Summary
A viral image claims to show an Israeli helicopter shot down in South Lebanon. This investigation evaluates the authenticity of the picture and concludes that it is an old photograph, taken out of context and presented as a recent event.
Claims
The viral image circulating online claims to depict an Israeli helicopter recently shot down in South Lebanon during the ongoing conflict between Israel and militant groups in the region.
Fact Check:
A reverse image search traced the picture to a 2019 post on Arab48.com carrying the exact same photograph. The image therefore predates the current hostilities, and locating this original source puts an end to the false claim.
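For readers who want to replicate this kind of check programmatically, the sketch below compares perceptual hashes of two locally saved image files, a common way of confirming that a viral picture and an archived original are the same photograph. The file names are placeholders, and this is a minimal illustration of the technique rather than the exact workflow used in this fact check.

```python
# Minimal sketch: confirm two images are (near-)duplicates via perceptual hashing.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Placeholder file names: locally saved copies of the viral screenshot
# and of the photograph from the 2019 Arab48.com post.
viral_hash = imagehash.phash(Image.open("viral_post.jpg"))
archived_hash = imagehash.phash(Image.open("arab48_2019.jpg"))

# Hamming distance between the two 64-bit hashes; small values mean the
# same underlying photo even after cropping or re-compression.
distance = viral_hash - archived_hash
print(f"Hash distance: {distance}")
print("Likely the same photograph" if distance <= 8 else "Images differ substantially")
```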
Neither the major news agencies nor the Israel Defense Forces have issued any official report confirming that a helicopter was shot down in southern Lebanon during the current hostilities.
Conclusion
Cyber Peace Research Team has concluded that the viral image claiming to show an Israeli helicopter shot down in South Lebanon is misleading and has no connection to current events. It is an old photograph that has been widely shared in a different context, further inflaming tensions around the conflict. Readers are advised to verify claims through credible sources and not to spread false narratives.
- Claim: Israeli helicopter recently shot down in South Lebanon
- Claimed On: Facebook
- Fact Check: Misleading; original image found via Google reverse image search
Related Blogs
In the vast, interconnected cosmos of the internet, where knowledge and connectivity are celebrated as the twin suns of enlightenment, there lurk shadows of a more sinister nature. Here, in these darker corners, the innocence of childhood is not only exploited but also scarred, indelibly and forever. The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have surged to alarming levels globally, casting a long, ominous shadow over the digital landscape.
In response to this pressing issue, the National Human Rights Commission (NHRC) has unfurled a comprehensive four-part advisory, a beacon of hope aimed at combating CSAM and safeguarding the rights of children in this digital age. This advisory, dated 27 October 2023, is not merely a reaction to the rising tide of CSAM, but a testament to the imperative need for constant vigilance in the realm of cyber peace.
The statistics paint a sobering picture. In 2021, more than 1,500 instances of publishing, storing, and transmitting CSAM were reported, shedding a harsh light on the scale of the problem. Even more alarming is the upward trend in cases reported in subsequent years. By 2023, a staggering 450,207 cases of CSAM had already been reported, marking a significant increase from the 204,056 and 163,633 cases reported in 2022 and 2021, respectively.
Key Aspects of the Advisory
The NHRC's advisory commences with a fundamental recommendation - a redefinition of terminology. It suggests replacing the term 'Child Pornography' with 'Child Sexual Abuse Material' (CSAM). This shift in language is not merely semantic; it underscores the gravity of the issue, emphasizing that this is not about pornography but child abuse.
Moreover, the advisory calls for the term 'sexually explicit' to be defined under Section 67B of the IT Act, 2000. This step is crucial for ensuring the prompt identification and removal of online CSAM: a clear definition allows law enforcement to act swiftly in removing such content from the internet.
The digital world knows no borders, and CSAM can easily cross jurisdictional lines. NHRC recognizes this challenge and proposes that laws be harmonized across jurisdictions through bilateral agreements. Moreover, it recommends pushing for the adoption of a UN draft Convention on 'Countering the Use of Information and Communications Technologies for Criminal Purposes' at the General Assembly.
One of the critical aspects of the advisory is the strengthening of law enforcement. NHRC advocates for the creation of Specialized State Police Units in every state and union territory to handle CSAM-related cases. The central government is expected to provide support, including grants, to set up and equip these units.
The NHRC further recommends establishing a Specialized Central Police Unit under the government of India's jurisdiction. This unit will focus on identifying and apprehending CSAM offenders and maintaining a repository of such content. Its role is not limited to law enforcement; it is expected to cooperate with investigative agencies, analyze patterns, and initiate the process for content takedown. This coordinated approach is designed to combat the problem effectively, both on the dark web and open web.
The role of internet intermediaries and social media platforms in controlling CSAM is undeniable. The NHRC advisory emphasizes that intermediaries must deploy technology, such as content moderation algorithms, to proactively detect and remove CSAM from their platforms. This places the onus on the platforms to be proactive in policing their content and ensuring the safety of their users.
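To make that obligation concrete, the sketch below shows the general shape of hash-based detection that platforms typically pair with moderation algorithms: every upload is hashed and checked against a list of hashes of known abusive material supplied by child-protection bodies. It is a simplified, hypothetical illustration; the hash list and function names are placeholders, and real systems rely on robust perceptual hashes rather than plain cryptographic digests.

```python
# Simplified, hypothetical sketch of hash-list matching at upload time.
# Production systems use perceptual hashes shared through industry programmes;
# plain SHA-256 shown here only catches exact byte-for-byte copies.
import hashlib

# Placeholder set: in practice this list is maintained and distributed by
# child-protection organisations and is far larger.
KNOWN_BAD_HASHES: set[str] = set()

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large uploads do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block_upload(path: str) -> bool:
    """Return True if the upload matches known material and must be blocked,
    preserved as evidence, and reported as applicable law requires."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```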
New Developments
Platforms using end-to-end encryption services may be required to create additional protocols for monitoring the circulation of CSAM. Failure to do so may invite the withdrawal of the 'safe harbor' clause under Section 79 of the IT Act, 2000. This measure ensures that platforms using encryption technology are not inadvertently providing safe havens for those engaged in illegal activities.
NHRC's advisory extends beyond legal and law enforcement measures; it emphasizes the importance of awareness and sensitization at various levels. Schools, colleges, and institutions are called upon to educate students, parents, and teachers about the modus operandi of online child sexual abusers, the vulnerabilities of children on the internet, and the early signs of online child abuse.
To further enhance awareness, a cyber curriculum is proposed to be integrated into the education system. This curriculum will not only boost digital literacy but also educate students about relevant child care legislation, policies, and the legal consequences of violating them.
NHRC recognizes that survivors of CSAM need more than legal measures and prevention strategies. The advisory recommends that survivors receive support services and opportunities for rehabilitation through various means, with partnerships between civil society and other stakeholders playing a vital role. Moreover, psycho-social care centers are proposed in every district to provide need-based support services and to organize stigma eradication programs.
NHRC's advisory is a resounding call to action, acknowledging the critical importance of protecting children from the perils of CSAM. By addressing legal gaps, strengthening law enforcement, regulating online platforms, and promoting awareness and support, the NHRC aims to create a safer digital environment for children.
Conclusion
In a world where the internet plays an increasingly central role in our lives, these recommendations are not just proactive but imperative. They underscore the collective responsibility of governments, law enforcement agencies, intermediaries, and society as a whole in safeguarding the rights and well-being of children in the digital age.
NHRC's advisory is a pivotal guide to a more secure and child-friendly digital world. By addressing the rising tide of CSAM and emphasizing the need for constant vigilance, NHRC reaffirms the critical role of organizations, governments, and individuals in ensuring cyber peace and child protection in the digital age. Active contributions from premier cyber resilience organizations like the Cyber Peace Foundation amplify this collective effort to forge a secure digital space, highlighting the pivotal role think tanks play in ensuring cyber peace and resilience.
References:
- https://www.hindustantimes.com/india-news/nhrc-issues-advisory-regarding-child-sexual-abuse-material-on-internet-101698473197792.html
- https://ssrana.in/articles/nhrcs-advisory-proliferation-of-child-sexual-abuse-material-csam/
- https://theprint.in/india/specialised-central-police-unit-use-of-technology-to-proactively-detect-csam-nhrc-advisory/1822223/
Introduction
As our reliance on digital communication technologies increases, so do the associated risks, and the propagation of false information is a significant concern. According to the World Economic Forum's Global Risks Report 2024, India ranks highest for the risk of misinformation and disinformation. Addressing Indian Information Service officer trainees at the Vice President's Enclave on 18th June 2024, Vice President Shri Jagdeep Dhankhar emphasized the importance of transparency and accountability in the digital information age. He highlighted the issue of widespread misinformation and the need to regulate it, stating, "Information is power, information is too dangerous a power, information is that power which has to be regulated."
VP Calls for Regulation of the Information Landscape
The Vice President of India, Shri Dhankhar, has called on young Indian Information Service officers to act swiftly to neutralize misinformation on social media and emphasized the importance of protecting individuals and institutions from fake narratives set afloat online. He urged the officers to act as information warriors, safeguarding the privacy and reputation of affected individuals and institutions.
The VP also highlighted India's vibrant democracy and the need for trust in the government. He called for the neutralization of motivated narratives pushed by global media and stressed that India must not allow others to calibrate its narrative. He also emphasized the need to promote India's development story globally, highlighting its rich cultural heritage and diversity. Expressing the need to regulate information, he said, "Unregulated information & fake news can create a disaster of unimaginable proportion."
MeitY Advisory dated 1st March 2024
With regard to the issue of misinformation, the recently issued advisory by the Ministry of Electronics and Information Technology (MeitY) specifies that all users should be well informed about the consequences of dealing with unlawful information on online platforms, including disabling of access, removal of non-compliant information, suspension or termination of the user's access or usage rights to their account, and punishment under applicable law. The advisory requires that users be clearly informed, through terms of service and user agreements, about the consequences of engaging with unlawful information on the platform. Measures to combat deepfakes and misinformation are also discussed: the advisory requires platforms to identify synthetically created content across various formats and advises them to employ labels, unique identifiers, or metadata to ensure transparency. Furthermore, it mandates disclosure of software details and the tracing of the first originator of such synthetically created content.
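As a rough illustration of what metadata-based labelling could look like, the sketch below embeds a provenance tag into a PNG file's metadata using the Pillow library. The key names and values are hypothetical; the advisory does not prescribe a specific labelling scheme, and industry provenance standards go considerably further than this.

```python
# Conceptual sketch: tagging an AI-generated image with provenance metadata.
# Requires: pip install pillow
from PIL import Image, PngImagePlugin

def label_synthetic_png(src: str, dst: str, generator: str) -> None:
    """Copy the image and add hypothetical 'synthetic content' text chunks."""
    img = Image.open(src)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("SyntheticContent", "true")   # hypothetical key name
    meta.add_text("GeneratedBy", generator)     # e.g. the generating tool or model
    img.save(dst, pnginfo=meta)

# Placeholder file and model names, purely for illustration.
label_synthetic_png("generated.png", "generated_labelled.png", generator="example-model-v1")
```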
Conclusion
The battle against the growing incidence of misinformation and disinformation will not be easily won: developing a robust regulatory framework to counter online misinformation is essential. Alongside such a framework, the government should encourage digital literacy campaigns, promote prebunking and debunking strategies, and collaborate with relevant stakeholders such as cybersecurity experts, fact-checking entities, researchers, and policy analysts to combat misinformation on the Internet. Vice President Jagdeep Dhankhar's statement underscores the need to regulate information to prevent the spread of fake news and misinformation.
References:
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2026304
- https://regmedia.co.uk/2024/03/04/meity_ai_advisory_1_march.pdf
Introduction
Privacy has become a pressing concern for netizens, as social media companies have access to users' data and considerable latitude to use that data as they see fit. Meta's business model, which relies heavily on collecting and processing user data to deliver targeted advertising, has been under particular scrutiny. The conflict between Meta and the EU traces back to the GDPR coming into force in 2018: Meta has faced numerous fines for failing to comply with the regulation, chiefly for failing to obtain valid consent for data processing under Article 7 (Chapter 2) of the GDPR. The proposed ePrivacy Regulation, which focuses on digital communications and data privacy, is the next step in the EU's arsenal for protecting user privacy and would target the cookie policies and tracking technologies crucial to Meta's ad-targeting mechanism. Meta's core revenue stream comes from targeted advertising, which requires vast amounts of data to build a personalised experience, and it is precisely this model that the EU is scrutinising.
Pay for Privacy Model and its Implications with Critical Analysis
To address the privacy issue, Meta proposed a 'Pay or Consent' model that allows users to opt out of data-driven advertising by paying a subscription fee. The platform would offer users a choice between a free, ad-supported service and a paid, privacy-enhanced experience, an arrangement Meta argues aligns with the GDPR and could reduce the regulatory pressure on the company.
Meta now needs to assess the economic feasibility of this model: how much users would be willing to pay for the privacy on offer, and how far its monetisation can shift from ad-driven profits to subscription revenue. The change would directly affect advertisers who rely on Meta's detailed user data for targeted advertising, potentially decreasing ad revenue and pushing the company to develop other monetisation strategies.
For users, the potential outcome is increased privacy and greater control over their data, in line with global privacy concerns. While users will undoubtedly appreciate the option to avoid tracking, the proposal raises the question of whether the need to pay might become a barrier, possibly dividing users into cost-conscious and privacy-conscious segments. Setting a reasonable price point is therefore necessary for widespread adoption of the model.
For the regulators and the industry, a new precedent would be set in the tech industry and could influence other companies’ approaches to data privacy. Regulators might welcome this move and encourage further innovation in privacy-respecting business models.
The affordability and fairness of the 'pay or consent' model could create digital inequality if privacy comes at a cost, or worse, becomes a luxury. The subscription model would also need to clarify what data would still be collected and how it would be used for non-advertising purposes. In terms of market competition, rivals might capitalise on Meta's subscription model by offering free services with privacy guarantees, which could further pressure Meta to refine its offering to stay competitive. According to the EU, the model also needs to provide a third option for users: a service that still carries ads, but ones based on non-personalised advertising.
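To make the three options concrete, here is a minimal, hypothetical sketch of the gating logic a platform could apply when deciding which ad experience to serve; the type and function names are assumptions for illustration and do not describe Meta's actual systems.

```python
# Hypothetical sketch of 'pay or consent' gating, including the EU's third option.
from dataclasses import dataclass

@dataclass
class User:
    has_ad_free_subscription: bool
    consented_to_personalised_ads: bool

def ad_experience(user: User) -> str:
    """Decide which ad experience to serve under the three-way model."""
    if user.has_ad_free_subscription:
        return "no_ads"                 # paid, privacy-enhanced tier
    if user.consented_to_personalised_ads:
        return "personalised_ads"       # free tier funded by targeted ads
    return "non_personalised_ads"       # free tier with contextual ads only

print(ad_experience(User(has_ad_free_subscription=False,
                         consented_to_personalised_ads=False)))
# -> non_personalised_ads
```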
Meta has further expressed a willingness to explore various models to address regulatory concerns and enhance user privacy. Its recent pilot programs testing the pay-for-privacy model are one example. Meta is actively engaging with EU regulators to find mutually acceptable solutions, demonstrating its commitment to compliance while advocating for business models that sustain innovation. Meta executives have emphasised the importance of user choice and transparency in their future business strategies.
Future Impact Outlook
- The Meta-EU tussle over privacy is a manifestation of broader debates about data protection and business models in the digital age.
- The EU's stance on Meta’s ‘pay or consent’ model and any new regulatory measures will shape the future landscape of digital privacy, leading to other jurisdictions taking cues and potentially leading to global shifts in privacy regulations.
- Meta may need to iterate on its approach based on consumer preferences and concerns. Competitors and other tech giants will closely monitor Meta's strategies, possibly adopting similar models or innovating new solutions, and the industry's overall approach to privacy could evolve to prioritise user control and transparency.
Conclusion
Consent is the cornerstone of privacy, and sidestepping it violates users' rights. The manner in which tech companies foster a culture of consent is of paramount importance in today's digital landscape. As Meta explores the 'pay or consent' model, it faces both opportunities and challenges in balancing user privacy with business sustainability. The situation serves as a critical test case for the tech industry, highlighting the need for innovative solutions that respect privacy while fostering growth and that can cope with data protection laws worldwide, starting with India's Digital Personal Data Protection Act, 2023.
References:
- https://ciso.economictimes.indiatimes.com/news/grc/eu-tells-meta-to-address-consumer-fears-over-pay-for-privacy/111946106
- https://www.wired.com/story/metas-pay-for-privacy-model-is-illegal-says-eu/
- https://edri.org/our-work/privacy-is-not-for-sale-meta-must-stop-charging-for-peoples-right-to-privacy/
- https://fortune.com/2024/04/17/meta-pay-for-privacy-rejected-edpb-eu-gdpr-schrems/