#FactCheck - Viral image claiming to show injury marks on MP Kangana Ranaut after the alleged slapping incident is fake & misleading
Executive Summary:
A viral image on social media depicts alleged injuries on the face of MP (Member of Parliament, Lok Sabha) Kangana Ranaut, who was reportedly slapped by a CISF officer at Chandigarh airport. A reverse image search traced the photo back to 2006: it was part of an anti-mosquito spray commercial and does not feature Kangana Ranaut. These findings contradict the claim that the photo is evidence of injuries from the incident involving the MP. It is always important to verify the truthfulness of visual content before sharing it, to prevent misinformation.
Claims:
Images circulating on social media platforms claim that the injuries on MP Kangana Ranaut’s face resulted from an assault by a female CISF officer at Chandigarh airport. The claim implies that the photos are evidence of the physical altercation and the resulting injuries suffered by the MP.
Fact Check:
When we received the posts, we reverse-searched the image and found another photo closely resembling the viral one. The earring visible in both images allowed us to match them.
The reverse image search revealed that the photo was originally uploaded in 2006 and is unrelated to the MP, Kangana Ranaut. It depicts a model in an advertisement for an anti-mosquito spray campaign.
A side-by-side comparison of the earrings in the two photos confirms the match. Hence, we can confirm that the viral image of MP Kangana Ranaut’s supposed injury marks is fake and misleading: it was cropped from the original 2006 photo to misrepresent the context.
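Reverse image search engines typically match photos by comparing compact perceptual fingerprints rather than raw pixels, which is why cropped or re-encoded copies can still be traced to the original. Below is a minimal, self-contained sketch of one such fingerprint, the "average hash" (aHash). The pixel grids are synthetic illustrations, not the actual viral photo, and real systems operate on resized grayscale images rather than hand-built lists.

```python
# Minimal sketch of perceptual hashing, the idea behind reverse image search.
# Pixel grids here are synthetic illustrations, not any real photo.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale pixel grid.

    pixels: list of 64 grayscale values (0-255), row-major order.
    Each bit is 1 if the pixel is brighter than the mean, else 0.
    """
    assert len(pixels) == 64
    mean = sum(pixels) / 64
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(hash_a, hash_b):
    """Number of differing bits; a small distance indicates similar images."""
    return bin(hash_a ^ hash_b).count("1")

# A synthetic "original" image, a slightly re-encoded copy, and an
# unrelated image (here simulated by inverting the brightness).
original = [10 * (i % 16) for i in range(64)]
near_copy = [min(255, v + 3) for v in original]   # mild brightness shift
unrelated = [255 - v for v in original]           # inverted image

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(near_copy)))   # small distance
print(hamming_distance(h_orig, average_hash(unrelated)))   # large distance
```

Because the hash averages away fine detail, a crop or brightness tweak barely changes it, while a genuinely different image produces a distant hash; this is how the 2006 advertisement photo could be located from the edited viral copy.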
Conclusion:
Therefore, the viral photos on social media, claimed to show injuries on MP Kangana Ranaut’s face after she was allegedly assaulted by a CISF officer at Chandigarh airport, are fake. Detailed analysis established that the pictures have no connection with Ranaut; the original is a 2006 anti-mosquito spray advertisement. The allegation that these images show Ranaut’s injuries is therefore fake and misleading.
- Claim: Photos circulating on social media claim to show injuries on MP Kangana Ranaut's face following an assault by a female CISF officer at Chandigarh airport.
- Claimed on: X (formerly known as Twitter), Threads, Facebook
- Fact Check: Fake & Misleading
Introduction
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective strategies to counteract these challenges has grown exponentially in a very short period. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information and acts as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, or fact-checking reports by expert organisations or journalists. An integrated approach which involves both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in Inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against the same. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the MO is to help the mind in the present develop resistance to influence that it may encounter in the future. Just as medical vaccines or inoculations help the body build resistance to future infections by administering weakened doses of the harm agent, inoculation theory seeks to teach people fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide a broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking is tied to a wider range of specific interventions, some of which increase motivation to be vigilant against misinformation and others increase the ability to engage in vigilance with success.
There is much to be said in favour of the Prebunking approach because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It might be difficult to scale up Prebunking efforts and ensure their reach to a larger audience. Sustainability is critical in ensuring that Prebunking measures maintain their impact over time. Continuous reinforcement and reminders may be required to ensure that individuals retain the skills and information they gained from the Prebunking training activities. Misinformation tactics and strategies are always evolving, so it is critical that Prebunking interventions are also flexible and agile and respond promptly to developing challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that has to be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, acquire critical thinking abilities, and reject the effect of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and prejudices, and countering a natural reluctance to change is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning and may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying and informing people that certain news items or information are incorrect or misleading. It seeks to lessen the impact of misinformation that has already spread. The most popular kind of Debunking occurs through collaboration between fact-checking organisations and social media businesses. Journalists or other fact-checkers discover inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation. However, there are certain challenges associated with the same. Debunking misinformation entails critically verifying facts and promoting corrected information. However, this is difficult owing to the rising complexity of modern tools used to generate narratives that combine truth and untruth, views and facts. These advanced approaches, which include emotional spectrum elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated reaction at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread widely. From the perspective of total harm done, this reactionary method may be less successful than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep pace with the rapid dissemination of false or misleading information. A single Debunking may not be enough to rectify misinformation; continuous exposure to fact-checks may be needed to prevent erroneous beliefs from taking hold. Debunking also requires time and resources, and it is not possible to disprove every piece of misinformation in circulation at any particular moment. This constraint may leave some misinformation unchecked, with potentially serious consequences. Misinformation on social media can go viral faster than Debunking articles spread, leading to a situation in which misinformation travels like a virus while the antidote of debunked facts struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to educate people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening the individuals' susceptibility to misinformation in a variety of contexts. On the other hand, Debunking interventions involve correcting specific false claims after they have been circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms can adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across platforms, empowering users to recognise manipulative messaging through Prebunking and to verify the accuracy of information through Debunking interventions.
- Gamified Inoculation: Tech/social media companies can run gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns help immunise receivers against subsequent exposure and empower people to build the competencies needed to detect misinformation through gamified interventions.
- Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that algorithms prioritise the distribution of Prebunking materials, boosting educational content that strengthens resistance to misinformation. Platform operators should likewise prioritise the visibility of Debunking content in order to counter the spread of erroneous information and deliver proper corrections. Together, these mechanisms can help Prebunking and Debunking efforts reach a larger or more targeted audience.
- User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that allow people to access Prebunking materials, quizzes, and instructional information to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, as well as links to fact-checking resources and corrections.
- Partnership with Fact-Checking/Expert Organizations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives/campaigns by collaborating with fact-checking/expert organisations and promoting such initiatives at a larger scale and ultimately fighting misinformation with joint hands initiatives.
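The algorithmic-promotion recommendation above can be sketched as a simple re-ranking pass over a candidate feed, where posts labelled as Prebunking or Debunking material receive a score boost. This is a hypothetical illustration only: the field names, labels, and boost weights are assumptions for the sketch, not any platform's actual ranking system.

```python
# Hypothetical sketch of boosting prebunking/debunking content in a ranked
# feed. Post fields, labels, and boost weights are illustrative assumptions.

BOOSTS = {
    "prebunking": 1.5,   # educational inoculation material
    "debunking": 1.3,    # fact-check corrections
}

def rerank(posts):
    """Sort posts by score, boosted for counter-misinformation labels.

    posts: list of dicts with 'id', 'score', and an optional 'label'.
    Unlabelled posts keep their raw score (boost factor 1.0).
    """
    def boosted(post):
        return post["score"] * BOOSTS.get(post.get("label"), 1.0)
    return sorted(posts, key=boosted, reverse=True)

feed = [
    {"id": "viral-rumour", "score": 0.90},
    {"id": "fact-check", "score": 0.80, "label": "debunking"},
    {"id": "media-literacy-quiz", "score": 0.70, "label": "prebunking"},
]
print([p["id"] for p in rerank(feed)])
# Boosted scores: 0.90, 1.04, and 1.05, so the labelled posts rise above the rumour.
```

The design point is that the boost is multiplicative, so corrective content is surfaced above comparable unlabelled posts without entirely overriding the platform's underlying relevance score.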
Conclusion
The threat of online misinformation is only growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to increase resilience to misinformation, proactively lowering susceptibility to erroneous or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and having a targeted impact on belief in individual false claims. An integrated approach involving both methods, together with joint initiatives by tech/social media platforms and expert organisations, can ultimately help in fighting the rising tide of online misinformation and establishing a resilient online information landscape.
References
- https://mark-hurlstone.github.io/THKE.22.BJP.pdf
- https://futurefreespeech.org/wp-content/uploads/2024/01/Empowering-Audiences-Through-%E2%80%98Prebunking-Michael-Bang-Petersen-Background-Report_formatted.pdf
- https://newsreel.pte.hu/news/unprecedented_challenges_Debunking_disinformation
- https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/
Executive Summary:
The picture of a boy making sand art of Indian cricketer Virat Kohli circulating on social media is found to be false. The image portrayed is not real sand art: analysis using AI-detection tools such as 'Hive' and 'Content at Scale AI detection' confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures without knowing they are computer-generated deepfakes.
Claims:
The collage of beautiful pictures displays a young boy creating sand art of Indian Cricketer Virat Kohli.
Fact Check:
When we examined the posts, we found anomalies in each photo that are common in AI-generated images.
The anomalies include the abnormal shape of the child’s feet, a logo blended into the sand colour in the second image, and the misspelling ‘spoot’ instead of ‘sport’. The cricket bat is perfectly straight, which is odd for a sand sculpture. In one photo, a tattoo appears on the child’s left hand, while in the other photos the left hand has no tattoo. Additionally, the boy’s face in the second image does not match his face in the other images. These observations made us more suspicious that the images are synthetic media.
We then ran the images through an AI-generated-image detection tool named ‘Hive’, which rated them 99.99% likely to be AI-generated. We cross-checked the result with another detection tool, ‘Content at Scale’.
Hence, we conclude that the viral collage of images is AI-generated and not real sand art by a child. The claim made is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show sand art of Indian cricket star Virat Kohli made by a child is false. AI-detection tools and close analysis of the photos indicate that they were most likely created by an AI image-generation tool rather than a real sand artist. Therefore, the images do not accurately represent the alleged claim or creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading
Introduction
The land of the dragon is significantly advanced in terms of innovation and in creating self-sustaining technologies of civic and military importance. Leading nations of the West are yet to fully grasp the advancements China has made in these technologies and the potential threats they pose at an international level.
Intel on Dragon Land
According to a leaked US intelligence study, China is developing powerful cyber weapons to “seize control” of adversary satellites and render them worthless for data communications or surveillance during combat.
According to the US, China’s effort to build up the capacity to “deny, exploit, or hijack” hostile satellites is critical to controlling information, which Beijing views as a crucial “war-fighting domain.”[1]
The CIA-marked document, one of hundreds purportedly leaked by a 21-year-old US Air National Guardsman in the most significant American intelligence leak in over a decade, was produced this year and had not previously been disclosed.
Such cyber capabilities would be significantly superior to anything Russia has used in Ukraine, where electronic warfare troops have employed a brute-force approach to little avail.
How were the capabilities discovered?
According to a top-secret US dossier, China could use its cyber capabilities to “take control of a satellite, making it inoperable for support of communications, weapons, or intelligence, surveillance, and reconnaissance systems.” The US has never acknowledged having a comparable or superior capability.
Attacks of the older kind, first developed in the 1980s, broadcast rival frequencies from truck-mounted jamming systems such as the Tirada-2 to block communications between low-orbit SpaceX satellites and their on-ground terminals. China’s more ambitious cyberattacks are designed to mimic the signals that operators of adversary satellites send out, tricking the satellites into malfunctioning or being taken over entirely at critical points in a battle.
Implications of such military capabilities
Having observed how crucial satellite communications have been to the Ukrainian military, Taiwan is attempting to develop a communications infrastructure that can withstand an attack from China.
According to a January 2023 article in the Financial Times, Taiwan is seeking investors to launch its own satellite provider while testing 700 non-geostationary satellite receivers across the island to ensure bandwidth in the event of conflict or natural calamities. The importance of satellite communications in contemporary warfare was demonstrated when a Russian cyber strike rendered thousands of Ukrainian military routers from US-based Viasat inoperable in the hours before the invasion last year. Ukrainian officials deemed the attack catastrophic, as it broke down communication between the Ukrainian army and the government.
Additionally, several hundred wind turbines in Germany, Poland, and Italy were impacted, and thousands of Viasat users in those countries lost service. Though complex, the Viasat hack required accessing the company’s computer systems and then sending commands to the modems that rendered them inoperable.
How significant is the threat?
According to the leaked assessment, China’s objectives are far more sophisticated and future-oriented. Analysts say China would aim to disable satellites’ ability to communicate with one another, relay signals and orders to weapons systems, or send back visual and intercepted electronic data. Satellites often operate unmanned in interconnected clusters, which limits the scope for proper surveillance. US military officials have warned that China has made substantial advances in military space technologies, particularly satellite communications. Beijing is vigorously pursuing counter-space capabilities in an effort to realise its “space dream” of being the dominant power beyond the Earth’s atmosphere by 2045.
Threat to India?
As China aggressively invests in technology meant to disrupt, degrade, and destroy space capabilities, a potential threat looms over Indian satellites and spacecraft. The complexity of the communication network and the extended distance from Earth suggest a high number of vulnerabilities for the Indian space programme. Still, the Indian Space Research Organisation (ISRO) has been working tirelessly: as of 1 January 2022, India had 21 operational satellites in Low Earth Orbit (LEO) and 28 operational satellites in Geostationary Orbit. In 2021, ISRO launched one PSLV-DL variant (PSLV-C51) mission and one GSLV-MkII variant (GSLV-F10) mission; GSLV-F10 could not accomplish its mission successfully. In 2021, India placed five satellites and one PSLV rocket body (PS4 stage) in Low Earth Orbit. Since its first launch, India has placed 65 rocket bodies in orbit, of which 42 are still orbiting the Earth and 23 have re-entered and burnt up in the atmosphere. The break-up of the fourth stage of PSLV-C3 in 2001 generated 386 pieces of debris, of which 76 are still in orbit.
Conclusion
The space race is the new cold war: all nations are working to secure their space assets while exploring new frontiers in outer space. It is pertinent that national interests in space be protected, and a long-awaited space treaty for the modern age needs to be ratified by all nations with a presence in space. The future of space exploration is bright for most nations, but these threats must be addressed, and an all-inclusive approach to space promoted, to maintain harmony beyond Earth.
[1] https://www.ft.com/content/fc72d277-7fa8-4b29-9231-4feb34f43b0c