#FactCheck - Digitally Altered Video of Olympic Medalist Arshad Nadeem’s Independence Day Message
Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day, with snoring audio allegedly audible in the background, is going viral. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound in the background. The original video published on Arshad Nadeem's Instagram account contains no snoring sound, so we are certain that the viral claim is false and misleading.
Claims:
A video shows Pakistani Olympic gold medalist Arshad Nadeem wishing the people of Pakistan a happy Independence Day with snoring audio in the background.
Fact Check:
Upon receiving the posts, we thoroughly checked the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in either the voice or the face.
We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14th August 2024. In that video, no snoring sound can be heard.
Hence, we are certain that the claims in the viral video are fake and misleading.
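For readers who want to check this kind of audio edit themselves, the sketch below shows one rough approach: extract the audio tracks of the original upload and the viral copy, then measure how far the two waveforms diverge. This is a minimal sketch, assuming both clips cover the same footage and are roughly time-aligned; the filenames are placeholders, and it requires ffmpeg on the PATH plus the numpy and librosa libraries.

```python
# A rough check for added background audio: extract the audio track from the
# original upload and from the viral copy, then measure how far the two
# waveforms diverge. A large residual suggests audio was added or replaced.
# Assumes both clips are roughly time-aligned. Filenames are placeholders.
import subprocess

import librosa
import numpy as np


def extract_audio(video_path: str, wav_path: str, sr: int = 16000) -> None:
    """Extract a mono 16 kHz WAV track from a video file using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ac", "1", "-ar", str(sr), wav_path],
        check=True,
    )


def audio_residual(original_video: str, suspect_video: str, sr: int = 16000) -> float:
    """Mean absolute difference between the two audio tracks (0 = identical)."""
    extract_audio(original_video, "original.wav", sr)
    extract_audio(suspect_video, "suspect.wav", sr)
    a, _ = librosa.load("original.wav", sr=sr)
    b, _ = librosa.load("suspect.wav", sr=sr)
    n = min(len(a), len(b))  # compare only the overlapping duration
    return float(np.mean(np.abs(a[:n] - b[:n])))


if __name__ == "__main__":
    # Placeholder filenames for the original Instagram post and the viral copy.
    print(audio_residual("original_post.mp4", "viral_copy.mp4"))
```

A near-zero residual indicates identical audio, while a clearly non-zero one points to an added or altered track; in practice, such a signal-level check complements, rather than replaces, tracing the original upload.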
Conclusion:
The viral video of Arshad Nadeem with a snoring sound in the background is digitally altered. The CyberPeace Research Team confirms the sound was digitally added: the original video on his Instagram account has no snoring sound, making the viral claim false and misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading
Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for this form of extortion.
What are AI Sextortion and Deepfake Technology?
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images: the algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. Moreover, the accessibility of AI tools and resources has increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.
The proliferation of sharing content on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time to make strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions: developing and deploying advanced AI-based detection tools that identify and flag AI-generated deepfake content on platforms and services, and collaborating with technology companies to integrate such solutions.
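To make the idea of such a detection tool concrete, here is a minimal sketch in Python using the Hugging Face transformers image-classification pipeline. The model checkpoint name and the flagging threshold are placeholders, not references to a real deployed system; a platform would substitute its own vetted deepfake-detection model.

```python
# A minimal sketch of an automated deepfake-flagging step. The checkpoint
# "example-org/deepfake-detector" and the 0.8 threshold are hypothetical
# placeholders for whatever model and policy a platform actually uses.
# Requires: pip install transformers torch pillow
from transformers import pipeline

# Load a (hypothetical) pretrained image-classification model once at startup.
detector = pipeline("image-classification", model="example-org/deepfake-detector")


def flag_if_synthetic(image_path: str, threshold: float = 0.8) -> bool:
    """Return True when the top prediction labels the image as fake/synthetic."""
    predictions = detector(image_path)  # list of {"label": ..., "score": ...}
    top = predictions[0]
    return "fake" in top["label"].lower() and top["score"] >= threshold


if __name__ == "__main__":
    if flag_if_synthetic("uploaded_frame.jpg"):
        print("Flagged for human review.")
```

In practice, a score like this would queue content for human review rather than trigger removal on its own, since detection models produce both false positives and false negatives.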
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content moderation systems and reporting mechanisms.
There is also a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.
Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to victims’ reputations and well-being.
Targeting Teenagers: The targeting of teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, and perpetrators exploit this vulnerability to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.
Introduction
In today's era of digital communities and connections, social media has become an integral part of our lives. We use social media to connect with friends and family, and it is also used for business purposes. Social media offers us numerous opportunities and the ease to connect and communicate with larger communities. However, it also poses challenges: while using social media, we come across issues such as inappropriate content, online harassment, online stalking, account hacking, misuse of personal information or data, privacy violations, fake accounts, intellectual property infringement, abusive and distressing content, and content that violates the platform's terms and conditions. To deal with such issues, social media entities have established proper reporting mechanisms and terms-and-conditions guidelines to prevent and address them in the best possible way through the platform's help centre or reporting mechanism.
The Role of Help Centers in Resolving User Complaints:
Help centres are established on platforms to address user complaints and provide satisfactory assistance or resolution. Addressing user complaints is a key component of maintaining a safe and secure digital environment, and platform help centres play a vital role in giving users a resource to seek assistance and report their issues.
Some common issues reported on social media:
- Reporting abusive content: Users can report content that they find abusive, offensive, or in violation of platform policies. These reports are reviewed by the help centre.
- Reporting CSAM (Child Sexual Abuse Material): CSAM content can be reported to the platform's help centre. Social media platforms have stringent policies in place to address such concerns and ensure a safe digital environment for everyone, including children.
- Reporting Misinformation or Fake News: With the proliferation of misinformation online, users can report content that they find or suspect to be misleading or false, and fact-checking bodies are employed to assess the accuracy of reported content.
- Content violating intellectual property rights: If there is a violation or infringement of any intellectual property work, it can be reported on the platform.
- Violation of commercial policies: Products listed on social media platforms also need to comply with the platform’s commercial policies.
Submitting a Complaint to the Indian Grievance Officer for Facebook:
A user can report their issue through the Facebook Help Center, as described below:
The user can go to the Facebook Help Center and open the "Reporting a Problem" section. There, choose the issue that best describes the complaint; for example, if you have encountered inappropriate or abusive content, select the ‘I found inappropriate or abusive content’ option.
Here is a list of issues which you can report on Facebook:
- My account has been hacked.
- I've lost access to a page or a group I used to manage.
- I've found a fake profile or a profile that's pretending to be me.
- I am being bullied or harassed.
- I found inappropriate or abusive content.
- I want to report content showing me in nudity/partial nudity or in a sexual act.
- I (or someone I am legally responsible for) appear in content that I do not want to be displayed.
- I am a law enforcement official seeking to access user data.
- I am a government official or a court officer seeking to submit an order, notice or direction.
- I want to download my personal data or report an issue with how Facebook is processing my data.
- I want to report an Intellectual Property infringement.
- I want to report another issue.
Then, describe your issue, attach supporting evidence such as screenshots, and submit your report. After submitting a report, you will receive a confirmation that it has been submitted to the platform. The platform will review the complaint within the stipulated time period, and users can also check the status of their filed complaint. After reviewing the complaint, the platform will take appropriate action: if the content violates any of the platform's standard policies, terms and conditions, or privacy policies, the platform will take down that content or take other appropriate action. The sketch after this paragraph illustrates what such a report's lifecycle might look like as a data model.
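To make that lifecycle concrete, here is a minimal illustrative sketch in Python of how a platform might model a complaint internally. All field names, categories, and statuses are hypothetical assumptions for illustration; Facebook's actual internal data model is not public.

```python
# An illustrative model of a complaint's lifecycle on a platform help centre.
# Every field name, category string, and status here is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Status(Enum):
    SUBMITTED = "submitted"        # user has filed the report
    UNDER_REVIEW = "under_review"  # moderators are assessing it
    ACTION_TAKEN = "action_taken"  # content removed or account sanctioned
    DISMISSED = "dismissed"        # no policy violation found


@dataclass
class Complaint:
    reporter_id: str
    category: str                  # e.g. "inappropriate or abusive content"
    description: str
    evidence: list[str] = field(default_factory=list)  # e.g. screenshot paths
    status: Status = Status.SUBMITTED
    filed_at: datetime = field(default_factory=datetime.utcnow)

    def review(self, violates_policy: bool) -> None:
        """Move the complaint to its final state after moderator review."""
        self.status = Status.ACTION_TAKEN if violates_policy else Status.DISMISSED


# Example: a user reports abusive content with one screenshot attached.
report = Complaint("user123", "inappropriate or abusive content",
                   "Abusive post on my timeline", ["screenshot1.png"])
report.review(violates_policy=True)
print(report.status)  # Status.ACTION_TAKEN
```

The point of the sketch is the flow users experience from the outside: a report is filed with evidence, reviewed against policy, and closed with either enforcement or dismissal, with the status visible to the reporter throughout.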
Conclusion:
It is important to be aware of your rights in the digital landscape and to understand how to report issues or grievances on social media platforms effectively. By using a platform's help centre or reporting mechanism, users can file their complaints effectively and contribute to a safer, more responsible online environment. Social media platforms have compliance frameworks and privacy policy guidelines in place to meet community standards and legal requirements. So, whenever you encounter an issue on social media, report it to the platform and help make social media a safer digital environment.
References:
- https://www.cyberyodha.org/2023/09/how-to-submit-complaint-to-indian.html
- https://transparency.fb.com/en-gb/enforcement/taking-action/complaints-handling-process/
- https://www.facebook.com/help/contact/278770247037228
- https://www.facebook.com/help/263149623790594
Introduction
Misinformation poses a significant challenge to public health policymaking since it undermines efforts to promote effective health interventions and protect public well-being. The spread of inaccurate information, particularly through online channels such as social media and internet platforms, further complicates the decision-making process for policymakers by perpetuating public confusion and distrust. This misinformation can lead to resistance against health initiatives, such as vaccination programs, and fuels scepticism towards scientifically backed health guidelines.
Before the COVID-19 pandemic, misinformation surrounding healthcare largely concerned the effects of alcohol and tobacco consumption, marijuana use, eating habits, physical exercise, etc. However, there has been a marked shift in the years since. One such example is the outcry against palm oil in 2024: an ingredient prevalent in numerous food and cosmetic products, it came under the scanner after a number of claims that palmitic acid, which is present in palm oil, is detrimental to our health. However, scientific research by reputable institutions globally established that there is no cause for concern regarding the health risks posed by palmitic acid. Such trends and commentaries tend to create a parallel, unscientific discourse that has the potential to impact not only individual choices but also public opinion and, as a result, market developments and policy conversations.
A prevailing narrative during the worst of the COVID-19 pandemic was that the virus had been engineered to control society and boost hospital profits. The extensive misinformation surrounding COVID-19 and its management and care increased vaccine hesitancy among people worldwide. It is worth noting that vaccine hesitancy has been a consistent historical trend: the World Health Organisation flagged vaccine hesitancy as one of the main threats to global health, and there have been other instances where large sections of the population refused vaccination in anticipation of unverified, long-lasting side effects. For example, research from 2016 observed a significant level of public skepticism regarding the development and approval process of the Zika vaccine in Africa. Further studies emphasised the urgent need to disseminate accurate information about the Zika virus on online platforms to help curb the spread of the outbreak.
In India, during the COVID-19 pandemic, despite multiple official advisories, notifications, and guidelines issued by the government and the ICMR, many people remained opposed to vaccination, which contributed to inflated mortality rates within the country. Vaccine hesitancy was also compounded by anti-vaccination celebrities who claimed that vaccines were dangerous, contributing in large part to the conspiracy theories doing the rounds. Similar hesitancy was noted when misinformation surrounding the MMR vaccine and its purported role in causing autism was examined. At the height of the crisis, the Indian government also had to tackle disinformation-induced fraud surrounding the supply of oxygen in hospitals: many critically ill patients relied on fake news and unverified sources that falsely portrayed the availability of beds, oxygen cylinders, and even home set-ups, only to be cheated out of money.
The above examples highlight the difficulty health officials face in administering adequate healthcare. The special case of the COVID-19 pandemic also showed how current legal frameworks fail to address misinformation and disinformation, which impedes effective policymaking. It further showed why taking corrective measures against health-related misinformation is difficult: such corrective action creates an uncomfortable gap in an individual's mind, and people tend to ignore accurate information that might help bridge that gap. Misinformation, coupled with the infodemic trend, also leads to false memory syndrome, whereby people fail to differentiate between authentic information and fake narratives. Simple efforts to correct misperceptions often backfire and even strengthen initial beliefs, especially in the context of complex issues like healthcare. Policymakers thus struggle to balance making policy with making people receptive to those policies, against the backdrop of public tendencies to reject or suspect authoritative action. Examples of this can be observed both domestically and internationally. In the US, for example, the traditional healthcare system rations access to healthcare through a combination of insurance costs and options versus out-of-pocket essential expenses. While this has long been a subject of debate, it had not created a large-scale public healthcare crisis because the incentives offered to medical professionals and public trust in the delivery of essential services helped balance the conversation. In recent times, however, a narrative shift has sensationalised the system as an issue of deliberate “denial of care,” which has led to concerns about harms to patients.
Policy Recommendations
The hindrances posed by misinformation in policymaking are further exacerbated by policymakers’ reliance on social media as a way to gauge public sentiment, consensus, and opinion. If misinformation about an outbreak is not effectively addressed, it can prevent individuals from adopting necessary protective measures and worsen the spread of the epidemic. To improve healthcare policymaking amidst the challenges posed by health misinformation, policymakers must take a multifaceted approach. This includes convening a broad coalition of central, state, local, territorial, tribal, private, nonprofit, and research partners to assess the impact of misinformation and develop effective preventive measures. Intergovernmental collaboration, such as between the Ministry of Health and the Ministry of Electronics and Information Technology, should be encouraged, whereby doctors debunk online medical misinformation against the backdrop of increased reliance on online forums for medical advice. Furthermore, increased investment in research dedicated to understanding misinformation, along with the ongoing modernization of public health communications, is essential. Enhancing the resources and technical support available to state and local public health agencies will also enable them to better address public queries and concerns and counteract misinformation. Additionally, expanding efforts to build long-term resilience against misinformation through comprehensive educational programs is crucial for fostering a well-informed public capable of critically evaluating health information.
From an individual perspective, since almost half a billion people use WhatsApp, the platform allows false health claims to spread rapidly, and this has contributed to a rise in fake health news. Viral WhatsApp messages containing fake health warnings can be dangerous, so it is always recommended to treat such messages with vigilance and verify them before acting on or forwarding them. This highlights the growing concern about the potential dangers of misinformation and the need for more accurate information on medical matters.
Conclusion
The proliferation of misinformation in healthcare poses significant challenges to effective policymaking and public health management. The COVID-19 pandemic has underscored the role of misinformation in vaccine hesitancy, fraud, and increased mortality rates. There is an urgent need for robust strategies to counteract false information and build public trust in health interventions; this includes policymakers engaging in comprehensive efforts, including intergovernmental collaboration, enhanced research, and public health communication modernization, to combat misinformation. By fostering a well-informed public through education and vigilance, we can mitigate the impact of misinformation and promote healthier communities.
References
- van der Meer, T. G. L. A., & Jin, Y. (2019), “Seeking Formula for Misinformation Treatment in Public Health Crises: The Effects of Corrective Information Type and Source” Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
- “Health Misinformation”, U.S. Department of Health and Human Services. https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html
- Mechanic, David, “The Managed Care Backlash: Perceptions and Rhetoric in Health Care Policy and the Potential for Health Care Reform”, Rutgers University. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2751184/pdf/milq_195.pdf
- “Bad actors are weaponising health misinformation in India”, Financial Express, April 2024.
- “Role of doctors in eradicating misinformation in the medical sector.”, Times of India, 1 July 2024. https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/national-doctors-day-role-of-doctors-in-eradicating-misinformation-in-the-healthcare-sector/articleshow/111399098.cms