#FactCheck - Digitally Altered Video of Olympic Medalist, Arshad Nadeem’s Independence Day Message
Executive Summary:
A video of Pakistani Olympic gold medalist and javelin thrower Arshad Nadeem wishing the people of Pakistan a happy Independence Day is going viral with the claim that snoring can be heard in the background. The CyberPeace Research Team found that the viral video was digitally edited to add the snoring sound. The original video, published on Arshad's Instagram account, contains no snoring sound, so we are certain that the viral claim is false and misleading.

Claims:
A video of Pakistani Olympic gold medalist Arshad Nadeem wishing the people of Pakistan a happy Independence Day has snoring audio in the background.

Fact Check:
Upon receiving the posts, we thoroughly examined the video and then analyzed it with TrueMedia, an AI video detection tool, which found little evidence of manipulation in either the voice or the face.


We then checked Arshad Nadeem's social media accounts and found the video uploaded to his Instagram account on 14th August 2024. In that video, no snoring sound can be heard.
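For readers who want to attempt a similar comparison themselves, below is a minimal sketch of one way to compare the audio tracks of two clips programmatically. This is not TrueMedia's method: the file names are hypothetical placeholders, it assumes ffmpeg is installed on the PATH, and the 50–300 Hz band is an illustrative assumption about where snoring-like sounds tend to concentrate.

```python
# Sketch: compare low-frequency audio energy of a viral clip vs. the original.
# Requires ffmpeg on PATH, plus numpy and scipy. File names are hypothetical.
import subprocess
import numpy as np
from scipy.io import wavfile

def extract_audio(video_path: str, wav_path: str, rate: int = 16000) -> None:
    """Extract a mono WAV track from a video file using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-ac", "1", "-ar", str(rate), wav_path],
        check=True,
    )

def band_energy(wav_path: str, low_hz: float = 50.0, high_hz: float = 300.0) -> float:
    """Mean spectral magnitude in a low-frequency band (illustrative choice
    for snoring-like sounds). Higher energy suggests added background audio."""
    rate, samples = wavfile.read(wav_path)
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[band].mean())

extract_audio("viral_clip.mp4", "viral.wav")        # hypothetical file names
extract_audio("original_clip.mp4", "original.wav")
print("viral low-band energy:   ", band_energy("viral.wav"))
print("original low-band energy:", band_energy("original.wav"))
```

A large gap in low-band energy between the two clips would be consistent with audio having been added, though listening to both clips side by side remains the decisive check.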

Hence, we are certain that the claims in the viral video are fake and misleading.
Conclusion:
The viral claim about Arshad Nadeem's video having a snoring sound in the background is false. The CyberPeace Research Team confirms the sound was digitally added, as the original video on his Instagram account has no snoring sound, making the viral claim misleading.
- Claim: A snoring sound can be heard in the background of Arshad Nadeem's video wishing Independence Day to the people of Pakistan.
- Claimed on: X
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Misinformation is a major issue in the AI age, exacerbated by the broad adoption of AI technologies. The misuse of deepfakes, bots, and content-generating algorithms has made it simpler for bad actors to propagate misinformation on a large scale. These technologies can be used to create manipulated audio and video content, spread political propaganda, defame individuals, or incite societal unrest. AI-powered bots may flood internet platforms with false information, swaying public opinion in subtle ways. The spread of misinformation endangers democracy, public health, and social order. It has the potential to affect voter sentiment, erode faith in the electoral process, and even spark violence. Addressing misinformation requires expanding digital literacy, strengthening platform detection capabilities, incorporating regulatory checks, and removing false information.
AI's Role in Misinformation Creation
AI's capability to generate content has grown exponentially in recent years. Legitimate uses of AI often take a backseat to the exploitation of content that already exists on the internet. A prime example is AI-powered bots flooding social media platforms with fake news at a scale and speed that makes it impossible for humans to track and verify what is true and what is false.
Netizens in India are greatly influenced by viral content on social media, so AI-generated misinformation can have particularly serious consequences. Being literate in the traditional sense of the word does not automatically guarantee the ability to parse the nuances of social media content, its authenticity, and its impact. Literacy, be it social media literacy or internet literacy, is under attack, and one of the main contributors is the rampant rise of AI-generated misinformation. Some of the most common examples of misinformation concern elections, public health, and communal issues. These issues share a common factor: they evoke strong emotions, and so can go viral very quickly and influence social behaviour, to the extent that they may lead to social unrest, political instability, and even violence. Such developments breed public mistrust in authorities and institutions, which is dangerous in any economy, but even more so in a country like India, home to a very large population comprising a diverse range of identity groups.
Misinformation and Gen AI
Generative AI (GAI) is a powerful tool that allows individuals to create massive amounts of realistic-seeming content, including imitating real people's voices and creating photos and videos that are indistinguishable from reality. Advanced deepfake technology blurs the line between authentic and fake. However, when used smartly, GAI is also capable of providing a greater number of content consumers with trustworthy information, counteracting misinformation.
Generative AI (GAI) is a technology that has entered the realm of autonomous content production and language creation, which is linked to the issue of misinformation. It is often difficult to determine if content originates from humans or machines and if we can trust what we read, see, or hear. This has led to media users becoming more confused about their relationship with media platforms and content and highlighted the need for a change in traditional journalistic principles.
We have seen a number of different examples of GAI in action in recent times, from fully AI-generated fake news websites to fake Joe Biden robocalls telling the Democrats in the U.S. not to vote. The consequences of such content and the impact it could have on life as we know it are almost too vast to even comprehend at present. If our ability to identify reality is quickly fading, how will we make critical decisions or navigate the digital landscape safely? As such, the safe and ethical use and applications of this technology needs to be a top global priority.
Challenges for Policymakers
AI's ability to generate anonymous content, combined with the massive volume of data generated, makes it difficult to hold perpetrators accountable. The decentralised nature of the internet further complicates regulation efforts, as misinformation can spread across multiple platforms and jurisdictions. Balancing the protection of freedom of speech and expression with the need to combat misinformation is a challenge: over-regulation could stifle legitimate discourse, while under-regulation could allow misinformation to propagate unchecked. India's multilingual population adds more layers to an already-complex issue, as AI-generated misinformation is tailored to different languages and cultural contexts, making it harder to detect and counter. Strategies catering to this multilingual population are therefore necessary.
Potential Solutions
To effectively combat AI-generated misinformation in India, an approach that is multi-faceted and multi-dimensional is essential. Some potential solutions are as follows:
- Developing a regulatory framework specific to AI-generated content. It should include stricter penalties for the creation and dissemination of fake content, proportional to the consequences, and establish clear and concise guidelines for social media platforms to ensure proactive measures are taken to detect and remove AI-generated misinformation.
- Investing in AI-driven tools for customised, real-time detection and flagging of misinformation; a minimal sketch of such triage follows this list. Such tools can help identify deepfakes, manipulated images, and other forms of AI-generated content.
- The primary aim should be to encourage collaboration between tech companies, cybersecurity organisations, academic institutions, and government agencies to develop solutions for combating misinformation.
- Digital literacy programs will empower individuals by training them to evaluate online content. Educational programs in schools and communities teach critical thinking and media literacy skills, enabling individuals to better discern between real and fake content.
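As a hedged illustration of the detection-tool recommendation above, here is a minimal sketch of how posts could be triaged for human review using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The candidate labels, threshold, and example posts are illustrative assumptions; a classifier like this is a noisy triage aid, not a fact-checking system.

```python
# Sketch: flag possibly false claims for human review with a zero-shot
# classifier. Labels and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "Drinking hot water cures viral infections, doctors confirm.",
    "The election commission has published the official results.",
]
labels = ["likely misinformation", "likely factual"]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Flag for human review rather than auto-removing: zero-shot scores
    # only indicate which label fits better, not ground truth.
    if top_label == "likely misinformation" and top_score > 0.7:
        print(f"FLAG for review: {post!r} (score={top_score:.2f})")
```

In practice such a flag would feed a human fact-checking queue; auto-removal based on model scores alone would risk the over-regulation concern raised above.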
Conclusion
AI-generated misinformation presents a significant threat to India, and the risks it poses are growing in step with the rapid pace of the nation's technological development. As the country moves towards greater digital literacy and unprecedented mobile technology adoption, one must be cognizant of the fact that even a single piece of misinformation can quickly and deeply reach and influence a large portion of the population. Bad actors misuse AI technologies to create hyper-realistic fake content, including deepfakes and fabricated news stories, which can be extremely hard to distinguish from the truth. Indian policymakers need to rise to this challenge by developing comprehensive strategies that focus not only on regulation and technological innovation but also on public education. The battle against misinformation is complex and ongoing, but by developing and deploying the right policies, tools, digital defence frameworks, and other mechanisms, we can navigate these challenges and safeguard the online information landscape.
References:
- https://economictimes.indiatimes.com/news/how-to/how-ai-powered-tools-deepfakes-pose-a-misinformation-challenge-for-internet-users/articleshow/98770592.cms?from=mdr
- https://www.dw.com/en/india-ai-driven-political-messaging-raises-ethical-dilemma/a-69172400
- https://pure.rug.nl/ws/portalfiles/portal/975865684/proceedings.pdf#page=62

Introduction
In the ever-evolving world of technological innovation, a new chapter is being inscribed by the bold visionaries at Figure AI, a startup that is not merely capitalising on the artificial intelligence craze but seeking to crest its very pinnacle. With the recent influx of a staggering $675 million in funding, this Sunnyvale, California-based enterprise has captured the imagination of industry giants and venture capitalists alike, all betting on a future where humanoid robots transcend the realm of science fiction to become an integral part of our daily lives.
The narrative of Figure AI's ascent is punctuated by the names of tech luminaries and corporate giants. Jeff Bezos, through his firm Explore Investments LLC, has infused a hefty $100 million into the venture. Microsoft, not to be outdone, has contributed a cool $95 million. Nvidia and an Amazon-affiliated fund have each bestowed $50 million upon Figure AI's ambitious endeavours. This surge of capital is a testament to the potential seen in the company's mission to develop general-purpose humanoid robots that promise to revolutionise industries and redefine human labour.
The Catalyst for Change
This investment craze can be traced back to the emergence of OpenAI's ChatGPT, a chatbot that caught the public eye in November 2022. Its success has not only ushered in a new era for AI but has also sparked a race among investors eager to stake their claim in startups determined to outshine their more established counterparts. OpenAI itself, once mulling over the acquisition of Figure AI, has now joined the ranks of its benefactors with a $5 million investment.
The roster of backers reads like a who's who of the tech and venture capital world. Intel's venture capital arm, LG Innotek, Samsung's investment group, Parkway Venture Capital, Align Ventures, ARK Venture Fund, Aliya Capital Partners, and Tamarack—all have cast their lot with Figure AI, signalling a broad consensus on the startup's potential to disrupt and innovate.
Yet, when probed for insights, these major players—Amazon, Nvidia, Microsoft, and Intel—have maintained a Sphinx-like silence, while Figure AI and other entities mentioned in the report have refrained from immediate responses to inquiries. This veil of secrecy only adds to the intrigue surrounding the company's prospects and the transformative impact its technology may have on society.
Need For AI Robots
Figure AI's robots are not mere assemblages of metal and circuitry; they are envisioned as versatile machines capable of navigating a multitude of environments and executing a diverse array of tasks. From the aisles of warehouses to the bustling corridors of retail spaces, these humanoid automatons are being designed to fill millions of jobs projected to remain vacant due to a shrinking human labour force.
The company's long-term mission statement is as audacious as it is altruistic: 'to develop general-purpose humanoids that make a positive impact on humanity and create a better life for future generations.' This noble pursuit is not just about engineering efficiency; it is about reshaping the very fabric of work, liberating humans from hazardous and menial tasks, and propelling us towards a future where our lives are enriched with purpose and fulfilment.
Conclusion
As we stand on the cusp of a new digital world, the strides of Figure AI serve as a beacon, illuminating the path towards machine and human symbiosis. The investment frenzy that has enveloped the company is a clarion call to all dreamers, pragmatists and innovators alike that the age of humanoid helpers is upon us, and the possibilities are as endless as our collective imagination.
Figure AI is forging a future where robots walk among us, not as novelties or overlords but as partners in forging a world where technology and humanity work together to unlock untold potential. The story of Figure AI is not just one of investment and innovation; it is a narrative of hope, a testament to the indomitable spirit of human ingenuity, and a preview of the wondrous epoch that lies just beyond the horizon.
References
- https://cybernews.com/tech/openai-bezos-nvidia-fund-robot-startup-figure-ai/
- https://www.thedailystar.net/business/news/bezos-nvidia-join-openai-funding-humanoid-robot-startup-3551476
- https://www.bloomberg.com/news/articles/2024-02-23/bezos-nvidia-join-openai-microsoft-in-funding-humanoid-robot-startup-figure-ai
- https://economictimes.indiatimes.com/tech/technology/bezos-nvidia-join-openai-in-funding-humanoid-robot-startup-report/articleshow/107967102.cms?from=mdr

Introduction
Misinformation spreads faster than a pimple before your best friend's wedding, and viral skincare hacks on social media can do more harm than good if smeared on without a second thought. Unverified skincare tips, exaggerated results, and product endorsements lacking proper dermatological backing can often lead to breakouts and serious damage.
The Allure and Risks of Online Skincare Trends
In the age of social media, beauty advice is easily accessible, but not all trending skincare hacks are beneficial. Influencers lacking professional dermatological knowledge often endorse "medical grade" skincare products, which may not be suitable for all skin types. Viral DIY skincare hacks, such as natural remedies like multani mitti (Fuller's earth), have found a new audience online. However, if such skincare tips are followed without due care regarding their suitability for different skin types or the proper formulation of ingredients, they can result in skin problems. It is crucial to approach online skincare advice with a critical eye, as not all trends are backed by scientific research.
CyberPeace Recommendations
- Influencer Responsibility and Ethical Endorsements in Skincare
Influencers play a crucial role in shaping public perception in the skincare and lifestyle industries. However, they must exercise due diligence before endorsing skincare products or practices, as misinformation can lead to financial loss and health consequences. Influencers should only promote products they have personally tested or vetted by dermatologists or skincare professionals. They should also research the brand's credibility, check ingredients for safety, and understand the product's target audience.
- Strengthening Digital Literacy in Skincare Spaces
CyberPeace highlights that improving digital literacy is one of the best strategies to stop the spread of false information about skincare. Users nowadays, particularly young people, are continuously exposed to a deluge of wellness and beauty-related content. Many people are duped by overstated claims, pseudoscientific cures, and influencer-driven marketing masquerading as sound advice if they lack the necessary digital literacy. We recommend supporting digital literacy initiatives that teach users how to evaluate sources, think critically, and comprehend how algorithms promote content. Long-term impact is thought to be achieved through influencer partnerships, gamified learning modules, and community workshops that promote media literacy.
- Recommendation for Users to Prioritise Research and Critical Thinking
Users should prioritise research and critical thinking when engaging with skincare content online. It's crucial to distinguish between valid advice and misinformation. Thorough research, including expert reviews, ingredient checks, and scientific sources, is essential. Questioning endorsements and relying on trusted platforms and dermatologists can help ensure a skincare routine based on sound practices.
- Mandating Transparency from Influencers and Brands
Enforcing stronger transparency laws for influencers and skincare companies is a key suggestion. Social media influencers frequently neglect to reveal sponsored collaborations or paid advertisements, giving followers the impression that the skincare advice is based on the creators' own experience and objective judgment. This dishonest practice frequently promotes goods with little to no scientific support and feeds false information. The social media companies need to be proactive in identifying and removing content that violates disclosure and advertising guidelines.
- Creating a Verified Registry for Skincare Professionals
Amplifying the voices of real experts is one of the most important strategies for building credibility and trust online. Cybersecurity experts and medical professionals suggest establishing a publicly available, validated registry of certified dermatologists, cosmetologists, and skincare scientists; a sketch of how such a badge lookup might work appears after this list. These experts could then receive a "verified expert" badge from social media companies, making it easier for users to distinguish genuine, evidence-based advice from content created by unqualified people. Algorithms that promote such verified content would naturally limit the dissemination of false information.
- Enforcing Platform Accountability and Reporting Systems
There need to be platform-level accountability and safeguard mechanisms against false skincare information. Platforms should monitor repeat offenders and implement a tiered penalty system that includes content removal and temporary or permanent bans for malicious user profiles.
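To make the registry recommendation concrete, here is a minimal sketch of how a platform-side badge lookup against a verified-expert registry could work. The registry contents, handles, and badge format are entirely hypothetical; a real system would be backed by credential verification with medical councils and kept in a maintained database rather than an in-memory dictionary.

```python
# Sketch: platform-side lookup of a verified-expert registry.
# All registry entries and handles below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Expert:
    handle: str      # social media handle
    name: str
    credential: str  # e.g. a registration number with a medical council
    field: str       # "dermatology", "cosmetology", ...

# In practice this would be a database maintained jointly with regulators.
REGISTRY = {
    "@dr_skin_example": Expert(
        "@dr_skin_example", "Dr. A. Example", "REG-000000", "dermatology"
    ),
}

def verified_badge(handle: str) -> str:
    """Return a badge string a platform could render next to posts,
    or an empty string for unverified creators."""
    expert = REGISTRY.get(handle)
    if expert is None:
        return ""  # no badge: creator is not in the verified registry
    return f"Verified {expert.field} expert"

print(verified_badge("@dr_skin_example"))              # "Verified dermatology expert"
print(verified_badge("@random_influencer") or "(no badge)")
```

Ranking algorithms could then boost posts whose authors return a non-empty badge, which is the mechanism the registry recommendation above relies on.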