#FactCheck - AI-Generated Clip of Lion Carrying Woman Shared as Real Incident
Executive Summary
A video circulating on social media shows a lion carrying away a woman who was washing clothes near a pond. Users are sharing the clip claiming it depicts a real incident. However, research by CyberPeace found the claim to be false: the video is not real but AI-generated.
Claim
A user on Facebook shared the viral video, claiming that a lion attacked and carried away a woman while she was washing clothes at a pond. The link to the post and its archived version are provided below.

Fact Check:
Upon closely examining the viral clip, we noticed several visual inconsistencies that raised suspicion about its authenticity. The video was then analyzed using the AI-detection tool Sightengine. According to the analysis results, the viral video was identified as AI-generated.
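As a sketch of how such a check can be automated: Sightengine exposes a REST endpoint (`check.json`) whose `genai` model scores media for signs of AI generation. The snippet below is a minimal, hypothetical sketch assuming that endpoint and response shape; the credentials and image URL are placeholders, and the score-parsing helper `is_ai_generated` is our own illustrative addition, not part of the API.

```python
import json
import urllib.parse
import urllib.request

SIGHTENGINE_ENDPOINT = "https://api.sightengine.com/1.0/check.json"

def check_url(image_url, api_user, api_secret):
    # Submit a frame by URL; "genai" is Sightengine's AI-generated-media model.
    # api_user/api_secret are placeholder credentials you obtain from Sightengine.
    params = urllib.parse.urlencode({
        "url": image_url,
        "models": "genai",
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{SIGHTENGINE_ENDPOINT}?{params}") as resp:
        return json.load(resp)

def is_ai_generated(result, threshold=0.5):
    # Assumed response shape: the score is nested under type.ai_generated (0.0-1.0).
    return result.get("type", {}).get("ai_generated", 0.0) >= threshold

# Abridged example of the assumed response shape for a flagged frame:
sample = {"status": "success", "type": {"ai_generated": 0.99}}
print(is_ai_generated(sample))  # prints True
```

In practice one would extract a few frames from the viral clip, submit each, and treat consistently high scores as corroborating (not conclusive) evidence, alongside the visual inconsistencies noted above.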

Conclusion
The research confirms that the viral video does not depict a real incident. The clip was digitally created using artificial intelligence and is being falsely shared as footage of a genuine event.

Introduction
AI has transformed the way we look at advanced technologies. As the use of AI evolves, it also raises concerns about AI-based deepfake scams, in which scammers use AI to create deepfake videos, images, and audio to deceive people and commit fraud. Recently, a man in Kerala fell victim to such a scam: he received a WhatsApp video call in which the scammer impersonated the face of one of the victim’s acquaintances using AI-based deepfake technology. Awareness and vigilance are needed to safeguard ourselves from such incidents.
Unveiling the Kerala Deepfake Video Call Scam
The man in Kerala received a WhatsApp video call from a person claiming to be a former colleague of his in Andhra Pradesh. In reality, the caller was a scammer. He asked the Kerala man for 40,000 rupees via Google Pay, and to gain his trust he even mentioned friends the two had in common. The scammer said that he was at the Dubai airport and urgently needed the money for his sister’s medical emergency.
AI can analyse and process data such as facial images, videos, and audio to create a realistic deepfake that closely resembles the real thing. In the Kerala scam, the scammer made a video call featuring a face and voice convincingly similar to those of the colleague he was impersonating. Believing he was genuinely speaking with his colleague, the victim transferred the money without hesitation. He later called his former colleague on the number saved in his contact list, and the colleague said he had made no such call. The Kerala man then realised he had been cheated by a scammer who had used AI-based deepfake technology to impersonate his former colleague.
Recognising Deepfake Red Flags
Deepfake-based scams are on the rise, and they make it genuinely difficult to distinguish between authentic and fabricated audio, video, and images. Deepfake technology can create entirely fictional photos and videos from scratch; audio can be deepfaked too, to create “voice clones” of anyone.
However, some red flags can help you judge the authenticity of such content:
- Video quality: Deepfake videos often have poor or compromised video quality, with unusual blur or resolution artifacts that call their genuineness into question.
- Looping videos: Deepfake videos often loop, freeze unusually, or repeat footage, indicating that the content might be fabricated.
- Verify separately: Whenever you receive a request such as a plea for financial help, verify the situation by contacting the person directly through a separate channel, such as a phone call to their primary contact number.
- Be vigilant: Scammers often create a sense of urgency, leaving the victim no time to think and pressuring them into a quick decision. Be cautious when a sudden “emergency” demands financial support from you on an urgent basis.
- Report suspicious activity: If you encounter such activity on your social media accounts or through such calls, report it to the platform or to the relevant authority.
Conclusion
The advanced nature of AI deepfake technology has introduced new challenges in combating AI-based cybercrime. The case of the Kerala man who fell victim to an AI-based deepfake video call and lost Rs 40,000 is an alarming reminder of the need to remain extra vigilant in the digital age: the caller appeared to be his former colleague, but was in fact a scammer exploiting AI-based deepfake technology. By staying aware of such rising scams and following precautionary measures, we can protect ourselves from these AI-based cybercrimes and from the malicious scammers who exploit such technologies for financial gain. Stay cautious and safe in the ever-evolving digital landscape.

Introduction
Advanced deepfake technology blurs the line between authentic and fake content. To ascertain credibility, it has become important to differentiate between genuine and manipulated or fabricated content widely shared on social media platforms. AI-generated fake voice clones and videos are proliferating on the Internet, created with sophisticated AI algorithms that can manipulate or generate synthetic multimedia such as audio, video, and images. As a result, it has become increasingly difficult to tell genuine, altered, and fake multimedia content apart. McAfee Corp., a well-known global leader in online protection, recently launched an AI-powered deepfake audio detection technology under Project “Mockingbird”, intended to safeguard consumers against the surging threat of fabricated or AI-generated audio used to dupe people out of money or to obtain their personal information without authorisation. McAfee Corp. announced the technology at the Consumer Electronics Show 2024.
What is voice cloning?
Audio can be deepfaked to create a voice clone of anyone: a synthetic voice that closely resembles the real one but is, in actuality, generated through deepfake technology.
Emerging Threats: Cybercriminal Exploitation of Artificial Intelligence in Identity Fraud, Voice Cloning, and Hacking Acceleration
AI is used for all kinds of things, from smart devices to robotics and gaming. Cybercriminals, however, are misusing it for nefarious purposes, including voice cloning for cyber fraud. AI can be used to manipulate an individual's lips so they appear to say something different, to commit identity fraud by impersonating someone during remote verification with a bank, and to make traditional hacking more convenient. This misuse of advanced technologies such as AI has increased the speed and volume of cyber attacks in recent times.
Technical Analysis
To combat audio-cloning fraud, McAfee Labs has developed a robust AI model that detects artificially generated audio used in videos or elsewhere.
- Contextual Detection: The model assesses audio components within the overall setting of a clip. Evaluating the surrounding context improves its capacity to recognise discrepancies suggestive of AI-generated audio.
- Behavioural Analysis: Behavioural detection techniques examine linguistic habits and subtleties, concentrating on departures from typical human behaviour. Examining speech patterns, tempo, and pronunciation enables the model to identify synthetically produced material.
- Categorical Detection: Classification algorithms categorise auditory components according to established traits of human speech. The technology differentiates between real and AI-synthesised voices by comparing them against an extensive library of legitimate human speech features.
- Accuracy Outcomes: McAfee Labs' deepfake voice detection, which boasts an impressive 90 per cent accuracy rate, combines the contextual, behavioural, and categorical detection models above. By examining audio components in the larger video context and analysing speech characteristics such as intonation, rhythm, and pronunciation, the system can identify discrepancies that may signal AI-produced audio, while the categorical models classify audio according to known human speech characteristics. This layered strategy is essential for precisely recognising and mitigating the risks of AI-generated audio, offering a strong barrier against the growing danger of deepfakes.
- Application Instances: The technique protects against various harmful schemes, such as celebrity voice-cloning fraud and misleading content about important subjects.
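The layered approach described above, in which contextual, behavioural, and categorical signals are fused into a single verdict, can be illustrated with a toy score-fusion sketch. Everything here (the per-detector scores, the weights, and the threshold) is invented for illustration and is not McAfee's actual model.

```python
# Illustrative-only sketch: three independent detectors each emit a suspicion
# score in [0, 1], and a weighted combination yields the final verdict.
# Weights and threshold are hypothetical, chosen purely for demonstration.

def combined_verdict(contextual, behavioural, categorical,
                     weights=(0.3, 0.3, 0.4), threshold=0.5):
    """Fuse per-detector suspicion scores into one deepfake verdict."""
    score = (contextual * weights[0]
             + behavioural * weights[1]
             + categorical * weights[2])
    return score, score >= threshold

# A clip whose speech rhythm and spectral features both look synthetic:
score, is_deepfake = combined_verdict(0.4, 0.8, 0.9)
print(round(score, 2), is_deepfake)  # 0.72 True
```

The design point the sketch captures is that no single signal needs to be conclusive: a clip that looks only mildly suspicious in context can still be flagged when its speech behaviour and spectral category both score high.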
Conclusion
It is important to foster ethical and responsible consumption of technology. Awareness of common uses of artificial intelligence is a first step toward broader public engagement with debates about the appropriate role and boundaries of AI. Project Mockingbird by McAfee employs AI-driven deepfake audio detection to guard against cybercriminals who use fabricated AI-generated audio for scams and for manipulating the public image of notable figures, protecting consumers from financial and personal-information risks.
References:
- https://www.cnbctv18.com/technology/mcafee-deepfake-audio-detection-technology-against-rise-in-ai-generated-misinformation-18740471.htm
- https://www.thehindubusinessline.com/info-tech/mcafee-unveils-advanced-deepfake-audio-detection-technology/article67718951.ece
- https://lifestyle.livemint.com/smart-living/innovation/ces-2024-mcafee-ai-technology-audio-project-mockingbird-111704714835601.html
- https://news.abplive.com/fact-check/audio-deepfakes-adding-to-cacophony-of-online-misinformation-abpp-1654724
Introduction
The AI Action Summit is a global forum that brings together world leaders, policymakers, technology experts, and industry representatives to discuss AI governance, ethics, and AI's role in society. This year, the week-long Paris AI Action Summit officially culminated on 11 February 2025, bringing together industry experts, policymakers, and other dignitaries to discuss Artificial Intelligence and its challenges. The event was co-chaired by Indian Prime Minister Narendra Modi and French President Emmanuel Macron. Alongside the summit, the Indian delegation actively engaged in the 2nd India-France AI Policy Roundtable, an official side event, and the 14th India-France CEOs Forum. These discussions covered diverse sectors, including defence, aerospace, and technology.
Prime Minister Modi’s Address
During the AI Action Summit in Paris, Prime Minister Narendra Modi drew attention to the transformative effect of AI on politics, the economy, security, and society. Stressing the need for international cooperation, he advocated strong governance frameworks to combat AI-based risks and thereby build public confidence in new technologies. He also acknowledged the need for efforts on cybersecurity issues such as deepfakes and disinformation.
Democratising AI and sharing its benefits, particularly with the Global South, not only aligned with the Sustainable Development Goals (SDGs) but also affirmed India’s resolve to share expertise and best practices. India’s remarkable feat of creating a Digital Public Infrastructure that caters to a population of 1.4 billion through open and accessible technology was highlighted as well.
Among the key announcements, India revealed its plans to create its own Large Language Model (LLM) reflecting the country's linguistic diversity, strengthening its AI aspirations. Further, India will host the next AI Action Summit, reaffirming its position in international AI leadership. The Prime Minister also welcomed France's initiatives, such as the launch of the "AI Foundation" and the "Council for Sustainable AI", initiated by President Emmanuel Macron. He emphasised the necessity of expanding the Global Partnership on AI and making it more representative and inclusive, so that Global South voices are genuinely incorporated into AI innovation and governance.
Other Perspectives
Although 58 countries (including India, France, and China) signed the international agreement on a more open, inclusive, sustainable, and ethical approach to AI development, the UK and the US refused to sign at the summit, citing global governance and national-security concerns. The former pointed to the lack of sufficient detail on the establishment of global AI governance and on AI’s effect on national security, while the latter expressed reservations about overly broad AI regulations that could hamper a transformative industry. Meanwhile, the US is also pressing ahead with ‘Stargate’, its $500 billion AI infrastructure project alongside OpenAI, SoftBank, and Oracle.
CyberPeace Insights
The Summit gained greater significance against the backdrop of the release of platforms such as DeepSeek R1, China’s AI assistant comparable to OpenAI’s ChatGPT. On its release, it became the top-rated free application on Apple’s App Store and sent technology stocks tumbling. Moreover, investors the world over noted that the model was reportedly developed for roughly $5 million while other AI companies spent far more (despite the restrictions imposed on China by chip export controls). This breakthrough challenges the conventional notion that massive funding is a prerequisite for innovation, offering hope for India’s burgeoning AI ecosystem. With the IndiaAI Mission and fewer geopolitical restrictions, India stands at a pivotal moment to drive responsible AI advancements.
References:
- https://www.mea.gov.in/press-releases.htm?dtl/39023/Prime_Minister_cochairs_AI_Action_Summit_in_Paris_February_11_2025
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-stargate-trumps-500-billion-ai-project-9793165/
- https://pib.gov.in/PressReleasePage.aspx?PRID=2102056
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101947
- https://pib.gov.in/PressReleasePage.aspx?PRID=2101896
- https://www.timesnownews.com/technology-science/uk-and-us-decline-to-sign-global-ai-agreement-at-paris-ai-action-summit-here-is-why-article-118164497
- https://www.thehindu.com/sci-tech/technology/india-57-others-sign-paris-joint-statement-on-inclusive-sustainable-ai/article69207937.ece