#FactCheck - Deepfake Video Falsely Claims to Show a Massive Rally in Manipur
Executive Summary:
A viral online video claims to show visuals of a massive rally organised in Manipur to stop the ongoing violence in the state. However, the CyberPeace Research Team has confirmed that the video is a deepfake, created using AI technology to generate a crowd that never existed. There is no original footage connected to any such protest. The claim promoting the video is therefore false and misleading.
Claims:
A viral post falsely claims to show a massive rally held in Manipur.


Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. We could not locate any authentic source mentioning such an event being held recently or previously. The viral video exhibited signs of digital manipulation, prompting a deeper investigation.
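The first step of this workflow, choosing which frames of a video to feed into a reverse-image search, can be sketched in a few lines. The helper below is a minimal illustration under simplifying assumptions (the function name, frame count, and evenly spaced sampling are all illustrative; real fact-checking pipelines often use shot-boundary detection instead):

```python
def keyframe_timestamps(duration_s: float, fps: float, n_frames: int = 8):
    """Return (timestamp_seconds, frame_index) pairs evenly spaced through a clip.

    Each selected frame would then be exported as a still image and run
    through a reverse-image search tool such as Google Lens.
    """
    if n_frames < 1 or duration_s <= 0 or fps <= 0:
        raise ValueError("duration, fps and n_frames must be positive")
    step = duration_s / (n_frames + 1)  # skip the very start and end of the clip
    samples = []
    for i in range(1, n_frames + 1):
        t = i * step
        samples.append((round(t, 3), int(t * fps)))
    return samples
```

For a 30-second clip at 25 fps with five samples, this yields frames at 5-second intervals (frame indices 125, 250, 375, 500, 625), which is usually enough to catch the distinctive scenes a search engine can match.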
We used AI detection tools, such as TrueMedia and Hive AI Detection, to analyze the video. The analysis confirmed with 99.7% confidence that the video was a deepfake. The tools identified "substantial evidence of manipulation", particularly in the crowd and colour gradients, which were found to be artificially generated.



Additionally, an extensive review of official statements and interviews with Manipur State officials revealed no mention of any such rally. No credible reports were found linking to any such protest, further confirming the video's inauthenticity.
Conclusion:
The viral video claiming to show a massive rally held in Manipur is a deepfake. Analysis using TrueMedia and other AI detection tools confirms that the video was manipulated using AI technology, and no official source corroborates any such rally. The CyberPeace Research Team therefore concludes that the claim is false and misleading.
- Claim: Massive rally held in Manipur against the ongoing violence viral on social media.
- Claimed on: Instagram and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
With the rise of AI deepfakes and manipulated media, it has become difficult for the average internet user to know what they can trust online. Synthetic media can have serious consequences, from the viral spread of election disinformation and medical misinformation to revenge porn and financial fraud. Recently, a Pune man lost ₹43 lakh when he invested money based on a deepfake video of Infosys founder Narayana Murthy. In another case, that of Babydoll Archi, a woman from Assam had her likeness deepfaked by an ex-boyfriend to create revenge porn.
Image or video manipulation used to leave observable traces. Online sources may advise examining the edges of objects in the image, checking for inconsistent patterns, lighting differences, observing the lip movements of the speaker in a video or counting the number of fingers on a person’s hand. Unfortunately, as the technology improves, such folk advice might not always help users identify synthetic and manipulated media.
The Coalition for Content Provenance and Authenticity (C2PA)
One interesting project in the area of trust-building under these circumstances has been the Coalition for Content Provenance and Authenticity (C2PA). Founded in 2021 by Adobe, Microsoft, and other industry partners, C2PA is a collaboration between major players in AI, social media, journalism, and photography, among others. It set out to create a standard that lets publishers of digital media prove the authenticity of their content and track changes as they occur.
When photos and videos are captured, they generally store metadata such as the date and time of capture, the location, and the device used. C2PA developed a standard for sharing and checking the validity of this metadata, and for appending additional layers of metadata whenever a new user makes edits. This creates a digital record of any and all changes. The metadata is bundled with the original media, making it easy to verify the source of the image and to check whether edits change the meaning or impact of the media. The standard allows different validation software, content publishers, and content-creation tools to interoperate in maintaining and displaying proof of authenticity.
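The "record of changes" idea can be illustrated with a toy hash chain. This is only a sketch of the concept: real C2PA manifests are signed, CBOR/JUMBF-encoded structures with certificate-based trust, not bare SHA-256 digests, and the helper names here are invented for illustration (the action labels loosely echo C2PA assertion names):

```python
import hashlib
import json


def _digest(payload: dict) -> str:
    # Stable hash of an entry's contents; real C2PA uses cryptographic
    # signatures over a binary manifest, not a plain JSON digest.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


def append_edit(chain: list, action: str, tool: str) -> list:
    """Append a provenance entry linked to the previous entry by hash."""
    prev = chain[-1]["digest"] if chain else None
    entry = {"action": action, "tool": tool, "prev": prev}
    entry["digest"] = _digest({"action": action, "tool": tool, "prev": prev})
    return chain + [entry]


def verify(chain: list) -> bool:
    """Walk the chain, confirming every link and digest is intact."""
    prev = None
    for entry in chain:
        body = {"action": entry["action"], "tool": entry["tool"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["digest"] != _digest(body):
            return False
        prev = entry["digest"]
    return True
```

Starting a chain with a capture event and appending a colour edit produces a verifiable history; altering any earlier entry, or dropping the origin entry, breaks verification, which is what makes tampering detectable.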

The standard is intended to be used on an opt-in basis and can be likened to a nutrition label for digital media. Importantly, it does not limit the creativity of fledgling photo editors or generative AI enthusiasts; it simply provides consumers with more information about the media they come across.
Could C2PA be Useful in an Indian Context?
The World Economic Forum's Global Risks Report 2024 identifies India as a significant hotspot for misinformation. The recent AI Regulation report by MeitY indicates an interest in tools for watermarking AI-based synthetic content for ease of detecting and tracking harmful outcomes. C2PA could be useful in this regard, as it takes a holistic approach to tracking media manipulation, even in cases where AI is not the medium.
Currently, 26 India-based organisations, such as the Times of India and Truefy AI, have signed up to the Content Authenticity Initiative (CAI), a community that contributes to the development and adoption of tools and standards like C2PA. However, people increasingly use platforms like WhatsApp and Instagram as sources of information; both are owned by Meta and have not yet implemented the standard in their products.
India also has low digital literacy rates and low resistance to misinformation. Part of the challenge would be showing people how to read this nutrition label, empowering them to make better decisions online. As such, C2PA is just one part of an online trust-building strategy; education around digital literacy and policy around organisational adoption of the standard must also be part of it.
The standard is also not foolproof. Current iterations may still struggle when presented with screenshots of digital media and other non-technical digital manipulation. Linking media to their creator may also put journalists and whistleblowers at risk. Actual use in context will show us more about how to improve future versions of digital provenance tools, though these improvements are not guarantees of a safer internet.
The largest advantage of C2PA adoption would be the democratisation of fact-checking infrastructure. Since media is shared at a significantly faster rate than it can be verified by professionals, putting the verification tools in the hands of people makes the process a lot more scalable. It empowers citizen journalists and leaves a public trail for any media consumer to look into.
Conclusion
From basic colour filters that make a scene more engaging, to removing a crowd from a social media post, to editing together videos of a politician to make it sound like they are singing a song, we have become accustomed to the media we consume being altered in some way. The C2PA is just one way to bring transparency to how media is altered. It is not a one-stop solution, but it is a viable starting point for creating a fairer, more democratic internet and increasing trust online. While there are risks to its adoption, it is promising to see organisations across different sectors collaborating on this project to be more transparent about the media we consume.
References
- https://c2pa.org/
- https://contentauthenticity.org/
- https://indianexpress.com/article/technology/tech-news-technology/kate-middleton-9-signs-edited-photo-9211799/
- https://photography.tutsplus.com/articles/fakes-frauds-and-forgeries-how-to-detect-image-manipulation--cms-22230
- https://www.media.mit.edu/projects/detect-fakes/overview/
- https://www.youtube.com/watch?v=qO0WvudbO04
- https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- https://indianexpress.com/article/technology/tech-news-technology/ai-law-may-not-prescribe-penal-consequences-for-violations-9457780/
- https://thesecretariat.in/article/meity-s-ai-regulation-report-ambitious-but-no-concrete-solutions
- https://www.ndtv.com/lifestyle/assam-what-babydoll-archi-viral-fame-says-about-india-porn-problem-8878689
- https://www.meity.gov.in/static/uploads/2024/02/9f6e99572739a3024c9cdaec53a0a0ef.pdf

Introduction
In the face of escalating cybercrime in India, criminals are adopting increasingly inventive methods to deceive victims. Imagine opening your phone to a message from a stranger with a friendly introduction: a beginning that appears harmless but marks the start of a financial nightmare. This is the "Pig Butchering" scam, an increasingly sophisticated form of deception that is gaining widespread traction. Unlike many other scams, this one plays a long game, spinning a web of trust before it strikes. It is a modern-day financial thriller happening in the real world, with real victims. The scam involves building trust through fake profiles and manipulating victims emotionally in order to extort money. The scale of such scams has raised concerns, emphasising the need for awareness and vigilance in the face of evolving cyber threats.
How does 'Pig Butchering' Scam Work?
At its core, the scam starts innocuously, often with a stranger reaching out via text, social media, or apps like WhatsApp or WeChat. The scammer, hiding behind a well-crafted and realistic online persona, seeks to forge a connection. This could be under the pretence of friendship or romance, employing fake photos and stories to seem authentic. Gradually, the scammer builds a rapport, engaging in personal and often non-financial conversations. They may portray themselves as a widow, single parent, or even a military member to evoke empathy and trust.

Over time, this connection pivots to investment opportunities, with the scammer presenting lucrative tips or suggestions in stocks or cryptocurrencies. Initially, modest investments are encouraged, and falsified returns are shown to lure in larger sums. Often, the scammer claims affiliation with a profitable financial institution or success in cryptocurrency trading. They direct victims to specific, usually fraudulent, trading platforms under their control.

The scam reaches its peak when significant investments are made, only for the scammer to manipulate the situation, block access to the trading platform, or vanish, leaving the victim with substantial losses.
Real-Life Examples and Global Reach
These scams are not confined to one region. In India, for instance, scammers use emotional manipulation, often starting with a WhatsApp message from an unknown, attractive individual. They pose as professionals offering part-time jobs, leading victims through tasks that escalate in investment and complexity. These usually culminate in cryptocurrency investments from which victims are unable to withdraw their funds, with the money often traced to accounts in Dubai.
In the West, several cases highlight the scam's emotional and financial toll:
- A Michigan woman was lured by an online boyfriend claiming to make money from gold trading. She invested through a fake brokerage, losing money while being emotionally entangled.
- A Canadian man named Sajid Ikram lost nearly $400,000 in a similar scam, initially misled by a small successful withdrawal.
- In California, a man lost $440,000, succumbing to pressure to invest more, including retirement savings and borrowed money.
- A Maryland victim faced continuous demands from scammers, losing almost $1.4 million in hopes of recovering previous losses.
- A notable case involved US authorities seizing about $9 million in cryptocurrency linked to a global pig butchering scam, showcasing its extensive reach.
Safeguarding Against Such Scams
Vigilance is crucial to avoid falling victim to these scams. Be skeptical of unsolicited contact and wary of investment advice from strangers. Conduct thorough research before any financial engagement, particularly on unfamiliar platforms. The Indian Cyber Crime Coordination Centre warns of red flags such as sudden large virtual-currency transactions, interest in high-return investments mentioned by new online contacts, and atypical customer behaviour.
Victims should report incidents to the relevant Indian and foreign cybercrime portals and to securities regulators such as the US Securities and Exchange Commission. Financial institutions are advised to report suspicious activities related to these scams. In essence, the pig butchering scam is a cunning blend of emotional manipulation and financial fraud. Staying informed and cautious is key to avoiding these sophisticated traps.
Conclusion
The pig butchering scam is one of a new breed of emerging cyber scams that pose a serious challenge for cybersecurity organisations. It is imperative for netizens to stay vigilant and well-informed about the dynamics of cyberspace and emerging cybercrimes.
References
- https://www.sentinelassam.com/more-news/national-news/from-impersonating-cbi-officers-to-pig-butchering-cyber-criminals-get-creative
- https://hiindia.com/from-impersonating-cbi-officers-to-pig-butchering-cyber-criminals-get-creative/

Executive Summary:
Recently, our team came across a video on social media that appears to show a saint lying in a fire during Mahakumbh 2025. The video has been widely viewed and is accompanied by captions claiming the act is part of a ritual at the ongoing Mahakumbh 2025. After thorough research, we found these claims to be false. The video is unrelated to Mahakumbh 2025 and comes from a different context and location; it is an example of old footage being shared out of context.

Claim:
A video has gone viral on social media claiming to show a saint lying in fire during Mahakumbh 2025, falsely implying that the act is a standard part of the traditional rituals associated with the ongoing festival.

Fact Check:
Upon receiving the post, we conducted a reverse image search of keyframes extracted from the video and traced it to an old article. Further research revealed that the original footage dates from 2009, when Ramababu Swamiji, aged 80, lay down on a burning fire for the benefit of society. The video is not recent; it had already gone viral on social media in November 2009. A closer examination of the scene, crowd, and visuals clearly shows that the video is unrelated to the rituals or context of Mahakumbh 2025. Additionally, our research found that such acts are not part of the Mahakumbh rituals. Reputable sources were also consulted to cross-verify this information, effectively debunking the claim and underlining the importance of verifying facts before believing them.


For more clarity, the YouTube video attached below further clarifies the matter and reminds us to verify such claims before believing or sharing them.

Conclusion:
The viral video claiming to depict a saint lying in fire during Mahakumbh 2025 is entirely misleading. Our thorough fact-checking reveals that the video dates back to 2009 and is unrelated to the current event. Such misinformation highlights the importance of verifying content before sharing or believing it. Always rely on credible sources to ensure the accuracy of claims, especially during significant cultural or religious events like Mahakumbh.
- Claim: A viral video claims to show a saint lying in fire during the Mahakumbh 2025.
- Claimed On: X (formerly known as Twitter)
- Fact Check: False and Misleading