#FactCheck - Debunking Viral Photo: Tears of Photographer Not Linked to Ram Mandir Opening
Executive Summary:
A photographer breaking down in tears in a viral photo is not connected to the Ram Mandir opening. Social media users are sharing a collage that pairs images of the recently consecrated Lord Ram idol at the Ayodhya Ram Mandir with a shot of a photographer supposedly crying at the sight of the deity. A Facebook post sharing this collage says, "Even the cameraman couldn't stop his emotions." The CyberPeace Research team found that the moment was actually captured at the 2019 AFC Asian Cup: during a match between Iraq and Qatar, an Iraqi photographer broke down in tears after Iraq lost and was knocked out of the competition.
Claims:
The photographer in the widely shared images broke down in tears on seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration. The collage was also shared by many users on other social media platforms such as X, Reddit, and Facebook. A Facebook user shared it with a caption that reads,




Fact Check:
The CyberPeace Research team ran a reverse image search on the photographer's picture. It led to several memes built from the same image and, from there, to a Pinterest post captioned, “An Iraqi photographer as his team is knocked out of the Asian Cup of Nations”.

Taking this as a lead, we ran keyword searches to trace the actual news behind the image. We found it on the official Asian Cup X (formerly Twitter) handle, where it was shared five years earlier, on 24 January 2019. The post reads, “Passionate. Emotional moment for an Iraqi photographer during the Round of 16 clash against ! #AsianCup2019”

This confirmed the news and the origin of the image. Notably, while investigating this fact check, we also found several other posts misusing the same photographer's image with different captions, all misinformation like this one.
Conclusion:
The recent viral image of the photographer claimed to be associated with the Ram Mandir opening is misleading. The image is five years old and shows an Iraqi photographer crying during the 2019 AFC Asian Cup football competition, not the recent Ram Mandir opening. Netizens are advised not to believe or share such misinformation posts on social media.
- Claim: A person in the widely shared images broke down in tears at seeing the icon of Lord Ram during the Ayodhya Ram Mandir's consecration.
- Claimed on: Facebook, X, Reddit
- Fact Check: Fake

Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, Murf, etc., which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the most targeted sector for deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Information security breaches through phishing, vishing, etc. become more convincing with deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice and image-based authentication systems, exposing sensitive data.
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: The usage of AI will allow cybercriminals to deploy malware that can adapt in real-time to carry out phishing attacks and inundate targets with increased speed and variations. Legacy security frameworks are not suited to countering automated attacks at such a scale.
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates addressing emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks can be designed to overwhelm with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will increasingly need AI-based cybersecurity for detection and response, as generative AI becomes essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image and voice-generation/manipulation technologies, enhanced authentication measures such as token-based authentication or other hardware-based measures, abnormal behaviour detection, multi-device push notifications, geolocation verifications, etc. can be used to improve prevention strategies. New targeted technological solutions for content-driven authentication can also be implemented.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
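The enhanced-MFA recommendation above can be illustrated with a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), one common token-based second factor. This is an illustrative sketch, not a production implementation; the function name `totp` and the demo secret (the RFC 6238 test key) are assumptions for the example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of completed time steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" encoded in Base32).
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

A server would verify a submitted code by recomputing `totp` for the current time window (and usually the adjacent windows, to tolerate clock drift) and comparing with `hmac.compare_digest`.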
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training, to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr

Disclaimer:
This report is the collaborative outcome of insights derived from the CyberPeace Helpline’s operational statistics and the CyberPeace Research Team. Covering the monthly helpline case trends of May 2025, the report identifies recurring trends, operational challenges, and strategic opportunities. The objective is to foster research-driven solutions that enhance the overall efficacy of the helpline.
Executive Summary:
This report summarizes the cybercrime cases reported in May, offering insights into case types, gender distribution, resolution status, and geographic trends.
As per our analysis, financial fraud was the most reported issue, making up 43% of cases, followed by cyberbullying (26%) and impersonation (14%). Less frequent but serious issues included sexual harassment, sextortion, hacking, data tampering, and cyber defamation, each accounting for 3–6%, highlighting a mix of financial and behavioral threats. The gender distribution was fairly balanced, with 51% male and 49% female respondents. While both genders were affected by major crimes like financial fraud and cyberbullying, some categories, such as sexual harassment, reflected more gender-specific risks, indicating the need for gender-responsive policies and support.
Regarding case status, 60% remain under follow-up while 40% have been resolved, reflecting strong case-handling efforts by the team.
The location-wise data shows higher case concentrations in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal, with significant reports also from Delhi, Telangana, Maharashtra, and Odisha. Reports from the northeastern and eastern states confirm the nationwide spread of cyber incidents. In conclusion, the findings point to a growing need for enhanced cybersecurity awareness, preventive strategies, and robust digital safeguards to address the evolving cyber threat landscape across India.
Cases Received in May:
As per the given dataset, the following types of cases were reported to our team during the month of May:
- 💰 Financial Fraud – 43%
- 💬 Cyber Bullying – 26%
- 🕵️♂️ Impersonation – 14%
- 🚫 Sexual Harassment – 6%
- 📸 Sextortion – 3%
- 💻 Hacking – 3%
- 📝 Data Tampering – 3%
- 🗣️ Cyber Defamation – 3%

The chart illustrates various cybercrime categories and their occurrence rates. Financial Fraud emerges as the most common, accounting for 43% of cases, highlighting the critical need for stronger digital financial security. This is followed by Cyber Bullying at 26%, reflecting growing concerns around online harassment, especially among youth. Impersonation ranks third with 14%, involving identity misuse for deceitful purposes. Less frequent but still serious crimes such as Sexual Harassment (6%), Sextortion, Hacking, Data Tampering, and Cyber Defamation (each 3%) also pose significant risks to users’ privacy and safety. Overall, the data underscores the need for improved cybersecurity awareness, legal safeguards, and preventive measures to address both financial and behavioral threats in the digital space.
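The percentage breakdown above can be reproduced from a raw case log with a short tally. Below is a minimal sketch assuming a hypothetical list of case-category labels matching the report's taxonomy; the helper name `category_shares` is an illustrative choice, not part of any reporting tool.

```python
from collections import Counter

# Hypothetical case log; label counts mirror the report's percentages.
cases = (["Financial Fraud"] * 43 + ["Cyber Bullying"] * 26 + ["Impersonation"] * 14
         + ["Sexual Harassment"] * 6 + ["Sextortion"] * 3 + ["Hacking"] * 3
         + ["Data Tampering"] * 3 + ["Cyber Defamation"] * 3)

def category_shares(case_list):
    """Return each category's share of total cases as a rounded percentage."""
    counts = Counter(case_list)
    total = len(case_list)
    return {category: round(100 * n / total) for category, n in counts.items()}
```

Rounding each share independently is simple but means the percentages may not sum to exactly 100, which is a common caveat in such charts.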
Gender-Wise Distribution:
- 👨 Male – 51%
- 👩 Female – 49%

The chart illustrates the distribution of respondents by gender. The data shows that Male participants make up 51% of the total, while Female participants account for 49%. This indicates a fairly balanced representation of both genders, with a slight majority of male respondents.
Gender-Wise Case Distribution:

- The chart presents a gender-wise distribution of various cybercrime cases, offering a comparative view of how different types of cyber incidents affect males and females.
- It highlights that both genders are significantly impacted by cybercrimes such as financial fraud and cyber bullying, indicating a widespread risk across the board.
- Certain categories, including sexual harassment, cyber defamation, and hacking, show more gender-specific patterns of victimization, pointing to differing vulnerabilities.
- The data suggests the need for gender-sensitive policies and preventive measures to effectively address the unique risks faced by males and females in the digital space.
- These insights can inform the design of tailored awareness programs, support services, and intervention strategies aimed at improving cybersecurity for all individuals.
Major Location Wise Distribution:
The map visualization displays location-wise distribution of reported cases across India. The cases reflect the cyber-related incidents or cases mapped geographically.

The map highlights the regional distribution of cybercrime cases across Indian states, with a higher concentration in Uttar Pradesh, Andhra Pradesh, Karnataka, and West Bengal. States like Delhi, Telangana, Maharashtra, and Odisha also show notable activity, indicating widespread cyber threats. Regions including Assam, Tripura, Bihar, Jharkhand, and Jammu & Kashmir further reflect the pan-India spread of such incidents. This distribution stresses the need for targeted cybersecurity awareness and stronger digital safeguards nationwide.
CyberPeace Advisory:
- Use Strong and Unique Passwords: Create complex passwords using a mix of letters, numbers, and symbols. Avoid reusing the same password across multiple platforms.
- Enable Multi-Factor Authentication (MFA): Add an extra layer of security by using a second verification step like an OTP or authentication app.
- Keep Software Updated: Regularly update your operating system, apps, and security tools to protect against known vulnerabilities.
- Install Trusted Security Software: Use reliable antivirus and anti-malware programs to detect and block threats.
- Limit Information Sharing: Be cautious about sharing personal or sensitive details, especially on social media or public platforms.
- Secure Your Network: Protect your Wi-Fi with a strong password and encryption. Avoid accessing confidential information on public networks.
- Back Up Important Data: Regularly save copies of important files in secure storage to prevent data loss in case of an attack.
- Stay Informed with Cybersecurity Training: Learn how to identify scams, phishing attempts, and other online threats through regular awareness sessions.
- Control Access to Data: Give access to sensitive information only to those who need it, based on their job roles.
- Monitor and Respond to Threats: Continuously monitor systems for unusual activity and have a clear response plan for handling security incidents.
- CyberPeace Helpline mail ID: helpline@cyberpeace.net
- CyberPeace Helpline Number: 9570000066
- Central Government Helpline: https://cybercrime.gov.in/
- Central Government Helpline Number: 1930
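The first advisory point, on strong and unique passwords, can be sketched in code. Below is a minimal example using Python's standard `secrets` module to generate a random password mixing letters, digits, and symbols; the function name and symbol set are illustrative choices, not a prescribed standard.

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length=16):
    """Generate a random password containing lowercase, uppercase, digits, and symbols."""
    if length < 4:
        raise ValueError("length must be at least 4 to fit all character classes")
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:  # resample until every character class is present
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd) and any(c in SYMBOLS for c in pwd)):
            return pwd
```

Pairing a generator like this with a password manager helps ensure each account gets its own unique password, as the advisory recommends.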
Conclusion
The cybercrime cases reported in May highlight a diverse and evolving threat landscape across India. Financial fraud, cyber bullying, and impersonation are the most prevalent, affecting both genders almost equally, though some crimes like sexual harassment call for targeted gender-sensitive measures. With 60% of cases still under follow-up, the team’s efforts in investigation and resolution remain strong. Geographically, cyber incidents are widespread, with higher concentrations in several key states, demonstrating that no region is immune. These findings underscore the urgent need to enhance cybersecurity awareness, strengthen preventive strategies, and build robust digital safeguards. Proactive and inclusive approaches are essential to protect individuals and communities and to address the growing challenges posed by cybercrime nationwide.

Introduction
Deepfakes are an artificial intelligence (AI) technology that employs deep learning to generate realistic-looking but phoney videos or images. The algorithms analyse large volumes of data to discover patterns and produce convincing, realistic results. Deepfakes use this technology to modify videos or photos so that they appear to involve events or persons that never happened or existed. The procedure begins with gathering large volumes of visual and auditory data about the target individual, usually obtained from publicly accessible sources such as social media or public appearances. This data is then used to train a deep-learning model to resemble the target of the deepfake.
Recent Cases of Deepfakes-
In an unusual turn of events, a man from northern China became the victim of a sophisticated deep fake technology. This incident has heightened concerns about using artificial intelligence (AI) tools to aid financial crimes, putting authorities and the general public on high alert.
During a video conversation, a scammer successfully impersonated the victim’s close friend using AI-powered face-swapping technology. The scammer duped the unwary victim into transferring 4.3 million yuan (nearly Rs 5 crore). The fraud occurred in Baotou, China.
AI ‘deep fakes’ of innocent images fuel spike in sextortion scams
Artificial intelligence-generated “deepfakes” are fuelling sextortion scams like dry brush in a raging wildfire. According to the FBI, the number of nationally reported sextortion instances rose by 322% between February 2022 and February 2023, with a notable spike since April due to AI-doctored photographs. As per the FBI, innocent photographs or videos posted on social media or sent in communications can be distorted into sexually explicit, AI-generated visuals that are “true-to-life” and practically impossible to distinguish. According to the FBI, predators, often located in other countries, use doctored AI photographs of juveniles to extort money from them or their families or to obtain actual sexually graphic images.
Deepfake Applications
- Lensa AI.
- Deepfakes Web.
- Reface.
- MyHeritage.
- DeepFaceLab.
- Deep Art.
- Face Swap Live.
- FaceApp.
Deepfake examples
There are numerous high-profile Deepfake examples. One Deepfake video was released by actor Jordan Peele, who combined actual footage of Barack Obama with his own imitation of Obama’s voice to convey a warning about Deepfake videos.
Another video shows Facebook CEO Mark Zuckerberg discussing how Facebook ‘controls the future’ with stolen user data, most notably on Instagram. The original video is from a speech he delivered on Russian election meddling; only 21 seconds of that address were used to create the new version. However, the vocal impersonation fell short of Jordan Peele’s Obama and gave the fake away.
The dark side of AI-Generated Misinformation
- AI-generated misinformation can distort the truth, making it difficult to distinguish fact from fiction.
- People can unmask AI content by looking for discrepancies and for the lack of a human touch.
- AI content detection technologies can detect and neutralise disinformation, preventing it from spreading.
Safeguards against Deepfakes-
Technology is not the only way to guard against Deepfake videos; good fundamental security practices are incredibly effective for combating Deepfakes. For example, incorporating automatic checks into any mechanism for disbursing payments might have prevented numerous Deepfake and related frauds. You might also:
- Keep regular backups to safeguard your data from ransomware and allow you to restore damaged data.
- Use different, strong passwords for different accounts, so that one compromised network or service does not mean others have been compromised as well. You do not want someone who gets into your Facebook account to be able to access your other accounts.
- To secure your home network, laptop, and smartphone against cyber dangers, use a good security package such as Kaspersky Total Security. This bundle includes anti-virus software, a VPN to prevent compromised Wi-Fi connections, and webcam security.
What is the future of Deepfake –
Deepfake technology is constantly evolving. Deepfake videos were easy to spot two years ago because of the clumsy movement and the fact that the simulated figure never appeared to blink. However, the most recent generation of fake videos has evolved and adapted.
There are currently approximately 15,000 Deepfake videos available online. Some are just for fun, while others attempt to sway your opinion. But now that it only takes a day or two to make a new Deepfake, that number could rise rapidly.
Conclusion-
The distinction between authentic and fake content will undoubtedly become more challenging to identify as technology advances. As a result, experts feel it should not be up to individuals to discover deep fakes in the wild. “The responsibility should be on the developers, toolmakers, and tech companies to create invisible watermarks and signal what the source of that image is,” they stated. Several startups are also working on approaches for detecting deep fakes.