#FactCheck - AI-Generated Photo Circulating Online Misleads About BARC Building Redesign
Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake on fact-checking. There is no official notice or confirmation from BARC on its website or social media handles, and an AI content detection tool identified the image as AI-generated. In short, the viral picture does not depict any authentic architectural plan for the BARC building.

Claims:
A photo allegedly representing the new look of the Bhabha Atomic Research Centre (BARC) building is circulating widely on social media platforms.


Fact Check:
We began our investigation by reviewing BARC's official website, checking its tender and NIT (Notice Inviting Tender) notifications for any mention of new construction or renovation.
We found no information corresponding to the claim.

Next, we checked BARC's official social media pages on Facebook, Instagram and X for any recent announcements about a new building. Again, there was no information about the supposed design. To test whether the viral image was AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool classified the image as AI-generated with 100% confidence.

To corroborate this, we also used another AI-image detection tool, “Is It AI?”, which rated the image as 98.74% likely to be AI-generated.
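To illustrate how verdicts from multiple detectors can be weighed together, here is a minimal sketch. This is an illustrative heuristic of our own, not the internal logic or the API of Hive's AI Classifier or Is It AI?:

```python
def classify_image(scores, threshold=0.9):
    """Combine AI-detector confidence scores into a single verdict.

    `scores` maps a detector name to the probability (0.0-1.0) it
    assigned to the image being AI-generated. The averaging rule and
    the 0.9 threshold are illustrative assumptions, not any tool's
    actual decision logic.
    """
    if not scores:
        return "inconclusive"
    avg = sum(scores.values()) / len(scores)
    if avg >= threshold:
        return "likely AI-generated"
    if avg <= 1 - threshold:
        return "likely authentic"
    return "inconclusive"

# Scores reported above for the viral BARC image
verdict = classify_image({"Hive AI Classifier": 1.00, "Is It AI?": 0.9874})
print(verdict)  # likely AI-generated
```

Requiring agreement across independent detectors, rather than trusting a single score, reduces the chance that one tool's false positive drives the conclusion.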

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, checking BARC's official channels and using AI detection tools, indicates that the picture is most likely AI-generated rather than an original architectural design. BARC has neither published nor announced any such plan. With no credible source to support it, the claim is untrustworthy.
Claim: A viral image claims to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading

Introduction
Prebunking is a technique that shifts the focus from directly challenging falsehoods or telling people what they need to believe to understanding how people are manipulated and misled online in the first place. It is a growing field of research that aims to help people resist persuasion by misinformation. Prebunking, or "attitudinal inoculation," teaches people to spot and resist manipulative messages before they encounter them. The crux of the approach is taking a step back and nipping the problem in the bud by deepening our understanding of it, instead of designing redressal mechanisms to tackle it after the fact. It has proven effective in helping a wide range of people build resilience to misleading information.
Prebunking is a psychological strategy for countering the effect of misinformation with the goal of assisting individuals in identifying and resisting deceptive content, hence increasing resilience against future misinformation. Online manipulation is a complex issue, and multiple approaches are needed to curb its worst effects. Prebunking provides an opportunity to get ahead of online manipulation, providing a layer of protection before individuals encounter malicious content. Prebunking aids individuals in discerning and refuting misleading arguments, thus enabling them to resist a variety of online manipulations.
Prebunking builds mental defenses against misinformation by providing warnings and counterarguments before people encounter malicious content. Inoculating people against false or misleading information is a powerful and effective method for building trust and understanding, along with a personal capacity for discernment and fact-checking. Prebunking teaches people how to separate facts from myths by stressing the importance of thinking in terms of ‘how you know what you know’ and of consensus-building. It uses examples and case studies to explain the types and risks of misinformation so that individuals can apply these learnings to reject false claims and manipulation in the future as well.
How Prebunking Helps Individuals Spot Manipulative Messages
Prebunking helps individuals identify manipulative messages by equipping them with the tools and knowledge to recognize common techniques used to spread misinformation. Successful prebunking strategies include:
- Warnings: alerting people in advance that they may encounter manipulative content.
- Preemptive refutation: explaining the narrative or technique and how a particular piece of information is manipulative in structure. Inoculation treatment messages typically include two or three counterarguments and their refutations; an effective rebuttal equips the viewer to counter similar erroneous or misleading information in the future.
- Micro-dosing: exposing people to a weakened, innocuous example of misinformation.
All these alert individuals to potential manipulation attempts. Prebunking also offers weakened examples of misinformation, allowing individuals to practice identifying deceptive content. It activates mental defenses, preparing individuals to resist persuasion attempts. Misinformation can exploit cognitive biases: people tend to put a lot of faith in things they’ve heard repeatedly - a fact that malicious actors manipulate by flooding the Internet with their claims to help legitimise them by creating familiarity. The ‘prebunking’ technique helps to create resilience against misinformation and protects our minds from the harmful effects of misinformation.
Prebunking essentially helps people control the information they consume by teaching them how to discern between accurate and deceptive content. It enables one to develop critical thinking skills, evaluate sources adequately and identify red flags. By incorporating these components and strategies, prebunking enhances the ability to spot manipulative messages, resist deceptive narratives, and make informed decisions when navigating the very dynamic and complex information landscape online.
CyberPeace Policy Recommendations
- Preventing and fighting misinformation necessitates joint efforts between different stakeholders. The government and policymakers should sponsor prebunking initiatives and information literacy programmes to counter misinformation and adopt systematic approaches. Regulatory frameworks should encourage accountability in the dissemination of online information on various platforms. Collaboration with educational institutions, technological companies and civil society organisations can assist in the implementation of prebunking techniques in a variety of areas.
- Higher education institutions should support prebunking and media literacy, offer professional development opportunities for educators and scholars, and work with academics and practitioners to produce research on the grey areas and challenges associated with misinformation.
- Technological companies and social media platforms should improve algorithm transparency, create user-friendly tools and resources, and work with fact-checking organisations to incorporate fact-check labels and tools.
- Civil society organisations and NGOs should promote digital literacy campaigns to spread awareness on misinformation and teach prebunking strategies and critical information evaluation. Training programmes should be available to help people recognise and resist deceptive information using prebunking tactics. Advocacy efforts should support legislation or guidelines that support and encourage prebunking efforts and promote media literacy as a basic skill in the digital landscape.
- Media outlets and journalists including print & social media should follow high journalistic standards and engage in fact-checking activities to ensure information accuracy before release. Collaboration with prebunking professionals, cyber security experts, researchers and advocacy analysts can result in instructional content and initiatives that promote media literacy, prebunking strategies and misinformation awareness.
Final Words
The World Economic Forum's Global Risks Report 2024 identifies misinformation and disinformation as the most significant risks for the next two years. Misinformation and disinformation are rampant in today’s digital-first reality, and the ever-growing popularity of social media will only compound the challenge. It is imperative for all netizens and stakeholders to adopt proactive approaches to counter the growing problem of misinformation. Prebunking is a powerful tool in this regard because it aims at ‘protection through prevention’ instead of limiting the strategy to harm reduction and redressal. We can draw a parallel with vaccination or inoculation: prebunking exposes us to a weakened form of misinformation and provides ways to identify it, reducing the chance that false information takes root in our minds.
The most compelling attribute of this approach is that the focus is not only on preventing damage but also creating widespread ownership and citizen participation in the problem-solving process. Every empowered individual creates an additional layer of protection against the scourge of misinformation, not only making safer choices for themselves but also lowering the risk of spreading false claims to others.
References
- [1] https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
- [2] https://prebunking.withgoogle.com/docs/A_Practical_Guide_to_Prebunking_Misinformation.pdf
- [3] https://ijoc.org/index.php/ijoc/article/viewFile/17634/3565

Executive Summary:
A photo circulating on social media shows a stage with the words “Hindu Sammelan” (Hindu Conference) written in large letters. In front of the stage, rows of chairs appear largely empty, with only a few people seated while most seats remain vacant.
Users sharing the image claim that the event, held under the banner of a “Hindu Sammelan,” was in fact a “Brahmin Sammelan,” and that indigenous communities chose to stay away, resulting in poor attendance.
It is noteworthy that, on the occasion of the centenary year of the Rashtriya Swayamsevak Sangh (RSS), various “Hindu Sammelan” events are being organized across the country. The viral image is being linked to this broader context.
However, research conducted by CyberPeace found the viral claim to be false. Our research revealed that the image being shared on social media is not authentic but AI-generated, and is being circulated with a misleading narrative.
Claim
On February 21, 2026, a Facebook user shared the viral image. The original and archived links are provided below:
- https://www.facebook.com/photo?fbid=935049042540479&set=gm.2425972001215469&idorvanity=465387370607285
- https://ghostarchive.org/archive/sxC6d

Fact Check:
A keyword search on Google confirmed that several “Hindu Sammelan” events have indeed been organized across the country as part of the RSS centenary year. For instance, media reports have covered such events in different cities, including Nagpur.

However, upon closely examining the viral image, we observed certain visual inconsistencies and unnatural elements that raised suspicion of AI generation. We first analyzed the image using the AI detection tool Hive Moderation, which indicated a 79.3 percent probability that the image was AI-generated.

To further verify, we scanned the image using another AI detection platform, Sightengine. The results showed a 97 percent likelihood that the image was AI-generated.

Conclusion
Our research confirms that the image circulating on social media is not genuine. It has been artificially created using AI technology and is being shared with a misleading claim.
Introduction
In the sprawling and ever-evolving landscape of cybercrime, phishing links, phoney emails, and dubious investment offers are no longer the only tools used by scammers. Cybercriminals have become adept at exploiting commonplace digital behaviours, undermining trust, and turning popular features of our most essential apps into weapons. A fast-expanding international threat has been revealed by the most recent advisory from the National Cybercrime Threat Analytics Unit (NCTAU) of the Indian Cybercrime Coordination Centre (I4C) on “WhatsApp account renting”. This scam uses QR codes to trick users into connecting their WhatsApp accounts to fraudulent platforms under the guise of a “quick income” opportunity. What initially appears innocuous becomes a tool for criminals to take control of accounts and use them for illicit purposes.
The Global Rise of Cyber Mule Networks
Originally, the word “mule” in cybercrime networks referred to a bank account used, knowingly or often unknowingly, to transfer or “launder” money obtained from fraud and other illegal activities. As this form of cybercrime has evolved, “cyber mules” now refers to individuals who knowingly or unknowingly allow their digital identities, devices, or bank accounts to be used for illegal activity.
Various cybersecurity companies, as well as Europol and Interpol, have frequently cautioned that criminals are increasingly recruiting digital mules, often under the guise of:
- Work-from-home Offers
- Streams of passive income
- Monetisation of social media
- Roles for verification assistants
- Apps that earn commissions
Earlier versions of the scheme involved money transfers through personal bank accounts. The trend is reported to be changing: criminals now want your digital identity, not just your money.
Scammers frequently “rent” victims’ Facebook, LINE, Telegram, and WeChat accounts in parts of Southeast Asia and Africa to conduct impersonation fraud or assist criminal operations. The WhatsApp variant now making its way to India is a logical progression, built on the widely used WhatsApp Web linked-device capability.
How the WhatsApp Account Renting Scam Works
I4C’s advisory dated 15th October 2025 highlights a sophisticated yet psychologically simple scheme that exploits trust, curiosity, and the illusion of easy income. The scam’s lifecycle is as follows:
1. The Hook: “Automatically Earn Passive Income”
In polished, professional-looking Instagram and Facebook ads, threat actors claim users can earn daily rewards by connecting their WhatsApp accounts to a new “partner platform”.
This strategy imitates international scam factories in Cambodia and Myanmar, where victims are lured into investment schemes or bogus tasks by social media advertisements.
2. The Redirect: Rogue APKs & Fake Websites
When victims click on the advertisement, they are redirected to:
- Fake earnings dashboards
- Untrustworthy websites that imitate authentic financial interfaces
- Instructions for installing Android APKs from sources other than the Play Store

These APKs often carry spyware or remote-access malware.
3. The Trap: Scanning a QR Code
The user is asked to scan a QR code through WhatsApp’s “Linked Devices” feature, which is normally used for WhatsApp Web.
Without ever touching the victim’s phone, the con artist obtains complete session access to their WhatsApp account as soon as the QR is scanned.
Threat actors are able to:
- Transmit and receive messages
- Get access to contact lists
- Participate in or start groups
- Assume the victim’s identity
- Conduct frauds using their identities
4. The Illusion: A Multi-Level Commission Structure
A pyramid-style earnings model is displayed to maintain credibility:
- 10% of direct invites
- 5% of secondary invites
- 2% of tertiary invites
These figures are designed to encourage victims to recruit more users, increasing the number of compromised WhatsApp accounts.
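The tiered payout described above can be made concrete with a short sketch. Only the 10%/5%/2% rates come from the scheme as described; the base reward and invite counts are hypothetical figures chosen purely for illustration:

```python
def pyramid_commission(direct, secondary, tertiary, base_reward=100.0):
    """Compute the advertised multi-level 'earnings' for one recruit.

    The 10%/5%/2% rates follow the scheme described in the advisory;
    `base_reward` (per-invite payout) and the invite counts are
    hypothetical values used only to illustrate the arithmetic.
    """
    rates = (0.10, 0.05, 0.02)
    counts = (direct, secondary, tertiary)
    return sum(rate * n * base_reward for rate, n in zip(rates, counts))

# e.g. 5 direct, 10 second-level, and 20 third-level invites
print(pyramid_commission(5, 10, 20))  # 140.0
```

The arithmetic makes the incentive obvious: a victim's "earnings" grow only by recruiting more people, which is exactly what expands the pool of compromised WhatsApp accounts.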
5. The Misuse: “Mule WhatsApp Accounts”
The victim’s account becomes a digital mule once it is connected, allowing fraudsters to:
- Start UPI fraud and phishing
- Distribute harmful links
- Impersonate the victim to scam their contacts
- Participate in bulk messaging campaigns
- Recruit additional mule accounts
Precautions Issued by I4C
I4C has advised citizens to take the following precautions:
- Renting or linking your WhatsApp account for money could expose you to criminal charges or similar consequences
- Avoid installing APKs from non-official app stores
- Be wary of advertisements that promise automatic revenue, referral bonuses, or passive income
- Regularly check linked devices on WhatsApp: Settings → Linked Devices
- Use WhatsApp’s Official support page to report hacked accounts or impersonation: https://www.whatsapp.com/contact/forms/1534459096974129
- Report financial fraud immediately by calling 1930 or visiting cybercrime.gov.in
CyberPeace Outlook
The WhatsApp account rental fraud is not an isolated phenomenon; it is the latest mutation of a global cybercrime apparatus that feeds on social engineering, digital identity theft, and international mule networks. Its simplicity makes it especially hazardous: all it takes to take over your digital life is a single QR code scan. I4C’s timely warning is an important reminder that easy money is nearly always a trap in the digital world, and that if we let our guard down, our most trusted platforms can become attack surfaces. Stay informed, and stay safe. Cyber hygiene is now a must to protect our identities, data, and communities.
References
- https://www.cnbctv18.com/personal-finance/mule-account-fraud-on-the-rise-what-it-is-and-how-to-stay-safe-19662507.htm
- https://i4c.mha.gov.in/theme/resources/advisories/Mule%20Whatsapp%20V1.4.pdf