#FactCheck: No, PM Modi Did Not Appear in Royal Attire, Image Is AI-Generated
A photograph showing Prime Minister Narendra Modi holding a trident and dressed in royal attire is being widely shared on social media. Users circulating the image are claiming that it shows PM Modi in a regal outfit.
However, a verification by the Cyber Peace Foundation’s Research Desk has found that the claim is false. The investigation established that the viral image is not authentic and has been generated using Artificial Intelligence (AI).
Claim:
On January 11, 2026, several Instagram users shared the image with captions describing it as a photograph of Prime Minister Modi in royal attire.
Links and archived versions of the posts, along with screenshots, are provided below.

Fact Check:
To verify the claim, relevant keywords such as “PM Modi holding trishul” were searched on Google. This led to a report published by Navbharat Times on January 10, 2025. The report features photographs of Prime Minister Modi holding a trident during his visit to the Somnath Temple. However, in the original images, he is seen wearing normal attire, not royal clothing as shown in the viral image. Link and screenshot

In the next step of the investigation, the original photograph was traced to the official Instagram account of BJP Gujarat, where it was posted on January 11, 2026. The post clearly identifies the image as being from Somnath Temple. Link and screenshot: https://www.instagram.com/p/DTVlb-9Da1V

A close examination of the viral image raised suspicion about digital manipulation. The image was then analysed using the AI detection tool TruthScan. The tool’s assessment indicated a 97 percent likelihood that the image was AI-generated.
Further comparison between the viral image and the original photograph revealed that all visual elements match except the clothing, confirming that the attire was digitally altered using AI tools.
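Comparisons like this are often quantified with perceptual hashing, which condenses an image into a short bit string so that near-duplicates can be detected even after a localized edit such as swapped clothing. The pure-Python sketch below is illustrative only: the 4x4 pixel grids are made-up stand-ins, not the actual photographs, and a real workflow would use a library such as imagehash on the full images.

```python
def average_hash(pixels: list[list[int]]) -> int:
    # Average hash: one bit per pixel -- 1 if the pixel is brighter than
    # the image mean. Similar images give similar hashes even after edits.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits: small distance => near-duplicate images.
    return bin(a ^ b).count("1")

# Two tiny 4x4 "images": identical except two altered pixels, standing in
# for an original photo vs. an AI-edited copy of it.
original = [[10, 200, 10, 200], [200, 10, 200, 10],
            [10, 200, 10, 200], [200, 10, 200, 10]]
edited   = [[10, 200, 10, 200], [200, 10, 200, 10],
            [10, 10, 200, 200], [200, 10, 200, 10]]
distance = hamming(average_hash(original), average_hash(edited))
```

A small Hamming distance between hashes signals that two images share almost all visual elements, which is exactly the "everything matches except the clothing" pattern described above.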

Conclusion
The claim that Prime Minister Narendra Modi appeared in royal attire is false. The Cyber Peace Foundation’s research confirms that the viral image was created using AI by altering the clothing in an original photograph taken during PM Modi’s visit to Somnath Temple. The manipulated image was shared online to mislead users.
Related Blogs

Introduction
In this new age of technology, the internet and social media continue to witness a surge in deepfake videos, a phenomenon that blurs the line between reality and fiction. The string of deepfake videos of Bollywood actors and other famous personalities has raised serious concerns. Prime Minister Narendra Modi spoke against the risks of artificial intelligence at the G20 Virtual Summit, and the central government has recently announced that it will soon set up dedicated regulations to tackle this menace, including holding social media platforms and creators responsible for violations of the rules. Victims of misuse of fast-paced technology often shy away from initiating a legal process, but the government has announced its support for such victims and promised to stand by complaints against deepfake videos, including helping individuals report the incidents and any violations by platforms.
Social media platforms to realign their policies with Indian laws
The Ministry of Electronics and Information Technology (MeitY) announced on 24th November 2023 that it would give social media platforms a seven-day period to align their terms of service and other policies with Indian laws and regulations in order to address the hosting of deepfakes on these platforms. All platforms must align their terms of use with their users to be consistent with the 12 areas prohibited under rule 3(1)(b) of the Information Technology (IT) Rules, 2021.
The platforms must harmonise their terms and policies so that every user is aware that each platform intends to be safe and trusted and will not tolerate the 12 types of content or information prohibited under the IT Act and the IT Rules. The government's approach is to collectively advocate for responsible and safe use of the Internet. It has taken this proactive step in partnership with social media platforms to usher in an era in which platforms are more responsible, more responsive to the expectations of the law, and more compliant.
Officer to be appointed under rule 7
As deepfake videos continue to surface on social media, the government has geared up to curb such content online. Mr. Rajeev Chandrasekhar, Minister of State, MeitY, stated that the government will soon appoint an officer to take appropriate action against deepfake videos. This statement came after the government's meeting with industry stakeholders and important players held on 24 November 2023. He added that MeitY and the Government of India will nominate an officer under Rule 7 of the IT Rules, 2021 and will ensure full compliance from all platforms. The officer appointed under Rule 7 will be entrusted with building a mechanism through which users can submit complaints regarding deepfakes, and MeitY may also assist aggrieved users with filing FIRs in such cases. The Minister further said that the government will create a platform where it will be easy for netizens to bring allegations or reports of violations of law by the platforms to the attention of the Government of India, and the Rule 7 officer will act on that information accordingly.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (updated as on 6.4.2023)
Rule 3(1)(b) states that intermediaries shall inform users of their rules and regulations, privacy policy and user agreement, and shall make reasonable efforts to restrict users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating or sharing any information prohibited under this rule, which includes deepfakes, misinformation, CSAM (child sexual abuse material), etc. As per Rule 3(2)(b), intermediaries shall remove or disable access, within 24 hours of receipt of a complaint, to content that exposes the private areas of individuals, shows such individuals in full or partial nudity or in a sexual act, or is in the nature of impersonation, including morphed images.
Ongoing Efforts Ahead of Crucial Meeting with Tech Giants
Ahead of the government's meeting with online platforms such as Google, Facebook, and YouTube on Friday, 24 November 2023, Mr. Rajeev Chandrasekhar, Minister of State, MeitY, noted that since October 2022 the Government of India had been alerting them to the threat of misinformation and of deepfakes, which are a form of misinformation. He further added that the current IT Rules under the IT Act provide adequate compliance requirements for platforms to deal with deepfakes.
Deepfake Misinformation
Misinformation powered by AI is becoming an even more potent force, able to disrupt, mislead, and create chaos and confusion at a scale and of a type that is deeply detrimental. Put simply, deepfakes are misinformation powered or enhanced by AI. Video-based deepfake misinformation is especially dangerous because it has greater reach: video is the preferred form of content consumption on the internet today.
Way forward
The Honourable Prime Minister has observed that deepfakes are deeply disruptive: they can create divisions and all kinds of disruption in communities and families, and the misuse of deepfake technology is therefore a clear and present danger to a safe and trusted internet.
The government is on its way to drafting dedicated legislation to tackle deepfakes. Such a future law is certainly required, given that the IT Act is now 23 years old. In the meantime, the current IT Rules already impose compliance requirements on platforms regarding misinformation, patently false information and deepfakes, reinforced by the recent government advisory on the subject.
Conclusion
The Prime Minister has alerted the public to the dangers of deepfakes online. The government is now looking seriously into this issue and has issued guidelines for intermediaries, and it is hoped that within a finite period the threat of deepfakes will no longer exist in our system. The government has made it clear that, apart from the people spreading deepfake videos, the platforms that allow them to spread without taking action will also be liable: they are liable under the current rules and will be even more so once new rules and regulations are brought in.
References:
- https://www.moneycontrol.com/news/technology/deepfakes-meity-gives-social-media-platforms-7-day-ultimatum-to-align-their-policies-to-indian-laws-and-regulations-11805521.html
- https://www.azbpartners.com/bank/amendments-to-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021/#:~:text=Prior%20to%20the%20amendment%2C%20under%20Rule%203(1)
- https://www.drishtiias.com/daily-updates/daily-news-analysis/amendments-to-the-it-rules-2021
- https://youtu.be/zmI2ml1d_Es?feature=shared
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445
In the tapestry of our modern digital ecosystem, a silent, pervasive conflict simmers beneath the surface, where the quest for cyber resilience seems Sisyphean at times. It is in this interconnected cyber dance that the obscure orchestrator, StripedFly, emerges as the maestro of stealth and disruption, spinning a complex, mostly unseen web of digital discord. StripedFly is not some abstract concept; it represents a continual battle against the invisible forces that threaten the sanctity of our digital domain.
This saga of StripedFly is not a tale of mere coincidence or fleeting concern. It is emblematic of a fundamental struggle that defines the era of interconnected technology—a struggle that is both unyielding and unforgiving in its scope. Over the past half-decade, StripedFly has slithered its way into over a million devices, creating a clandestine symphony of cybersecurity breaches, data theft, and unintentional complicity in its agenda. Let's delve deep into this grand odyssey to unravel the odious intricacies of StripedFly and assess the reverberations felt across our collective pursuit of cyber harmony.
The StripedFly malware represents the epitome of a digital chameleon, a master of cyber camouflage, masquerading as a mundane cryptocurrency miner while quietly plotting the grand symphony of digital bedlam. Its deceptive sophistication has effortlessly skirted around the conventional tripwires laid by our cybersecurity guardians for years. The Russian cybersecurity giant Kaspersky's encounter with StripedFly in 2017 brought this ghostly figure into the spotlight—hitherto, a phantom whistling past the digital graveyard of past threats.
How Does It Work
Distinctive in its composition, StripedFly conceals within its modular framework the potential for vast infiltration—an exploitation toolkit designed to puncture the fortifications of both Linux and Windows systems. In an emboldened maneuver, it utilizes a customized version of the EternalBlue SMBv1 exploit—a technique notoriously linked to the enigmatic Equation Group. Through such nefarious channels, StripedFly not only deploys its malicious code but also tenaciously downloads binary files and executes PowerShell scripts with a sinister adeptness unbeknownst to its victims.
Despite its insidious nature, perhaps its most diabolical trait lies in its array of plugin-like functions. It's capable of exfiltrating sensitive information, erasing its tracks, and uninstalling itself with almost supernatural alacrity, leaving behind a vacuous space where once tangible evidence of its existence resided.
In the intricate chess game of cyber threats, StripedFly plays the long game, prioritizing persistence over temporary havoc. Its tactics are calculated—the meticulous disabling of SMBv1 on compromised hosts, the insidious utilization of pilfered keys to propagate itself across networks via SMB and SSH protocols, and the creation of task scheduler entries on Windows systems or employing various methods to assert its nefarious influence within Linux environments.
The Enigma around the Malware
This dualistic entity couples its espionage with monetary gain, downloading a Monero cryptocurrency miner and utilizing the shadowy veils of DNS over HTTPS (DoH) to camouflage its command and control pool servers. This intricate masquerade serves as a cunning, albeit elaborate, smokescreen, lulling security mechanisms into complacency and blind spots.
StripedFly goes above and beyond in its quest to minimize its digital footprint. Not only does it store its components as encrypted data on code repository platforms, deftly dispersed among the likes of Bitbucket, GitHub, and GitLab, but it also harbors a bespoke, efficient TOR client to communicate with its cloistered C2 server out of sight and reach in the labyrinthine depths of the TOR network.
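To see why DoH is such effective camouflage, consider what a DoH "JSON API" lookup actually looks like on the wire: an ordinary HTTPS GET to port 443, indistinguishable from web browsing, so DNS-monitoring tools never see the query. A minimal sketch follows; the resolver endpoint is Cloudflare's public DoH JSON endpoint, while the response body is a hand-written sample for a placeholder domain, not live data.

```python
import json
from urllib.parse import urlencode

def build_doh_url(resolver: str, hostname: str, record_type: str = "A") -> str:
    # A DoH JSON-API query is just an HTTPS GET. On the wire it is
    # indistinguishable from ordinary web traffic, which is how malware
    # like StripedFly can hide C2 lookups from DNS monitoring.
    return f"{resolver}?{urlencode({'name': hostname, 'type': record_type})}"

def parse_doh_answer(body: str) -> list[str]:
    # Extract resolved IPv4 addresses (record type 1) from a DoH JSON
    # response in the format used by Cloudflare's and Google's endpoints.
    reply = json.loads(body)
    return [rr["data"] for rr in reply.get("Answer", []) if rr.get("type") == 1]

# A representative, hand-written response for an illustrative domain.
sample = '{"Status":0,"Answer":[{"name":"example.com","type":1,"TTL":300,"data":"93.184.216.34"}]}'
url = build_doh_url("https://cloudflare-dns.com/dns-query", "example.com")
ips = parse_doh_answer(sample)
```

Defenders who rely solely on inspecting port-53 DNS traffic would see none of this exchange, which is why monitoring for connections to known DoH resolvers is itself a useful detection signal.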
One might speculate on the genesis of this advanced persistent threat—its nuanced approach to invasion, its parallels to EternalBlue, and the artistic flare that permeates its coding style suggest a sophisticated architect. Indeed, the suggestion of an APT actor at the helm of StripedFly invites a cascade of questions concerning the ultimate objectives of such a refined, enduring campaign.
How to Deal with It
To those who stand guard in our ever-shifting cyber landscape, the narrative of StripedFly is a clarion call: an objective reminder of the trench warfare we engage in to preserve the oasis of digital peace within a desert of relentless threats. The StripedFly chronicle stands as a persistent, looming testament to the necessity of heeding the sirens of vigilance and precaution in cyber practice.
Reaffirmation is essential in our quest to demystify the shadows cast by StripedFly, as it punctuates the critical mission to nurture a more impregnable digital habitat. Awareness and dedication propel us forward—the acquisition of knowledge regarding emerging threats, the diligent updating and patching of our systems, and the fortification of robust, multilayered defenses are keystones in our architecture of cyber defense. Together, in concert and collaboration, we stand a better chance of shielding our digital frontier from the dim recesses where threats like StripedFly lurk, patiently awaiting their moment to strike.
References:
https://thehackernews.com/2023/11/stripedfly-malware-operated-unnoticed.html?m=1

Executive Summary:
Traditional Business Email Compromise (BEC) attacks have become smarter, using advanced technologies to enhance their capability. One such technology on the rise is WormGPT, a generative AI tool that is being leveraged by cybercriminals for BEC. This article discusses WormGPT and its features, as well as the risks associated with its use in criminal activities. The purpose is to give a general overview of how WormGPT is involved in BEC attacks and to offer advice on how to prevent them.
Introduction
BEC (Business Email Compromise) is, in simple terms, a kind of cybercrime in which attackers target businesses and attempt to defraud them through email. Earlier, BEC attacks were executed through simple email scams and phishing. In recent years, however, the advancement of AI tools like WormGPT has made such malicious activity more sophisticated and harder to identify. This article examines WormGPT, a generative AI tool, and how it is used to make BEC attacks more effective.
What is WormGPT?
Definition and Overview
WormGPT is a generative AI model designed to create human-like text. It is built on advanced machine learning algorithms, specifically leveraging large language models (LLMs). These models are trained on vast amounts of text data to generate coherent and contextually relevant content. WormGPT is notable for its ability to produce highly convincing and personalised email content, making it a potent tool in the hands of cybercriminals.
How WormGPT Works
1. Training Data: WormGPT is trained on large collections of text, such as emails, articles, and other written material. This extensive training enables it to understand and mimic different writing styles and produce natural-sounding text.
2. Generative Capabilities: Once trained, WormGPT can generate text in response to specific prompts. For example, if a cybercriminal supplies a prompt concerning a company's financial information, WormGPT can produce what appears to be a genuine email requesting further details.
3. Customization: WormGPT can be fine-tuned with a particular industry or organisation in mind. This customization lets attackers tailor their emails to the business activities of the target, increasing the chances that an attack succeeds.
Enhanced Phishing Techniques
Traditional phishing emails are often identifiable by their generic and unconvincing content. WormGPT improves upon this by generating highly personalised and contextually accurate emails. This personalization makes it harder for recipients to identify malicious intent.
Automation of Email Crafting
Previously, creating convincing phishing emails required significant manual effort. WormGPT automates this process, allowing attackers to generate large volumes of realistic emails quickly. This automation increases the scale and frequency of BEC attacks.
Exploitation of Contextual Information
WormGPT can be fed with contextual information about the target, such as recent company news or employee details. This capability enables the generation of emails that appear highly relevant and urgent, further deceiving recipients into taking harmful actions.
Implications for Cybersecurity
Challenges in Detection
The use of WormGPT complicates the detection of BEC attacks. Traditional email security solutions may struggle to identify malicious emails generated by advanced AI, as they can closely mimic legitimate correspondence. This necessitates the development of more sophisticated detection mechanisms.
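Because AI-generated text can no longer be caught by grammar or template tells, detection has to lean on metadata and intent signals rather than prose quality. The sketch below is a deliberately simplified, hypothetical rule set: the header names are standard email headers, but the keyword list and scoring weights are illustrative inventions, not a production filter.

```python
import re
from email.message import EmailMessage

# Illustrative red-flag keywords only; real systems combine many more
# signals (sender reputation, ML classifiers, historical behaviour).
URGENCY = re.compile(r"\b(urgent|immediately|wire transfer|confidential)\b", re.I)

def bec_risk_score(msg: EmailMessage) -> int:
    score = 0
    if URGENCY.search(msg.get_content()):
        score += 1          # pressure / payment language in the body
    reply_to = msg.get("Reply-To", "")
    sender = msg.get("From", "")
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 2          # Reply-To domain differs from From domain
    if "spf=pass" not in msg.get("Authentication-Results", "").lower():
        score += 1          # no recorded passing SPF result
    return score

# A hypothetical lookalike-domain BEC attempt.
msg = EmailMessage()
msg["From"] = "ceo@example.com"
msg["Reply-To"] = "ceo@examp1e.net"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please process this wire transfer immediately. Confidential.")
score = bec_risk_score(msg)
```

The point of the sketch is the shift in emphasis: the Reply-To mismatch and missing authentication result survive even a flawlessly written AI-generated body, whereas keyword checks alone do not.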
Need for Enhanced Training
Organisations must invest in training their employees to recognize signs of BEC attacks. Awareness programs should emphasise the importance of verifying email requests for sensitive information, especially when such requests come from unfamiliar or unexpected sources.
Implementation of Robust Security Measures
- Multi-Factor Authentication (MFA): MFA can add an additional layer of security, making it harder for attackers to gain unauthorised access even if they successfully deceive an employee.
- Email Filtering Solutions: Advanced email filtering solutions that use AI and machine learning to detect anomalies and suspicious patterns can help identify and block malicious emails.
- Regular Security Audits: Conducting regular security audits can help identify vulnerabilities and ensure that security measures are up to date.
Case Studies
Case Study 1: Financial Institution
A financial institution fell victim to a BEC attack orchestrated using WormGPT. The attacker used the tool to craft a convincing email that appeared to come from the institution’s CEO, requesting a large wire transfer. The email’s convincing nature led to the transfer of funds before the scam was discovered.
Case Study 2: Manufacturing Company
In another instance, a manufacturing company was targeted by a BEC attack using WormGPT. The attacker generated emails that appeared to come from a key supplier, requesting sensitive business information. The attack exploited the company’s lack of awareness about BEC threats, resulting in a significant data breach.
Recommendations for Mitigation
- Strengthen Email Security Protocols: Implement advanced email security solutions that incorporate AI-driven threat detection.
- Promote Cyber Hygiene: Educate employees on recognizing phishing attempts and practising safe email habits.
- Invest in AI for Defense: Explore the use of AI and machine learning in developing defences against generative AI-driven attacks.
- Implement Verification Procedures: Establish procedures for verifying the authenticity of sensitive requests, especially those received via email.
Conclusion
WormGPT is a new tool in the arsenal of cybercriminals, one that has made Business Email Compromise attacks easier to mount and more effective. It is therefore critical to inform the defence community about WormGPT's capabilities and its implications for the threat landscape, and to strengthen protection systems against these advanced, constantly evolving threats.
This means developing rigorous security protocols, raising general awareness of security solutions, and incorporating technologies such as artificial intelligence to mitigate, to the best extent possible, the risks that arise from generative AI tools.