#FactCheck - Viral Photo of Modi and Rahul Gandhi in Parliament Found to Be AI-Generated
Executive Summary
An image showing Prime Minister Narendra Modi and Rahul Gandhi, Congress MP and Leader of the Opposition in the Lok Sabha, standing face to face inside Parliament is going viral on social media. Several users are sharing the image claiming that the photograph was taken during the ongoing Budget Session, suggesting a direct face-off between the two leaders inside Parliament. However, research conducted by CyberPeace has found that the viral claim is false. The image in question is not real but has been generated using Artificial Intelligence (AI), and it is now being shared on social media with a misleading claim.
Claim
A Facebook user named Madhu Davi shared the viral image on January 30, 2026, with the caption: “If this photo is from today and the Budget Session, it is commendable. RAGA Zindabad.”
(Archived version of the post available here.)
- https://www.facebook.com/photo/?fbid=759145877237871&set=a.110639115421887
- https://perma.cc/N2XD-TZ32?type=image

Fact Check:
To verify the viral claim, we first conducted a keyword search on Google to check whether any credible media outlet had reported such an incident during the Budget Session. However, no news reports supporting the claim were found. We then performed a reverse image search using Google Lens, but this too did not yield any reliable media reports or evidence confirming the authenticity of the image. This raised suspicion that the image might be AI-generated. To further verify, the image was analysed using the AI detection tool Hive Moderation. The tool indicated a probability of over 99 per cent that the image was generated using Artificial Intelligence.
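Detection tools of the kind used above typically return per-class probabilities rather than a binary verdict, so a practical triage step is to flag an image only when its AI-generated score clears a high threshold. The sketch below illustrates that step; the response structure and field names are illustrative assumptions, not Hive Moderation's actual API schema.

```python
# Sketch: flag an image as likely AI-generated by thresholding the
# "ai_generated" class probability. The response layout below is a
# hypothetical example, NOT Hive Moderation's real API schema.

def is_likely_ai_generated(detection: dict, threshold: float = 0.99) -> bool:
    """Return True when the AI-generated probability meets the threshold."""
    scores = {c["class"]: c["score"] for c in detection["classes"]}
    return scores.get("ai_generated", 0.0) >= threshold

# Example response resembling the "over 99 per cent" result described above.
sample = {"classes": [
    {"class": "ai_generated", "score": 0.994},
    {"class": "not_ai_generated", "score": 0.006},
]}

print(is_likely_ai_generated(sample))  # 0.994 clears the 0.99 threshold
```

In practice the threshold is a judgment call: a high cut-off reduces false positives, and a tool's score should be corroborated with other checks (reverse image search, provenance) before publishing a verdict.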

Conclusion
CyberPeace research confirms that the image being circulated with the claim that Prime Minister Narendra Modi and Rahul Gandhi came face to face during the Budget Session is fake. The viral image has been created using AI and is being shared with a false and misleading narrative.

Introduction
The Indian Cyber Crime Coordination Centre (I4C) was established by the Ministry of Home Affairs (MHA) to provide a framework and ecosystem for law enforcement agencies (LEAs) to deal with cybercrime in a coordinated and comprehensive manner. The MHA approved the scheme for establishing I4C in October 2018, and it was inaugurated by Home Minister Amit Shah in January 2020. I4C is envisaged to act as the nodal point for curbing cybercrime in the country. Recently, on 13th March 2024, the Centre designated I4C as an agency of the MHA to perform functions under the Information Technology Act, 2000, and to notify instances of unlawful cyber activities.
The gazette notification dated 13th March 2024 reads as follows:
“In exercise of the powers conferred by clause (b) of sub-section (3) of section 79 of the Information Technology Act 2000, Central Government being the appropriate government hereby designate the Indian Cybercrime Coordination Centre (I4C), to be the agency of the Ministry of Home Affairs to perform the functions under clause (b) of sub-section (3) of section 79 of Information Technology Act, 2000 and to notify the instances of information, data or communication link residing in or connected to a computer resource controlled by the intermediary being used to commit the unlawful act.”
Impact
Now, the Indian Cyber Crime Coordination Centre (I4C) is empowered to issue direct takedown notices under Section 79(3)(b) of the IT Act, 2000. Any information, data or communication link residing in or connected to a computer resource controlled by any intermediary and being used to commit unlawful acts can be notified by I4C to that intermediary. If an intermediary fails to expeditiously remove or disable access to such material after being notified, it will no longer be eligible for protection under Section 79 of the IT Act, 2000.
Safe Harbour Provision
Section 79 of the IT Act also serves as a safe harbour provision for intermediaries. It states that "an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by him". However, this legal immunity is lost if the intermediary "fails to expeditiously" take down a post or remove particular content after the government or its agencies flag that the information is being used to commit something unlawful. Furthermore, intermediaries are also obliged to perform due diligence on their platforms, comply with the applicable rules and regulations, and maintain and promote a safe digital environment on their respective platforms.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the government has also mandated that a ‘significant social media intermediary’ must appoint a Chief Compliance Officer (CCO), a Resident Grievance Officer (RGO) and a Nodal Contact Person, and publish a monthly compliance report detailing the complaints received and the action taken on them.
I4C's Role in Safeguarding Cyberspace
The Indian Cyber Crime Coordination Centre (I4C) is actively working on initiatives to combat emerging threats in cyberspace. As a crucial arm of the Ministry of Home Affairs, Government of India, it works extensively to combat cybercrime and ensure the overall safety of netizens. The ‘National Cyber Crime Reporting Portal’, equipped with the 24x7 helpline number 1930, is one of the key components of I4C.
Components Of The I4C
- National Cyber Crime Threat Analytics Unit
- National Cyber Crime Reporting Portal
- National Cyber Crime Training Centre
- Cyber Crime Ecosystem Management Unit
- National Cyber Crime Research and Innovation Centre
- National Cyber Crime Forensic Laboratory Ecosystem
- Platform for Joint Cyber Crime Investigation Team
Conclusion
I4C, through its initiatives and collaborative efforts, plays a pivotal role in safeguarding cyberspace and ensuring the safety of netizens, reinforcing India's commitment to combating cybercrime and promoting a secure digital environment. The recent designation of I4C as the agency to notify instances of unlawful activities in cyberspace is a significant step towards countering cybercrime and fostering an ethical and safe digital environment for netizens.
References
- https://www.deccanherald.com/india/centre-designates-i4c-as-agency-of-mha-to-notify-unlawful-activities-in-cyber-world-2936976
- https://www.business-standard.com/india-news/home-ministry-authorises-i4c-to-issue-takedown-notices-under-it-act-124031500844_1.html
- https://www.hindustantimes.com/india-news/it-ministry-empowers-i4c-to-notify-instances-of-cybercrime-101710443217873.html
- https://i4c.mha.gov.in/about.aspx#:~:text=Objectives%20of%20I4C,identifying%20Cybercrime%20trends%20and%20patterns

Executive Summary
Amid the ongoing conflict involving the US, Israel and Iran, a video of Indian Prime Minister Narendra Modi is being widely circulated on social media. In the clip, he is allegedly heard supporting Israel and calling Iran a “terrorist state”. The video also appears to show him speaking about the idea of “Akhand Bharat”. Many users are sharing this video as genuine. However, detailed research by CyberPeace found that the claim is false. The viral video is a deepfake created using AI technology.
Claim:
A Facebook page named “Pushpendra Kulshreshtha” shared the video on March 23, 2026, with a caption suggesting that PM Modi made strong remarks in support of Israel and against Iran.

Fact Check:
To verify the claim, we first conducted a keyword search to find any credible reports or official statements where PM Modi made such remarks. However, no reliable news reports or authentic videos supporting the claim were found. We then extracted keyframes from the viral video and performed a reverse image search using Google Lens. This led us to the original video posted on the X (formerly Twitter) handle of ANI on March 12, 2026.

The visuals, including PM Modi’s attire and the stage setup, matched the viral clip—indicating that the fake video was created using this original footage. However, in the authentic video, PM Modi did not make any statements about Iran, Israel, or “Akhand Bharat” as seen in the viral version. In the original footage, PM Modi is seen addressing the NXT Summit in Delhi, where he spoke about the global energy crisis arising from ongoing conflicts and highlighted the expansion of LPG and PNG facilities in India. Additionally, a customised keyword search led us to a press release issued by the Prime Minister's Office regarding his address at the summit. The statement heard in the viral clip was not found there either.
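The keyframe-extraction step used above can be approximated by keeping only frames that differ substantially from the last frame kept, so that each selected frame represents a distinct scene worth reverse-searching. A minimal sketch with NumPy follows; a real pipeline would first decode the video with a library such as OpenCV, and the difference threshold here is an arbitrary assumption.

```python
import numpy as np

def extract_keyframes(frames, threshold=20.0):
    """Return indices of frames whose mean absolute pixel difference
    from the last kept frame exceeds `threshold` (0-255 grayscale)."""
    keyframes = []
    last = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.abs(f - last).mean() > threshold:
            keyframes.append(i)  # scene change detected; keep this frame
            last = f
    return keyframes

# Toy "video": three near-identical dark frames, then a bright scene change.
video = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 12, 11, 200, 201)]
print(extract_keyframes(video))  # -> [0, 3]
```

Each kept frame can then be saved as an image and uploaded to a reverse image search such as Google Lens, which is how the viral clip was traced back to the original ANI footage.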

Conclusion:
The viral video of PM Modi is a deepfake. He did not make any statement calling Iran a “terrorist state” or expressing support for Israel in the manner shown. The original video is from a summit held in Delhi and has been manipulated using AI to spread misleading claims.
What is a Deepfake?
Deepfakes are a fascinating but unsettling phenomenon now prominent in the digital age. These highly convincing videos are lifelike yet completely fabricated, and they have become an established feature of our digital environment. While such creations have an undeniably captivating quality, their misuse carries serious ramifications for our globalised digital culture. After several actors, business tycoon Ratan Tata has become the latest victim of a deepfake: he called out a post from a user that used a fake video interview of him recommending investments.
Case Study
The nuisance of deepfakes is sparing no one; actors, politicians and entrepreneurs alike are getting caught in the trap. Soon after actresses Rashmika Mandanna, Katrina Kaif, Kajol and others fell prey to this rising trend, a new case emerged that took Mr. Ratan Tata by storm. The business tycoon took to social media, sharing a screenshot of an Instagram post containing a fake video interview in which he appears to urge people to invest money in a project. In the video, posted by an Instagram user, Tata is seen advising the Indian public about a supposed opportunity to grow their money with no risk and a 100% guarantee, with the caption urging viewers to "Go to the channel right now."
Tata annotated both the video and the screenshot of the caption with the word "FAKE."
Ongoing Deepfake Assaults in India
Deepfake videos continue to target celebrities, and Priyanka Chopra is another recent victim of this unsettling trend. Her deepfake takes a different approach from other examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance unchanged but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The deceptive video shows Priyanka promoting a product and talking about her annual income, highlighting the worrying evolution of deepfake technology and its potential effect on prominent personalities.
Prevention and Detection
To combat the growing threat posed by deepfake technology, people and institutions should prioritise developing critical thinking skills, carefully examining visual and auditory cues for discrepancies, using tools such as reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking claims against reputable media sources. Important steps to improve resilience against deepfake threats include putting strong security policies in place, integrating advanced deepfake detection technologies, supporting the development of ethical AI, and encouraging open communication and cooperation. By combining these tactics and adapting to a constantly changing landscape, we can manage the problems posed by deepfake technology effectively and mindfully.
Conclusion
The recent instance involving Ratan Tata illustrates how the rise of deepfake technology poses an imminent danger to our digital society. The fake video, posted to Instagram, showed the business tycoon giving financial advice and luring followers with low-risk investment options. Tata quickly called the footage out as "FAKE", highlighting the need for careful media consumption. The incident is a reminder of the damage deepfakes can do to prominent people's reputations, and it demands that public figures be more mindful of the potential misuse of their virtual identities. By emphasising preventive measures such as strict safety regulations and state-of-the-art deepfake detection technologies, we can collectively strengthen our defences against this insidious phenomenon and maintain the trustworthiness of our online culture.
References
- https://economictimes.indiatimes.com/magazines/panache/ratan-tata-slams-deepfake-video-that-features-him-giving-risk-free-investment-advice/articleshow/105805223.cms
- https://www.ndtv.com/india-news/ratan-tata-flags-deepfake-video-of-his-interview-recommending-investments-4640515
- https://www.businesstoday.in/bt-tv/short-video/viralvideo-business-tycoon-ratan-tata-falls-victim-to-deepfake-408557-2023-12-07
- https://www.livemint.com/news/india/false-ratan-tata-calls-out-a-deepfake-video-of-him-giving-investment-advice-11701926766285.html