#FactCheck - AI-Generated Image of Abhishek Bachchan and Aishwarya Rai Falsely Linked to Kedarnath Visit
A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
Cyber Peace Foundation’s investigation found the viral claim to be false. Our research revealed that the image of Abhishek Bachchan and Aishwarya Rai is not a genuine photograph but an AI-generated one, misleadingly shared as real.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated on social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, several visual inconsistencies suggested that it had been artificially generated. To confirm this, we scanned the image using the AI detection tool Sightengine. According to the tool’s analysis, the image is 84 percent likely to be AI-generated.
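For readers who want to reproduce this step programmatically, the following is a minimal sketch of submitting an image to Sightengine’s AI-image-detection model. The endpoint, the “genai” model name, and the response fields follow Sightengine’s public documentation as we understand it and should be confirmed against the current docs; the credentials and file name are placeholders.

```python
import requests

# Minimal sketch of checking an image against Sightengine's AI-image-detection
# model ("genai"). Endpoint, parameter names, and response fields follow
# Sightengine's public documentation as we understand it; confirm them against
# the current docs before relying on this. Credentials are placeholders.

API_USER = "your_api_user"      # placeholder
API_SECRET = "your_api_secret"  # placeholder

def ai_generated_score(image_path: str) -> float:
    """Return the tool's AI-generated probability (0..1) for a local image."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            "https://api.sightengine.com/1.0/check.json",
            files={"media": image_file},
            data={"models": "genai", "api_user": API_USER, "api_secret": API_SECRET},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # The genai model is expected to report an "ai_generated" probability.
    return result.get("type", {}).get("ai_generated", 0.0)

print(f"AI-generated likelihood: {ai_generated_score('viral_image.jpg'):.0%}")
```

The same pattern of uploading a file and reading back a probability score applies to other detection services such as HIVE Moderation, each with its own endpoint and response schema.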

Additionally, we scanned the same image using another AI detection tool, HIVE Moderation. The result was an even stronger indication, rating the image as 99 percent likely to be AI-generated.

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.
Related Blogs
Executive Summary:
A post on X (formerly Twitter) has gained widespread attention, featuring footage that inaccurately asserts that Houthi rebels attacked a power plant in Ashkelon, Israel. This misleading content has circulated widely amid escalating geopolitical tensions. However, investigation shows that the footage actually originates from a prior incident in Saudi Arabia. The episode underscores the significant dangers posed by misinformation during conflicts and highlights the importance of verifying sources before sharing information.

Claims:
The viral video claims to show Houthi rebels attacking Israel's Ashkelon power plant as part of recent escalations in the Middle East conflict.

Fact Check:
Upon receiving the viral posts, we conducted a Google Lens search on keyframes of the video. The search revealed that the clip circulating online does not show an attack on the Ashkelon power plant in Israel; instead, it depicts a 2022 drone strike on a Saudi Aramco facility in Abqaiq. There are no credible reports of Houthi rebels targeting Ashkelon, as their activities are largely confined to Yemen and Saudi Arabia.
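A reverse image search on a video usually begins by sampling keyframes from the clip. Below is a minimal sketch of that step using OpenCV; the file name and the two-second sampling interval are illustrative choices, and the extracted frames are then uploaded to Google Lens or a similar service.

```python
import cv2  # pip install opencv-python

# Minimal sketch of sampling keyframes from a viral clip so each frame can be
# run through a reverse image search (e.g. Google Lens). The file name and the
# two-second sampling interval are illustrative choices.

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS is unreadable
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

frames = extract_keyframes("viral_clip.mp4")
print(f"Saved {len(frames)} keyframes for reverse image search.")
```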

This incident highlights the risks associated with misinformation during sensitive geopolitical events.
Conclusion:
The assertion that Houthi rebels targeted the Ashkelon power plant in Israel is incorrect. The viral video in question has been misrepresented and actually shows a 2022 incident in Saudi Arabia. This underscores the importance of being cautious when sharing unverified media. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: The video shows a massive fire at Israel's Ashkelon power plant
- Claimed On: Instagram and X (Formerly Known As Twitter)
- Fact Check: False and Misleading

Assembly elections are due to be held in Assam later this year, with polling likely in April or May. Ahead of the elections, a video claiming to be an Aaj Tak news bulletin is being widely circulated on social media.
In the viral video, Aaj Tak anchor Rajiv Dhoundiyal is allegedly seen stating that a leaked intelligence report has issued a warning for the ruling Bharatiya Janata Party (BJP) in Assam. The clip claims that according to this purported report, the BJP may suffer significant losses in the upcoming Assembly elections. Several social media users sharing the video have also claimed that the alleged intelligence report signals the possible removal of Assam Chief Minister Himanta Biswa Sarma from office.
However, an investigation by the Cyber Peace Foundation found the viral claim to be false. Our probe clearly established that no leaked intelligence report related to the Assam Assembly elections exists.
Further, Aaj Tak has neither published nor broadcast any such report on its official television channel, website, or social media platforms. The investigation also revealed that the viral video itself is not authentic and has been created using deepfake technology.
Claim
On social media platform Facebook, a user shared the viral video claiming that the BJP has been pushed on the back foot following organisational changes in the Congress—appointing Priyanka Gandhi Vadra as chairperson of the election screening committee and Gaurav Gogoi as the Assam Pradesh Congress Committee president. The post further claims that an Intelligence Bureau report predicts that the current Assam government will not return to power.
(Link to the post, archive link, and screenshots are available.)
Fact Check:
To verify the claim, we first searched for reports related to any alleged leaked intelligence assessment concerning the Assam Assembly elections using relevant keywords. However, no credible or reliable reports supporting the claim were found. We then reviewed Aaj Tak’s official website, social media pages, and YouTube channel. Our examination confirmed that no such news bulletin has been published or broadcast by the network on any of its official platforms.
- https://www.facebook.com/aajtak/?locale=hi_IN
- https://www.instagram.com/aajtak/
- https://x.com/aajtak
- https://www.youtube.com/channel/UCt4t-jeY85JegMlZ-E5UWtA
To further verify the authenticity of the video, its audio was scanned using the deepfake voice detection tool HIVE Moderation.
The analysis revealed that the voice heard in the video is 99 per cent likely to be AI-generated, clearly indicating that the audio is not genuine but synthetically created.
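For reference, the audio track is typically separated from the video before it is submitted to a voice-analysis tool. The sketch below shows that extraction step with the standard ffmpeg command-line tool; the file names are placeholders, and the actual submission to HIVE Moderation (via its web interface or API) is not shown.

```python
import subprocess

# Minimal sketch of separating the audio track from the viral video before it
# is submitted to a voice-deepfake detection tool. The ffmpeg flags are
# standard; file names are placeholders, and the upload to HIVE Moderation
# (web interface or API) is not shown here.

def extract_audio(video_path: str, audio_path: str = "audio.wav") -> str:
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", video_path,
            "-vn",                   # drop the video stream
            "-acodec", "pcm_s16le",  # uncompressed 16-bit PCM
            "-ar", "16000",          # 16 kHz sample rate, common for analysis
            audio_path,
        ],
        check=True,
    )
    return audio_path

extract_audio("viral_bulletin.mp4")
```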

Additionally, the video was analysed using another AI detection tool, Aurigin AI, which also identified the viral clip as AI-generated.

Conclusion:
The investigation clearly establishes that there is no leaked intelligence report predicting BJP’s defeat in the Assam Assembly elections. Aaj Tak has not published or broadcast any such content on its official platforms. The video circulating on social media is not authentic and has been created using deepfake technology to mislead viewers.

Introduction
A Reuters investigation has exposed a stark contradiction in Meta Platforms' internal handling of online fraud and illicit advertising. Confidential documents reviewed by Reuters show that Meta internally projected that roughly 10% of its 2024 revenue, about USD 16 billion, would come from ads tied to scams and prohibited goods. The findings point to a disturbing paradox: publicly, Meta is a vocal advocate for digital safety and platform integrity, while its internal records indicate a broad tolerance of fraudulent advertising that exploits users throughout the world.
The Scale of the Problem
Internal Meta projections show that its platforms (Facebook, Instagram, and WhatsApp) together display a staggering 15 billion scam ads per day. The advertisements include deceitful e-commerce promotions, fake investment schemes, counterfeit medical products, and unlicensed gambling platforms.
Meta has developed sophisticated detection tools, but it bans an advertiser only when the system is at least 95% certain that the account is fraudulent. Holding enforcement to that high threshold means the company forgoes little revenue. Below the threshold, instead of turning fraud-adjacent advertisers away, it charges them higher ad rates, a strategy referred to internally as “penalty bids”.
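To make the reported logic concrete, here is a purely illustrative sketch of a threshold-based enforcement decision of the kind Reuters describes. It is not Meta's code: the 95 per cent ban threshold comes from the reporting, while the penalty multiplier and the lower "fraud-adjacent" bound are invented for illustration.

```python
from dataclasses import dataclass

# Purely illustrative sketch of the threshold-based enforcement logic Reuters
# describes; this is not Meta's code. The 95% ban threshold comes from the
# reporting, while the penalty multiplier and the 0.5 "fraud-adjacent" bound
# are invented for illustration.

BAN_THRESHOLD = 0.95       # certainty required before an advertiser is banned
PENALTY_MULTIPLIER = 1.5   # assumed surcharge; the real figure is not public

@dataclass
class Advertiser:
    name: str
    fraud_score: float     # model's estimated probability of fraud (0..1)
    base_bid: float        # normal price per impression, in USD

def enforcement_decision(adv: Advertiser) -> str:
    if adv.fraud_score >= BAN_THRESHOLD:
        return f"{adv.name}: banned (score {adv.fraud_score:.2f})"
    if adv.fraud_score >= 0.5:  # assumed lower bound for "fraud-adjacent"
        return f"{adv.name}: kept, penalty bid {adv.base_bid * PENALTY_MULTIPLIER:.2f} USD"
    return f"{adv.name}: kept at base bid {adv.base_bid:.2f} USD"

for advertiser in (Advertiser("A", 0.97, 1.00),
                   Advertiser("B", 0.80, 1.00),
                   Advertiser("C", 0.20, 1.00)):
    print(enforcement_decision(advertiser))
```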
Internal Acknowledgements & Business Dependence
Internal documents dating from 2021 to 2025 reveal that Meta's finance, safety, and lobbying divisions were aware of the scale of revenue generated from scams. A 2025 strategy paper even describes this income as "violating revenue", meaning revenue from ads that breach Meta's policies on scams, gambling, sexual services, and misleading healthcare products.
The company's top executives weighed the cost-benefit trade-off of stricter enforcement. A 2024 internal projection estimated Meta's half-yearly earnings from high-risk scam ads at USD 3.5 billion, while regulatory fines for such violations were expected not to exceed USD 1 billion, making the exposure a tolerable trade-off from a commercial viewpoint. The company does intend to scale down scam-ad revenue gradually, from 10.1% of revenue in 2024 to 7.3% in 2025 and 6% by 2026; however, the documents also reveal a planned slowdown in enforcement to avoid "abrupt reductions" that could affect business forecasts.
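A back-of-the-envelope comparison shows why the documents could frame this as commercially tolerable. Annualising the half-yearly figure (our assumption, simply doubling it) yields roughly USD 7 billion against at most USD 1 billion in projected fines:

```python
# Back-of-the-envelope comparison of the figures in the 2024 projection. The
# annualisation (doubling the half-yearly number) is our assumption; the
# documents quote only the half-yearly figure.

half_yearly_scam_revenue = 3.5e9  # USD, high-risk scam ads (internal estimate)
annual_scam_revenue = 2 * half_yearly_scam_revenue
max_expected_fines = 1.0e9        # USD, projected regulatory exposure

print(f"Annualised scam-ad revenue : {annual_scam_revenue / 1e9:.1f} bn USD")
print(f"Projected regulatory fines : {max_expected_fines / 1e9:.1f} bn USD")
print(f"Revenue-to-fine ratio      : {annual_scam_revenue / max_expected_fines:.0f}:1")
```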
Algorithmic Amplification of Scams
One of the most alarming findings is that Meta's own advertising algorithms amplify scam content. Users who click on fraudulent ads are reportedly shown more of the same, because the platform's personalisation engine interprets the click as a signal of user "interest."
This creates a self-reinforcing feedback loop in which engagement with scam content increases how much of it is displayed. The result is a digital environment that rewards deceptive engagement, eroding user trust and amplifying systemic risk.
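The feedback loop can be illustrated with a toy simulation: each click on a scam ad is read as "interest" and increases the share of scam ads served in later sessions. This is a deliberately simplified model, not a description of Meta's actual ranking system; all parameters are invented.

```python
import random

# Toy model of click-driven ad personalisation (illustrative only; not Meta's
# ranking system). A click on a scam ad is read as "interest", so the share of
# scam ads served in later sessions grows. All parameters are invented.

def simulate_sessions(n_sessions=10, ads_per_session=20, scam_share=0.05,
                      click_prob=0.3, boost=1.5, seed=42):
    random.seed(seed)
    share, history = scam_share, []
    for session in range(1, n_sessions + 1):
        scam_ads = sum(random.random() < share for _ in range(ads_per_session))
        clicked = any(random.random() < click_prob for _ in range(scam_ads))
        if clicked:
            share = min(1.0, share * boost)  # "interest" boosts the category
        history.append((session, scam_ads, round(share, 3)))
    return history

for session, scam_ads_shown, next_share in simulate_sessions():
    print(f"session {session:2d}: scam ads shown={scam_ads_shown:2d}, "
          f"next-session scam share={next_share}")
```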
An internal presentation from May 2025 reportedly quantified how deeply the platform's ad ecosystem is intertwined with the global fraud economy, estimating that a third of all successful scams in the U.S. involved advertising on Meta's platforms.
Regulatory & Legal Implications
The disclosures come as regulators in the US and UK subject the company's activities to closer scrutiny than ever before.
- The U.S. Securities and Exchange Commission (SEC) is said to be looking into whether Meta has had any part in the promotion of fraudulent financial ads.
- The UK’s Financial Conduct Authority (FCA) found that Meta’s platforms were the main source of online payment scams, accounting for more losses in 2023 than all other social platforms combined.
Meta’s spokesperson, Andy Stone, initially pushed back on the findings, calling the leaked figures “rough and overly-inclusive”; he nevertheless conceded that the company’s ongoing enforcement efforts had reduced revenue and would continue to do so.
Operational Challenges & Policy Gaps
The internal documents also reveal the weaknesses in Meta's day-to-day operations when it comes to the implementation of its own policies.
- Following the large-scale layoffs of 2023, the team responsible for tackling advertiser brand impersonation was reportedly dissolved.
- Scam ads were categorised as a "low severity" issue, treated more as a "bad user experience" than as a critical security risk.
- At the end of 2023, users were submitting around 100,000 legitimate scam reports per week, of which Meta dismissed or rejected 96%.
Human Impact: When Fraud Becomes Personal
The financial and ethical issues have tangible human consequences. The Reuters investigation documented multiple cases of individuals defrauded through hijacked Meta accounts.
One striking example involves a Canadian Air Force recruiter, whose hacked Facebook account was used to promote fake cryptocurrency schemes. Despite over a hundred user reports, Meta failed to act for weeks, during which several victims, including military colleagues, lost tens of thousands of dollars.
The case underscores not just platform negligence, but also the difficulty of law enforcement collaboration. Canadian authorities confirmed that funds traced to Nigerian accounts could not be recovered due to jurisdictional barriers, a recurring issue in transnational cyber fraud.
Ethical and Cybersecurity Implications
The investigation raises critical questions from a cyber policy perspective:
- Platform Accountability: By prioritising revenue over user safety and integrity, Meta's practices run counter to the principles of responsible digital governance.
- Transparency in Ad Ecosystems: Opaque digital advertising systems make it easy for dishonest actors to exploit automated ad placement with little oversight.
- Algorithmic Responsibility: When algorithms amplify the visibility and targeting of misleading content, the platform becomes an active participant in the fraud rather than a neutral conduit.
- Regulatory Harmonisation: Fragmented and disconnected enforcement frameworks across jurisdictions hamper efforts to tackle cross-border cybercrime.
- Public Trust: Users’ trust in the digital ecosystem depends largely on the safety they experience and the accountability of the companies behind the platforms.
Conclusion
Meta’s internal records reveal a troubling mix of profit-seeking, lax enforcement, and policy failure around scam-related ads. The platform’s willingness to tolerate, and even profit from, fraudulent advertisers while acknowledging the harm they cause calls for an urgent global rethink of advertising ethics, regulatory enforcement, and algorithmic transparency.
As Meta expands its AI-driven operations and advertising networks, protecting its users must evolve from a public-relations goal into a core business requirement, backed by verifiable accountability measures, independent audits, and regulatory oversight. Billions of users rely on Meta’s platforms; their digital safety is a right that must be respected and enforced, not left optional.
References
- https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/
- https://www.indiatoday.in/technology/news/story/leaked-docs-claim-meta-made-16-billion-from-scam-ads-even-after-deleting-134-million-of-them-2815183-2025-11-07