#FactCheck - Viral Video Showing Man Frying Bhature on His Stomach Is AI-Generated
A video circulating on social media shows a man allegedly rolling out bhature on his stomach and then frying them in a pan. The clip is being shared with a communal narrative, with users making derogatory remarks while falsely linking the act to a particular community.
CyberPeace Foundation’s research found the viral claim to be false. Our probe confirms that the video is not real but has been created using artificial intelligence (AI) tools and is being shared online with a misleading and communal angle.
Claim
On January 5, 2025, several users shared the viral video on the social media platform X (formerly Twitter). One such post carried a communal caption suggesting that the person shown in the video does not belong to a particular community and making offensive remarks about hygiene and food practices.
- Post Link: https://x.com/RightsForMuslim/status/2008035811804291381
- Archive Link: https://archive.ph/lKnX5

Fact Check:
Upon closely examining the viral video, several visual inconsistencies and unnatural movements were observed, raising suspicion about its authenticity. These anomalies are commonly associated with AI-generated or digitally manipulated content.
To verify this, the video was analysed using the AI detection tool Hive Moderation. The tool rated the video as 97 percent likely to be AI-generated, strongly indicating that it was synthetically created rather than recorded in real life.

Conclusion
CyberPeace Foundation’s research clearly establishes that the viral video is AI-generated and does not depict a real incident. The clip is being deliberately shared with a false and communal narrative to mislead users and spread misinformation on social media. Users are advised to exercise caution and verify content before sharing such sensational and divisive material online.
Related Blogs

Executive Summary:
A photo allegedly showing an Israeli Army dog attacking an elderly Palestinian woman has been circulating on social media. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical elements, a watermark (“IN.VISUALART”), and visual anomalies. Although several news channels reported on a real incident, the viral image was not taken during the actual event. This underscores the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analysed the image and found discrepancies commonly seen in AI-generated images: the watermark “IN.VISUALART” is clearly visible, and the elderly woman’s hand appears anatomically distorted.

We then ran the image through two AI-image detection tools, True Media and the Content at Scale AI detector. Both flagged potential AI manipulation in the image.



We then ran a keyword search for news coverage of the viral photo. Although we found reports about the underlying incident, we could not trace the image itself to any credible source.

Since the photograph circulating online has no credible source, we conclude that the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. While several news channels reported that an incident of this nature did occur, the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading

Introduction
A Reuters investigation has exposed serious gaps in Meta Platforms' internal measures against online fraud and illicit advertising. Confidential documents reviewed by Reuters disclose that Meta projected approximately 10% of its 2024 revenue, about USD 16 billion, to come from ads related to scams and prohibited goods. The findings point to a disturbing paradox: while Meta is a vocal advocate for digital safety and platform integrity, its internal records indicate that the company knowingly tolerated a vast stream of fraudulent advertising that exploits users throughout the world.
The Scale of the Problem
Internal Meta projections show that its platforms, Facebook, Instagram, and WhatsApp, together display an estimated 15 billion scam ads per day. These include deceitful e-commerce promotions, fake investment schemes, counterfeit medical products, and unlicensed gambling platforms.
Meta has developed sophisticated detection tools, yet the company bans an advertiser only when its systems are at least 95% certain that the advertiser is a fraudster. Setting the removal threshold that high minimises Meta's lost ad revenue. Advertisers who fall below the threshold but still look suspicious are not turned away; instead, Meta charges them higher ad rates, a strategy internally called "penalty bids".
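The enforcement logic described in the reporting can be sketched as a simple decision rule. This is only an illustration built from the figures Reuters cites (the 95% ban threshold and the existence of a "penalty bid" surcharge); the function name, the surcharge multiplier, and the "suspected" band are hypothetical, as no internal values for them have been published.

```python
def handle_advertiser(fraud_score: float, base_rate: float) -> dict:
    """Illustrative sketch of the reported 'penalty bid' logic.

    fraud_score: model confidence (0.0-1.0) that the advertiser is a fraudster.
    base_rate:   normal price per ad impression.
    """
    BAN_THRESHOLD = 0.95      # reported threshold for outright removal
    PENALTY_MULTIPLIER = 1.5  # hypothetical surcharge; Reuters gives no figure

    if fraud_score >= BAN_THRESHOLD:
        # Only near-certain fraudsters are removed from the platform.
        return {"action": "ban", "rate": None}
    if fraud_score >= 0.5:    # hypothetical "suspected" band
        # Suspected-but-unproven fraudsters keep advertising at a higher price.
        return {"action": "penalty_bid", "rate": base_rate * PENALTY_MULTIPLIER}
    return {"action": "serve", "rate": base_rate}

print(handle_advertiser(0.97, 1.00))  # high-confidence fraudster: banned
print(handle_advertiser(0.80, 1.00))  # suspected fraudster: still served, at a premium
```

The sketch makes the incentive problem visible: everywhere below the 95% line, suspected fraud is not a cost to the platform but a revenue premium.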
Internal Acknowledgements & Business Dependence
Internal documents dating from 2021 to 2025 reveal that Meta's finance, safety, and lobbying divisions were all aware of the scale of revenue generated from scams. A 2025 strategy paper even labels this income "violating revenue", meaning it comes from ads that breach Meta's own policies on scams, gambling, sexual services, and misleading healthcare products.
The company's top executives weighed the costs and benefits of stricter enforcement. A 2024 internal projection estimated Meta's half-yearly earnings from high-risk scam ads at USD 3.5 billion, whereas regulatory fines for such violations were expected not to exceed USD 1 billion, making the fines a tolerable trade-off from a commercial viewpoint. The company does plan to scale down scam-ad revenue gradually, from 10.1% of total revenue in 2024 to 7.3% in 2025 and 6% in 2026; however, the documents also reveal a deliberately slow enforcement ramp designed to avoid "abrupt reductions" that could affect business forecasts.
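The cost-benefit calculus in that projection is stark when written out. The figures below are the ones reported from the documents (half-yearly scam-ad revenue of USD 3.5 billion against at most USD 1 billion in fines); the net-gain arithmetic is our own illustration of why the trade-off looked "tolerable".

```python
# Figures reported from Meta's 2024 internal projection (USD, billions).
half_yearly_scam_revenue = 3.5
max_regulatory_fines = 1.0   # reported upper bound on expected fines

annual_scam_revenue = 2 * half_yearly_scam_revenue
worst_case_net = half_yearly_scam_revenue - max_regulatory_fines

# Even if the maximum fine landed every half-year, scam ads would still
# net USD 2.5 billion per half-year - the "tolerable trade-off".
print(f"Annualised scam-ad revenue: USD {annual_scam_revenue:.1f}B")
print(f"Half-yearly net in the worst fine scenario: USD {worst_case_net:.1f}B")
```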
Algorithmic Amplification of Scams
One of the most alarming findings is that Meta's own advertising algorithms amplify scam content. Users who click on fraudulent ads are reportedly more likely to be shown similar ads, because the platform's personalisation engine interprets the click as "interest".
This creates a self-reinforcing feedback loop: the more a user engages with scam content, the more of it the platform displays. The result is a digital environment that rewards deceptive engagement, erodes user trust, and amplifies systemic risk.
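The feedback loop can be illustrated with a toy simulation. Everything here is hypothetical: a personalisation engine that simply up-weights whatever category the user clicked will, for a user who sometimes clicks scam ads, steadily raise the share of scam ads it serves, even from a small starting share.

```python
import random

def simulate_feed(rounds: int = 1000, seed: int = 0) -> float:
    """Toy model: clicking a scam ad raises the weight of scam ads shown next.

    Returns the fraction of served ads that were scams.
    """
    rng = random.Random(seed)
    scam_weight, legit_weight = 1.0, 9.0   # start: 10% of inventory is scams
    scams_served = 0
    for _ in range(rounds):
        p_scam = scam_weight / (scam_weight + legit_weight)
        if rng.random() < p_scam:
            scams_served += 1
            # Engagement "confirms interest": the engine up-weights scam ads.
            if rng.random() < 0.5:          # hypothetical click-through rate
                scam_weight *= 1.1
    return scams_served / rounds

print(f"Scam share of served ads over the session: {simulate_feed():.0%}")
```

Because the up-weighting is multiplicative and nothing pushes the weight back down, the scam share drifts well above its 10% starting point; this is the self-reinforcing dynamic the internal documents describe.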
A May 2025 internal presentation reportedly quantified how deeply the platform's ad ecosystem was intertwined with the global fraud economy, estimating that one-third of all successful scams in the U.S. involved advertising on Meta's platforms.
Regulatory & Legal Implications
The disclosures arrive as the US and UK governments scrutinise the company's activities more closely than ever before.
- The U.S. Securities and Exchange Commission (SEC) is said to be looking into whether Meta has had any part in the promotion of fraudulent financial ads.
- The UK’s Financial Conduct Authority (FCA) found that Meta’s platforms were the leading source of online payment scams, accounting for more losses in 2023 than all other social platforms combined.
Meta’s spokesperson, Andy Stone, initially disputed the accusations, calling the leaked figures “rough and overly-inclusive”; nevertheless, he conceded that the company’s ongoing enforcement efforts had reduced revenue and would continue to do so.
Operational Challenges & Policy Gaps
The internal documents also reveal the weaknesses in Meta's day-to-day operations when it comes to the implementation of its own policies.
- Following the large-scale layoffs of 2023, the entire team handling advertiser-brand impersonation was reportedly dissolved.
- Scam ads were categorised as a "low severity" issue, treated as a "bad user experience" rather than a critical security risk.
- At the end of 2023, users were submitting around 100,000 legitimate scam reports per week, of which Meta dismissed or rejected 96%.
Human Impact: When Fraud Becomes Personal
The financial and ethical issues have tangible human consequences. The Reuters investigation documented multiple cases of individuals defrauded through hijacked Meta accounts.
One striking example involves a Canadian Air Force recruiter, whose hacked Facebook account was used to promote fake cryptocurrency schemes. Despite over a hundred user reports, Meta failed to act for weeks, during which several victims, including military colleagues, lost tens of thousands of dollars.
The case underscores not just platform negligence, but also the difficulty of law enforcement collaboration. Canadian authorities confirmed that funds traced to Nigerian accounts could not be recovered due to jurisdictional barriers, a recurring issue in transnational cyber fraud.
Ethical and Cybersecurity Implications
The investigation raises fundamental questions for cyber policy:
- Platform Accountability: By prioritising revenue over user safety, Meta is acting against the principles of responsible digital governance.
- Transparency in Ad Ecosystems: The opacity of digital advertising systems makes it easy for dishonest actors to run automated campaigns with very little supervision.
- Algorithmic Responsibility: When algorithms boost the visibility and targeting of misleading content, the platform becomes directly implicated in the fraud.
- Regulatory Harmonisation: Fragmented enforcement frameworks across jurisdictions hamper efforts to tackle cross-border cybercrime.
- Public Trust: Users’ trust in the digital ecosystem depends on the safety they experience and the accountability that companies demonstrate.
Conclusion
Meta’s records show a troubling mix of profit motives, lax enforcement, and policy failure around scam-related ads. The platform’s readiness to accept, and even profit from, fraudulent advertisers while acknowledging the harm they cause calls for an immediate global rethinking of advertising ethics, regulatory enforcement, and algorithmic transparency.
As Meta expands its AI-driven operations and advertising networks, protecting its users must evolve from a public-relations goal into a core business obligation, backed by verifiable accountability measures, independent audits, and regulatory oversight. Billions of users rely on Meta’s platforms, and their right to digital safety must be respected and enforced rather than treated as optional.
References
- https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/
- https://www.indiatoday.in/technology/news/story/leaked-docs-claim-meta-made-16-billion-from-scam-ads-even-after-deleting-134-million-of-them-2815183-2025-11-07

Introduction
The Telecom Regulatory Authority of India (TRAI) issued a consultation paper titled “Encouraging Innovative Technologies, Services, Use Cases, and Business Models through Regulatory Sandbox in Digital Communication Sector”. The paper presents a draft sandbox framework for live testing of new digital communication products and services in a regulated environment. TRAI seeks comments from stakeholders on several parts of the framework.
What is digital communication?
Digital communication is the use of internet tools such as email, social media messaging, and texting to communicate with other people or a specific audience. Even something as simple as viewing the content on this webpage qualifies as digital communication.
Aim of Paper
- Regulatory sandbox frameworks are intended to support innovation while ensuring economic resilience and consumer protection. Against this backdrop, the Department of Telecom (DoT) asked TRAI to offer recommendations on a regulatory sandbox framework. TRAI approaches the issue with the goal of encouraging innovation and hastening the adoption of cutting-edge digital communication technologies.
- Artificial intelligence, the Internet of Things, edge computing, and other emerging technologies are revolutionizing how we connect, communicate, and access information, driving the digital communication sector to rapidly expand. To keep up with this dynamic environment, an enabling environment for the development and deployment of novel technologies, services, use cases, and business models is required.
- The regulatory sandbox concept is becoming increasingly popular around the world as a means of encouraging innovation in a range of industries. A regulatory sandbox is a regulated environment in which businesses and innovators can test their concepts, commodities, and services while operating under changing restrictions.
- A regulatory sandbox will benefit the telecom startup ecosystem by providing access to a real-time network environment and other data, allowing startups to evaluate the reliability of new applications before releasing them to the market. It also aims to stimulate cross-sectoral collaboration for such testing by engaging other ministries and departments, so as to give the startup a single window for obtaining all clearances.
What is regulatory sandbox?
- A regulatory sandbox is a controlled regulatory environment in which new products or services are tested in real-time.
- It serves as a “safe space” for businesses because authorities may or may not allow certain relaxations for the sole purpose of testing.
- The sandbox enables the regulator, innovators, financial service providers, and clients to perform field testing in order to gather evidence on the benefits and hazards of new financial innovations, while closely monitoring and mitigating their risks.
What are the advantages of having a regulatory sandbox?
- Firstly, regulators obtain first-hand empirical evidence on the benefits and risks of emerging technologies and their implications, allowing them to form an informed opinion on the regulatory changes or new regulations that may be required to support useful innovation while mitigating the associated risks.
- Secondly, innovators can evaluate a product’s viability without a wider and more expensive roll-out. If the product shows a high chance of success, it may be authorised and brought to the broader market more quickly.
Digital communication sector and Regulatory Sandbox
- Many countries’ regulatory organizations have built sandbox settings for telecom tech innovation.
- These frameworks are intended to encourage regulators’ desire for innovation while also promoting economic resilience and consumer protection.
- In this context, the Department of Telecom (DoT) had asked TRAI to give recommendations on a regulatory sandbox framework.
- Written comments on the draft framework will be received until July 17, 2023, and counter-comments until August 1, 2023. The Authority’s goal in the digital communication industry is to foster innovation and expedite the use of emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), and edge computing, which are changing the way individuals connect, engage, and access information.
Conclusion
According to TRAI, these technologies are changing how individuals connect, engage, and obtain information, resulting in significant changes in the sector.
The regulatory sandbox also aims to stimulate cross-sectoral collaboration for such testing by engaging other ministries and departments, giving the startup a single window for obtaining all clearances. The consultation paper covers some of the regulatory sandbox frameworks in use worldwide in the digital communication industry, as well as frameworks used within the country in other sectors.