Fact Check: Old image of Hindu priest with Donald Trump at the White House goes viral as recent
Executive Summary:
Our team recently came across a post on X (formerly Twitter) in which a photo of a Hindu priest performing a Vedic prayer in Washington was widely shared with misleading captions implying it was taken after the recent elections. After investigating, we found that it actually shows a ritual performed by a Hindu priest at a White House event in 2020 to pray for an end to the COVID-19 pandemic. Always verify claims before sharing.

Claim:
An image circulating after Donald Trump’s win in the US election purportedly shows Pujari Harish Brahmbhatt at the White House recently.

Fact Check:
Our analysis found that the image comes from an old post uploaded in May 2020. Through a reverse image search, we traced it to a National Day of Prayer service held in the Rose Garden of the White House, where a Hindu priest recited the sacred Vedic Shanti Path, or peace prayer, alongside other religious leaders, praying for the health, safety and well-being of everyone affected by the coronavirus pandemic and for an end to COVID-19.

Conclusion:
The viral claim that the image shows a Hindu priest performing a Vedic prayer at the White House after Donald Trump’s recent election win isn’t true. The photo is actually from a 2020 event and is being shared with misleading context.
Before sharing viral posts, take a brief moment to verify the facts. Misinformation spreads quickly and it’s far better to rely on trusted fact-checking sources.
- Claim: Hindu priest held a Vedic prayer at the White House under Trump
- Claimed On: Instagram and X (formerly known as Twitter)
- Fact Check: False and Misleading

Executive Summary
A video is being widely shared on social media showing a police officer driving an e-rickshaw, while two other policemen are seen in the back seat. Users sharing the clip claim that, due to a shortage of petrol, this is a new initiative by the Uttar Pradesh Police. However, research by CyberPeace found the viral claim to be false. Our research also confirms that the video is not real but AI-generated.
Claim
An Instagram user shared the viral video claiming that due to fuel shortages, Uttar Pradesh Police has started patrolling using e-rickshaws.
- Post link: https://www.instagram.com/reel/DWepKWXAeiE/
- Archive: https://archive.ph/QBNXs

Fact Check
To verify the claim, we first conducted a keyword search on Google but found no credible media reports supporting this claim.

Next, we extracted keyframes from the viral video and performed a reverse image search on them using Google Lens. During this process, we found the same video uploaded to an Instagram account on March 28, 2026. The uploader clearly mentioned that the video was created purely for entertainment purposes.

We further analyzed the video using AI detection tools. When scanned with Hive Moderation, the results indicated that the video is approximately 94% AI-generated.

In the next step, we also tested the clip using DeepAI. According to its analysis, the video is about 97% AI-generated.

Conclusion
Our research clearly shows that the viral video is not authentic. It is an AI-generated clip created for entertainment purposes, and the claim that Uttar Pradesh Police has started e-rickshaw patrolling due to petrol shortage is false.

Introduction
Betting has long been associated with sporting activities and has found a growing presence in online gaming and esports globally. As the esports industry continues to expand, Statista projects that it will reach a market value of $5.9 billion by 2029. Betting markets associated with esports have grown alongside it: in 2024, this segment accounted for an estimated $2.5 billion globally. While such engagement avenues are popular among international audiences, they also raise concerns around regulation, integrity, and user protection. As esports builds its credibility and reach, especially among younger demographics, these aspects become increasingly important to address in policy and practice.
What Does Esports Betting Involve?
Much like traditional sports, esports engagement in some regions includes the practice of wagering on teams, players, or match outcomes. But it is inherently more complex: accurately valuing odds in online gaming and esports is complicated by frequently updated game titles, changing team rosters, and shifting game mechanics (known as “metas”, the most effective strategies at a given time). Bets can be placed using real money, virtual items such as skins (cosmetic in-game items), or, increasingly, cryptocurrency.
Esports and Wagering: Emerging Issues and Implications
- Legal Grey Areas: While countries like South Korea and some US states have dedicated regulations for esports betting and licensed bookmaking, most do not. This leaves legal grey areas that let betting service providers operate in unregulated markets, increasing the risk of fraud, money laundering, and exploitation of bettors in those regions.
- The Skill vs Chance Dilemma: Most gambling laws across the world regulate betting based on the distinction between ‘games of skill’ and ‘games of chance’; betting on the latter is typically illegal, since winning depends on chance. But the definitions of ‘skill’ and ‘chance’ vary by jurisdiction, and esports betting often blurs into gambling: outcomes may depend on player skill, yet in-game economies like skin betting and unpredictable gameplay introduce elements of chance, complicating regulation and making enforcement difficult.
- Underage Gambling and Addiction Risks: Players are often minors and are exposed to the gambling ecosystem due to gamified betting through reward systems like loot boxes. These often mimic the mechanics of betting, normalising gambling behaviours among young users before they fully understand the risks. This can lead to the development of addictive behaviours.
- Match-Fixing and Loss of Integrity: Esports are particularly susceptible to match-fixing because of weak regulation, financial pressures, and the anonymity of online betting. Instances like the Dota 2 Southeast Asia Scandals (2023) and Valorant match-fixing in North America (2021) can jeopardise audience trust and sponsorships. This affects the trustworthiness of minor tournaments, where talent is discovered.
- Cybersecurity and Data Risks: Esports betting apps collect sensitive user data, making them an attractive target for cybercrime. Bettors are susceptible to identity theft, financial fraud, and data breaches, especially on unlicensed platforms.
Way Forward
To strengthen trust, ensure user safety, and protect privacy within the esports ecosystem, responsible management of betting practices can be achieved through targeted interventions focused on:
- National-Level Regulations: Countries like India have large online gaming and esports markets and will need a dedicated regulatory authority, along the lines of the UK’s Gambling Commission, together with updated gambling laws to protect consumers.
- Protection of Minors: Setting guardrails such as age verification, responsible advertising, anti-fraud mechanisms, self-exclusion tools, and spending caps can help to keep a check on gambling by minors.
- Harmonizing Global Standards: Since esports is inherently global, aligning core regulatory principles across jurisdictions (such as through multi-country agreements or voluntary industry codes of conduct) can help create consistency while avoiding overregulation.
- Co-Regulation: Governments, esports organisers, betting platforms, and player associations should work closely to design effective, well-informed policies. This can help uphold the interests of all stakeholders in the industry.
Conclusion
Betting in esports is inevitable. But the industry faces a double dilemma: overregulation on the one hand, and unchecked gambling on the other. Both can be detrimental to its growth. This is why industry actors such as policymakers, platforms and organisers need to work together to harmonise legal inconsistencies, protect vulnerable users and invest in data security. Forming industry-wide ethics boards, promoting regional regulatory dialogue, and instituting transparency measures for betting operators would be steps in this direction, helping esports evolve into a mature, trusted global industry.

Introduction
In the digital realm of social media, Meta Platforms, the driving force behind Facebook and Instagram, faces intense scrutiny following The Wall Street Journal's investigative report. This exploration delves deeper into critical issues surrounding child safety on these widespread platforms, unravelling algorithmic intricacies, enforcement dilemmas, and the ethical maze surrounding monetisation features. Instances of "parent-managed minor accounts" leveraging Meta's subscription tools to monetise content featuring young individuals have raised eyebrows. While this practice skirts the line of legality, it prompts concerns because of its potential appeal to adults and the associated risk of inappropriate interactions. It's a nuanced issue demanding nuanced solutions.
Failed Algorithms
The very heartbeat of Meta's digital ecosystem, its algorithms, has come under intense scrutiny. These algorithms, designed to curate and deliver content, were found to be actively promoting accounts featuring explicit content to users with known pedophilic interests. The revelation sparks a crucial conversation about the ethical responsibilities tied to the algorithms shaping our digital experiences. Striking the right balance between personalised content delivery and safeguarding users is a delicate task.
While algorithms play a pivotal role in tailoring content to users' preferences, Meta needs to reevaluate the algorithms to ensure they don't inadvertently promote inappropriate content. Stricter checks and balances within the algorithmic framework can help prevent the inadvertent amplification of content that may exploit or endanger minors.
Major Enforcement Challenges
Meta's enforcement challenges have come to light as previously banned parent-run accounts resurrect, gaining official verification and accumulating large followings. The struggle to remove associated backup profiles adds layers to concerns about the effectiveness of Meta's enforcement mechanisms. It underscores the need for a robust system capable of swift and thorough actions against policy violators.
To enhance enforcement mechanisms, Meta should invest in advanced content detection tools and employ a dedicated team for consistent monitoring. This proactive approach can mitigate the risks associated with inappropriate content and reinforce a safer online environment for all users.
The financial dynamics of Meta's ecosystem raise concerns about the exploitation of videos eligible for cash gifts from followers. The decision to expand the subscription feature before implementing adequate safety measures poses ethical questions. Prioritising financial gains over user safety risks tarnishing the platform's reputation and trustworthiness. A re-evaluation of this strategy is crucial for maintaining a healthy and secure online environment.
To address safety concerns tied to monetisation features, Meta should consider implementing stricter eligibility criteria for content creators. Verifying the legitimacy and appropriateness of content before allowing it to be monetised can act as a preventive measure against the exploitation of the system.
Meta's Response
In the aftermath of the revelations, Meta's spokesperson, Andy Stone, took centre stage to defend the company's actions. Stone emphasised ongoing efforts to enhance safety measures, asserting Meta's commitment to rectifying the situation. However, critics argue that Meta's response lacks the decisive actions required to align with industry standards observed on other platforms. The debate continues over the delicate balance between user safety and the pursuit of financial gain. A more transparent and accountable approach to addressing these concerns is imperative.
To rebuild trust and credibility, Meta needs to implement concrete and visible changes. This includes transparent communication about the steps taken to address the identified issues, continuous updates on progress, and a commitment to a user-centric approach that prioritises safety over financial interests.
The formation of a task force in June 2023 was a commendable step to tackle child sexualisation on the platform. However, the effectiveness of these efforts remains limited. Persistent challenges in detecting and preventing potential child safety hazards underscore the need for continuous improvement. Legislative scrutiny adds an extra layer of pressure, emphasising the urgency for Meta to enhance its strategies for user protection.
To overcome ongoing challenges, Meta should collaborate with external child safety organisations, experts, and regulators. Open dialogues and partnerships can provide valuable insights and recommendations, fostering a collaborative approach to creating a safer online environment.
Drawing a parallel with competitors such as Patreon and OnlyFans reveals stark differences in child safety practices. While Meta grapples with its challenges, these platforms maintain stringent policies against certain content involving minors. This comparison underscores the need for universal industry standards to safeguard minors effectively. Collaborative efforts within the industry to establish and adhere to such standards can contribute to a safer digital environment for all.
To align with industry standards, Meta should actively participate in cross-industry collaborations and adopt best practices from platforms with successful child safety measures. This collaborative approach ensures a unified effort to protect users across various digital platforms.
Conclusion
Navigating the intricate landscape of child safety concerns on Meta Platforms demands a nuanced and comprehensive approach. The identified algorithmic failures, enforcement challenges, and controversies surrounding monetisation features underscore the urgency for Meta to reassess and fortify its commitment to being a responsible digital space. As the platform faces this critical examination, it has an opportunity to not only rectify the existing issues but to set a precedent for ethical and secure social media engagement.
This comprehensive exploration aims not only to shed light on the existing issues but also to provide a roadmap for Meta Platforms to evolve into a safer and more responsible digital space. The responsibility lies not just in acknowledging shortcomings but in actively working towards solutions that prioritise the well-being of its users.
References
- https://timesofindia.indiatimes.com/gadgets-news/instagram-facebook-prioritised-money-over-child-safety-claims-report/articleshow/107952778.cms
- https://www.adweek.com/blognetwork/meta-staff-found-instagram-tool-enabled-child-exploitation-the-company-pressed-ahead-anyway/107604/
- https://www.tbsnews.net/tech/meta-staff-found-instagram-subscription-tool-facilitated-child-exploitation-yet-company