#FactCheck - False Claim of Italian PM Congratulating on Ram Temple; Video Actually Shows Birthday Thanks
Executive Summary:
False information is spreading across social media as users share a mistranslated video claiming that Italian Prime Minister Giorgia Meloni congratulated Indian Hindus on the inauguration of the Ram Temple in Ayodhya, Uttar Pradesh. Our CyberPeace Research Team’s investigation reveals that the claim is baseless: in the video, Meloni is actually thanking those who wished her a happy birthday.
Claims:
An X (formerly Twitter) user shared a 13-second video of Italian Prime Minister Giorgia Meloni speaking in Italian, claiming that she was congratulating India on the Ram Mandir construction. The caption reads,
“Italian PM Giorgia Meloni Message to Hindus for Ram Mandir #RamMandirPranPratishta. #Translation : Best wishes to the Hindus in India and around the world on the Pran Pratistha ceremony. By restoring your prestige after hundreds of years of struggle, you have set an example for the world. Lots of love.”

Fact Check:
The CyberPeace Research Team translated the video using Google Translate. First, we extracted a transcript of the video using an AI transcription tool and ran it through Google Translate; the result was entirely different from the claimed translation.

The Translation reads, “Thank you all for the birthday wishes you sent me privately with posts on social media, a lot of encouragement which I will treasure, you are my strength, I love you.”
This confirms that the video is not a congratulatory message but a thank-you to all those who sent the Prime Minister birthday wishes.
We then performed a reverse image search on frames of the video and found the original on the Prime Minister’s official X handle, uploaded on 15 January 2024 with the caption “Grazie. Siete la mia”, whose translation reads, “Thank you. You are my strength!”

Conclusion:
The 13-second video had a wide reach on X, and as a result many users shared it with similar captions; a misunderstanding that begins with one post spreads quickly to many. The claim made in the post’s caption is misleading and has no connection with the actual content of the video, in which Italian Prime Minister Giorgia Meloni speaks in Italian to thank her well-wishers. Hence, the post is fake and misleading.
- Claim: Italian Prime Minister Giorgia Meloni congratulated Hindus in the context of Ram Mandir
- Claimed on: X
- Fact Check: Fake

Introduction
A policy, no matter how artfully conceived, stands in silent witness before those it vows to protect, yet remains trapped in the stillness of inaction, where every moment of delay erodes the very justice it was meant to serve. This is the case of the Digital Personal Data Protection (DPDP) Act, 2023, which promises to resolve India’s data protection gaps with a framework at par with the GDPR and global best practices. While debates on its substantive efficacy are inevitable, its execution has emerged as a site of acute contention. The roll-out and related decision-making have been making headlines since late July on various fronts: industry stakeholders, the media, and independent analysts have questioned the government over “slow policy execution”, “centralisation of power”, and “arbitrary amendments”. The Act is now entrenched in a dilemma of competing interests.
The change to the Right to Information Act (RTI), 2005, effected by Section 44(3) of the DPDP Act, has become a focal point of debate. This amendment is viewed by some as weakening the hard-won transparency architecture of Indian democracy by substituting an absolute exemption for personal information in place of the “public interest override” in Section 8(1)(j) of the RTI Act.
The Lag Ledger: Tracking the Delays in DPDP Enforcement
As per a news report of July 28, 2025, the Parliamentary Standing Committee on Information and Communications Technology has expressed concern over the delayed implementation and has urged the Ministry of Electronics and Information Technology (MeitY) to ensure that data privacy is adequately protected in the country. In a report submitted to the Lok Sabha on July 24, the committee reviewed the government’s response to its previous recommendations and noted that MeitY had held only nine consultations and twenty awareness workshops on the Draft DPDP Rules, 2025, along with four brainstorming sessions with academic specialists to examine research and development needs. The ministry acknowledges that this is a specialised field that urgently needs industry involvement. Another news report, dated July 30, 2025, covered a day-long consultation in which representatives from civil society groups, campaigns, social movements, senior lawyers, retired judges, journalists, and lawmakers discussed the contentious provisions and chilling effects of the Draft Rules notified in January this year. The organisers said in a press statement that the DPDP Act may harm the freedom of the press and people’s right to information, as well as the activists, journalists, lawyers, political parties, groups, and organisations “who collect, analyse, and disseminate critical information as they become ‘data fiduciaries’ under the law.”
The DPDP Act has thus been caught up in an uncomfortable paradox: praised as a significant legislative achievement for India’s digital future, but caught in a transitional phase between enactment and enforcement, where every day not only postpones protection but also feeds worries about the dwindling amount of room for accountability and transparency.
The Muzzling Effect: Diluting Whistleblower Protections
The DPDP framework raises a number of subtle but significant issues, one of which is the possibility that it will weaken safeguards for whistleblowers. Critics argue that, by expanding the definition of “personal data” and placing strict compliance requirements on “data fiduciaries”, the Act risks ensnaring journalists, activists, and public interest actors who handle sensitive material while exposing wrongdoing. In the absence of clear exclusions or robust public-interest protections, one of the most important checks on state overreach may be silenced if those who speak truth to power face legal retaliation.
Noted lawyer Prashant Bhushan has criticised the law for failing to protect whistleblowers, warning that “If someone exposes corruption and names officials, they could now be prosecuted for violating the DPDP Act.”
Consent Management under the DPDP Act
In June 2025, the National e-Governance Division (NeGD) under MeitY released a Business Requirement Document (BRD) for developing consent management systems under the DPDP Act, 2023. The document supports the idea of a “Consent Manager”, which acts as a single point of contact between Data Principals and Data Fiduciaries. This idea is fundamental to the Act and is now being operationalised through MeitY’s “Code for Consent: The DPDP Innovation Challenge.” By selecting six distinct entities, including Jio Platforms, IDfy, and Zoop, the government has established a collaborative ecosystem to build consent management systems (CMS) that can serve as a single, standardised interface between Data Principals and Data Fiduciaries. If implemented precisely and transparently, such a framework could enable people to exercise meaningful control over their personal data, lessen consent fatigue, and move India’s consent architecture closer to international standards.
There is no debating the importance of this development; however, several concerns associated with it must be considered. Although efficient, a centralised consent management system may become a single point of failure, vulnerable both to political overreach and to technical cybersecurity flaws. Concerns have been raised over the concentration of power over the framing, seeking, and recording of consent when big corporate entities like Jio are chosen as key innovators. Critics contend that organisations that generate revenue from user data should not be given responsibility for designing the gatekeeping systems. Furthermore, in the absence of strong safeguards, transparency mechanisms, and independent oversight, the CMS could create opaque channels for data access, compromising user autonomy and whistleblower protections.
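To make the Consent Manager concept concrete, the sketch below shows what a consent record held by such a system might look like. Every field name, identifier, and method here is a hypothetical illustration; the BRD’s actual data model is not described in this article, and only the Data Principal’s right to withdraw consent at any time comes from the Act itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of a consent record a Consent Manager might hold.
# Field names and the revocation flow are illustrative assumptions,
# not the NeGD BRD's actual schema.

@dataclass
class ConsentRecord:
    data_principal_id: str   # the individual granting consent
    data_fiduciary_id: str   # the entity requesting the data
    purpose: str             # the specific purpose consent covers
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    revoked_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.revoked_at is None

    def revoke(self) -> None:
        # The DPDP Act allows a Data Principal to withdraw consent
        # at any time; the CMS would record when that happened.
        self.revoked_at = datetime.now(timezone.utc)

record = ConsentRecord("principal-001", "fiduciary-042", "order delivery")
assert record.is_active()
record.revoke()
assert not record.is_active()
```

The point of the sketch is the single standardised interface: whatever fiduciary requests data, consent is granted, inspected, and revoked through one record format held by the Consent Manager rather than scattered across each platform’s own terms.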
Conclusion
Despite being hailed as a turning point in India’s digital governance, the DPDP Act remains stuck in a delayed and uneven transition from promise to reality. Its goals are indisputable, but so is the conundrum it poses for accountability, openness, and civil liberties. Every delay deepens public mistrust, as does every safeguard that remains unresolved. The true test of a policy intended to safeguard the digital rights of millions lies not in how it was drafted, but in the integrity, pace, and transparency with which it is implemented. In the digital age, the true cost of delay is measured not in time, but in trust. CyberPeace calls for transparent, inclusive, and timely execution that balances innovation with the protection of digital rights.
References
- https://www.storyboard18.com/how-it-works/parliamentary-committee-raises-concern-with-meity-over-dpdp-act-implementation-lag-77105.htm
- https://thewire.in/law/excessive-centralisation-of-power-lawyers-activists-journalists-mps-express-fear-on-dpdp-act
- https://www.medianama.com/2025/08/223-jio-idfy-meity-consent-management-systems-dpdpa/
- https://www.downtoearth.org.in/governance/centre-refuses-to-amend-dpdp-act-to-protect-journalists-whistleblowers-and-rti-activists

As e-commerce companies expand their base and sell a wide range of products on their platforms, attackers continue to look for new avenues to exploit and loopholes through which to perpetrate scams. One recent method is the brushing scam, which targets online shoppers in order to artificially drive sales. As per reports, it is already being conducted on popular and trusted e-commerce platforms such as Amazon and AliExpress, and online shoppers must exercise caution with regard to the packages they receive.
The Brushing Scam
Deriving its name from a Chinese e-commerce practice, the scam involves sellers creating fake orders and shipping packages to unsuspecting individuals, posing as deliveries from e-commerce websites, in order to ‘brush up’ their products’ sales figures. The items received are usually of low quality, such as cheap jewellery, seeds, or random gadgets. The aim is to manipulate reviews for a particular product and make it seem popular, so that other buyers online are encouraged to purchase it. Most online shoppers today check reviews before making a purchase, and popular items with seemingly trustworthy reviews can go a long way towards influencing customer behaviour. Since many platforms now label reviews tied to genuine purchases to counter fake ones, scammers have evolved a step further, developing a modus operandi for fake reviews that holds up against basic scrutiny. Some packages sent under the brushing scam also carry QR codes which, once scanned, lead the receiver to malicious websites.
CyberPeace Insights
Mysterious deliveries bearing no information but your name and address may seem tempting, as receivers might assume they are part of a marketing campaign offering free products for promotion. Such deliveries gain credibility because they are packaged to appear as though they came through trusted online shopping and e-commerce sites. However, even though receiving free products might seem harmless, unexpected items should be handled carefully, especially when addressed to an individual by name. Receiving such an order is itself an indication that personal information, such as one’s name and address, has been compromised; the sellers have likely procured it through a third party, often by illegal means.
Registering complaints with the concerned e-commerce websites is encouraged, as a pattern of cases raises questions and pushes platforms to act to ensure a secure buying and delivery experience. Awareness that such scams are targeting their customers can encourage caution on the platforms’ part and help address the issue on multiple levels. Receivers, for their part, can change the passwords of their e-commerce accounts and enable two-factor authentication (2FA) for better security. They should also exercise caution when receiving such parcels and avoid scanning QR codes on suspicious items.
References
- https://www.livemint.com/technology/tech-news/brushing-scam-explained-from-fake-orders-to-reviews-how-fraudsters-are-manipulating-online-shopping-platforms-11735824384866.html
- https://www.indiatvnews.com/technology/news/beware-of-amazon-scams-how-fraudsters-use-fake-reviews-to-sell-counterfeit-products-2025-01-02-969115
- https://www.indiatoday.in/technology/news/story/brushing-scam-now-makes-buzz-as-it-targets-online-shoppers-everything-you-need-to-know-2659172-2025-01-03
- https://www.msn.com/en-in/money/news/brushing-scam-now-makes-buzz-as-it-targets-online-shoppers-everything-you-need-to-know/ar-AA1wTvon

Introduction
The rapid rise of AI tools has reshaped how health content spreads on platforms like Instagram Reels and YouTube Shorts. These sub-minute videos promise quick fixes for weight loss, glowing skin, or reduced anxiety, often delivered through polished visuals and confident AI-generated voiceovers. The result feels highly personalised, as if the advice is tailored to each viewer, even though it is usually generic and widely recycled.
Short-form videos tend to compress complex health topics into “one tip” solutions, such as drinking a specific detox drink daily or following a single workout for rapid fat loss. While appealing, this oversimplification removes essential context, including individual health conditions, long-term risks, and scientific nuance. For example, viral diet trends or fitness hacks may work for some but can be ineffective or even harmful for others.
Algorithms play a major role in amplifying such content. Videos that promise dramatic transformations or instant results are more likely to gain engagement, which pushes them to wider audiences. Repeated exposure then builds familiarity, making the advice seem more credible over time. Audiences often trust this content due to its clean presentation, authoritative tone, and frequent repetition. However, the risks include misinformation, unrealistic expectations, and potential harm from unverified practices. To approach such content critically, viewers should cross-check claims with credible medical sources, avoid relying on single tip solutions, and remember that real health advice is rarely one size fits all.
The Illusion of Personalisation
AI-generated health content often mimics personalisation through:
- Synthetic voiceovers engineered to match the tone of different age groups, often tailored to viewers aged 20 and under
- Scripts generated from data on currently trending search terms
- Visual elements, such as before-and-after comparisons, that shape how viewers interpret the information
This “personalisation” is built on generalised data that does not match individual health profiles. The videos cannot substitute for a medical assessment because they do not consider:
- Existing medical conditions
- Hereditary differences
- Personal habits and the impact of surrounding conditions
Users may therefore assume that generic advice applies to their personal health needs, leading them to apply it inappropriately.
Short-Form Content and Oversimplification
Short-form videos are constrained by time, which compresses complex medical information into simplistic narratives. Typical patterns include:
- “One-tip solutions” (e.g., “Drink this before bed to burn fat”)
- Binary framing (“good vs bad foods”)
- Omission of disclaimers and side-effect information
Common examples include:
- Viral detox drinks claimed to “flush toxins” from the body
- Extreme calorie-cutting diet hacks
- Fitness shortcuts that promise visible results within days
Such content routinely ignores basic physiology, including metabolic differences between individuals and how the body responds over extended periods of time.
Algorithmic Amplification and Virality
The recommendation algorithms used by Instagram and YouTube primarily optimise for three main factors:
- Engagement (likes, shares, watch time)
- Retention rates
- Emotional or aspirational triggers
Health-related content performs especially well when it promises:
- Immediate body changes
- Minimal effort from viewers
- Extreme physical transformations
This produces a continuous feedback cycle in which:
- Misleading content gains traction
- Algorithms amplify it further
- More creators replicate similar formats using AI tools
The net effect is a system that rewards content people share over content that is actually credible.
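The feedback cycle above can be illustrated with a toy ranking model. This is purely a sketch: the fields, weights, and scoring formula are invented for illustration and do not represent any platform’s actual (and non-public) recommendation system. The point is structural: when a score is built only from engagement signals, nothing in it measures accuracy.

```python
from dataclasses import dataclass

# Toy model of engagement-driven ranking. All fields and weights are
# invented for illustration; real platform algorithms are far more
# complex and are not public.

@dataclass
class Video:
    title: str
    likes: int
    shares: int
    watch_time_s: float  # average seconds watched per view
    duration_s: float    # total video length in seconds

def engagement_score(v: Video) -> float:
    retention = v.watch_time_s / v.duration_s  # fraction of video watched
    # Shares are weighted above likes; retention adds a bonus.
    # Note that no term in this score reflects factual quality.
    return 0.5 * (v.likes + 2 * v.shares) + 100 * retention

videos = [
    Video("Evidence-based nutrition basics",
          likes=120, shares=10, watch_time_s=18, duration_s=60),
    Video("Burn fat overnight with THIS drink",
          likes=900, shares=300, watch_time_s=50, duration_s=55),
]

ranked = sorted(videos, key=engagement_score, reverse=True)
# The sensational video outranks the accurate one because it attracts
# more likes, shares and retention, which is all the score rewards.
print([v.title for v in ranked])
```

Because each recommendation round feeds the winner to a larger audience, which generates more engagement, the same formula applied repeatedly reproduces the “misleading content gains traction → algorithms amplify it → creators copy the format” loop described above.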
Why Do Users Trust AI-Generated Health Content?
Several psychological and technological factors contribute to trust:
- Professional Aesthetics - AI tools generate polished visuals, natural-sounding voiceovers, and well-structured scripts that mimic professional communication.
- Repetition and Familiarity - Encountering the same recommendation repeatedly increases belief in it, through the illusory truth effect.
- Authority Signals - Medical terminology, stock footage of professionals in lab coats, and an assertive narration style all mimic markers of expertise.
- Perceived Personal Relevance - Algorithmic targeting makes users feel the content is “meant for them”.
Real-World Examples of Viral Trends
Typical categories of health misinformation spread at scale with the help of AI tools include:
- Diet Trends: Keto shortcuts, extreme intermittent fasting variants
- Fitness Hacks: Spot reduction exercises (scientifically unsupported)
- Supplement Advice: Unverified claims about vitamins or herbal products
- Mental Health Tips: Oversimplified coping strategies that lack clinical evidence
For instance, the claim that drinking warm lemon water will “detox your liver” remains popular, despite the fact that the liver detoxifies itself naturally.
Risks and Public Health Implications
The widespread consumption of such content poses multiple dangers:
1. Physical Health Risks
- Nutritional deficiencies from extreme diets
- Injury from improper exercise techniques
- Delayed medical consultation
2. Psychological Impact
- Unrealistic body image expectations
- Anxiety due to conflicting advice
3. Misinformation Ecosystem
- The public loses confidence in evidence-based medicine
- Unverified or pseudoscientific practices spread throughout society
Regulatory and Ethical Concerns
The rise of AI-generated health material raises broader questions, including:
- Who is accountable for the content
- What responsibility the platforms bear
- How transparent AI systems are to their users
Most platforms today lack robust systems to:
- Verify medical claims
- Label health advice generated by AI
- Penalise accounts that repeatedly spread false information
The absence of regulations allows misleading information to spread without consequences.
A CyberPeace Perspective: Building Digital Health Resilience
Addressing the problem requires coordinated involvement from several parties to create solutions that protect both online safety and the integrity of health information.
For Users
- Users should verify claims against trustworthy medical resources, such as the WHO and peer-reviewed studies.
- They should avoid “quick solutions” unless guided by certified experts.
- They should be wary of content that omits disclaimers or side-effect warnings.
For Platforms
- Platforms should implement systems that enable users to identify AI-generated content.
- Platforms should decrease the visibility of health information that contains false statements.
- Platforms should support authentic health content producers who have been validated.
For Policymakers
- Policymakers should create standards that govern AI-produced medical content.
- Policymakers need to enhance initiatives that teach people about the health information available online.
For Content Creators
- Content creators must disclose when and how they use AI tools.
- They should avoid exaggerated claims and refrain from presenting advice as absolute truth.
Conclusion
AI-generated health tips on short-form video platforms sit at the intersection of technology, psychology, and public health. These tools democratise access to information, yet, used irresponsibly, they heighten the risk that people will believe false information.
The challenge is to keep users safe through accurate information management while providing transparent digital health services. As users grow more dependent on algorithm-driven content, building critical thinking and digital literacy becomes essential to minimising the harms of AI-driven misinformation.
References
- https://pmc.ncbi.nlm.nih.gov/articles/PMC12924558/
- https://academic.oup.com/heapro/article/40/2/daaf023/8100645
- https://pmc.ncbi.nlm.nih.gov/articles/PMC12673052/
- https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1713794/full
- https://www.who.int/teams/digital-health-and-innovation/digital-channels/combatting-misinformation-online
- https://link.springer.com/article/10.1186/s12982-025-00777-2
- https://www.washingtonpost.com/health/2026/04/21/chatbot-medical-advice-accurate/