#FactCheck – False Claim of Lord Ram's Hologram in Srinagar - Video Actually from Dehradun
Executive Summary:
A video purportedly from Lal Chowk in Srinagar, featuring a hologram of Lord Ram on a clock tower, has gone viral on the internet. The CyberPeace Research Team discovered that the footage is actually from Dehradun, Uttarakhand, not Jammu and Kashmir.
Claims:
A viral 48-second clip is being shared across the internet, mostly on X and Facebook. The video shows a car passing a clock tower bearing a picture of Lord Ram; as the car moves forward, a screen by the side of the road is shown playing songs about Lord Ram.

The claim is that the video is from Lal Chowk, Srinagar.

Fact Check:
The CyberPeace Research Team found that the claim is false. First, we ran a keyword search based on the caption and found that the clock tower in Srinagar does not resemble the one in the video.

We found an NDTV article describing Srinagar Lal Chowk's clock tower, which stands alone in the middle of the road. This gave us reasonable confidence that the video is not from Srinagar. We then broke the video down into frames and ran a reverse image search, which surfaced another video showing a similar tower in Dehradun.
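The reverse-image-search step above works by splitting the video into still frames and matching those frames against other footage. One common matching primitive behind such searches is a perceptual hash, for example the average hash (aHash), where near-duplicate frames produce nearly identical fingerprints. Below is a minimal pure-Python sketch of the idea; the 8x8 pixel grids are made-up stand-ins for video frames already downscaled to grayscale (real pipelines would extract and downscale frames with an imaging library first):

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    Each bit is 1 if the pixel is brighter than the grid's mean, else 0.
    Near-duplicate images produce hashes that differ in few bits.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes (0 = identical)."""
    return bin(h1 ^ h2).count("1")

# Three hypothetical 8x8 "frames": frame_b is a slightly brightened
# copy of frame_a, while frame_c is an unrelated checkerboard pattern.
frame_a = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
frame_b = [[min(255, p + 10) for p in row] for row in frame_a]
frame_c = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]

ha, hb, hc = (average_hash(f) for f in (frame_a, frame_b, frame_c))
print(hamming_distance(ha, hb))  # small distance: near-duplicate frames
print(hamming_distance(ha, hc))  # large distance: unrelated images
```

Production reverse image search engines use far more robust descriptors than this sketch, but the principle is the same: visually similar frames map to nearby fingerprints, which is how a frame from the viral clip can surface matching footage of the same tower.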

Taking a cue from this, we searched for the tower in Dehradun and compared it with the video. This confirmed that the structure is the clock tower in Paltan Bazar, Dehradun, and that the video is indeed from Dehradun, not Srinagar.
Conclusion:
After a thorough fact-check investigation of the video and its origin, we found that the visualisation of Lord Ram on the clock tower is from Dehradun, not Srinagar. The claim that the visual shows Lord Ram in Srinagar is baseless misinformation.
- Claim: The Hologram of Lord Ram on the Clock Tower of Lal Chowk, Srinagar
- Claimed on: Facebook, X
- Fact Check: Fake
Introduction
According to a shocking report, multiple scam loan apps on the App Store in India charge excessive interest rates and extort payments from users through blackmail and harassment. Apple has banned and removed these apps from the App Store, but if you have downloaded any of them, they may still be installed and running on your iPhone, and you should delete them. Read on to learn the names of these apps and how the fraud operated.
Why did Apple ban these apps?
- Apple has taken action to remove certain apps from the Indian App Store. These apps were engaging in unethical behaviour, such as impersonating financial institutions, demanding high fees, and threatening borrowers. Here are the titles of these apps, as well as what Apple has said about their suspension.
- Following user concerns, Apple removed six loan apps from the Indian App Store, including White Kash, Pocket Kash, Golden Kash, Ok Rupee, and others.
- According to multiple user reviews, certain apps sought unjustified access to users’ contact lists and media. These apps also charged exorbitant, unnecessary fees. Furthermore, the companies were found to engage in unethical tactics such as charging high interest rates and “processing fees” equal to half the loan amount.
- Some lending app users reported being harassed and threatened for failing to repay their loans on time. In some cases, the apps’ operators threatened to contact the user’s contacts if payment was not completed by the deadline. According to one user, the app company threatened to produce fake photographs of her and send them to her contacts.
- These loan apps were removed from the App Store, according to Apple, because they violated the rules and standards of the Apple Developer Program License Agreement. The apps were found to be falsely claiming connections to financial institutions.
Issue of Fake loan apps on the App Store
- “The App Store and our App Review Guidelines are designed to ensure we provide our users with the safest experience possible,” Apple explained. “We do not tolerate fraudulent activity on the App Store and have strict rules against apps and developers who attempt to game the system.”
- In 2022, Apple blocked nearly $2 billion in fraudulent App Store sales. Furthermore, it rejected nearly 1.7 million software submissions that did not meet Apple’s quality and safety criteria and cancelled 428,000 developer accounts due to suspected fraudulent activities.
- The scammers also used heinous tactics to force borrowers to pay. According to reports, the scammers behind the apps gained access to users’ contact lists as well as their images. They would morph the images and then intimidate the individual by threatening to share fake nude photos with their entire contact list.
Dangerous financial fraud apps have surfaced on the App Store
- TechCrunch acquired a user review from one of these apps. “I borrowed an amount in a helpless situation, and a day before the repayment due date, I got some messages with my picture and my contacts in my phone saying that repay your loan or they will inform our contacts that you are not paying the loan,” it said.
- Sandhya Ramesh, a journalist from The Print, recently tweeted a screenshot of a direct message she got. A victim’s friend told a similar story in the message.
- TechCrunch contacted Apple, who confirmed that the apps had been removed from the App Store for breaking the Apple Developer Program License Agreement and guidelines.
Conclusion
Recently, some users have reported that quick-loan applications such as White Kash, Pocket Kash, and Golden Kash appeared on the Top Finance applications chart. These apps demand unauthorised and intrusive access to users’ contact lists and media. According to hundreds of user reviews, they charged exorbitantly high and pointless fees, and used unscrupulous techniques such as demanding “processing fees” equal to half the loan amount and charging high interest rates. Users were also harassed and threatened over repayment: if payments were not made by the due date, the lending applications threatened to notify users’ contacts. According to one user, the app provider even threatened to generate fake nude images of her and send them to her contacts.
Introduction
With the advent of the internet, the world revealed the promise of boundless connection and the ability to bridge vast distances with a single click. However, as we wade through the complex layers of the digital age, we find ourselves facing a paradoxical realm where anonymity offers both liberation and a potential for unforeseen dangers. Omegle, a chat and video messaging platform, epitomizes this modern conundrum. Launched in 2009, it burgeoned into a popular avenue for digital interaction, especially amidst the heightened need for human connection spurred by the COVID-19 pandemic's social distancing requirements. Yet this seemingly benign tool of camaraderie tragically doubled as a contemporary incarnation of Pandora's box, unleashing untold risks upon the online privacy and security landscape. Omegle has now shut down its operations permanently after 14 years of service.
The Rise of Omegle
The foundations of this nebulous virtual dominion can be traced back to the very architecture of Omegle. Introduced to the world as a simple, anonymous chat service, Omegle has since evolved, encapsulating the essence of unpredictable human interaction. Users enter this digital arena, often with the innocent desire to alleviate the pangs of isolation or simply to satiate curiosity; yet they remain blissfully unaware of the potential cybersecurity maelstrom that awaits them.
As we commence a thorough inquiry into the psyche of Omegle's vast user base, we observe a digital diaspora with staggering figures. The platform, in May 2022, counted 51.7 million unique visitors, a testament to its sprawling reach across the globe. Delve a bit deeper, and you will uncover that approximately 29.89% of these digital nomads originate from the United States. Others, in varying percentages, flock from India, the Philippines, the United Kingdom, and Germany, revealing a vast, intricate mosaic of international engagement.
Such statistics beguile the uninformed observer with the illusion of demographic diversity. Yet we must proceed with caution, for while the platform boasts an impressive 63.91% male patronage, we cannot overlook the notable surge in female participation, which climbed to 36.09% during the pandemic era. More alarming still is the revelation, borne out of a BBC investigation in February 2021, that children as young as seven had trespassed into Omegle's adult section, a space purportedly guarded by a minimum age limit of thirteen. How, we must ask, has underage presence burgeoned on this platform? A sobering answer points towards the platform's inadvertent marketing on TikTok, where youthful influencers freely promote their Omegle exploits under the #omegle hashtag.
The Omegle Allure
Omegle's allure is further compounded by its array of chat opportunities. It flaunts an adult section awash with explicit content, a moderated chat section that, despite the platform's own admissions, remains imperfectly patrolled, and an unmoderated section, its entry pasted with forewarnings of an 18+ audience. Beyond these lies the college chat option, a seemingly exclusive territory that only admits individuals armed with a verified '.edu' email address.
The effervescent charm of Omegle's interface, however, belies its underlying treacheries. Herein lies a digital wilderness where online predators and nefarious entities prowl, emboldened by the absence of requisite registration protocols. No email address, no unique identifier: nothing to support any notion of accountability or safeguarding. Within this unchecked reality, the young and unwary stand vulnerable, easy prey for exploitation.
Threat to Users
Venture even further into Omegle's data fiefdom, and the spectre of compromise looms larger. Users, particularly the youth, risk exposure to unsuitable content, and their naivety might lead to the inadvertent divulgence of personal information. Skulking behind the facade of connection, opportunities abound for coercion, blackmail, and stalking, perils rendered more potent because every video exchange and text can be captured and recorded by an unseen adversary. The platform acts as a quasi-familiar confidante, all the while harvesting chat logs, cookies, IP addresses, and even sensory data which, instead of being ephemeral, endure within Omegle's databases, readily handed to law enforcement and partner entities under the guise of due diligence.
How to Combat the threat
In mitigating these online gorgons, a multi-faceted approach is necessary. To thwart incursion into your digital footprint, adults, seeking the thrills of Omegle's roulette, would do well to cloak their activities with a Virtual Private Network (VPN), diligently pore over the privacy policy, deploy robust cybersecurity tools, and maintain an iron-clad reticence on personal disclosures. For children, the recommendation gravitates towards outright avoidance. There, a constellation of parental control mechanisms await the vigilant guardian, ready to shield their progeny from the internet's darker alcoves.
Conclusion
In the final analysis, Omegle emerges as a microcosm of the greater web—a vast, paradoxical construct proffering solace and sociability, yet riddled with malevolent traps for the uninformed. As digital denizens, our traverse through this interconnected cosmos necessitates a relentless guarding of our private spheres and the sober acknowledgement that amidst the keystrokes and clicks, we must tread with caution lest we unseal the perils of this digital Pandora's box.
Introduction
In the dynamic intersection of pop culture and technology, an unexpected drama unfolded in the virtual world when searches for the iconic Taylor Swift were temporarily blocked on X. The incident sent a shockwave through the online community, sparking debates and speculation about the misuse of deepfake technology.
Searches for Taylor Swift on the social media platform X have been restored after a temporary blockage, imposed following outrage over explicit AI-generated images of her, was lifted. The site, formerly known as Twitter, had restricted the searches as a stopgap measure to address a flood of AI-generated deepfake images that went viral across X and other platforms.
X has stated that it is actively removing the images and taking appropriate action against the accounts responsible for spreading them. While Swift has not spoken publicly about the fake images, a report stated that her team is "considering legal action" against the site which published the AI-generated images.
The Social Media Frenzy
As news of the temporary blockage spread like wildfire across social media platforms, users reacted in a frenzy. The fake picture was re-shared 24,000 times, with tens of thousands of users liking the post. This engagement supercharged the deepfake image of Taylor Swift, and by the time moderators acted, it was too late: hundreds of accounts had begun reposting it, starting an online trend and pushing the AI-generated imagery to an even larger audience. The source of the photograph was not even known to begin with. The revelations caused outrage, and American lawmakers from across party lines spoke out, with one saying they were astounded and another that they were shocked.
AI Deepfake Controversy
The deepfake controversy is not new. In many cases, figures such as Rashmika Mandanna, Sachin Tendulkar, and now Taylor Swift have been victims of the misuse of deepfake technology. The world faces growing concern about the misuse of AI and deepfakes, and with no proactive measures in place, this threat will only worsen, deepening privacy concerns for individuals. The incident has opened a debate among users and industry experts on the ethical use of AI in the digital age and its privacy implications.
Why has the Incident raised privacy concerns?
The emergence of Taylor Swift's deepfake has raised privacy concerns for several reasons.
- Misuse of Personal Imagery: Deepfakes use AI algorithms to superimpose one person’s face onto another person’s body, with the model refined iteratively until the desired result is obtained. For celebrities and other high-profile people, it is easy for crooks to obtain images and generate a deepfake; in Taylor Swift’s case, her images were misused. Such misuse of images can have serious consequences for an individual’s reputation and privacy.
- False Narratives and Manipulation: Deepfakes open the door to false narratives that harm reputations and affect personal and professional lives. Such narratives may influence public opinion, and the damage is difficult for the victim to control.
- Invasion of Privacy: Creating a deepfake involves gathering a significant amount of information about the target without their consent. The use of such personal information to create AI-generated content without permission raises serious privacy concerns.
- Difficulty in differentiation: Advanced Deepfake technology makes it difficult for people to differentiate between genuine and manipulated content.
- Potential for Exploitation: Deepfakes can be exploited for financial gain or other malicious motives by cyber crooks. Such videos harm reputations, damage brand names and partnerships, and undermine the integrity of the digital platform on which the content is posted; they also raise questions about the platform’s enforcement of its zero-tolerance policy on posting non-consensual nude images.
Is there any law that could safeguard Internet users?
Legislation concerning deepfakes differs by nation and often spans from demanding disclosure of deepfakes to forbidding harmful or destructive material. In the USA, ten states, including California, Texas, and Illinois, have passed criminal legislation prohibiting deepfakes, and lawmakers are advocating for comparable federal statutes. A Democrat from New York has presented legislation requiring producers to digitally watermark deepfake content. The United States does not criminalise such deepfakes at the federal level, but state and federal laws addressing privacy, fraud, and harassment can apply.
In 2019, China enacted legislation requiring the disclosure of deepfake usage in films and media. Sharing deepfake pornography became outlawed in the United Kingdom in 2023 as part of the Online Safety Act.
To avoid abuse, South Korea implemented legislation in 2020 criminalising the dissemination of deepfakes that endanger the public interest, carrying penalties of up to five years in jail or fines of up to 50 million won ($43,000).
In 2023, the Indian government issued an advisory to social media and internet companies to protect against deepfakes that violate India's information technology laws. India is also working towards dedicated legislation to deal with this subject.
Looking at the present situation and considering the bigger picture, the world urgently needs strong legislation to combat the misuse of deepfake technology.
Lesson learned
The recent blockage of Taylor Swift's searches on Elon Musk's X has sparked debates on responsible technology use, privacy protection, and the symbiotic relationship between celebrities and the digital era. The incident highlights the importance of constant attention, ethical concerns, and the potential dangers of AI in the digital landscape. Despite challenges, the digital world offers opportunities for growth and learning.
Conclusion
Such deepfake incidents highlight privacy concerns and necessitate a combination of technological solutions, legal frameworks, and public awareness to safeguard privacy and dignity in an increasingly complex digital world.
References:
- https://www.hindustantimes.com/world-news/us-news/taylor-swift-searches-restored-on-elon-musks-x-after-brief-blockage-over-ai-deepfakes-101706630104607.html
- https://readwrite.com/x-blocks-taylor-swift-searches-as-explicit-deepfakes-of-singer-go-viral/