# Factcheck: Allu Arjun visits Shiva temple after the success of Pushpa 2? No, the image is from 2017
Executive Summary:
A viral post on social media recently claimed that actor Allu Arjun visited a Shiva temple to offer prayers in celebration of the success of his film, Pushpa 2. The post features an image of him at the temple. However, our investigation determined that the photo is from 2017 and is unrelated to the film's release.

Claims:
The claim states that Allu Arjun recently visited a Shiva temple to give thanks for the success of Pushpa 2, and features a photograph that allegedly captures the moment.

Fact Check:
The image circulating on social media with the claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is misleading.
After conducting a reverse image search, we confirmed that this photograph is from 2017, taken during the actor's visit to the Tirumala Temple for a personal event, well before Pushpa 2 was ever announced. The context has been altered to falsely connect it to the film's success. Additionally, there is no credible evidence or recent reports to support the claim that Allu Arjun visited a temple for this specific reason, making the assertion entirely baseless.

Conclusion:
The claim that Allu Arjun visited a Shiva temple to celebrate the success of Pushpa 2 is false. The image circulating is actually from an earlier time. This situation illustrates how misinformation can spread when an old photo is used to construct a misleading story. Before sharing viral posts, take a moment to verify the facts. Misinformation spreads quickly, and it is far better to rely on trusted fact-checking sources.
- Claim: The image claims Allu Arjun visited Shiva temple after Pushpa 2’s success.
- Claimed On: Facebook
- Fact Check: False and Misleading
Related Blogs

Introduction
The world has been witnessing various advancements in cyberspace, and one of the major changes is the speed with which we gain and share information. Cyberspace has been declared the fifth dimension of warfare, and hence the influence of technology will go a long way in safeguarding ourselves and our nation. Information plays a vital role in this scenario, and because information is so easy to access, instances of misinformation and disinformation have been rampant across the globe. The recent Russia-Ukraine crisis clearly showed how misinformation can cause major loss and harm to a nation and its people. All nations and global leaders are deliberating on this issue and on the efficient sharing of information among friendly nations and intergovernmental organisations.
What is IW?
Information Warfare (IW) is a critical aspect of defending our cyberspace. In its broadest sense, IW is a struggle over the information and communications process, a struggle that began with the advent of human communication and conflict. Over the past few decades, the rapid rise of information and communication technologies and their increasing prevalence in our society have revolutionised the communications process and, with it, the significance and implications of information warfare. Information warfare is the application of destructive force on a large scale against information assets and systems, and against the computers and networks that support the four critical infrastructures (the power grid, communications, finance, and transportation). However, protecting against computer intrusion, even on a smaller scale, is in the national security interest of the country and is important to the current discussion about information warfare.
IW in India
The effects of misinformation have recently been seen in India in the violence in Manipur and Nuh, which resulted in massive loss of property and even human lives. Miscreants and anti-national elements often seed misinformation into our daily news feeds, and this is magnified by social media platforms such as Instagram or X (formerly Twitter) and OTT-based messaging applications like WhatsApp or Telegram. During the pandemic, new supposed treatments for COVID-19 appeared on social media nearly every week; many were false and inaccurate, especially regarding the vaccination drive. Numerous posts and messages claimed that the vaccine was unsafe, but much of this was part of a misinformation propaganda campaign. Such episodes of misinformation typically spread rapidly, most often through social media platforms and OTT messaging applications.
IW and Indian Army
Former Meta employees have recently alleged that the Chinar Corps of the Indian Army approached the social media giant to suppress certain pages and channels carrying content it considered objectionable. It is alleged that the formation made the request in support of its counterintelligence operations against Pakistan. The Chinar Corps is one of the most prestigious formations of the Indian Army, with the Kashmir Valley as its operational area. Online grooming and brainwashing by anti-national elements based in Pakistan have been common, drawing a section of the youth into terrorist activities, directly or indirectly. Bad actors use various messaging and social media apps to lure innocent youth on fake and fabricated pretexts of religion or other social issues. The Indian Army launched an anti-misinformation campaign in Kashmir aimed at protecting Kashmiris from the propaganda of fake news and misinformation, which often led to radicalisation, riots, or attacks on defence forces. Net neutrality is often misused by bad actors in regions that are socially sensitive or unstable. The Indian Army has created special offices focusing on IW at all levels of its formations, which are also used to counter fake news and propaganda directed against the Army.
Conclusion
Information has been a source of power since the days of the Roman Empire. The control, dissemination, moderation, and mode of sharing of information play a vital role for any nation, both for safety from external threats and for maintaining national security. Information Warfare is part of the fifth dimension of warfare, i.e., cyberwar, and is a growing concern for developed and developing nations alike. It is a critical aspect that needs to be incorporated into the basic training of defence personnel and law enforcement agencies. The anti-misinformation operation in Kashmir was primarily focused on eradicating bad elements from cyberspace after the abrogation of Article 370, and on ensuring harmony, peace, stability, and prosperity in the region.
References
- https://irp.fas.org/eprint/snyder/infowarfare.htm
- https://www.thehindu.com/news/national/metas-india-team-delayed-action-against-army-led-misinfo-op-in-kashmir-us-news-report/article67352470.ece
- https://www.indiatoday.in/india/story/facebook-instagram-block-handles-of-chinar-corps-no-response-from-company-over-a-week-says-officials-1910445-2022-02-08

Introduction
Public infrastructure has traditionally served as the framework for civilisation, transporting people, money, and ideas across time and space, from the iron veins of transcontinental railroads to the unseen arteries of the internet. In democracies where free markets and public infrastructure co-exist, this framework has not only facilitated but also accelerated progress. Digital Public Infrastructure (DPI), which powers inclusiveness, fosters innovation, and changes citizens from passive recipients to active participants in the digital age, is emerging as the new civic backbone as we move away from highways and towards high-speed data.
DPI makes it possible for innovation at the margins and for inclusion at scale by providing open-source, interoperable platforms for identities, payments, and data exchange. Examples of how the Global South is evolving from a passive consumer of technology to a creator of globally replicable governance models are India’s Aadhaar (digital identification), UPI (real-time payments), and DigiLocker (data empowerment). As the ‘digital commons’ emerges, DPI does more than simply link users; it also empowers citizens, eliminates inefficiencies from the past, and reimagines the creation and distribution of public value in the digital era.
Securing the Digital Infrastructure: A Contemporary Imperative
We are already inhabitants of the future, standing at the threshold of reform. Digital infrastructure is no longer just a public good; it is now a strategic asset, akin to oil pipelines in the 20th century. India is recognised globally for the introduction of the “India Stack”, which has transformed the face of digital payments. The economic value contributed by DPI to India’s GDP is predicted to reach 2.9-4.2% by 2030, having already reached 0.9% in 2022. Part of this success stems from DPI’s role in India’s economic development: among emerging market economies, it helped propel India to the top of the revenue administrations’ digitalisation index. The other part lies in how India’s social service delivery has changed across the board. By enabling digital and financial inclusion, DPI has increased access to education (DIKSHA) and is presently being extended to offer agricultural (VISTAAR) and digital health (ABDM) services.
Securing the Foundations: Emerging Threats to Digital Public Infrastructure
The rising prominence of DPI is not without risks, as adversarial forces are developing with comparable sophistication. The core underpinnings of public digital systems are the target of a new generation of cyber threats, ranging from hostile state actors to cybercriminal syndicates. These threats pose a serious risk to the government's development endeavours. Modern examples include targeted attacks on biometric databases, AI-based misinformation and psychological warfare, payment-system hacks, state-sponsored malware, cross-border phishing campaigns, surveillance spyware, and sovereign malware.
To secure DPI, a radical rethink beyond encryption methods and perimeter firewalls is needed. It requires an understanding of cybersecurity that is systemic, ethical, and geopolitical. Democracy, inclusivity, and national integrity are all at risk when DPI is compromised. To preserve the confidence and promise of digital public infrastructure, policy frameworks must shift from fragmented responses to coordinated, proactive, and people-centred cyber defence policies.
CyberPeace Recommendations
Powering Progress, Ignoring Protection: A Precarious Path
The Indian government is aware that cyberattacks in the nation are becoming more frequent and sophisticated. To address the nation’s cybersecurity issues, it has implemented a number of legislative, technical, and administrative policy initiatives. While these initiatives are commendable, a few non-negotiables need to be in place for effective protection:
- DPIs must be declared Critical Information Infrastructure. In accordance with the IT Act, 2000, the DPI stack (Aadhaar, UPI, DigiLocker, Account Aggregator, CoWIN, and ONDC) must be designated as Critical Information Infrastructure (CII) and supervised by the NCIIPC, just like the banking, energy, and telecom industries. The NCIIPC should be given the authority to publish mandatory security guidelines, carry out audits, and enforce compliance across the DPI stack, including incident response protocols tailored to each DPI.
- To solidify security, data sovereignty, and cyber responsibility, India should spearhead global efforts to create a Global DPI Cyber Compact through the “One Future Alliance” and the G20. To ensure interoperable cybersecurity frameworks for international DPI projects, promote open standards, cross-border collaboration on threat intelligence, and uniform incident reporting guidelines.
- Establish a DPI Threat Index to monitor vulnerabilities, including phishing attacks, biometric breach attempts, sovereign malware footprints, spikes in AI misinformation, and patterns in payment fraud. Create daily or weekly risk dashboards by integrating data from state CERTs, RBI, UIDAI, CERT-In, and NPCI, and deploy machine learning (ML) driven detection systems.
- Make explainability audits mandatory for AI/ML systems used throughout DPI (e.g., welfare algorithms, credit scoring) to ensure that their decision-making is open, impartial, and subject to scrutiny. Use the recently established IndiaAI Safety Institute, in line with India’s AI mission, to conduct AI audits, establish explainability standards, and create sector-specific compliance guidelines.
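One simple way the proposed DPI Threat Index could combine heterogeneous signals into a single risk score is a weighted composite measured against historical baselines. The signal names, weights, and baseline logic below are assumptions for illustration only, not an official methodology:

```python
# Hypothetical signal weights for a composite DPI Threat Index.
WEIGHTS = {
    "phishing_reports": 0.2,
    "biometric_breach_attempts": 0.3,
    "payment_fraud_cases": 0.3,
    "ai_misinfo_flags": 0.2,
}

def threat_index(signals, baselines):
    """Score current signal counts against historical baselines.
    1.0 means roughly at baseline; higher values mean elevated risk."""
    score = 0.0
    for name, weight in WEIGHTS.items():
        base = max(baselines.get(name, 1), 1)  # avoid division by zero
        score += weight * signals.get(name, 0) / base
    return round(score, 2)

# A week in which every signal doubles over its baseline scores 2.0.
today = {"phishing_reports": 40, "biometric_breach_attempts": 10,
         "payment_fraud_cases": 20, "ai_misinfo_flags": 60}
normal = {"phishing_reports": 20, "biometric_breach_attempts": 5,
          "payment_fraud_cases": 10, "ai_misinfo_flags": 30}
print(threat_index(today, normal))  # 2.0
```

A real index would also need anomaly detection, seasonality adjustment, and agency-specific data pipelines; the point of the sketch is only that a dashboard-ready number can be produced from routinely collected counts.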
References
- https://orfamerica.org/newresearch/dpi-catalyst-private-sector-innovation
- https://www.institutmontaigne.org/en/expressions/indias-digital-public-infrastructure-success-story-world
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2116341
- https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2033389
- https://www.governancenow.com/news/regular-story/dpi-must-ensure-data-privacy-cyber-security-citizenfirst-approach

Introduction
The term ‘super spreader’ refers to social media and digital platform accounts that can transmit information to a significantly large audience in a short duration. The analogy references the medical term, where a small group of individuals rapidly amplifies the spread of an infection across a huge population. The fact that a handful of accounts can impact and influence so many is attributed to factors such as large follower bases, high engagement rates, content attractiveness or virality, and perceived credibility.
Super spreader accounts have become a considerable threat on social media because they are responsible for generating a large amount of low-credibility material online. These individuals or groups may create or disseminate low-credibility content for a number of reasons, ranging from social media fame to garnering political influence, from intentionally spreading propaganda to seeking financial gain. Given the exponential reach of these accounts, identifying, tracing, and categorising them as sources of misinformation can be tricky. It can be equally difficult to recognise the content they spread for the misinformation it actually is.
How Do A Few Accounts Spark Widespread Misinformation?
Recent research suggests that misinformation superspreaders, who consistently distribute low-credibility content, may be the primary drivers of widespread misinformation on many topics. A study[1] by a team of social media analysts at Indiana University examined 10 months of data comprising 2,397,388 tweets flagged as low-credibility, sent by 448,103 users on Twitter (now X), along with details on who was sending them. The study found that it does not take many influencers to sway the beliefs and opinions of large numbers of people, an effect the researchers attribute to superspreaders: approximately a third of the low-credibility tweets were posted from just 10 accounts, and only 1,000 accounts were responsible for approximately 70% of such tweets.[2]
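The concentration the study describes becomes vivid with some back-of-envelope arithmetic on the figures above (the per-account averages are illustrative estimates derived from those figures, not numbers reported by the study itself):

```python
# Figures from the Indiana University study cited above.
total_tweets = 2_397_388   # low-credibility tweets reviewed
total_users = 448_103      # users who sent them

# ~1/3 of the tweets came from 10 accounts; ~70% from 1,000 accounts.
top10_users_pct = 10 / total_users * 100
top1000_users_pct = 1_000 / total_users * 100
avg_top10 = (total_tweets / 3) / 10        # tweets per top-10 account
avg_overall = total_tweets / total_users   # tweets per user overall

print(f"Top 10 accounts are {top10_users_pct:.4f}% of users")
print(f"Top 1,000 accounts are {top1000_users_pct:.2f}% of users")
print(f"~{avg_top10:,.0f} low-credibility tweets per top-10 account "
      f"vs ~{avg_overall:.1f} per user overall")
```

In other words, roughly 0.002% of the user base produced a third of the low-credibility tweets, which is precisely why targeting superspreaders is a disproportionately effective intervention.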
Case Study
- How Misinformation ‘Superspreaders’ Seed False Election Theories
During the 2020 U.S. presidential election, a small group of "repeat spreaders" aggressively pushed false election claims across various social media platforms for political gain, which even led to rallies and radicalisation in the U.S.[3] Superspreader accounts were responsible for disseminating a disproportionately large amount of election-related misinformation, influencing public opinion and potentially undermining the electoral process.
In the domestic context, India was ranked highest for the risk of misinformation and disinformation according to experts surveyed for the World Economic Forum’s 2024 Global Risk Report. In today's digital age, misinformation, deep fakes, and AI-generated fakes pose a significant threat to the integrity of elections and democratic processes worldwide. With 64 countries conducting elections in 2024, the dissemination of false information carries grave implications that could influence outcomes and shape long-term socio-political landscapes. During the 2024 Indian elections, we witnessed a notable surge in deepfake videos of political personalities, raising concerns about the influence of misinformation on election outcomes.
- Role of Superspreaders During Covid-19
Clarity in public health communication is important, because any grey areas or gaps in information can be manipulated very quickly. During the COVID-19 pandemic, misinformation related to the virus, vaccines, and public health measures spread rapidly on social media platforms, including Twitter (now X). Some prominent accounts and popular pages on platforms like Facebook and Twitter (now X) were identified as superspreaders of COVID-19 misinformation, contributing to public confusion and potentially hindering efforts to combat the pandemic.
According to the Center for Countering Digital Hate (US), the "Disinformation Dozen", a group of 12 prominent anti-vaccine accounts[4], was found to be responsible for a large share of the anti-vaccine content circulating on social media platforms, highlighting the significant role of superspreaders in shaping public perceptions and behaviours during a health crisis.
There are also incidents where users unknowingly spread misinformation by forwarding content that did not originate with them but is propagated by amplifiers via other sources, websites, or YouTube videos. These intermediary sharers amplify the messages on their own pages, which is where the content takes off. Such users do not create or deliberately popularise the misinformation, but their broad reach exposes many more people to it. This was observed during the pandemic, when a handful of people created a heavy digital impact by sharing vaccine- and virus-related misinformation.
- Role of Superspreaders in Influencing Investments and Finance
Misinformation and rumours in finance may have a considerable influence on stock markets, investor behaviour, and national financial stability. Individuals or accounts with huge followings or influence in the financial niche can operate as superspreaders of erroneous information, potentially leading to market manipulation, panic selling, or incorrect impressions about individual firms or investments.
Superspreaders in the finance domain can cause market volatility, affect investor confidence, and even trigger regulatory responses to address the spread of false information that may harm market integrity. In fact, there has been a rise in deepfake videos and fake endorsements, with multiple social media profiles providing unsanctioned investment advice and directing followers to particular channels, leading investors into dangerous financial decisions. The problem intensifies when scammers employ deepfake videos of notable personalities to boost their credibility and shape people's financial decisions.
Bots and Misinformation Spread on Social Media
Bots are automated accounts that are designed to execute certain activities, such as liking, sharing, or retweeting material, and they can broaden the reach of misinformation by swiftly spreading false narratives and adding to the virality of a certain piece of content. They can also artificially boost the popularity of disinformation by posting phony likes, shares, and comments, making it look more genuine and trustworthy to unsuspecting users. Bots can exploit social network algorithms by establishing false identities that interact with one another and with real users, increasing the spread of disinformation and pushing it to the top of users' feeds and search results.
Bots can exploit current topics or hashtags to introduce misinformation into popular conversations, allowing misleading information to gain traction and reach a broader audience. They can contribute to the construction of echo chambers, in which users are exposed to a narrow range of perspectives and information, exacerbating the spread of disinformation inside restricted online groups. Incidents have also been reported in which bots were found to be sharing content from low-credibility sources.
Bots are frequently employed as part of planned misinformation campaigns designed to propagate false information for political, ideological, or commercial gain. By automating the distribution of misleading information, bots can make it very difficult to trace the misinformation back to its source. Understanding how bots work and their influence on information ecosystems is critical for combatting disinformation and increasing digital literacy among social media users.
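A crude illustration of how bot-like behaviour can be surfaced is a heuristic over posting rate and content repetition. The thresholds below are hypothetical, and production systems combine many more signals (account age, network structure, coordination across accounts):

```python
from collections import Counter

def looks_bot_like(timestamps, texts,
                   max_posts_per_hour=30, max_repeat_ratio=0.5):
    """Flag an account whose posting rate or content repetition looks
    implausibly high. Thresholds are illustrative, not industry standards."""
    if not timestamps:
        return False
    # Observation window in hours (floor of one minute avoids divide-by-zero).
    hours = max((max(timestamps) - min(timestamps)) / 3600, 1 / 60)
    rate = len(timestamps) / hours
    # Share of posts taken up by the single most-repeated message.
    most_repeated = Counter(texts).most_common(1)[0][1]
    repeat_ratio = most_repeated / len(texts)
    return rate > max_posts_per_hour or repeat_ratio > max_repeat_ratio

# 40 copies of the same link in ten minutes is flagged;
# a slow-paced, varied account is not.
burst = looks_bot_like(list(range(0, 600, 15)), ["same link"] * 40)
human = looks_bot_like([0, 4000, 9000, 15000], ["a", "b", "c", "d"])
```

Even a heuristic this simple shows why volume and repetition are the first signals analysts examine when attributing amplification to automation rather than organic sharing.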
CyberPeace Policy Recommendations
- Recommendations/Advisory for Netizens:
- Educating oneself: Netizens need to stay informed about current events, reliable fact-checking sources, misinformation counter-strategies, and common misinformation tactics, so that they can verify potentially problematic content before sharing.
- Recognising the threats and vulnerabilities: It is important for netizens to understand the consequences of spreading or consuming inaccurate information, fake news, or misinformation. Netizens must be cautious of sensationalised content spreading on social media, as it may attempt to provoke strong reactions or mould public opinion. Netizens should question the credibility of information, verify its sources, and develop the cognitive skills to identify low-credibility content and counter misinformation.
- Practice caution and skepticism: Netizens are advised to develop a healthy skepticism towards online information, and critically analyse the veracity of all information sources. Before spreading any strong opinions or claims, one must seek supporting evidence, factual data, and expert opinions, and verify and validate claims with reliable sources or fact-checking entities.
- Good netiquette on the Internet, thinking before forwarding any information: It is important for netizens to practice good netiquette in the online information landscape. One must exercise caution while sharing any information, especially if the information seems incorrect, unverified or controversial. It's important to critically examine facts and recognise and understand the implications of sharing false, manipulative, misleading or fake information/content. Netizens must also promote critical thinking and encourage their loved ones to think critically, verify information, seek reliable sources and counter misinformation.
- Adopting and promoting Prebunking and Debunking strategies: Prebunking and debunking are two effective strategies to counter misinformation. Netizens are advised to engage in sharing only accurate information and do fact-checking to debunk any misinformation. They can rely on reputable fact-checking experts/entities who are regularly engaged in producing prebunking and debunking reports and material. Netizens are further advised to familiarise themselves with fact-checking websites, and resources and verify the information.
- Recommendations for tech/social media platforms
- Detect, report and block malicious accounts: Tech/social media platforms must implement strict user authentication mechanisms to verify account holders' identities and minimise the creation of fraudulent or malicious accounts. This is imperative to weed out suspicious accounts, misinformation superspreader accounts, and bot accounts. Platforms must be capable of analysing public content, especially viral or suspicious content, to ascertain whether it is misleading, AI-generated, or fake. Upon detection, platform operators must block malicious/superspreader accounts. The same approach must apply to other community guideline violations as well.
- Algorithm Improvements: Tech/social media platform operators must develop and deploy advanced algorithm mechanisms to detect suspicious accounts and recognise repetitive posting of misinformation. They can utilise advanced algorithms to identify such patterns and flag any misleading, inaccurate, or fake information.
- Dedicated Reporting Tools: It is important for the tech/social media platforms to adopt robust policies to take action against social media accounts engaged in malicious activities such as spreading misinformation, disinformation, and propaganda. They must empower users on the platforms to flag/report suspicious accounts, and misleading content or misinformation through user-friendly reporting tools.
- Holistic Approach: The battle against online mis/disinformation necessitates a thorough examination of the processes through which it spreads. This involves investing in information literacy education, modifying algorithms to provide exposure to varied viewpoints, and detecting malevolent bots that spread misleading information. Social media sites can employ similar algorithms internally to eliminate accounts that appear to be bots. All stakeholders must encourage digital literacy efforts that enable consumers to critically analyse information, verify sources, report suspect content, and implement prebunking and debunking strategies. These efforts can be further supported by collaboration with relevant entities such as cybersecurity experts, fact-checking entities, researchers, policy analysts, and the government to combat misinformation warfare on the Internet.
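One building block behind "recognising repetitive posting" (mentioned in the algorithm recommendations above) is collapsing trivially varied copies of the same message to a single fingerprint before counting. This sketch uses simple normalisation plus hashing; it is an assumption about how such a detector might work, not any platform's actual method:

```python
import hashlib
import re

def content_fingerprint(text):
    """Collapse trivially varied copies of the same claim to one key by
    normalising case, punctuation, and whitespace before hashing."""
    norm = re.sub(r"[^a-z0-9 ]", "", text.lower())
    norm = " ".join(norm.split())
    return hashlib.sha256(norm.encode()).hexdigest()

def flag_repetitive(posts, threshold=3):
    """Return the fingerprints of any message posted `threshold`+ times."""
    counts = {}
    for post in posts:
        fp = content_fingerprint(post)
        counts[fp] = counts.get(fp, 0) + 1
    return {fp for fp, n in counts.items() if n >= threshold}
```

Under this scheme, "Vaccines are UNSAFE!!", "vaccines are unsafe", and "Vaccines... are unsafe" all share one fingerprint, so an account reposting light variants of the same claim is counted once per variant family rather than slipping past an exact-match filter. Real systems go further with fuzzy matching (e.g., locality-sensitive hashing) to catch paraphrases.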
References:
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201 [1]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [2]
- https://phys.org/news/2024-05-superspreaders-responsible-large-portion-misinformation.html [3]
- https://counterhate.com/research/the-disinformation-dozen/ [4]
- https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
- https://www.wbur.org/onpoint/2021/08/06/vaccine-misinformation-and-a-look-inside-the-disinformation-dozen
- https://healthfeedback.org/misinformation-superspreaders-thriving-on-musk-owned-twitter/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/
- https://www.jmir.org/2021/5/e26933/
- https://www.yahoo.com/news/7-ways-avoid-becoming-misinformation-121939834.html