#FactCheck - Debunked: Viral Video Falsely Claims Allu Arjun Joins Congress Campaign
Executive Summary
The viral video, in which South Indian actor Allu Arjun appears to support the Congress party's campaign for the upcoming Lok Sabha election, suggests that he has joined the party. Over the course of its investigation, the CyberPeace Research Team found that the video is actually a close-up of Allu Arjun marching as the Grand Marshal of the 2022 India Day Parade in New York, held to celebrate India's 75th Independence Day. Reverse image searches, Allu Arjun's official YouTube channel, news coverage, and stock image websites all corroborate this. Thus, it has been firmly established that the claim that Allu Arjun is part of a Congress party campaign is fabricated and misleading.

Claims:
The viral video alleges that South Indian actor Allu Arjun is using his popularity and star status to campaign for the Congress party in the upcoming 2024 Lok Sabha elections.

Fact Check:
Initially, after hearing the news, we conducted a quick keyword search for reports of actor Allu Arjun joining the Congress party but found nothing related to this claim. We did, however, find a video posted by SoSouth on February 20, 2022, of Allu Arjun's father-in-law, Kancharla Chandrasekhar Reddy, joining the Congress after quitting former chief minister K Chandrasekhar Rao's party.

Next, we segmented the video into keyframes and reverse-searched one of the images, which led us to the Federation of Indian Association website. It states that the picture is from the 2022 India Day Parade. The picture closely resembles the viral video, and comparing the two helps determine whether they are from the same event.
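The keyframe step above can be illustrated with a toy sketch. This is not the tooling we used: real keyframe extraction decodes an actual video (for example with ffmpeg or OpenCV), whereas here each "frame" is just a list of grayscale pixel values, purely to show the idea of keeping a frame only when it differs enough from the last kept one.

```python
# Toy keyframe extraction: keep a frame when its average pixel
# difference from the previously kept frame exceeds a threshold.
def extract_keyframes(frames, threshold=10.0):
    keyframes = []          # indices of kept frames
    last = None             # last kept frame
    for i, frame in enumerate(frames):
        if last is None:
            keyframes.append(i)
            last = frame
            continue
        # mean absolute per-pixel difference against the last keyframe
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keyframes.append(i)
            last = frame
    return keyframes

# Synthetic "video": two static frames, a scene change, then another change.
static = [0] * 16
bright = [200] * 16
indices = extract_keyframes([static, static, bright, bright, static])
print(indices)  # [0, 2, 4]: the first frame plus each scene change
```

Each index returned is then a candidate image to feed into a reverse image search.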

Taking a cue from this, we performed another keyword search for “India Day Parade 2022”. We found a video uploaded on Allu Arjun's official YouTube channel; it is the same video that has recently been shared on social media with a different context. The caption of the original video reads, “Icon Star Allu Arjun as Grand Marshal @ 40th India Day Parade in New York | Highlights | #IndiaAt75”.

The reverse image search yielded further evidence: we found the image on Shutterstock, where the photo's description reads, “NYC India Day Parade, New York, NY, United States - 21 Aug 2022 Parade Grand Marshall Actor Allu Arjun is seen on a float during the annual Indian Day Parade on Madison Avenue in New York City on August 21, 2022.”

With this, we concluded that the claim made in the viral video, that Allu Arjun is supporting the 2024 Lok Sabha election campaign, is baseless and false.
Conclusion:
The viral video circulating on social media has been taken out of context. The clip, which shows Allu Arjun's participation in the 2022 India Day Parade, is unrelated to the ongoing election campaign of any political party.
Hence, the assertion that Allu Arjun is campaigning for the Congress party is false and misleading.
- Claim: A video, which has gone viral, says that actor Allu Arjun is rallying for the Congress party.
- Claimed on: X (Formerly known as Twitter) and YouTube
- Fact Check: Fake & Misleading

Introduction
In the labyrinthine world of cybersecurity, a new spectre has emerged from the digital ether, casting a long shadow over the seemingly impregnable orchards of Apple's macOS. This phantom, known as SpectralBlur, is a backdoor so cunningly crafted that it remained shrouded in the obscurity of cyberspace, undetected by the vigilant eyes of antivirus software until its recent unmasking. The discovery of SpectralBlur is not just a tale of technological intrigue but a narrative that weaves together the threads of geopolitical manoeuvring, the relentless pursuit of digital supremacy, and the ever-evolving landscape of cyber warfare.
SpectralBlur, a term that conjures images of ghostly interference and elusive threats, is indeed a fitting moniker for this new macOS backdoor threat. Cybersecurity researchers have peeled back the layers of the digital onion to reveal a moderately capable backdoor that can upload and download files, execute shell commands, update its configuration, delete files, and enter states of hibernation or sleep, all at the behest of a remote command-and-control server. Greg Lesnewich, a security researcher whose name has become synonymous with the relentless pursuit of digital malefactors, has shed light on this new threat that overlaps with a known malware family attributed to the enigmatic North Korean threat actors.
SpectralBlur similar to Lazarus Group’s KANDYKORN
The malware shares its DNA with KANDYKORN, also known as SockRacket, an advanced implant that functions as a remote access trojan capable of taking control of a compromised host. It is a digital puppeteer, pulling the strings of infected systems with a malevolent grace. The KANDYKORN activity also intersects with another campaign orchestrated by the Lazarus sub-group known as BlueNoroff, or TA444, which culminates in the deployment of a backdoor referred to as RustBucket and a late-stage payload dubbed ObjCShellz.
Recently, the threat actor has been observed combining disparate pieces of these two infection chains, leveraging RustBucket droppers to deliver KANDYKORN. This latest finding is another sign that North Korean threat actors are increasingly setting their sights on macOS to infiltrate high-value targets, particularly those within the cryptocurrency and blockchain industries. 'TA444 keeps running fast and furious with these new macOS malware families,' Lesnewich remarked, painting a picture of a relentless adversary in the digital realm.
Patrick Wardle, a security researcher whose insights into the inner workings of SpectralBlur have further illuminated the threat landscape, noted that the Mach-O binary was uploaded to the VirusTotal malware scanning service in August 2023 from Colombia. The functional similarities between KANDYKORN and SpectralBlur raise the possibility that they were built by different developers working from the same requirements. What makes the malware stand out are its attempts to hinder analysis and evade detection, and its use of grantpt() to set up a pseudo-terminal and execute shell commands received from the C2 server.
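To make the pseudo-terminal detail concrete: on POSIX systems a pty pair is allocated via posix_openpt()/grantpt()/unlockpt(), and anything written to the slave end can be read back from the master end, which is how a controlling process can drive an interactive shell and capture its output. A minimal, harmless sketch using Python's standard-library pty module (this is illustrative, not SpectralBlur's code):

```python
import os
import pty

# Allocate a master/slave pseudo-terminal pair. Under the hood this uses
# the same POSIX calls (posix_openpt, grantpt, unlockpt) that native
# code such as the backdoor would make.
master_fd, slave_fd = pty.openpty()

# Data written to the slave end shows up on the master end; a shell
# attached to the slave would have its output captured the same way.
os.write(slave_fd, b"hello from the pty\n")
output = os.read(master_fd, 1024)

os.close(master_fd)
os.close(slave_fd)
print(output)
```

Note that the terminal line discipline may translate the trailing newline to a carriage-return/newline pair, so the bytes read on the master end start with the original message but may not match it exactly.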
The disclosure comes as 21 new malware families designed to target macOS systems, including ransomware, information stealers, remote access trojans, and nation-state-backed malware, were discovered in 2023, up from 13 identified in 2022. 'With the continued growth and popularity of macOS (especially in the enterprise!), 2024 will surely bring a bevvy of new macOS malware,' Wardle noted, his words a harbinger of the digital storms on the horizon.
Hackers are stepping up their efforts to target Macs, as security researchers have discovered a brand-new macOS backdoor that appears to have ties to another recently identified Mac malware strain. As reported by SecurityWeek, this new Mac malware has been dubbed SpectralBlur, and although it was uploaded to VirusTotal back in August of last year, it remained undetected by antivirus software until it recently caught the attention of Proofpoint's Greg Lesnewich.
Lesnewich explained that SpectralBlur has capabilities similar to other backdoors: it can upload and download files, delete files, and hibernate or sleep when commanded by a hacker-controlled command-and-control (C2) server. What is surprising about this new Mac malware strain, though, is that it shares similarities with the KandyKorn macOS backdoor created by the infamous North Korean hacking group Lazarus.
Just like SpectralBlur, KandyKorn is designed to evade detection while giving the hackers behind it the ability to monitor and control infected Macs. Although different, these two Mac malware strains appear to have been built from the same requirements. Once installed on a vulnerable Mac, SpectralBlur executes a function that encrypts and decrypts its network traffic to help it avoid detection. It can also erase files after opening them, overwriting the data they contain with zeros.
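The zero-overwrite trick described above is simple to picture. Here is a toy Python sketch (our own illustration, not SpectralBlur's actual code) that overwrites a file's bytes with zeros in place before deleting it, so the original contents are harder to recover from disk:

```python
import os
import tempfile

def zero_overwrite(path: str) -> None:
    """Overwrite every byte of the file at `path` with zeros in place."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # replace the original bytes with zeros
        f.flush()
        os.fsync(f.fileno())      # push the zeros out to disk

# Demo on a throwaway temp file.
fd, demo = tempfile.mkstemp()
os.write(fd, b"sensitive contents")
os.close(fd)

zero_overwrite(demo)
with open(demo, "rb") as f:
    wiped = f.read()              # now all zero bytes, same length
os.remove(demo)
```

On modern journaling and copy-on-write filesystems a single in-place overwrite is not a guaranteed secure erase, but it illustrates the anti-forensic intent.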
How to keep your Apple computers safe from hackers
As with the best iPhones, keeping your Mac up to date is the easiest and most important way to keep it safe from hackers. Hackers often prey on users who haven’t updated their devices to the latest software as they can exploit unpatched vulnerabilities and security flaws.
Checking to see if you're running the latest macOS version is quite easy. Just click on the Apple logo in the top left corner of your screen, head to System Preferences and then click on Software Update. If you need a bit more help, check out our guide on how to update a Mac for more detailed instructions with pictures.
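Under the hood, "am I up to date?" is just a version comparison. A minimal sketch of that check (the version numbers below are illustrative, not live data; on a real Mac the installed version would come from `sw_vers -productVersion`):

```python
def version_tuple(v: str) -> tuple:
    """Turn a dotted version string like '14.2.1' into (14, 2, 1)."""
    return tuple(int(part) for part in v.split("."))

def is_up_to_date(installed: str, latest: str) -> bool:
    """True when the installed version is at least the latest release."""
    return version_tuple(installed) >= version_tuple(latest)

print(is_up_to_date("14.2.1", "14.3"))  # False: an update is needed
print(is_up_to_date("14.3", "14.3"))    # True: already current
```

Comparing tuples rather than raw strings avoids the classic pitfall where "14.10" sorts before "14.9" lexicographically.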
Even though your Mac has its own built-in malware scanner from Apple, called XProtect, you should consider using a dedicated Mac antivirus solution for additional protection. Paid antivirus software is often updated more frequently, and you often also get access to extras that help keep you safe online, such as a password manager or a VPN.
Besides updating your Mac frequently and using antivirus software, you must be careful online. This means sticking to trusted online retailers, carefully checking the URLs of the websites you visit and avoiding opening links and attachments sent to you via email or social media from people you don’t know. Likewise, you should also learn how to spot a phishing scam to know which emails you want to delete right away.
Conclusion
The thing about hackers and other cybercriminals is that they are constantly evolving their tactics and attack methods. This helps them avoid detection and devise brand-new ways to trick ordinary people. Given the surge in Mac malware last year, though, Apple will likely be working on beefing up XProtect and macOS to better defend against these new threats.
References
- https://www.scmagazine.com/news/new-macos-malware-spectralblur-idd-as-north-korean-backdoor
- https://www.tomsguide.com/news/this-new-macos-backdoor-lets-hackers-take-over-your-mac-remotely-how-to-stay-safe
- https://thehackernews.com/2024/01/spectralblur-new-macos-backdoor-threat.html

Introduction
Meta is the leader among social media platforms and has built a widespread network of users and services across global cyberspace. The company has been revolutionising messaging and connectivity since 2004. Its platforms have brought people closer together, but that very popularity is also a problem: popular platforms are frequently used by cybercriminals to gain unauthorised access to data or to create chatrooms that preserve anonymity and prevent tracking. These bad actors often operate under fake names or accounts so that they are not caught. Platforms like Facebook and Instagram have often been in the headlines as portals where cybercriminals operate and commit crimes.
To keep netizens' data safe and secure, Paytm, in a first-of-its-kind service, is offering customers protection against cyber fraud through an insurance policy covering fraudulent mobile transactions of up to Rs 10,000 for a premium of Rs 30. The cover, ‘Paytm Payment Protect’, is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase trust in digital payments, which will push up adoption.
Meta’s Cybersecurity
Meta has some of the best cybersecurity in the world, but that doesn't mean it cannot be breached. The social media giant is especially vulnerable to data breaches because various third parties are also involved. As seen in the case of Cambridge Analytica, a huge chunk of user data was made available to influence users during elections. Meta needs to stay ahead of the curve to keep its platform safe and secure. To that end, it has deployed various AI- and ML-driven crawlers and tools that work on keeping the platform safe for its users while identifying accounts that may be used by bad actors, which are then removed. This is supported by users' keen participation through the reporting mechanism. Meta-Cyber provides visibility of all OT activities, continuously observes PLCs and SCADA systems for changes and configuration, and checks authorisations and their levels. Meta also runs various penetration-testing and bug bounty programs to reduce vulnerabilities in its systems and applications; these testers are paid according to the scope and severity of the vulnerabilities they find.
CyberRoot Risk Investigation
Social media giant Meta has taken down over 40 accounts operated by the Indian firm CyberRoot Risk Analysis, allegedly involved in hack-for-hire services. Alongside this, Meta took down 900 fraudulently run accounts said to be operated from China by an unknown entity. CyberRoot Risk Analysis shared malware over the platform and impersonated its targets, such as lawyers, doctors, and entrepreneurs, and people in industries including cosmetic surgery, real estate, investment firms, pharmaceuticals, and private equity, as well as environmental and anti-corruption activists. They would get in touch with such individuals and share malware hidden in files, which would often lead to data breaches and, subsequently, various cybercrimes.
Meta and its teams are working tirelessly to eradicate the influence of such bad actors from their platforms, and their use of AI- and ML-based tools has increased exponentially.
Paytm CyberFraud Cover
Paytm is offering customers protection against cyber fraud through an insurance policy available for fraudulent mobile transactions up to Rs 10,000 for a premium of Rs 30. The cover ‘Paytm Payment Protect’ is provided through a group insurance policy issued by HDFC Ergo. The company said that the plan is being offered to increase the trust in digital payments, which will push up adoption. The insurance cover protects transactions made through UPI across all apps and wallets. The insurance coverage has been obtained by One97 Communications, which operates under the Paytm brand.
The exponential increase in the use of digital payments during the pandemic has made more people susceptible to cyber fraud. While UPI has digital safeguards in place, most UPI-related frauds are carried out by confidence tricksters who get their victims to authorise a transaction by passing off collect requests as payments. Many fraudsters also collect payments by pretending to be merchants. Such frauds resulted in losses of more than Rs 63 crore in the previous financial year. The idea of insuring against cyber fraud is new to India but is the need of the hour: the majority of netizens are unaware of the value of their data and so remain ignorant of data protection. Steps like this will result in safer data management and protection mechanisms, thus safeguarding Indian cyberspace.
Conclusion
Cyberspace is at a critical juncture in terms of data protection and privacy; with new legislation on the way, we can expect stronger policies to prevent cybercrimes and cyber-attacks. Efforts by tech giants like Meta need to gain more speed in improving the cyber safety of both platform and user, so that the platforms' future remains secure. The concept of data insurance needs to be shared with netizens to increase awareness of the subject. Paytm's initiative could prove monumental, encouraging more platforms and banks to commit to coverage for cybercrimes. With cybercrime cases increasing, such financial coverage comes as a ray of hope and security for netizens.

Introduction
In September 2025, social media feeds were flooded with strikingly vintage saree-style portraits. These images were not taken by professional photographers; they were AI-generated. More than a million people turned to Google Gemini's "Nano Banana" AI tool, uploading their ordinary selfies and watching them transform into cinematic, 1990s Bollywood-style posters. The popularity of this trend is evident, as are the concerns of law enforcement agencies and cybersecurity experts about the risks of privacy infringement, unauthorised data sharing, and deepfake misuse.
What is the Trend?
The AI saree trend is created using Google Gemini's Nano Banana image-editing tool, which edits and morphs uploaded selfies into glitzy vintage portraits in traditional Indian attire. A user uploads a clear photograph of a solo subject and enters prompts to generate images with cinematic backgrounds, flowing chiffon sarees, golden-hour ambience, and grainy film texture, reminiscent of classic Bollywood imagery. Since its launch, the tool has processed over 500 million images, with the saree trend marking one of its most popular uses. Photographs are uploaded to an AI system, which uses machine learning to alter the pictures according to the specified description. Users then share the transformed AI portraits on Instagram, WhatsApp, and other social media platforms, contributing to the trend's viral spread.
Law Enforcement Agency Warnings
- A few Indian police agencies have issued strong advisories against participating in such trends. IPS officer VC Sajjanar warned the public: "The uploading of just one personal photograph can make greedy operators go from clicking their fingers to joining hands with criminals and emptying one's bank account." His advisory further warned that sharing personal information through trending apps can lead to scams and fraud.
- Jalandhar Rural Police issued a comprehensive warning stating that such applications put users at risk of identity theft and online fraud when personal pictures are uploaded. A senior police officer stated: "Once sensitive facial data is uploaded, it can be stored, analysed, and even potentially misused to open the way for cyber fraud, impersonation, and digital identity crimes."
- The Cyber Crime Police also put out warnings on social media platforms about how photo applications appear entertaining but can pose serious risks to user privacy. They specifically warned that uploaded selfies can lead to data misuse, deepfake creation, and the generation of fake profiles, offences punishable under Sections 66C and 66D of the IT Act, 2000.
Consequences of Such Trends
The mass adoption of AI photo trends has several severe effects on individual users and society as a whole. Identity theft and fraud are the main concerns, as uploaded biometric information can be used by hackers to generate imitation identities, evade security measures, or commit financial fraud. The facial recognition data shared through these trends remains a digital asset that can be abused years after the trend has passed. Deepfake production is another serious threat, because personal images shared on AI platforms can be used to create non-consensual synthetic media. Studies have found that more than 95,000 deepfake videos circulated online in 2023 alone, a 550% increase from 2019. Uploaded images can be leveraged to produce embarrassing or harmful content that damages personal reputations, relationships, and career prospects.
Financial exploitation also occurs when fake applications posing as genuine AI tools strip users of their personal data and financial details. Such malicious platforms tend to mimic well-known services to trick users into divulging sensitive information. Long-term privacy infringement also arises from the permanent retention and possible commercial exploitation of personal biometric information by AI firms, even after users close their accounts.
Privacy Risks
A few months ago, the Ghibli trend went viral, and now this new trend has taken over. Such trends may expose users to several layers of privacy threats that go far beyond the instant gratification of pleasing images. Harvesting of biometric data is the most critical issue, since facial recognition information posted on these sites becomes inextricably linked to user identities. Under Google's privacy policy for Gemini tools, uploaded images may be stored temporarily for processing and kept for longer periods if used for feedback or feature development.
Illegal data sharing happens when AI platforms provide user-uploaded content to third parties without user consent. A 2023 Mozilla Foundation study found that 80% of popular AI apps either had non-transparent data policies or obscured users' ability to opt out of data gathering. This creates opportunities for personal photographs to be shared with unknown entities for commercial use. Training-data exploitation involves using uploaded personal photos to improve AI models without notifying or compensating users. Although Google lets users turn off data sharing in its privacy settings, most users are unaware of these options. Cross-platform data integration increases privacy threats when AI applications draw on data from linked social media profiles, building detailed user profiles that can be exploited for targeted manipulation or fraud. The inadequacy of informed consent remains a major problem, with users joining trends unaware of the full extent of the information they are sharing. Studies show that 68% of individuals are concerned about the misuse of AI app data, yet 42% use these apps without reading the terms and conditions.
CyberPeace Expert Recommendations
While the Google Gemini image trend feature operates under its own terms and conditions, it is important to remember that many other tools and applications allow users to generate similar content. Not every platform can be trusted without scrutiny, so users who engage in such trends should do so only on trustworthy platforms and make reliable, informed choices. Above all, following cybersecurity best practices and digital security principles remains essential.
Here are some best practices:
1. Immediate Protection Measures for Users
In a nutshell, protecting personal information begins with not uploading high-resolution personal photos to AI-based applications, especially those built around facial recognition. Instead, one can experiment with stock images or non-identifiable pictures that satisfy the app's creative features without compromising biometric security. Strong privacy settings should be configured on every social media platform and AI app to limit who can access one's data and content.
2. Organisational Safeguards
AI governance frameworks within organisations should set out policies for employees' use of AI tools, particularly concerning the upload of personal data. Companies should carry out due diligence before adopting a commercially available AI product, to ensure it meets the company's privacy and security requirements. Training should educate employees about deepfake technology.
3. Technical Protection Strategies
Deepfake detection software should be used. Tools such as Microsoft Video Authenticator, Intel FakeCatcher, and Sensity AI allow real-time detection with accuracy higher than 95%. Blockchain-based content verification can create tamper-proof records of original digital assets, making it far more difficult to pass off deepfake content as original.
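The tamper-evident record idea behind such verification can be reduced to a few lines: chain content hashes so that altering any earlier item changes every later record. A toy sketch of the concept (illustrative only, not a production blockchain or any specific vendor's system):

```python
import hashlib

def chain_records(items: list) -> list:
    """Return a hash chain: each record commits to the previous one."""
    records = []
    prev = "0" * 64  # genesis value before any content exists
    for item in items:
        # Each digest covers the previous digest plus the new content,
        # so changing any earlier item invalidates every later record.
        digest = hashlib.sha256(prev.encode() + item).hexdigest()
        records.append(digest)
        prev = digest
    return records

originals = [b"photo-v1", b"photo-v2"]
ledger = chain_records(originals)

# Tampering with the first item changes every subsequent record too.
tampered = chain_records([b"photo-X", b"photo-v2"])
```

Real provenance systems add signatures, timestamps, and distributed storage on top, but the chained-hash core is what makes after-the-fact substitution detectable.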
4. Policy and Awareness Initiatives
For high-risk transactions, especially in banks and identity verification systems, authentication should include voice and face liveness checks to ensure the person is real and not using fake or manipulated media. Implement digital literacy programs to empower users with knowledge about AI threats, deepfake detection techniques, and safe digital practices. Companies should also liaise with law enforcement, reporting purported AI crimes, thus offering assistance in combating malicious applications of synthetic media technology.
5. Addressing Data Transparency and Cross-Border AI Security
Regulatory systems should require transparency in AI applications' data policies and give users rights and choices over biometric and other data. Indigenous AI development addressing India-centric privacy concerns must be promoted, ensuring that AI models are built in a secure, transparent, and accountable manner. On cross-border AI security concerns, international cooperation is needed to set common standards for the ethical design, production, and use of AI. Viral AI phenomena such as the saree editing trend illustrate both the potential and the hazards of the present generation of artificial intelligence. While such tools offer new creative opportunities, they also pose grave privacy and security concerns that users, organisations, and policy-makers must take seriously. By setting up comprehensive protection mechanisms and keeping an active eye on digital privacy, individuals and institutions can reap the benefits of AI innovation without falling prey to malicious exploitation.
References
- https://www.hindustantimes.com/trending/amid-google-gemini-nano-banana-ai-trend-ips-officer-warns-people-about-online-scams-101757980904282.html
- https://www.moneycontrol.com/news/india/viral-banana-ai-saree-selfies-may-risk-fraud-warn-jalandhar-rural-police-13549443.html
- https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
- https://socradar.io/top-10-ai-deepfake-detection-tools-2025/