#FactCheck - Manipulated Image Alleging Disrespect Towards PM Circulates Online
Executive Summary:
A manipulated image showing someone making an offensive gesture towards Prime Minister Narendra Modi is circulating on social media. However, the original photo does not show any such behaviour towards the Prime Minister. The CyberPeace Research Team analysed the image and found that the genuine photo was published in a Hindustan Times article in May 2019, where no rude gesture is visible. A comparison of the viral and authentic images clearly shows the manipulation. The Hitavada and ABP Live also published the same image in 2019.

Claims:
A picture showing an individual making a derogatory gesture towards Prime Minister Narendra Modi is being widely shared across social media platforms.



Fact Check:
Upon receiving the claim, we immediately ran a reverse image search and found a Hindustan Times article where a similar photo was published, with no sign of any obscene gesture directed at PM Modi.

ABP Live and The Hitavada also published the same image on their websites in May 2019.


Comparing the viral photo with the photo found on official news websites, we found that the two are identical except for the derogatory gesture present only in the viral image.

From this, we conclude that someone took the original image, published in May 2019, edited in a disrespectful hand gesture, and circulated the doctored version, which has recently gone viral across social media and has no connection with reality.
Conclusion:
In conclusion, the manipulated picture circulating online showing someone making a rude gesture towards Prime Minister Narendra Modi has been debunked by the CyberPeace Research Team. The viral image is simply an edited version of the original image published in 2019. This demonstrates the need for all social media users to verify information and facts before sharing, to prevent the spread of fake content. Hence, the viral image is fake and misleading.
- Claim: A picture shows someone making a rude gesture towards Prime Minister Narendra Modi
- Claimed on: X, Instagram
- Fact Check: Fake & Misleading

Introduction
In the dynamic intersection of pop culture and technology, an unexpected drama unfolded in the virtual world when searches for the iconic Taylor Swift were temporarily blocked on X. The incident sent a shockwave through the online community, sparking debates and speculation about the misuse of deepfake technology.
Searches for Taylor Swift on X have since been restored after the temporary block, which followed outrage over explicit AI-generated images of her. The social media site, formerly known as Twitter, had restricted the searches as a stopgap measure to address a flood of AI-generated deepfake images that went viral across X and other platforms.
X said it is actively removing the images and taking appropriate action against the accounts responsible for spreading them. While Swift has not spoken publicly about the fake images, a report stated that her team is "considering legal action" against the site that published the AI-generated images.
The Social Media Frenzy
As news of the temporary block spread like wildfire across social media platforms, users reacted in a frenzy. The fake picture was reshared 24,000 times and liked by tens of thousands of users. This engagement supercharged the deepfake image of Taylor Swift, and by the time moderators intervened, it was too late: hundreds of accounts had begun reposting it, turning it into an online trend and pushing the AI-generated imagery to an even larger audience. The original source of the photograph remains unknown. The revelations caused outrage, with American lawmakers from across party lines expressing shock and astonishment.
AI Deepfake Controversy
The deepfake controversy is not new. Rashmika Mandanna, Sachin Tendulkar, and now Taylor Swift have all been victims of such misuse of deepfake technology. The world faces growing concern about the misuse of AI and deepfakes, and with no proactive measures in place, the threat will only worsen, deepening privacy risks for individuals. The incident has opened a debate among users and industry experts on the ethical use of AI in the digital age and its privacy implications.
Why has the incident raised privacy concerns?
The emergence of Taylor Swift's deepfake has raised privacy concerns for several reasons.
- Misuse of Personal Imagery: Deepfakes use AI algorithms to superimpose one person's face onto another person's body, iterating until the desired result is obtained. For celebrities and other high-profile individuals, it is easy for crooks to obtain images and generate a deepfake, which is what happened to Taylor Swift's images here. Such misuse can have serious consequences for an individual's reputation and privacy.
- False Narratives and Manipulation: Deepfakes invite public reaction and spread false narratives, harming both personal and professional lives. Such narratives can influence public opinion and damage a reputation in ways that are difficult for the affected person to control.
- Invasion of Privacy: Creating a deepfake involves gathering a significant amount of information about the target without their consent. Using such personal information to generate AI content without permission raises serious privacy concerns.
- Difficulty in Differentiation: Advanced deepfake technology makes it difficult for people to distinguish genuine content from manipulated content.
- Potential for Exploitation: Deepfakes can be exploited for financial gain or other malicious motives of cyber crooks. These videos harm reputations, damage brand names and partnerships, and undermine the integrity of the platforms on which the content is posted. They also raise questions about those platforms' zero-tolerance policies on posting non-consensual nude images.
Is there any law that could safeguard Internet users?
Legislation concerning deepfakes differs by nation, ranging from mandatory disclosure of deepfakes to bans on harmful or destructive material. In the USA, at least 10 states, including California, Texas, and Illinois, have passed criminal legislation prohibiting certain deepfakes, and lawmakers are advocating for comparable federal statutes. A Democrat from New York has introduced legislation requiring producers to digitally watermark deepfake content. There is no federal law criminalising such deepfakes, but existing state and federal laws address privacy, fraud, and harassment.
In 2019, China enacted legislation requiring the disclosure of deepfake usage in films and media. Sharing deepfake pornography became outlawed in the United Kingdom in 2023 as part of the Online Safety Act.
To avoid abuse, South Korea implemented legislation in 2020 criminalising the dissemination of deepfakes that endanger the public interest, carrying penalties of up to five years in jail or fines of up to 50 million won ($43,000).
In 2023, the Indian government issued an advisory to social media and internet companies to protect against deepfakes that violate India's information technology laws. India is on its way to introducing dedicated legislation on this subject.
Looking at the present situation and considering the bigger picture, the world urgently needs strong legislation to combat the misuse of deepfake technology.
Lessons Learned
The recent block on Taylor Swift searches on Elon Musk's X has sparked debates on responsible technology use, privacy protection, and the symbiotic relationship between celebrities and the digital era. The incident highlights the importance of constant vigilance, ethical considerations, and the potential dangers of AI in the digital landscape. Despite the challenges, the digital world still offers opportunities for growth and learning.
Conclusion
Such deepfake incidents highlight privacy concerns and necessitate a combination of technological solutions, legal frameworks, and public awareness to safeguard privacy and dignity in the digital world as technology becomes more complex.
References:
- https://www.hindustantimes.com/world-news/us-news/taylor-swift-searches-restored-on-elon-musks-x-after-brief-blockage-over-ai-deepfakes-101706630104607.html
- https://readwrite.com/x-blocks-taylor-swift-searches-as-explicit-deepfakes-of-singer-go-viral/

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information, whether intentional (disinformation) or unintentional (misinformation), during the election cycle can not only create grounds for voter confusion, with ramifications for election results, but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that influences public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly-captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or lies. In a recent example during the 2024 US elections, fake videos using the Federal Bureau of Investigation's (FBI) insignia alleging voter fraud in collusion with a political party and claiming the threat of terrorist attacks were circulated. According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media's coverage of the candidates' campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, degrade the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented. In contrast, content moderation and fact-checking are reactive and are more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus, spreads faster (virality).
- Geopolitical Influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration, and security machinery. In 2018, a federal grand jury indicted 12 Russian military intelligence officers for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby and Attorney General Merrick Garland.
- Lack of Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is indirectly addressed through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but the Constitution mandates “breathing space” for protection from false statements within election speech. This makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/

Starting in mid-December 2024, a series of attacks has targeted Chrome browser extensions. Cyberhaven, a California-based data protection company, fell victim to one of these attacks. Though identified in the U.S., the geographical extent and full impact of the attacks are yet to be determined. Assessing these cases can help us be better prepared for similar incidents in the future.
The Attack
Browser extensions are small software applications that add functionality or features to a web browser. They are written in HTML, CSS, or JavaScript and, like other software, can be coded to deliver malware. Also known as plug-ins, they have access to their own set of Application Programming Interfaces (APIs). They can also be used to remove unwanted elements, such as pop-up advertisements and auto-play videos, when one lands on a website. Examples of browser extensions include ad blockers (for blocking ads and content filtering) and StayFocusd (which limits the time users spend on particular websites).
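To make the structure concrete: a Chrome extension is essentially a folder containing a manifest file plus the scripts the manifest declares. The sketch below is purely illustrative (the name, description, file name, and URL pattern are all made up) and shows how a manifest declares exactly which sites and scripts the extension may touch:

```json
{
  "manifest_version": 3,
  "name": "Illustrative Ad Hider",
  "version": "1.0",
  "description": "Hypothetical example: hides pop-up ads on one site only.",
  "content_scripts": [
    {
      "matches": ["https://example.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

Because the `matches` pattern names a single site and no additional `permissions` are requested, an extension like this cannot read data on any other website, which is the minimal-permissions posture that makes a compromised extension less damaging.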
In the aforementioned attack, the publisher of the browser extension at Cyberhaven received a phishing mail from an attacker posing as Google Chrome Web Store Developer Support. The mail claimed that the extension did not comply with the store's policies and encouraged the user to click on a "Go to Policy" action item, which led to a page granting permissions to a malicious OAuth application called "Privacy Policy Extension" (OAuth, or Open Authorization, is a widely adopted standard for authorising secure, temporary token-based access). Once the permission was granted, the attacker was able to inject malicious code into the target's Chrome browser extension and steal user access tokens and session cookies. Further investigation revealed that logins for certain AI and social media platforms were targeted.
CyberPeace Recommendations
As attacks of such range continue to occur, it is encouraged that companies and developers take active measures that would make their browser extensions less susceptible to such attacks. Google also has a few guidelines on how developers can safeguard their extensions from their end. These include:
- Minimal Permissions For Extensions- Extensions should request only the permissions they actually need, i.e., the required APIs and the websites they depend on, since limiting extension privileges limits the surface area an attacker can exploit.
- Prioritising Protection Of Developer Accounts- A breach of a developer account can compromise the data of all of an extension's users, as it allows attackers to push malicious code through legitimate update channels. Enabling 2FA (two-factor authentication), ideally by setting a security key, is endorsed.
- HTTPS over HTTP- HTTPS should be preferred over HTTP as it requires a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificate issued by an independent certificate authority (CA). This creates an encrypted connection between the server and the web browser.
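The HTTPS point above can be made concrete with Python's standard `ssl` module: the default TLS context used for HTTPS connections enforces exactly the two checks described, a CA-signed certificate chain and a hostname match. This is a minimal sketch of the general mechanism, not anything specific to Chrome or the Cyberhaven incident:

```python
import ssl

# ssl.create_default_context() mirrors what a browser does for HTTPS:
# it loads the system's trusted CA bundle, requires a valid certificate
# chain signed by a certificate authority (CA), and checks that the
# certificate matches the server's hostname.
context = ssl.create_default_context()

# Both protections are on by default; a plain HTTP connection offers neither.
print(context.verify_mode == ssl.CERT_REQUIRED)  # CA-signed certificate required
print(context.check_hostname)                    # hostname must match the cert
```

Wrapping a socket with this context (via `context.wrap_socket(...)`) would perform the TLS handshake and raise an error if either check fails, which is why HTTPS defeats simple impersonation attacks that HTTP cannot.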
Lastly, as Cyberhaven did after this attack, organisations are encouraged to be transparent when such incidents take place, so that they can be dealt with more effectively.
References
- https://indianexpress.com/article/technology/tech-news-technology/hackers-hijack-companies-chrome-extensions-cyberhaven-9748454/
- https://indianexpress.com/article/technology/tech-news-technology/google-chrome-extensions-hack-safety-tips-9751656/
- https://www.techtarget.com/whatis/definition/browser-extension
- https://www.forbes.com/sites/daveywinder/2024/12/31/google-chrome-2fa-bypass-attack-confirmed-what-you-need-to-know/
- https://www.cloudflare.com/learning/ssl/why-use-https/