#FactCheck - Mosque fire in India? False, it's from Indonesia
Executive Summary:
A viral social media post claims to show a mosque being set on fire in India, feeding growing communal tensions and misinformation. However, a detailed fact-check has revealed that the footage actually comes from Indonesia. Misleading content of this kind can dangerously escalate social unrest, making it crucial to rely on verified facts to prevent further division and harm.

Claim:
The viral video claims to show a mosque being set on fire in India, suggesting it is linked to communal violence.

Fact Check:
The investigation revealed that the video was originally posted on 8th December 2024. A reverse image search allowed us to trace the source and confirm that the footage is not linked to any recent incident. The original post, written in Indonesian, explained that the fire took place at the Central Market in Luwuk, Banggai, Indonesia, not in India.
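Reverse image search engines typically match images by comparing compact "perceptual hashes" rather than raw pixels. The sketch below is a purely illustrative, pure-Python version of the average-hash idea (it is not the tool used in this fact-check, and the tiny 4x4 grids are invented stand-ins for downscaled video frames):

```python
# Minimal average-hash (aHash) sketch: the core idea behind reverse image search.
# An image is downscaled to a tiny grayscale grid; each pixel becomes one bit
# (1 if brighter than the grid's mean). Similar images yield similar bit strings.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255) for a downscaled image."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical 4x4 "frames": frame_b is frame_a with slight brightness noise,
# frame_c is an unrelated image.
frame_a = [[10, 200, 10, 200], [200, 10, 200, 10], [10, 200, 10, 200], [200, 10, 200, 10]]
frame_b = [[12, 198, 14, 202], [197, 13, 201, 11], [9, 203, 12, 199], [204, 8, 198, 12]]
frame_c = [[255, 255, 0, 0], [255, 255, 0, 0], [0, 0, 255, 255], [0, 0, 255, 255]]

assert hamming_distance(average_hash(frame_a), average_hash(frame_b)) == 0  # near-duplicate
assert hamming_distance(average_hash(frame_a), average_hash(frame_c)) > 0   # different image
```

This is why a reverse image search can link a re-uploaded, re-captioned video back to its original post: minor compression or brightness changes barely alter the hash, so the original source still matches.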

Conclusion: The viral claim that a mosque was set on fire in India is false. The video is actually from Indonesia and has been intentionally misrepresented to circulate false information. This incident underscores the need to verify information before sharing it: misinformation spreads quickly and can cause real harm. By taking the time to check facts and rely on credible sources, we can prevent false information from escalating and protect harmony in our communities.
- Claim: The video shows a mosque set on fire in India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction:
Cybercriminals can hack your phone by exploiting public charging stations at airports, malls, hotels, and similar locations. When you plug your phone or laptop into a power charger via USB, you may be plugging into a hacker. This threat, known as juice jacking, affects the free charging stations for mobile, tablet, and laptop devices offered at airports, shopping malls, and other public places.
Cybercriminals can hack a public charging spot and use its USB port to push malware or viruses onto your system. When you plug in your phone, laptop, tablet, or other device at a compromised station, malware can be downloaded onto it, giving hackers access to your personal information and passwords. This is a serious problem: with those credentials, hackers can even reach your bank account and make unauthorised transactions.
Hence, think twice before using public charging spots, as doing so can lead to serious consequences such as malware infection, data leaks, and hacking. Attackers can gain unauthorised access to your personal information by installing malware on your device, and they may monitor it by installing spyware or other monitoring software. This scam is referred to as juice jacking.
FBI issued an advisory warning about using public charging stations:
In May 2023, the Federal Bureau of Investigation (FBI) advised users to avoid free charging stations in airports, hotels, and shopping centres. The warning came after threat actors figured out ways to inject malware into devices attached to publicly installed USB ports.
Updated Security measures:
Public charging points at airports, shopping malls, metro stations, and other public places are convenient, but they can threaten the data stored on your device: during charging, data can be transferred, which can ultimately lead to a breach. Hence, utmost care should be taken to protect your information and data. iPhones and other devices have security measures in place. When you plug your phone into a charging power source, a pop-up asks for permission to allow or disallow data transfer, and data transfer is disabled by default. In the latest models, plugging your device into a new port or a computer triggers a prompt asking whether the device should be trusted.
Two major risks involved in juice jacking:
- Malware installation: Bad actors can use malware apps to clone your phone's data to their own device, transferring your personal data and causing a breach. Common malware types include Trojans, adware, spyware, and crypto-miners. Once such malware is injected into your device, it is easy for cybercriminals to extort a ransom in exchange for restoring the information they now control.
- Data theft: Ask yourself whether your data is actually protected at public charging stations. When you connect a USB cable to a public charging port, cybercriminals who have injected malware into the port's system can push that malware onto your device or siphon off your data. USB cords themselves can thus be exploited for malicious activity.
Best practices:
- Avoid using public charging stations: It is entirely possible for a cybercriminal to load malware into a charging station via a USB cord, so public charging spots are not safe. Whenever possible, charge your phone and laptop in your car, at home, or at the office so you can avoid public stations altogether.
- Alternative method of charging: You can carry a power bank along with you to avoid the use of public charging stations.
- Lock your phone: Lock your device once it is connected to the charging port; a locked device cannot sync or transfer data.
- Software update: Keep your device's software up to date and enable its built-in security measures. Mobile devices have technical protections against such vulnerabilities and security threats.
- Review settings: Disable your device's option to automatically transfer data when a charging cable is connected (this is the default on iOS; Android users should disable it in the Settings app). If your device displays a prompt asking you to "trust this computer," you are connected to another device, not simply a power outlet. Deny the permission, since trusting the computer enables data transfers to and from your device. Whenever you plug into a USB port and a prompt offers "share data," "trust this computer," or "charge only," always select "charge only."
Conclusion:
Cybercriminals and bad actors exploit public charging stations. In documented incidents, malware was planted through a USB cord: during charging, the cord opens a data path into your device that a cybercriminal can exploit. That is juice jacking. Hence, avoid public charging stations; our safety is in our own hands, and it is vital to follow best practices and stay protected in the evolving digital landscape.
References:
- https://www.cbsnews.com/philadelphia/news/fbi-issue-warning-about-juice-jacking-when-using-free-cell-phone-charging-kiosks/
- https://www.comparitech.com/blog/information-security/juice-jacking/#:~:text=Avoid%20public%20charging%20stations,guaranteed%20success%20with%20this%20method
- https://www.fcc.gov/juice-jacking-tips-to-avoid-it

Introduction
Deepfakes are media produced with artificial intelligence (AI) technology that employs deep learning to generate realistic-looking but fake videos or images. The underlying algorithms analyse large volumes of data to discover patterns and produce convincing, realistic results. Deepfakes use this technology to modify videos or photos so that they appear to depict events or persons that never happened or existed. The procedure begins with gathering large volumes of visual and audio data about the target individual, usually from publicly accessible sources such as social media or public appearances. This data is then used to train a deep-learning model to resemble the target.
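The training step described above can be pictured as an optimisation loop: the model's output is repeatedly nudged toward the target's data until the two are nearly indistinguishable. The toy below is a heavily simplified, pure-Python illustration of that loop only; real deepfake systems train deep autoencoders on thousands of face images, and every number here is invented:

```python
# Toy illustration of the training loop behind deepfake generation.
# A single "model" vector is nudged by gradient descent toward a "target face"
# vector until its output closely resembles the target. This is only the
# optimisation idea, not an actual deepfake pipeline.

TARGET = [0.8, 0.1, 0.5, 0.9]   # invented stand-in for features of the target's face
model = [0.0, 0.0, 0.0, 0.0]    # the model's initial (untrained) output
LEARNING_RATE = 0.5

def mse(output, target):
    """Mean squared error: how far the model's output is from the target."""
    return sum((o - t) ** 2 for o, t in zip(output, target)) / len(target)

for step in range(50):
    # The gradient of MSE w.r.t. each output component is 2*(o - t)/n;
    # stepping against it moves the output toward the target.
    grads = [2 * (o - t) / len(TARGET) for o, t in zip(model, TARGET)]
    model = [o - LEARNING_RATE * g for o, g in zip(model, grads)]

# After training, the "model" reproduces the target's features almost exactly.
assert mse(model, TARGET) < 1e-4
```

Scaled up to millions of parameters and thousands of face images, this same error-minimising loop is what lets a trained model render the target's face convincingly onto footage it never appeared in.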
Recent Cases of Deepfakes-
In an unusual turn of events, a man from northern China became the victim of a sophisticated deep fake technology. This incident has heightened concerns about using artificial intelligence (AI) tools to aid financial crimes, putting authorities and the general public on high alert.
During a video conversation, a scammer successfully impersonated the victim’s close friend using AI-powered face-swapping technology. The scammer duped the unwary victim into transferring 4.3 million yuan (nearly Rs 5 crore). The fraud occurred in Baotou, China.
AI ‘deep fakes’ of innocent images fuel spike in sextortion scams
Artificial intelligence-generated deepfakes are fuelling sextortion frauds like dry brush in a raging wildfire. According to the FBI, nationally reported sextortion cases rose 322% between February 2022 and February 2023, with a notable spike since April driven by AI-doctored photographs. The FBI warns that innocent photographs or videos posted on social media or sent in private messages can be distorted into sexually explicit, AI-generated visuals that are "true-to-life" and practically impossible to distinguish from real images. Predators, often located in other countries, use these doctored AI photographs against juveniles to extort money from them or their families, or to obtain genuine sexually explicit images.
Deepfake Applications
- Lensa AI.
- Deepfakes Web.
- Reface.
- MyHeritage.
- DeepFaceLab.
- Deep Art.
- Face Swap Live.
- FaceApp.
Deepfake examples
There are numerous high-profile deepfake examples. One well-known deepfake video was released by actor Jordan Peele, who combined real footage of Barack Obama with his own impersonation of Obama to deliver a warning about deepfake videos.
Another video showed Facebook CEO Mark Zuckerberg discussing how Facebook 'controls the future' with stolen user data, most notably on Instagram. The original video came from a speech he delivered on Russian election meddling; only 21 seconds of that address were used to create the fake. However, the vocal impersonation fell short of Jordan Peele's Obama and gave the truth away.
The dark side of AI-Generated Misinformation
- AI-generated misinformation distorts the truth, making it difficult to distinguish fact from fiction.
- People can unmask AI content by looking for discrepancies and for the absence of a human touch.
- AI content detection technologies can detect and neutralise disinformation, preventing it from spreading.
Safeguards against Deepfakes-
Technology is not the only way to guard against deepfake videos: good fundamental security practices are remarkably effective against them. For example, incorporating automatic checks into any mechanism for disbursing payments might have prevented numerous deepfake and related frauds. You might also:
- Make regular backups to safeguard your data from ransomware and allow you to restore damaged data.
- Use different, strong passwords for different accounts, so that a breach of one network or service does not compromise the others. You do not want someone who gets into your Facebook account to be able to access your other accounts as well.
- To secure your home network, laptop, and smartphone against cyber dangers, use a good security package such as Kaspersky Total Security. This bundle includes anti-virus software, a VPN to prevent compromised Wi-Fi connections, and webcam security.
What is the future of Deepfake –
Deepfake technology is constantly evolving. Two years ago, deepfake videos were easy to spot because of their clumsy movement and the fact that the simulated figure never appeared to blink. However, the most recent generation of fake videos has evolved and adapted.
There are currently approximately 15,000 Deepfake videos available online. Some are just for fun, while others attempt to sway your opinion. But now that it only takes a day or two to make a new Deepfake, that number could rise rapidly.
Conclusion-
The distinction between authentic and fake content will undoubtedly become more challenging to identify as technology advances. As a result, experts feel it should not be up to individuals to discover deep fakes in the wild. “The responsibility should be on the developers, toolmakers, and tech companies to create invisible watermarks and signal what the source of that image is,” they stated. Several startups are also working on approaches for detecting deep fakes.

Introduction
Election misinformation poses a major threat to democratic processes all over the world. The rampant spread of misleading information intentionally (disinformation) and unintentionally (misinformation) during the election cycle can not only create grounds for voter confusion with ramifications on election results but also incite harassment, bullying, and even physical violence. The attack on the United States Capitol Building in Washington D.C., in 2021, is a classic example of this phenomenon, where the spread of dis/misinformation snowballed into riots.
Election Dis/Misinformation
Election dis/misinformation is false or misleading information that affects public understanding of voting, candidates, and election integrity. The internet, particularly social media, is the foremost source of false information during elections. It hosts fabricated news articles, posts or messages containing incorrectly-captioned pictures and videos, fabricated websites, synthetic media and memes, and distorted truths or outright lies. In a recent example during the 2024 US elections, fake videos bearing the Federal Bureau of Investigation's (FBI) insignia alleged voter fraud in collusion with a political party and claimed the threat of terrorist attacks.

According to polling data collected by Brookings, false claims influenced how voters saw candidates and shaped opinions on major issues like the economy, immigration, and crime. They also affected how voters viewed the news media's coverage of the candidates' campaigns. The shaping of public perceptions can thus directly influence election outcomes. It can increase polarisation, degrade the quality of democratic discourse, and cause disenfranchisement. From a broader perspective, pervasive and persistent misinformation during the electoral process also has the potential to erode public trust in democratic government institutions and destabilise social order in the long run.
Challenges In Combating Dis/Misinformation
- Platform Limitations: Current content moderation practices by social media companies struggle to identify and flag misinformation effectively. To address this, further adjustments are needed, including platform design improvements, algorithm changes, enhanced content moderation, and stronger regulations.
- Speed and Spread: Due to increasingly powerful algorithms, the speed and scale at which misinformation can spread is unprecedented. In contrast, content moderation and fact-checking are reactive and more time-consuming. Further, incendiary material, which is often the subject of fake news, tends to command higher emotional engagement and thus spreads faster (virality).
- Geopolitical influences: Foreign actors seeking to benefit from the erosion of public trust in the USA present a challenge to the country's governance, administration and security machinery. In 2018, the federal jury indicted 11 Russian military officials for alleged computer hacking to gain access to files during the 2016 elections. Similarly, Russian involvement in the 2024 federal elections has been alleged by high-ranking officials such as White House national security spokesman John Kirby, and Attorney General Merrick Garland.
- Lack of Targeted Plan to Combat Election Dis/Misinformation: In the USA, dis/misinformation is indirectly addressed through laws on commercial advertising, fraud, defamation, etc. At the state level, some laws such as Bills AB 730, AB 2655, AB 2839, and AB 2355 in California target election dis/misinformation. The federal and state governments criminalize false claims about election procedures, but the Constitution mandates "breathing space" for even false statements within election speech, which makes it difficult for the government to regulate election-related falsities.
CyberPeace Recommendations
- Strengthening Election Cybersecurity Infrastructure: To build public trust in the electoral process and its institutions, security measures such as updated data protection protocols, publicized audits of election results, encryption of voter data, etc. can be taken. In 2022, the federal legislative body of the USA passed the Electoral Count Reform and Presidential Transition Improvement Act (ECRA), pushing reforms allowing only a state’s governor or designated executive official to submit official election results, preventing state legislatures from altering elector appointment rules after Election Day and making it more difficult for federal legislators to overturn election results. More investments can be made in training, scenario planning, and fact-checking for more robust mitigation of election-related malpractices online.
- Regulating Transparency on Social Media Platforms: Measures such as transparent labeling of election-related content and clear disclosure of political advertising to increase accountability can make it easier for voters to identify potential misinformation. This type of transparency is a necessary first step in the regulation of content on social media and is useful in providing disclosures, public reporting, and access to data for researchers. Regulatory support is also required in cases where popular platforms actively promote election misinformation.
- Increasing focus on ‘Prebunking’ and Debunking Information: Rather than addressing misinformation after it spreads, ‘prebunking’ should serve as the primary defence to strengthen public resilience ahead of time. On the other hand, misinformation needs to be debunked repeatedly through trusted channels. Psychological inoculation techniques against dis/misinformation can be scaled to reach millions on social media through short videos or messages.
- Focused Interventions On Contentious Themes By Social Media Platforms: As platforms prioritize user growth, the burden of verifying the accuracy of posts largely rests with users. To shoulder the responsibility of tackling false information, social media platforms can outline critical themes with large-scale impact such as anti-vax content, and either censor, ban, or tweak the recommendations algorithm to reduce exposure and weaken online echo chambers.
- Addressing Dis/Information through a Socio-Psychological Lens: Dis/misinformation and its impact on domains like health, education, economy, politics, etc. need to be understood through a psychological and sociological lens, apart from the technological one. A holistic understanding of the propagation of false information should inform digital literacy training in schools and public awareness campaigns to empower citizens to evaluate online information critically.
Conclusion
According to the World Economic Forum’s Global Risks Report 2024, the link between misleading or false information and societal unrest will be a focal point during elections in several major economies over the next two years. Democracies must employ a mixed approach of immediate tactical solutions, such as large-scale fact-checking and content labelling, and long-term evidence-backed countermeasures, such as digital literacy, to curb the spread and impact of dis/misinformation.
Sources
- https://www.cbsnews.com/news/2024-election-misinformation-fbi-fake-videos/
- https://www.brookings.edu/articles/how-disinformation-defined-the-2024-election-narrative/
- https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
- https://indianexpress.com/article/world/misinformation-spreads-fear-distrust-ahead-us-election-9652111/
- https://academic.oup.com/ajcl/article/70/Supplement_1/i278/6597032#377629256
- https://www.brennancenter.org/our-work/policy-solutions/how-states-can-prevent-election-subversion-2024-and-beyond
- https://www.bbc.com/news/articles/cx2dpj485nno
- https://msutoday.msu.edu/news/2022/how-misinformation-and-disinformation-influence-elections
- https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/
- https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
- https://www.weforum.org/stories/2024/03/disinformation-trust-ecosystem-experts-curb-it/
- https://www.apa.org/topics/journalism-facts/misinformation-recommendations
- https://mythvsreality.eci.gov.in/
- https://www.brookings.edu/articles/transparency-is-essential-for-effective-social-media-regulation/
- https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/