#FactCheck - AI-Generated Video Falsely Linked to Protests in Iran
Amid protests against rising inflation in Iran, a video showing people gathering in the streets at night while holding up mobile phone flashlights is being widely shared on social media with the claim that it shows recent protests in Iran. CyberPeace Foundation’s research found that the video being shared as visuals from the ongoing protests in Iran is not real. Our investigation revealed that the viral video is AI-generated and has no connection with actual events on the ground.
Claim
On January 11, 2026, an Instagram user shared the video with a caption written in Spanish. The English translation of the caption reads: “The Iranian government shut down the lights of protesters, but that did not stop them from remaining on the streets demanding that the Ayatollahs step down from power.” The post link, its archived version, and screenshots can be seen below: https://www.instagram.com/p/DTXqzayjqFz/

FactCheck:
To verify the claim, we extracted keyframes from the viral video and conducted a Google reverse image search. During this process, we found the same video uploaded to Instagram on January 11, 2026. In that post, the user explicitly stated that the video was created using AI. The caption says that the streetlights were turned off to hide the scale of the protests, but people used their phone lights to show their presence, adding:
“I created this video using AI, inspired by tonight’s protests (January 10, 2026) in Tehran, Iran.” Link to the post and screenshot can be seen below: https://www.instagram.com/p/DTWXsHajNvl/
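The keyframe-extraction step described above can be sketched as a simple uniform frame sampler. This is an illustrative sketch, not the exact tool we used: in practice the frames would come from a video decoder such as OpenCV, while here any Python sequence stands in for the decoded frames.

```python
def sample_keyframes(frames, every_n=30):
    """Pick every Nth frame as a candidate keyframe for reverse image search.

    `frames` is any sequence of decoded frames (e.g. numpy arrays from a
    video decoder). At 30 fps, every_n=30 samples roughly one frame per second.
    """
    if every_n < 1:
        raise ValueError("every_n must be >= 1")
    return [frame for i, frame in enumerate(frames) if i % every_n == 0]
```

Each sampled frame can then be submitted to a reverse image search engine to look for earlier uploads of the same footage.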

To further verify the authenticity of the video, we scanned it using multiple AI detection tools. Hive Moderation flagged the video as 97 percent AI-generated.
We also scanned the video using another AI detection tool, Wasitai, which likewise identified the video as AI-generated.
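As a toy illustration of how verdicts from multiple detection tools can be combined, the sketch below flags a video only when every available tool reports a probability above a threshold. The threshold and the all-tools-agree rule are our own assumptions for illustration, not how Hive Moderation or Wasitai work internally.

```python
def aggregate_detector_scores(scores, threshold=0.9):
    """Flag media as likely AI-generated when every detector agrees.

    `scores` holds per-tool AI-generation probabilities in [0, 1]
    (e.g. Hive Moderation's 97 percent becomes 0.97). An empty list
    yields False: no evidence, no verdict.
    """
    return bool(scores) and all(s >= threshold for s in scores)
```

Requiring agreement across independent tools reduces the chance of acting on a single detector's false positive.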


Conclusion
Our investigation confirms that the video being shared as footage from protests in Iran is not real. The viral video has been created using artificial intelligence and is being falsely linked to the ongoing protests. The claim circulating on social media is false and misleading.
Introduction
The link between social media and misinformation is undeniable. Misinformation, particularly the kind that evokes emotion, spreads like wildfire on social media and has serious consequences: undermining democratic processes, discrediting science, and promulgating hateful discourse that may incite physical violence. If left unchecked, misinformation propagated through social media has the potential to incite social disorder, as seen in countless ethnic clashes worldwide. This is why social media platforms have come under growing pressure to combat misinformation and have been developing models such as fact-checking services and community notes to check its spread. This article explores the pros and cons of these models and evaluates their broader implications for online information integrity.
How the Models Work
- Third-Party Fact-Checking Model (formerly used by Meta): Meta initiated this program in 2016 after claims of foreign election interference through dis/misinformation on its platforms. It entered partnerships with third-party organizations like AFP and specialist sites like Lead Stories and PolitiFact, which are certified by the International Fact-Checking Network (IFCN) for meeting neutrality, independence, and editorial quality standards. These fact-checkers identify misleading claims that go viral on the platforms and publish verified articles on their websites providing correct information. They also submit these articles to Meta through an interface, which may link a fact-checked article to the social media post containing the factually incorrect claim. The post then gets flagged for false or misleading content, and a link to the article appears under the post for users to refer to. Such content is demoted by the platform's algorithm, though not removed entirely unless it violates Community Standards. However, in January 2025, Meta announced it was scrapping this program and would begin testing X’s Community Notes model in the USA before rolling it out to the rest of the world. It alleges that the independent fact-checking model is riddled with personal biases, lacks transparency in decision-making, and has evolved into a censorship tool.
- Community Notes Model (used by X and being tested by Meta): This model relies on crowdsourced contributors who can sign up for the program, write contextual notes on posts, and rate the notes written by other users on X. The platform uses a bridging algorithm to publicly display only those notes that receive cross-ideological consensus from raters across the political spectrum: it boosts notes that receive support regardless of the raters' political leaning, which it estimates from their engagement with previous notes. The benefit of this system is that biases are less likely to creep into the flagging mechanism. Further, the process is relatively more transparent than an independent fact-checking mechanism, since all Community Notes contributions are publicly available for inspection and the ranking algorithm can be accessed by anyone, allowing for external evaluation of the system.
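The bridging idea can be illustrated with a deliberately simplified sketch: a note is surfaced only when raters on both sides of an (already estimated) ideological axis find it helpful. X's real system infers rater viewpoints via matrix factorization over the full rating matrix, so this is only a toy approximation of the principle.

```python
def should_display_note(ratings, min_share=0.5):
    """Toy 'bridging' rule: display a note only if at least `min_share`
    of raters on EACH side of the ideological axis rated it helpful.

    `ratings` is a list of dicts like {"leaning": "left", "helpful": True},
    where the leaning is assumed to have been estimated beforehand.
    """
    for side in ("left", "right"):
        group = [r for r in ratings if r["leaning"] == side]
        if not group:
            return False  # no cross-ideological signal without both sides
        share = sum(r["helpful"] for r in group) / len(group)
        if share < min_share:
            return False
    return True
```

Note how the rule deliberately withholds notes that only one side rates helpful, which is exactly why consensus on polarizing topics is hard to reach.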
CyberPeace Insights
Meta’s uptake of a crowdsourced model signals social media’s shift toward decentralized content moderation, giving users more influence over what gets flagged and why. However, the model’s reliance on cross-ideological agreement can make the process slow. A study (Wirtschafter & Majumder, 2023) shows that only about 12.5 per cent of all submitted notes are ever displayed publicly, leaving most misleading content unchecked. Further, many notes on divisive issues like politics and elections may never see the light of day, since reaching consensus on such topics is hard. This means that many misleading posts may not be publicly flagged at all, hindering risk mitigation efforts and casting doubt on the model’s ability to check the virality of posts that can have adverse societal impacts, especially on vulnerable communities. On the other hand, the fact-checking model suffers from a lack of transparency, which has damaged user trust and led to allegations of bias.
Since both models have their advantages and disadvantages, the future of misinformation control will require a hybrid approach. Data accuracy and polarization through social media are issues bigger than an exclusive tool or model can effectively handle. Thus, platforms can combine expert validation with crowdsourced input to allow for accuracy, transparency, and scalability.
Conclusion
Meta’s shift to a crowdsourced model of fact-checking is likely to have significant implications for public discourse, since social media platforms hold immense power over how their policies affect politics, the economy, and societal relations at large. This change comes against the background of sweeping cost-cutting in the tech industry, political changes in the USA and abroad, and increasing attempts to make Big Tech platforms more accountable in jurisdictions like the EU and Australia, which are known for their welfare-oriented policies. These co-occurring contestations are likely to inform the direction that misinformation-countering tactics take. Until then, the crowdsourcing model remains in development, and its efficacy is yet to be seen, especially regarding polarizing topics.
References
- https://www.cyberpeace.org/resources/blogs/new-youtube-notes-feature-to-help-users-add-context-to-videos
- https://en-gb.facebook.com/business/help/315131736305613?id=673052479947730
- http://techxplore.com/news/2025-01-meta-fact.html
- https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
- https://communitynotes.x.com/guide/en/about/introduction
- https://blogs.lse.ac.uk/impactofsocialsciences/2025/01/14/do-community-notes-work/
- https://www.techpolicy.press/community-notes-and-its-narrow-understanding-of-disinformation/
- https://www.rstreet.org/commentary/metas-shift-to-community-notes-model-proves-that-we-can-fix-big-problems-without-big-government/
- https://tsjournal.org/index.php/jots/article/view/139/57

Executive Summary
An image of Prime Minister Narendra Modi is being widely circulated on social media. The picture is being shared with the claim that during an election campaign in Assam, a full-fledged shooting set was arranged in a tea garden where Modi interacted with women workers, complete with cameras, microphones, lights, and a director-led production team. However, research by CyberPeace has found the claim to be false. Our research reveals that the viral image is AI-generated and is being shared with a misleading narrative.
Claim
An Instagram user shared the viral image with a caption suggesting that such a large-scale “shoot” setup had been arranged and questioning the cost involved.
Post link:

Fact Check
To verify the claim, we conducted a keyword-based search on Google. However, we did not find any credible media reports supporting the claim that such a shooting setup was arranged during Prime Minister Modi’s visit to Assam. Upon closely examining the viral image, we noticed several visual inconsistencies that raised suspicion about it being artificially generated. To confirm this, we analyzed the image using the AI detection tool Hive Moderation, which indicated that the image is approximately 99% AI-generated.

To further validate the findings, we also tested the image using another AI detection tool, NoteGPT, which similarly classified the image as 99% AI-generated.

For context, ahead of the 2026 Assam Assembly elections, Prime Minister Narendra Modi has been on a campaign visit to the state. According to a report by DD News, he visited a tea garden in Dibrugarh, where he interacted with women workers and even plucked tea leaves himself.

Conclusion
Our research clearly establishes that the viral image of Prime Minister Narendra Modi is not authentic and has been digitally created using AI tools. There is no evidence to support the claim that a staged shooting setup involving cameras, lights, and a production crew was arranged during his visit. The image is being circulated with a misleading narrative to create a false impression. This case highlights how AI-generated visuals can be used to distort real events and spread misinformation.

Overview:
‘Kia Connect’ is the companion application for ‘Kia’ cars, allowing users to control various vehicle functions from their smartphones. The vulnerabilities affected most Kias built after 2013, with few exceptions. Most of the risks stemmed from a flawed API handling dealer relations and vehicle coordination.
Technical Breakdown of Exploitation:
- API Exploitation: The attack exploits vulnerabilities in Kia’s dealership network. The researchers noticed, for example, that impersonating a dealer and registering on the Kia dealer portal was sufficient to derive the access tokens needed for the next steps.
- Accessing Vehicle Information: A license plate number allowed the attackers to obtain the Vehicle Identification Number (VIN) of their target car. The VIN can then be used to look up more information about the car and is the essential identifier for the subsequent steps.
- Information Retrieval: With the VIN in hand, attackers could issue a number of backend requests to pull sensitive information about the car owner, including:
- Name
- Email address
- Phone number
- Geographical address
- Modifying Account Access: With this information, attackers could change the account settings to add themselves as a second user on the car, hidden from the actual owner of the account.
- Executing Remote Commands: Finally, the researchers discovered that attackers could remotely execute various commands on the vehicle, including:
- Unlocking doors
- Starting the engine
- Tracking the vehicle’s location
- Honking the horn
Technical Execution:
The researchers demonstrated that an attacker could execute a series of four requests to gain control over a Kia vehicle:
- Generate Dealer Token: The attacker sends an HTTP request to create a dealer token.
- Retrieve Owner Information: Using the generated token, they make a request to another endpoint that returns the owner’s email address and phone number.
- Modify Access Permissions: The attacker uses the leaked information (email address and VIN) to modify the owner’s account and add himself as a second user.
- Execute Commands: Finally, they can send commands to perform actions on the vehicle.
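The four-step chain above can be sketched as plain data, purely for illustration. Every hostname, endpoint path, field name, and token placeholder below is an invented stand-in, not Kia's actual API (the real flaws were responsibly disclosed and have since been fixed).

```python
# Hypothetical sketch of the four-request attack chain. All URLs, paths,
# and field names are invented placeholders, NOT Kia's real endpoints.
BASE = "https://kia-portal.example.com/api"

def build_attack_chain(vin, attacker_email):
    """Return the four HTTP requests (as plain dicts) in the order
    an attacker would issue them, per the researchers' description."""
    return [
        {"step": "generate_dealer_token", "method": "POST",
         "url": f"{BASE}/dealer/register"},
        {"step": "retrieve_owner_info", "method": "GET",
         "url": f"{BASE}/vehicles/{vin}/owner",
         "headers": {"Authorization": "Bearer <dealer-token>"}},
        {"step": "modify_access", "method": "POST",
         "url": f"{BASE}/vehicles/{vin}/users",
         "body": {"email": attacker_email, "role": "secondary"}},
        {"step": "execute_command", "method": "POST",
         "url": f"{BASE}/vehicles/{vin}/commands",
         "body": {"command": "unlock_doors"}},
    ]
```

The key design flaw the sketch captures is that a self-registered "dealer" token is trusted for owner lookups and account changes with no further verification.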
Security Response and Precautionary Measures for Vehicle Owners
- Regular Software Updates: Car owners should ensure their vehicles receive the latest software updates provided by the manufacturer.
- Use Strong Passwords: Kia Connect account holders should create unique, complex passwords and update them periodically, avoiding easily guessed values such as birth dates or vehicle registration numbers.
- Enable Multi-Factor Authentication: Vehicle owners should enable multi-factor authentication wherever available to protect against unauthorized account access.
- Limit Personal Information Sharing: Vehicle owners should be careful about sharing details linked to their car’s account, such as the email address or phone number, on social networks.
- Monitor Account Activity: Watch the account for unauthorized changes or access attempts. If anything abnormal or suspicious is noticed while using the car, report it to Kia customer support.
- Educate Yourself on Vehicle Security: Stay informed about vehicle-related cyber threats and how to safeguard against them.
- Consider Disabling Remote Features When Not Needed: If remote features are not in use, turn them off and re-enable them only when required. This helps shrink the attack surface available to would-be hackers.
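The strong-password advice above can be made concrete with a small checker. The length threshold and the idea of screening against personal tokens (birth year, plate number) are our own illustrative choices, not a Kia Connect requirement.

```python
def is_weak_password(password, personal_tokens=()):
    """Flag passwords that are short, purely numeric, or contain
    personal details such as a birth year or vehicle plate number.

    `personal_tokens` is a caller-supplied list of strings to screen
    for (e.g. ["1990", "KA01AB1234"]).
    """
    if len(password) < 12:
        return True
    if password.isdigit():
        return True
    lowered = password.lower()
    return any(tok.lower() in lowered for tok in personal_tokens if tok)
```

A password manager sidesteps all of these pitfalls by generating long random passwords that contain no personal information at all.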
Industry Implications:
The findings from this research underscore broader issues within automotive cybersecurity:
- Web Security Gaps: Most car manufacturers pay more attention to the equipment running inside vehicles than to the security of the web services those vehicles rely on, leaving highly connected cars exposed to significant risk.
- Continued Risks: As vehicles become increasingly connected to internet technologies, automakers will have to build cybersecurity measures into their cars from the outset.
Conclusion:
The weaknesses found in Kia’s connected car system are a key concern for automotive security. Since cars depend on web connections for core services, manufacturers face corresponding risks and need to create effective safeguards. Kia took immediate action to tighten security after disclosure; however, new threats will keep emerging in this dynamic domain of connected technology. With growing awareness of these risks, it is important for car makers not only to put proper security measures in place but also to communicate with customers about how their information and vehicles are protected against cyber dangers. Given the rapid pace of advancement in automotive technology, its safety ultimately rests on our capacity to shield it from ever-present cyber threats.
Reference:
- https://timesofindia.indiatimes.com/auto/cars/hackers-could-unlock-your-kia-car-with-just-a-license-plate-is-yours-safe/articleshow/113837543.cms
- https://www.thedrive.com/news/hackers-found-millions-of-kias-could-be-tracked-controlled-with-just-a-plate-number
- https://www.securityweek.com/millions-of-kia-cars-were-vulnerable-to-remote-hacking-researchers/
- https://news24online.com/auto/kia-vehicles-hack-connected-car-cybersecurity-threat/346248/
- https://www.malwarebytes.com/blog/news/2024/09/millions-of-kia-vehicles-were-vulnerable-to-remote-attacks-with-just-a-license-plate-number
- https://informationsecuritybuzz.com/kia-vulnerability-enables-remote-acces/
- https://samcurry.net/hacking-kia