#FactCheck - AI-Generated Image Falsely Linked to Kotdwar Shop Controversy
Executive Summary
A dispute recently emerged in Kotdwar, Uttarakhand, over the name of a shop. During the controversy, a local youth, Deepak Kumar, came forward in support of the shopkeeper. The incident became a subject of discussion on social media, with users expressing varied reactions. Meanwhile, a photo began circulating on social media showing a burqa-clad woman presenting a bouquet to Deepak Kumar. The image is being shared with the claim that All India Majlis-e-Ittehadul Muslimeen (AIMIM)’s women’s president, Rubina, welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. However, research conducted by CyberPeace found the viral claim to be false: users are sharing an AI-generated image with a misleading claim.
Claim:
On the social media platform Instagram, a user shared the viral image claiming that AIMIM’s women’s president Rubina welcomed “Mohammad Deepak Kumar” by presenting him with a bouquet. The link to the post, its archived version, and a screenshot are provided below.

Fact Check:
Close examination of the viral image revealed certain inconsistencies that raised suspicion it could be AI-generated. To verify its authenticity, the image was analysed using the AI detection tool Hive Moderation, which indicated a 96 percent probability that the image was AI-generated.

In the next stage of the research, the image was analysed using another AI detection tool, Wasit AI, which likewise identified it as AI-generated.
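As a minimal sketch of how such a detection score might be consumed programmatically, the snippet below thresholds a reported AI-generation probability, like the 96 percent score seen here. The response shape and the `ai_probability` field name are hypothetical illustrations, not the actual schema of Hive Moderation or any other tool.

```python
# Sketch: classifying an image as likely AI-generated from a detector's
# probability score. The response dict and "ai_probability" field are
# illustrative assumptions, not any specific tool's real API schema.

def is_likely_ai_generated(result: dict, threshold: float = 0.9) -> bool:
    """Return True when the reported AI-generation probability
    meets or exceeds the chosen threshold."""
    probability = result.get("ai_probability", 0.0)
    return probability >= threshold

# Example: a score like the 96 percent reported in this fact check
print(is_likely_ai_generated({"ai_probability": 0.96}))  # True
print(is_likely_ai_generated({"ai_probability": 0.42}))  # False
```

In practice, fact-checkers treat such scores as one signal among several, which is why a second tool was consulted here rather than relying on a single score.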

Conclusion
The research establishes that users are circulating an AI-generated image with a misleading claim linking it to the Kotdwar controversy.
Related Blogs

Executive Summary:
A video circulating on social media claims that Iran launched a missile strike destroying Ben Gurion Airport in Tel Aviv. Amid rising geopolitical tensions, the video quickly became popular. However, our research, involving detailed inspection through digital verification tools and visual analysis, showed that the video is AI-generated. No such incident or damage occurred.

Claim:
A viral video circulating on social media platforms claims to show Tel Aviv’s Ben Gurion Airport destroyed following an Iranian missile strike. The video is being shared with captions suggesting it is the last recorded visuals from the attack, with some users asserting it as evidence of escalating conflict between Iran and Israel.

Fact Check:
After looking into the video that purported to show the destruction of Tel Aviv's Ben Gurion Airport in an Iranian missile strike, we investigated whether the claim is accurate. The video depicts a damaged airport terminal with debris and fires, but visual analysis revealed a number of suspicious characteristics: an asymmetrical layout, artificial-looking smoke patterns, and the absence of people or activity, all typical indications of AI generation. Our research traced the video to an Instagram post dated May 27, 2025, made by a user who appears to frequently share AI-generated images.


To verify our conclusions, we used Hive Moderation, an AI content detection tool, which indicated an 80% probability that the video is altered, strongly supporting the conclusion that the footage is not real. Additionally, reports from reputable organizations such as India Today and Reuters corroborated these results. Our findings establish that the video is synthetic and unrelated to any event at Ben Gurion Airport, debunking a false narrative propagated on social media.

To confirm, we also compared the visuals with a real aerial image of Tel Aviv’s Ben Gurion Airport available on aviation stock sites.



Fig: Google Maps image of Tel Aviv’s Ben Gurion Airport
The visuals in the viral video do not match Ben Gurion Airport's actual location and layout; the video is therefore fake and misleading.
Conclusion:
After thorough research, it is concluded that the viral video is fake and does not show an actual missile strike at Ben Gurion Airport. The video was made with AI and posted by a creator of synthetic content well before the claimed event. There is no official confirmation or credible news coverage to substantiate the claim, AI-detection tools indicate a high probability of manipulation, and the footage has been shown to be digitally fabricated. The claim is therefore untrue and misleading.
- Claim: A video shows Iran's missile strike destroying Tel Aviv’s Ben Gurion Airport.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Deepfake technology, which combines the words "deep learning" and "fake," uses highly developed artificial intelligence, specifically generative adversarial networks (GANs), to produce remarkably lifelike computer-generated content, including audio and video recordings. Because it can produce credible false information, there are concerns about its misuse, including identity theft and the spread of fake information. Cybercriminals leverage AI tools and technologies for malicious activities and various cyber frauds; the misuse of advanced technologies such as AI, deepfakes, and voice cloning has given rise to new cyber threats.
India Among the Top Targets of Deepfake Attacks
According to the 2023 Identity Fraud Report from Sumsub, a well-known digital identity verification company headquartered in the UK, India, Bangladesh, and Pakistan have become important participants in the Asia-Pacific identity fraud scene, with India’s fraud rate growing by 2.99% from 2022 to 2023. They are among the top ten nations most impacted by the use of deepfake technology. The report also finds that deepfake technology is being used in a significant number of cybercrimes, a trend expected to continue in the coming year. This highlights the need for increased cybersecurity awareness and safeguards as identity fraud becomes a growing concern in the region.
How Deepfakes Work
Deepfakes are a fascinating and worrisome phenomenon of the modern digital landscape. These realistic-looking but wholly artificial videos have become quite popular in recent months and have woven themselves into the very fabric of our digital civilisation. The consequences are enormous and the attraction is irresistible.
Deep Learning Algorithms
Deepfakes examine large datasets, frequently pictures or videos of a target person, using deep learning techniques, especially Generative Adversarial Networks. By mimicking and learning from gestures, speech patterns, and facial expressions, these algorithms can extract valuable information from the data. By using sophisticated approaches, generative models create material that mixes seamlessly with the target context. Misuse of this technology, including the dissemination of false information, is a worry. Sophisticated detection techniques are becoming more and more necessary to separate real content from modified content as deepfake capabilities improve.
Generative Adversarial Networks
Deepfake technology is based on GANs, which use a dual-network design. Made up of a discriminator and a generator, they participate in an ongoing cycle of competition. The discriminator assesses how authentic the generated information is, whereas the generator aims to create fake material, such as realistic voice patterns or facial expressions. The process of creating and evaluating continuously leads to a persistent improvement in Deepfake's effectiveness over time. The whole deepfake production process gets better over time as the discriminator adjusts to become more perceptive and the generator adapts to produce more and more convincing content.
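The adversarial cycle described above can be illustrated in miniature. Below is a toy, hand-derived sketch on one-dimensional data, assuming a linear generator and a logistic discriminator; the target distribution, learning rate, and step count are arbitrary demonstration choices, not a production GAN.

```python
import math
import random

# Toy GAN on 1-D data. The generator maps noise z to g(z) = w_g*z + b_g;
# the discriminator is a logistic classifier D(x) = sigmoid(w_d*x + b_d).
# "Real" data is drawn from a Gaussian around 4.0. Gradients are derived
# by hand to show the adversarial loop; this is illustration, not practice.

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

w_g, b_g = 1.0, 0.0   # generator parameters
w_d, b_d = 0.1, 0.0   # discriminator parameters
lr = 0.01

for _ in range(2000):
    real = random.gauss(4.0, 0.5)   # sample from the real distribution
    z = random.gauss(0.0, 1.0)      # noise input
    fake = w_g * z + b_g            # generator's forgery

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. (w_d, b_d)
    grad_w_d = -(1 - d_real) * real + d_fake * fake
    grad_b_d = -(1 - d_real) + d_fake
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(w_d * fake + b_d)
    # gradient of -log D(fake) w.r.t. (w_g, b_g), chained through w_d
    grad_w_g = -(1 - d_fake) * w_d * z
    grad_b_g = -(1 - d_fake) * w_d
    w_g -= lr * grad_w_g
    b_g -= lr * grad_b_g

fake_mean = sum(w_g * random.gauss(0, 1) + b_g for _ in range(1000)) / 1000
print(f"generator now produces samples around {fake_mean:.2f} (real mean: 4.0)")
```

The two hand-written gradient blocks are the "ongoing cycle of competition" in code: each update to the discriminator changes the loss surface the generator climbs, and vice versa.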
Effect on Community
The extensive use of deepfake technology has serious ramifications for several industries. As the technology develops, immediate action is required to manage its effects responsibly and to promote ethical use, including through strict laws and technological safeguards. Deepfakes that mimic prominent politicians' statements or videos are a serious issue, since they have the potential to spread instability and make it difficult for the public to discern the true nature of politics. In the entertainment industry, deepfake technology can generate entirely new characters or bring stars back to life for posthumous roles. As it becomes harder and harder to tell fake content from authentic content, it becomes simpler for attackers to deceive people and businesses.
Ongoing Deepfake Assaults In India
Deepfake videos continue to target popular celebrities, with Priyanka Chopra the most recent victim of this unsettling trend. Her deepfake adopts a different strategy from earlier examples involving actresses such as Rashmika Mandanna, Katrina Kaif, Kajol, and Alia Bhatt. Rather than editing her face into contentious situations, the misleading video keeps her appearance the same but modifies her voice, replacing real interview quotes with fabricated commercial phrases. The deceptive video shows Priyanka promoting a product and discussing her annual salary, highlighting the worrying development of deepfake technology and its potential effect on prominent personalities.
Actions Considered by Authorities
A PIL was filed in the Delhi High Court requesting that access to websites that produce deepfakes be blocked. The petitioner's attorney argued that the government should at the very least establish guidelines to hold individuals accountable for misuse of deepfake and AI technology. He also proposed that websites be required to label information produced through AI as such and be prevented from producing unlawful content. A division bench highlighted how complicated the problem is and suggested that the Centre arrive at a balanced solution without infringing the right to freedom of speech and expression on the internet.
Information Technology Minister Ashwini Vaishnaw stated that the government would implement new laws and guidelines to curb the dissemination of deepfake content. He presided over a meeting with social media companies to discuss the problem of deepfakes. "We will begin drafting regulation immediately, and soon, we are going to have a fresh set of regulations for deepfakes. This might come in the way of amending the current framework or ushering in new rules, or a new law," he stated.
Prevention and Detection Techniques
To effectively combat the growing threat posed by the misuse of deepfake technology, people and institutions should prioritise developing critical thinking abilities, carefully examining visual and auditory cues for discrepancies, making use of tools like reverse image search, keeping up with the latest deepfake trends, and rigorously fact-checking claims against reputable media sources. Important actions to improve resilience against deepfake threats include putting strong security policies in place, integrating cutting-edge deepfake detection technologies, supporting the development of ethical AI, and encouraging candid communication and cooperation. By combining these tactics and adapting to the constantly changing terrain, we can all work together to manage the problems presented by deepfake technology effectively and mindfully.
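One of the tools mentioned above, reverse image search, typically relies on perceptual fingerprints of images. The snippet below sketches one such technique, the average hash (aHash), on a tiny hand-made grayscale grid; real systems first resize the image down (for example to 8x8 pixels) before hashing, and the pixel values here are invented for illustration.

```python
# Sketch of a perceptual "average hash" (aHash): pixels brighter than the
# image's mean become 1-bits, so near-identical images yield hashes with
# a small Hamming distance even after recompression or brightness shifts.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# the "same" image after slight brightness changes (e.g. recompression)
recompressed = [[190, 210, 15, 5],
                [205, 195, 12, 8],
                [5, 15, 210, 190],
                [12, 8, 195, 205]]

d = hamming_distance(average_hash(original), average_hash(recompressed))
print(d)  # 0: the fingerprints match despite pixel-level differences
```

This is why a reverse image search can often locate the original source of a doctored photo: the manipulated copy usually still fingerprints close to the original.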
Conclusion
Advanced artificial-intelligence-powered deepfake technology produces extraordinarily lifelike computer-generated content, raising both creative and moral questions. Its misuse presents major difficulties such as identity theft and the propagation of misleading information, as demonstrated by examples in India such as the recent deepfake video involving Priyanka Chopra. To counter this danger, it is important to develop critical thinking abilities, use detection strategies such as analysing audio quality and facial expressions, and keep up with current trends. A thorough strategy that incorporates fact-checking, preventative tactics, and awareness-raising, backed by strong security policies, cutting-edge detection technologies, ethical AI development, and open cooperation, is necessary to protect against the negative effects of deepfake technology and to create a truly cyber-safe environment for netizens.
References:
- https://yourstory.com/2023/11/unveiling-deepfake-technology-impact
- https://www.indiatoday.in/movies/celebrities/story/deepfake-alert-priyanka-chopra-falls-prey-after-rashmika-mandanna-katrina-kaif-and-alia-bhatt-2472293-2023-12-05
- https://www.csoonline.com/article/1251094/deepfakes-emerge-as-a-top-security-threat-ahead-of-the-2024-us-election.html
- https://timesofindia.indiatimes.com/city/delhi/hc-unwilling-to-step-in-to-curb-deepfakes-delhi-high-court/articleshow/105739942.cms
- https://www.indiatoday.in/india/story/india-among-top-targets-of-deepfake-identity-fraud-2472241-2023-12-05
- https://sumsub.com/fraud-report-2023/

The World Wide Web was created as a portal for communication, connecting people across great distances. It started with electronic mail; mail gave way to instant messaging, which let people converse and interact from afar in real time. Now the new paradigm is the Internet of Things: machines communicating with one another. One can wear a gadget that unlocks the front door upon arrival at home and messages the air conditioner to switch on. This is IoT.
WHAT EXACTLY IS IoT?
The term ‘Internet of Things’ was coined in 1999 by Kevin Ashton, a computer scientist who, while working at Procter & Gamble (P&G), put Radio Frequency Identification (RFID) chips on products in order to track them through the supply chain. And after the launch of the iPhone in 2007, there were already more connected devices than people on the planet.
Fast forward to today, and we live in a more connected world than ever. Even our handheld devices and household appliances can now connect and communicate through a vast network built for transferring and receiving data between devices. There are currently more IoT devices than users in the world, and according to the WEF’s State of the Connected World report, by 2025 there will be more than 40 billion such devices recording data for analysis.
IoT finds use in many parts of our lives. It has helped businesses streamline their operations, reduce costs, and improve productivity. IoT also helped during the Covid-19 pandemic, with devices that could help with contact tracing and wearables that could be used for health monitoring. All of these devices are able to gather, store and share data so that it can be analyzed. The information is gathered according to rules set by the people who build these systems.
APPLICATION OF IoT
IoT is used by both consumers and the industry.
Some widely used examples of Consumer IoT (CIoT) are wearables such as health and fitness trackers, smart rings with near-field communication (NFC), and smartwatches, which gather a great deal of personal data. Smart clothing with embedded sensors can monitor the wearer’s vital signs, and there is even smart jewelry that can monitor sleeping patterns and stress levels.
With the advent of virtual and augmented reality, the gaming industry can now make the experience even more immersive and engrossing. Smart glasses and headsets are used, along with armbands fitted with sensors that can detect the movement of arms and replicate the movement in the game.
At home, there are smart TVs, security cameras, smart bulbs, home control devices, and other IoT-enabled ‘smart’ appliances like coffee makers, that can be turned on through an app, or at a particular time in the morning so that it acts as an alarm. There are also voice-command assistants like Alexa and Siri, and these work with software written by manufacturers that can understand simple instructions.
Industrial IoT (IIoT) mainly uses connected machines for the purposes of synchronization, efficiency, and cost-cutting. For example, smart factories gather and analyze data as the work is being done. Sensors are also used in agriculture to check soil moisture levels, and these then automatically run the irrigation system without the need for human intervention.
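The sensor-driven irrigation described above is, at its core, a small control loop. The sketch below illustrates one plausible version with a hysteresis band so the pump does not rapidly toggle on and off; the thresholds and sensor readings are invented for illustration.

```python
# Sketch of sensor-driven irrigation: the controller reads a soil-moisture
# percentage and switches the pump without human intervention. The two
# thresholds form a hysteresis band so the pump does not chatter.

MOISTURE_LOW = 30    # % below which irrigation starts
MOISTURE_HIGH = 60   # % above which irrigation stops

def irrigation_decision(moisture_pct: float, pump_on: bool) -> bool:
    """Return the pump's next state given the current soil moisture."""
    if moisture_pct < MOISTURE_LOW:
        return True          # too dry: start watering
    if moisture_pct > MOISTURE_HIGH:
        return False         # wet enough: stop watering
    return pump_on           # in between: keep the current state

# A day's worth of readings from a (hypothetical) field sensor
readings = [55, 42, 28, 33, 48, 65]
pump = False
states = []
for r in readings:
    pump = irrigation_decision(r, pump)
    states.append(pump)
print(states)  # [False, False, True, True, True, False]
```

Note how the pump stays on at 33% and 48%: with hysteresis, watering continues until the soil is properly wet rather than cutting out the moment the reading crosses the lower threshold.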
Statistics
- The IoT device market is poised to reach $1.4 trillion by 2027, according to Fortune Business Insights.
- The number of cellular IoT connections is expected to reach 3.5 billion by 2023. (Forbes)
- The amount of data generated by IoT devices is expected to reach 73.1 ZB (zettabytes) by 2025.
- 94% of retailers agree that the benefits of implementing IoT outweigh the risk.
- 55% of companies believe that 3rd party IoT providers should have to comply with IoT security and privacy regulations.
- 53% of all users acknowledge that wearable devices will be vulnerable to data breaches and viruses.
- Companies could invest up to 15 trillion dollars in IoT by 2025 (Gigabit)
CONCERNS AND SOLUTIONS
- Two of the biggest concerns with IoT devices are the privacy of users and the devices being secure in order to prevent attacks by bad actors. This makes knowledge of how these things work absolutely imperative.
- It is worth noting that these devices typically work with a central hub, like a smartphone: the device pairs with the phone through an app, which acts as a gateway. If a hacker were to target that IoT device, the smartphone could be compromised as well.
- With technology like smart television sets that have cameras and microphones, the major concern is that hackers could hack and take over the functioning of the television as these are not adequately secured by the manufacturer.
- A hacker could take control of the camera and cyberstalk the victim. It is therefore very important to become familiar with a device's features and ensure it is well protected from unauthorized use. Even simple measures help, such as keeping the camera covered when it is not being used.
- There is also the concern that, since IoT devices gather and share data without human intervention, they could be transmitting data the user does not want to share. This is true of health trackers: users who wear heart and blood pressure monitors may have their data sent to an insurance company, which could then raise the premium on their life insurance based on that data.
- IoT devices often keep functioning as normal even if they have been compromised. Most devices do not log an attack or alert the user, and changes like higher power or bandwidth usage go unnoticed after the attack. It is therefore very important to make sure the device is properly protected.
- It is also important to keep the software of the device updated as vulnerabilities are found in the code and fixes are provided by the manufacturer. Some IoT devices, however, lack the capability to be patched and are therefore permanently ‘at risk’.
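Since a compromised device's main tell may be a quiet rise in power or bandwidth usage, a simple baseline check can surface such spikes. The sketch below flags any sample far above a rolling average; the window size and 2x multiplier are illustrative choices, not a standard.

```python
from collections import deque

# Sketch of an anomaly check for the "unnoticed bandwidth spike" problem:
# compare each new usage sample against a rolling baseline and flag
# samples far above it. An anomalous sample is NOT added to the history,
# so a single spike cannot inflate the baseline it is judged against.

def make_monitor(window: int = 5, multiplier: float = 2.0):
    history = deque(maxlen=window)

    def check(sample_mb: float) -> bool:
        """Return True if this sample looks anomalous vs the baseline."""
        if len(history) == window:
            baseline = sum(history) / window
            if sample_mb > multiplier * baseline:
                return True      # possible compromise: investigate
        history.append(sample_mb)
        return False

    return check

check = make_monitor()
samples = [10, 12, 11, 9, 13, 11, 55, 10]   # hourly MB used by the device
flags = [check(s) for s in samples]
print(flags)  # the 55 MB spike is flagged
```

A home router or hub that logged per-device usage this way would at least give the user a visible signal, addressing the "devices keep functioning as normal" problem noted above.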
CONCLUSION
Humanity now inhabits a world made up of nodes that talk to each other and get things done. Users can harmonize their devices so that everything runs like a tandem bike – completely in sync with all other parts. But while we enjoy the benefits, it is also very important to understand what one is using, how it functions, and how to tackle issues should they come up. This matters all the more because once people get used to IoT, it will be that much harder to give up the comfort and ease these systems provide, so it makes sense to be prepared for any eventuality. Often, good and sensible usage alone can keep devices safe and services intact. But users should stay alert to any issues, because forewarned is forearmed.