# Fact Check: Pakistan’s Airstrike Claim Uses Video Game Footage
Executive Summary:
A widely circulated claim on social media, including a post from the official X account of Pakistan, alleges that the Pakistan Air Force (PAF) carried out an airstrike on India, supported by a viral video. However, according to our research, the video used in these posts is actually footage from the video game Arma-3 and has no connection to any real-world military operation. The use of such misleading content contributes to the spread of false narratives about a conflict between India and Pakistan and has the potential to create unnecessary fear and confusion among the public.

Claim:
Viral social media posts, including one from the official Government of Pakistan X handle, claim that the PAF launched a successful airstrike against Indian military targets. The footage accompanying the claim shows jets firing missiles and explosions on the ground. The video is presented as recent, factual evidence of heightened military tensions.


Fact Check:
As per our research using reverse image search, the videos circulating online that claim to show Pakistan launching an attack on India under the name 'Operation Sindoor' are misleading. There is no credible evidence or reliable reporting to support the existence of any such operation. The Press Information Bureau (PIB) has also verified that the video being shared is false and misleading. During our research, we also came across footage from the video game Arma-3 on YouTube, which appears to have been repurposed to create the illusion of a real military conflict. This strongly indicates that fictional content is being used to propagate a false narrative. The likely intention behind this misinformation is to spread fear and confusion by portraying a conflict that never actually took place.
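Frame matching of this kind is often automated with perceptual hashing: a keyframe from the viral clip and a frame from the suspected source footage are each reduced to a compact fingerprint, and a small Hamming distance between fingerprints indicates near-identical images. Below is a minimal, illustrative average-hash sketch; it uses synthetic 8×8 grayscale grids in place of real video frames, and the function names are our own, not those of any fact-checking tool.

```python
# Average-hash (aHash) sketch: fingerprint an 8x8 grayscale frame and
# compare two fingerprints by Hamming distance. A real pipeline would
# first decode video keyframes and downscale them to 8x8; here we use
# synthetic pixel grids purely for illustration.

def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (0-255) for an 8x8 frame."""
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the frame average.
    return sum((1 << i) for i, p in enumerate(pixels) if p > avg)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Synthetic "frames": the second is the first with mild brightness noise,
# the third is unrelated (inverted).
frame_a = [(i * 4) % 256 for i in range(64)]
frame_b = [min(255, p + 3) for p in frame_a]
frame_c = [255 - p for p in frame_a]

d_same = hamming(average_hash(frame_a), average_hash(frame_b))
d_diff = hamming(average_hash(frame_a), average_hash(frame_c))
# Near-duplicate frames yield a much smaller distance than unrelated ones.
```

In practice, a distance below a small threshold (commonly under 10 bits of 64) is treated as a probable match, which is how repurposed game footage can be traced back to its source upload.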


Conclusion:
The viral video does not depict any real PAF strike; it is repurposed video game footage. There is no reliable evidence to support the claim, and the videos circulating with it are misleading and irrelevant. Such false information must be countered promptly because it has the potential to cause needless panic. According to authorities and fact-checking groups, no such operation is occurring.
- Claim: Viral social media posts claim PAF attack on India
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
In the new age of technology, the internet and social media continue to witness a surge in deepfake videos, a phenomenon that blurs the line between reality and fiction. A string of deepfake videos of Bollywood actors and other famous personalities has raised serious concerns, and Prime Minister Narendra Modi spoke about the risks of artificial intelligence at the G20 Virtual Summit. The central government has recently announced that it will soon set up dedicated regulations to tackle this menace, including holding social media platforms and creators responsible for violations of the rules. Victims of such misuse of fast-moving technology often shy away from initiating legal action, but the government has announced its support for them and promised to stand by complainants against deepfake videos, including helping individuals report the incidents and any violations by platforms.
Social media platforms to realign their policies with Indian laws
The Ministry of Electronics and Information Technology (MeitY) announced on 24th November 2023 that it would give social media platforms seven days to align their terms of service and other policies with Indian laws and regulations in order to address the hosting of deepfakes on these platforms. All platforms must align their terms of use with the 12 categories of content prohibited under rule 3(1)(b) of the Information Technology (IT) Rules, 2021.
The platforms must harmonise their terms and policies so that every user on every platform is aware that the platform intends to be safe and trusted and will not tolerate the 12 types of content or information prohibited under the IT Act and the IT Rules. The government’s approach is to collectively advocate for responsible and safe use of the internet, and it has taken a proactive step in partnership with these social media platforms to ensure an era in which they are more responsible, more responsive to the expectations of the law, and more compliant.
Officer to be appointed under rule 7
As deepfake videos continue to surface on social media, the government has geared up to curb such content online. Mr. Rajeev Chandrasekhar, Minister of State (MeitY), stated that the government will soon appoint an officer to take appropriate action against deepfake videos. The statement followed the government’s meeting with industry stakeholders and important players held on 24 November 2023. He added that MeitY and the Government of India will nominate an officer under rule 7 of the IT Rules, 2021 and will ensure full compliance from all platforms. The officer appointed under rule 7 will be entrusted with building a mechanism through which users can file complaints regarding deepfakes, and MeitY may also assist aggrieved users in filing FIRs in such cases. The Minister further said that a platform will be created to make it easy for netizens to bring reports or allegations of violations of law by the platforms to the attention of the Government of India, and that the rule 7 officer will take up that information and respond accordingly.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (updated as on 6.4.2023)
Rule 3(1)(b) states that intermediaries shall inform their rules and regulations, privacy policy, and user agreement to the user, and shall make reasonable efforts to restrict users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information prohibited under this rule, which includes deepfakes, misinformation, CSAM (Child Sexual Abuse Material), etc. As per rule 3(2)(b), intermediaries shall remove or disable access, within 24 hours of receiving a complaint, to content that exposes the private areas of individuals, shows such individuals in full or partial nudity or in a sexual act, or is in the nature of impersonation, including morphed images.
Ongoing Efforts Ahead of Crucial Meeting with Tech Giants
Ahead of the government’s meeting with online platforms such as Google, Facebook, and YouTube on Friday, 24 November 2023, Mr. Rajeev Chandrasekhar, Minister of State (MeitY), noted that the Government of India had been alerting platforms to the threat of misinformation, including deepfakes, since October 2022. He further added that the current IT Rules under the IT Act already impose adequate compliance requirements on platforms to deal with deepfakes.
Deepfake Misinformation
Misinformation powered by AI is becoming an even more potent force, able to disrupt, mislead, and create chaos and confusion at a scale and of a type that is deeply detrimental. A deepfake, put simply, is misinformation that is powered or enhanced by AI. Video-based deepfake misinformation is especially dangerous because it has greater reach: video is the preferred form of content consumption on the internet today.
Way forward
The Honourable Prime Minister has noted that deepfakes are deeply disruptive: they can create divisions and all kinds of disruption in communities and families, and the misuse of deepfake technology is therefore a clear and present danger to a safe and trusted internet.
The government is drafting dedicated legislation to tackle deepfakes. A future law is certainly required, given that the IT Act is 23 years old; in the meantime, the current IT Rules provide for compliance requirements by the platforms on misinformation, patently false information, and deepfakes, reinforced by the recent government advisory on misinformation and deepfakes.
Conclusion
The Prime Minister has alerted the country to the dangers of deepfakes online. The government is now looking seriously into the issue and has issued guidelines for intermediaries, and it is hoped that within a finite period the threat of deepfakes will no longer exist in our system. The government has made clear that, apart from the people spreading deepfake videos, the platforms that let them spread without taking action will also be liable; they are liable under the current rules and will be even more so once new rules and regulations are brought in.
References:
- https://www.moneycontrol.com/news/technology/deepfakes-meity-gives-social-media-platforms-7-day-ultimatum-to-align-their-policies-to-indian-laws-and-regulations-11805521.html
- https://www.azbpartners.com/bank/amendments-to-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021/#:~:text=Prior%20to%20the%20amendment%2C%20under%20Rule%203(1)
- https://www.drishtiias.com/daily-updates/daily-news-analysis/amendments-to-the-it-rules-2021
- https://youtu.be/zmI2ml1d_Es?feature=shared
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

Introduction
The Ministry of Communications, Department of Telecommunications notified the Telecommunications (Telecom Cyber Security) Rules, 2024 on 22nd November 2024. These rules were notified to address the vulnerabilities that rapid technological advancement poses, as the evolving nature of cyber threats has made it necessary to strengthen and enhance telecom cyber security. They empower the central government to seek traffic data and any other data (other than the content of messages) from service providers.
Background Context
The Telecommunications Act, 2023 was passed by Parliament in December 2023, received the President’s assent, and was published in the official Gazette on 24 December 2023. The Act is divided into 11 chapters, 62 sections, and 3 schedules. It repeals the older legislation, viz. the Indian Telegraph Act, 1885 and the Indian Wireless Telegraphy Act, 1933. The government has enforced the Act in phases: sections 1, 2, 10-30, 42-44, 46, 47, 50-58, 61, and 62 came into force on 26 June 2024, while sections 6-8, 48, and 59(b) took effect from 5 July 2024.
These rules have been notified under the powers granted by Section 22(1) and Section 56(2)(v) of the Telecommunications Act, 2023.
Key Provisions of the Rules
These rules collectively aim to reinforce telecom cyber security and ensure the reliability of telecommunication networks and services. They are as follows:
● Collection of Data by the Central Government:
The Central Government, or an agency authorised by it, may request traffic or other data from a telecommunication entity through the Central Government portal to safeguard and ensure telecom cyber security. In addition, the Central Govt. can instruct telecommunication entities to establish the necessary infrastructure and equipment for collecting, processing, and storing data from designated points.
● Obligations Relating To Telecom Cybersecurity:
Telecom entities must adhere to various obligations to prevent cyber security risks. Telecommunication cyber security must not be endangered, and no one is allowed to send messages that could harm it. Misuse of telecommunication equipment such as identifiers, networks, or services is prohibited. Telecommunication entities are also required to comply with directions and standards issued by the Central Govt. and furnish detailed reports of actions taken on the government portal.
● Compulsory Measures To Be Taken By Every Telecommunication Entity:
Telecom entities must adopt a telecom cyber security policy, and notify the Central Govt. of it, to enhance cybersecurity. They have to identify and mitigate the risks of security incidents, ensure timely responses, and take appropriate measures to address such incidents and minimise their impact. Periodic telecom cyber security audits must be conducted to assess network resilience against potential threats. Entities must report security incidents promptly to the Central Govt. and establish facilities such as a Security Operations Centre.
● Reporting of Security Incidents:
- Telecommunication entities must report the detection of security incidents affecting their network or services within six hours.
- Within 24 hours, detailed information about the incident must be submitted, including the number of affected users, its duration and geographical scope, the impact on services, and the remedial measures implemented.
The Central Govt. may require the affected entity to provide further information, such as its cyber security policy, or conduct a security audit.
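The two reporting windows above are simple offsets from the moment an incident is detected. A small illustrative sketch of a deadline calculator (the six-hour and twenty-four-hour windows are from the rules as summarised above; the constant and function names are our own):

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadline calculator for the reporting windows described in
# the Telecom Cyber Security Rules, 2024: an initial report within six
# hours of detecting a security incident, and detailed information
# within twenty-four hours.

INITIAL_REPORT_WINDOW = timedelta(hours=6)
DETAILED_REPORT_WINDOW = timedelta(hours=24)

def reporting_deadlines(detected_at):
    """Return (initial_deadline, detailed_deadline) for an incident."""
    return (detected_at + INITIAL_REPORT_WINDOW,
            detected_at + DETAILED_REPORT_WINDOW)

detected = datetime(2024, 11, 22, 10, 0, tzinfo=timezone.utc)
initial, detailed = reporting_deadlines(detected)
# initial  -> 2024-11-22 16:00 UTC
# detailed -> 2024-11-23 10:00 UTC
```

A compliance workflow would anchor both clocks to the detection timestamp recorded in the entity's incident log, since the windows run from detection, not from when remediation begins.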
CyberPeace Policy Analysis
The notified rules reflect critical updates from their draft version, including the obligation to report incidents promptly upon becoming aware of them, enabling robust cybersecurity oversight. Importantly, individuals whose telecom identifiers are suspended or disconnected due to security concerns must be given a copy of the order and a chance to appeal, ensuring procedural fairness. The notified rules have, however, removed the definitions of "traffic data" and "message content", which may lead to operational ambiguities. While the rules establish a solid foundation for protecting telecom networks, they pose significant compliance challenges, particularly for smaller operators who may struggle with the costs of audits, infrastructure, and reporting requirements.
Conclusion
The Telecom Cyber Security Rules, 2024 represent a comprehensive approach to securing India’s communication networks against cyber threats. By mandating robust cybersecurity policies, rapid incident reporting, and procedural safeguards, the rules balance national security with privacy and fairness. However, addressing implementation challenges through stakeholder collaboration and detailed guidelines will be key to ensuring compliance without overburdening telecom operators. With adaptive execution, these rules can enhance the resilience of India’s telecom sector and position the country as a global leader in digital security standards.
References
● Telecommunications Act, 2023 https://acrobat.adobe.com/id/urn:aaid:sc:AP:767484b8-4d05-40b3-9c3d-30c5642c3bac
● CyberPeace First Read of the Telecommunications Act, 2023 https://www.cyberpeace.org/resources/blogs/the-government-enforces-key-sections-of-the-telecommunication-act-2023
● Telecommunications (Telecom Cyber Security) Rules, 2024

CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart, is a challenge, typically an image or distorted text, that users have to identify or interpret to prove they are human. reCAPTCHA, introduced in 2007 and now offered by Google as a free service, is one of the most commonly used technologies for telling computers and humans apart. CAPTCHA protects websites from spam and abuse through tests considered easy for humans but assumed to be difficult for bots to solve.
But this has now changed. As AI becomes increasingly sophisticated, it can solve CAPTCHA tests with greater accuracy than humans, rendering them increasingly ineffective. This raises the question of whether CAPTCHA remains an effective detection tool in the face of AI’s advances.
CAPTCHA Evolution: From 2007 Till Now
CAPTCHA has evolved through various versions to keep bots at bay. reCAPTCHA v1 relied on distorted text recognition, v2 introduced image-based tasks and behavioural analysis, and v3 operated invisibly, assigning risk scores based on user interactions. While these advancements improved user experience and security, AI now solves CAPTCHA with 96% accuracy, surpassing humans (50-86%). Bots can mimic human behaviour, undermining CAPTCHA’s effectiveness and raising the question: is it still a reliable tool for distinguishing real people from bots?
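Server-side, reCAPTCHA v3 works by POSTing the client-issued token to Google's `siteverify` endpoint and then acting on the risk score in the JSON response. The sketch below shows that flow; the endpoint URL and the `success`/`score`/`action` response fields are Google's documented API, while the 0.5 threshold and the function names are illustrative choices, not a prescribed configuration.

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def fetch_assessment(secret, token):
    """POST the client token to Google's siteverify endpoint (network call)."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=data) as resp:
        return json.load(resp)

def passes_check(assessment, threshold=0.5):
    """Treat the request as human only if verification succeeded and the
    risk score (1.0 = very likely human, 0.0 = very likely bot) clears
    the site's chosen threshold."""
    return bool(assessment.get("success")) and assessment.get("score", 0.0) >= threshold

# Example siteverify-style response (no network call is made here):
sample = {"success": True, "score": 0.9, "action": "login"}
```

Sites tune the threshold per action: a low-risk page might accept scores down to 0.3, while a payment flow might demand 0.7 or trigger step-up verification, which is exactly the behavioural-scoring layer that sophisticated bots now try to game.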
Smarter Bots and Their Rise
AI techniques such as machine learning, deep learning, and neural networks have advanced rapidly over the past decade, making it easier for bots to bypass CAPTCHA. They allow bots to process and interpret CAPTCHA challenges, both text and images, with almost human-like competence. Optical Character Recognition (OCR) defeats the earlier text-based versions: models can now recognise and decipher the distorted text, rendering such CAPTCHAs useless. Image-recognition models trained on huge datasets can identify the specific objects a challenge asks for. And through behavioural analysis, bots can mimic human habits and interaction patterns, fooling CAPTCHA’s invisible checks.
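Behavioural analysis of the kind invisible CAPTCHAs rely on often reduces to statistical checks on interaction timing, which is precisely what bots learn to imitate. Below is a toy sketch of one such heuristic; the coefficient-of-variation threshold and function names are illustrative, not any vendor's actual detector.

```python
import statistics

def looks_scripted(intervals, cv_threshold=0.05):
    """Flag event timing as bot-like when the gaps between user events
    are nearly uniform (coefficient of variation below the threshold).
    Human input tends to show noticeable jitter; naive scripts do not."""
    mean = statistics.mean(intervals)
    cv = statistics.pstdev(intervals) / mean
    return cv < cv_threshold

human_gaps = [0.12, 0.31, 0.18, 0.25, 0.09]  # irregular, human-like seconds
bot_gaps = [0.10, 0.10, 0.10, 0.10, 0.10]    # metronomic, script-like
```

A bot that simply adds random jitter to its timing defeats this check, which illustrates why behavioural signals alone are no longer a reliable human/bot discriminator.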
To defeat CAPTCHA, attackers have been known to use adversarial machine learning: AI models trained specifically to beat CAPTCHA. They collect CAPTCHA datasets together with the answers and train a model that can predict the correct responses. The implications of CAPTCHA failure for platforms range from fraud and spam to cybersecurity breaches and cyberattacks.
CAPTCHA vs Privacy: GDPR and DPDP
GDPR and the DPDP Act emphasise protecting personal data, including online identifiers like IP addresses and cookies. Both frameworks mandate transparency when data is transferred internationally, raising compliance concerns for reCAPTCHA, which processes data on Google’s US servers. Additionally, reCAPTCHA's use of cookies and tracking technologies for risk scoring may conflict with the DPDP Act's broad definition of data. The lack of standardisation in CAPTCHA systems highlights the urgent need for policymakers to reevaluate regulatory approaches.
CyberPeace Analysis: The Future of Human Verification
CAPTCHA, once a cornerstone of online security, is losing ground as AI outperforms humans in solving these challenges with near-perfect accuracy. Innovations like invisible CAPTCHA and behavioural analysis provided temporary relief, but bots have adapted, exploiting vulnerabilities and undermining their effectiveness. This decline demands a shift in focus.
Emerging alternatives such as AI-based anomaly detection, biometric authentication, and blockchain-based verification hold promise but raise ethical concerns around privacy, inclusivity, and surveillance. The battle against bots isn’t just about tools; it’s about reimagining trust and security in a rapidly evolving digital world.
AI is clearly winning the CAPTCHA war, but the real victory will lie in designing solutions that balance security, user experience, and ethical responsibility. It is time to embrace smarter, collaborative innovations to secure a human-centric internet.
References
- https://www.business-standard.com/technology/tech-news/bot-detection-no-longer-working-just-wait-until-ai-agents-come-along-124122300456_1.html
- https://www.milesrote.com/blog/ai-defeating-recaptcha-the-evolving-battle-between-bots-and-web-security
- https://www.technologyreview.com/2023/10/24/1081139/captchas-ai-websites-computing/
- https://datadome.co/guides/captcha/recaptcha-gdpr/