# FactCheck: Viral Video of Electric Car Powered by Generator Is AI-Generated
Executive Summary
A video circulating on social media shows an electric car allegedly being powered by a portable generator attached to it. The clip is being shared with the claim that the generator is directly running the vehicle, suggesting a groundbreaking or unusual technological feat. However, research conducted by CyberPeace found the viral claim to be false: the video is not authentic but AI-generated.
Claim
On February 22, 2026, a user on X (formerly Twitter) shared the viral video with the caption: “After watching this video, Newton might turn in his grave.” The post implied that the video demonstrates a scientific impossibility.

Fact Check:
To verify the claim, we conducted a keyword search on Google. However, we found no credible reports from any reputable media organization supporting the assertion made in the viral post. A close examination of the video revealed several visual inconsistencies and unnatural elements, raising suspicion that the footage may have been generated using artificial intelligence. We then analyzed the video using the AI detection tool Hive Moderation. The results indicated a 96 percent probability that the video was AI-generated.

In the next step of our research, we scanned the video using another AI detection platform, WasItAI, which also concluded that the viral video was AI-generated.

Conclusion
Our research confirms that the viral video is not real. It has been artificially created using AI technology and is being circulated with a misleading claim.

Introduction
The Ministry of Civil Aviation, Government of India, established the ‘DigiYatra’ initiative to ensure hassle-free and health-risk-free journeys for travellers. The initiative uses a single token of face biometrics to digitally validate identity, travel, and health, along with any other data needed to enable air travel.
Cybersecurity is a top priority for the DigiYatra platform administrators, with measures implemented to mitigate the risks of data loss, theft, or leakage. With over 6.5 million users, DigiYatra is an important step for India toward secure digital travel, built on the seamless integration of proactive cybersecurity protocols. This blog examines the development, challenges, and implications of securing digital travel.
What is DigiYatra? A Quick Overview
DigiYatra is a flagship initiative by the Government of India to enable paperless travel, reducing identity checks for a seamless airport experience. It processes passenger entry automatically using facial recognition at all airport checkpoints, including the main entry, security check areas, and aircraft boarding.
This technology makes the boarding process quick and seamless as each passenger needs less than three seconds to pass through every touchpoint. Passengers’ faces essentially serve as their documents (ID proof and if required, Vaccine Proof) and their boarding passes.
DigiYatra has also enhanced airport security, as passenger data is validated against the Airlines Departure Control System, allowing only designated passengers to enter the terminal. Additionally, the entire DigiYatra process is non-intrusive and automatic. By modernising long-standing airport security and operational protocols, the platform has also significantly improved efficiency and output for airport professionals, from CISF personnel to airline staff.
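To make the single-token flow described above concrete, here is a deliberately simplified sketch of how a biometric check at a touchpoint might work. Every name, the similarity measure, and the match threshold are hypothetical illustrations; the actual DigiYatra architecture and matching logic are not public at this level of detail.

```python
# Illustrative sketch only: a toy model of a single-token biometric check
# at an airport touchpoint. All names and thresholds are hypothetical.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TouchpointValidator:
    """Validates a traveller at a checkpoint using one enrolled face token."""

    def __init__(self, match_threshold=0.95):
        self.match_threshold = match_threshold
        self.enrolled = {}  # token_id -> (embedding, flight_no)

    def enrol(self, token_id, embedding, flight_no):
        # One-time enrolment: a face embedding linked to a validated booking.
        self.enrolled[token_id] = (embedding, flight_no)

    def check(self, token_id, live_embedding, gate_flight_no):
        # A touchpoint admits the traveller only if the live capture matches
        # the enrolled embedding AND the booking matches the gate's flight.
        record = self.enrolled.get(token_id)
        if record is None:
            return False
        embedding, flight_no = record
        if flight_no != gate_flight_no:
            return False
        return cosine_similarity(embedding, live_embedding) >= self.match_threshold
```

The key design point the sketch captures is that a single enrolled token drives every checkpoint decision, so no document needs to be re-presented at each gate.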
Policy Origins and Framework
Rooted in the Government of India's Digital India campaign and enabled by the National Civil Aviation Policy (NCAP) 2016, DigiYatra aims to modernise air travel by integrating Aadhaar-based passenger identification. While Aadhaar is currently the primary ID, efforts are underway to include other identification methods. The platform, supported by stakeholders like the Airports Authority of India (26%) and private airports (14.8% each), must navigate stringent cybersecurity demands. Compliance with the Digital Personal Data Protection Act, 2023, ensures the secure use of sensitive facial recognition data, while the Aircraft (Security) Rules, 2023, mandate robust interoperability and data protection mechanisms across stakeholders. DigiYatra also aspires to democratise digital travel, extending its reach to underserved airports and non-tech-savvy travellers. As India refines its cybersecurity and privacy frameworks, learning from global best practices is essential to safeguarding data and ensuring seamless, secure air travel operations.
International Practices
Global practices offer crucial lessons for strengthening DigiYatra's cybersecurity and streamlining the travel experience. Initiatives such as CLEAR in the USA and the Seamless Traveller initiative in Singapore offer actionable insights into expanding the system to its full potential. CLEAR is operational in 58 airports and has more than 17 million users. Singapore's Seamless Traveller initiative has been active since early 2024, with the aim of shifting 95% of travellers to automated lanes by 2026.
Some additional measures that India can adopt from international initiatives are regular audits and updates to cybersecurity policies. Further, India can aim for a cross-border policy for international travel. By implementing these recommendations, DigiYatra can not only improve data security and operational efficiency but also establish India as a leader in global aviation security standards, ensuring trust and reliability for millions of travellers.
CyberPeace Recommendations
Some recommendations for further improving upon our efforts for seamless and secure digital travel are:
- Strengthen the legislation on biometric data usage and storage.
- Collaborate with global aviation bodies to develop standardised operations.
- Cybersecurity technologies, such as blockchain for immutable data records, should be adopted alongside encryption standards, data minimisation practices, and anonymisation techniques.
- Foster a cybersecurity-first culture across aviation stakeholders.
Conclusion
DigiYatra represents a transformative step in modernising India’s aviation sector by combining seamless travel with robust cybersecurity. Leveraging facial recognition and secure data validation enhances efficiency while complying with the Digital Personal Data Protection Act, 2023, and Aircraft (Security) Rules, 2023.
DigiYatra must address challenges like secure biometric data storage, adopt advanced technologies like blockchain, and foster a cybersecurity-first culture to reach its full potential. Expanding to underserved regions and aligning with global best practices will further solidify its impact. With continuous innovation and vigilance, DigiYatra can position India as a global leader in secure, digital travel.
References
- https://government.economictimes.indiatimes.com/news/governance/digi-yatra-operates-on-principle-of-privacy-by-design-brings-convenience-security-ceo-digi-yatra-foundation/114926799
- https://www.livemint.com/news/india/explained-what-is-digiyatra-how-it-will-work-and-other-questions-answered-11660701094885.html
- https://www.civilaviation.gov.in/sites/default/files/2023-09/ASR%20Notification_published%20in%20Gazette.pdf
Introduction
Search engines have become indispensable in our daily lives, allowing us to find information instantly by entering keywords or phrases. The familiar browser prompt "search Google or type a URL" reflects just how seamless this journey to knowledge has become. With Google alone handling over 6.3 million searches per minute as of 2023 (Statista), one critical question arises: do search engines prioritise results based on user preferences and past behaviours, or are they truly unbiased?
Understanding AI Bias in Search Algorithms
AI bias, also known as machine learning bias or algorithmic bias, refers to systematically skewed results that arise when human biases embedded in the training data or in the algorithm itself distort outputs, potentially producing harmful outcomes. It can take the form of algorithmic bias, data bias, or interpretation bias, and can emerge from user history, geographical data, and broader societal biases reflected in training data.
Common harms include excluding certain groups of people from opportunities. In healthcare, underrepresenting data from women or minority groups can skew predictive AI algorithms. In recruitment, while AI helps streamline automated résumé screening to identify ideal candidates, a biased training dataset or biased input data can produce discriminatory outcomes.
Case in Point: Google’s "Helpful" Results and Its Impact
Google optimises results by analysing user interactions to determine satisfaction with specific types of content. This data-driven approach forms ‘filter bubbles’ by repeatedly displaying content that aligns with a user’s preferences, regardless of factual accuracy. While this can create a more personalised experience, it risks confining users to a limited view, excluding diverse perspectives or alternative viewpoints.
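The feedback loop behind filter bubbles can be shown with a toy model. This is an illustrative sketch, not Google's actual ranking system: a ranker that boosts whatever categories a user has clicked before, so repeated personalisation narrows what reaches the top.

```python
# Toy illustration (not any real search engine's algorithm): an
# engagement-driven ranker that boosts previously clicked categories,
# showing how personalisation can harden into a "filter bubble".
from collections import Counter

def rank_results(categories, click_history):
    """Order result categories by how often the user clicked them before."""
    clicks = Counter(click_history)
    # Ties are broken alphabetically so the ordering is deterministic.
    return sorted(categories, key=lambda c: (-clicks[c], c))

categories = ["health", "politics", "science", "sports"]
history = []
for _ in range(5):
    ranked = rank_results(categories, history)
    history.append(ranked[0])  # the user clicks the top result every time

# After a few rounds, a single category dominates the top slot: whichever
# result happened to be clicked first keeps getting reinforced.
```

Even in this trivial model, the first click permanently locks in the top slot, which is the essence of the self-reinforcing loop described above.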
The personal and societal impacts of such biases are significant. At an individual level, filter bubbles can influence decision-making, perceptions, and even mental health. On a societal level, these biases can reinforce stereotypes, polarise opinions, and shape collective narratives. There is also a growing concern that these biases may promote misinformation or limit users’ exposure to diverse perspectives, all stemming from the inherent bias in search algorithms.
Policy Challenges and Regulatory Measures
Regulating emerging technologies like AI, especially in search engine algorithms, presents significant challenges due to their intricate, proprietary nature. Traditional regulatory frameworks struggle to keep up, as existing laws were not designed to address the nuances of algorithm-driven platforms. Regulatory bodies worldwide are pushing for transparency and accountability in AI-powered search algorithms to counter biases and ensure fairness. For example, the EU's Artificial Intelligence Act establishes a regulatory framework that categorises AI systems by risk and enforces strict standards for transparency, accountability, and fairness for high-risk AI applications, which may include search engines. India proposed the Digital India Act in 2023, which is expected to define and regulate high-risk AI.
Efforts include ethical guidelines emphasising fairness, accountability, and transparency in information prioritisation. However, a complex regulatory landscape could hinder market entrants, highlighting the need for adaptable, balanced frameworks that protect user interests without stifling innovation.
CyberPeace Insights
In a world where search engines are gateways to knowledge, ensuring unbiased, accurate, and diverse information access is crucial. True objectivity remains elusive as AI-driven algorithms tend to personalise results based on user preferences and past behaviour, often creating a biased view of the web. Filter bubbles, which reinforce individual perspectives, can obscure factual accuracy and limit exposure to diverse viewpoints. Addressing this bias requires efforts from both users and companies. Users should diversify sources and verify information, while companies should enhance transparency and regularly audit algorithms for biases. Together, these actions can promote a more equitable, accurate, and unbiased search experience for all users.
References
- https://www.bbc.com/future/article/20241101-how-online-photos-and-videos-alter-the-way-you-think
- https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear
- https://www.ibm.com/topics/ai-bias#:~:text=In%20healthcare%2C%20underrepresenting%20data%20of,can%20skew%20predictive%20AI%20algorithms

Executive Summary
As India concluded its 77th Republic Day celebrations on January 26, 2026, with grandeur and patriotic enthusiasm along the iconic Kartavya Path, a video began circulating on social media claiming to show Indian security personnel failing to perform motorcycle stunts during the ceremonial parade. The short clip allegedly depicts soldiers attempting high-risk, synchronised motorcycle manoeuvres, only to lose balance and fall off their bikes. The visuals were widely shared online with mocking captions, suggesting incompetence during a nationally televised event. However, research by CyberPeace found that the video is not authentic and was digitally generated using artificial intelligence.
Claim
A Pakistan-based X user, Sadaf Baloch (@sadafzbaloch), shared the video on January 27, claiming it showed Indian security personnel failing to execute motorcycle stunts during the Republic Day parade held on January 26, 2026. While sharing the clip, the user wrote: “Every time the Indian Army tries a tactical stunt, it looks less like combat training and more like a low-budget circus trailer filmed in one take.” The post was widely circulated with similar narratives questioning the professionalism of Indian forces.
Here is the link and archive link to the post, along with a screenshot.

To verify the authenticity of the viral video, the Desk conducted a detailed frame-by-frame analysis. During the examination, a watermark linked to ‘Sora’, an AI text-to-video generation model, was detected at the 00:05 timestamp. The presence of this watermark strongly indicated that the video was artificially generated rather than recorded during a real-world event.

Fact Check:
Further visual scrutiny revealed several inconsistencies commonly associated with AI-generated content. The background appeared unnatural and lacked realistic depth, while the movements and reactions of the security personnel looked mechanically exaggerated and inconsistent with real physics. Facial expressions and body motions during the alleged falls also appeared unrealistic. To strengthen the verification, the Desk analysed the clip using Sightengine, an AI-detection tool. The results showed a 98 percent probability that the video contained AI-generated or deepfake elements.
Below is a screenshot of the result.
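A detection score like the one above is a probability, not a verdict, so fact-checkers typically map it to a triage decision before corroborating with other checks. The helper below is a hypothetical illustration of that step: the thresholds and labels are our own, not those of Sightengine, Hive, or any other tool.

```python
# Illustrative helper only: mapping an AI-detection tool's probability
# score (0-100) to a working triage label. Thresholds are hypothetical;
# no single score should be treated as conclusive on its own.

def interpret_detection_score(probability):
    """Map a 0-100 'likely AI-generated' score to a triage label."""
    if not 0 <= probability <= 100:
        raise ValueError("probability must be between 0 and 100")
    if probability >= 90:
        return "strong indication of AI generation"
    if probability >= 60:
        return "possible AI generation; corroborate with other checks"
    return "no strong AI signal; verify through other means"
```

Under these assumed thresholds, the viral clip's 98 percent score would land in the highest band, which is why the Desk still corroborated the result with keyword searches and official footage rather than relying on the tool alone.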

As part of the research, the Desk also conducted a customised keyword search and reviewed official coverage of the Republic Day parade. A full-length video broadcast by DD News on its official YouTube channel was examined. The footage showed joint CRPF and SSB motorcycle teams performing traditional daredevil stunts without any mishap. No incident resembling the viral claim was found in the official broadcast or in any credible media reports.
Here is the video link and a screenshot.

Conclusion
The CyberPeace research confirms that the viral video purportedly showing Indian security personnel failing to perform motorcycle stunts during the 77th Republic Day parade is AI-generated. The clip has been falsely circulated online as genuine content with the intent to mislead viewers and spread misinformation.