#FactCheck - Viral Humanoid Robot Video Actually Filmed at the Museum of the Future
Executive Summary
A video circulating widely on social media shows a man interacting with a humanoid robot and using abusive language, after which the robot asks him to maintain politeness. Several users shared the clip claiming that the incident took place during a recent AI summit in New Delhi. The video triggered strong reactions online, with some users demanding legal action against the individual. However, research by CyberPeace found the claim to be misleading.
Claim
Social media users claimed that the viral video showing a man abusing a robot was recorded during an AI summit in New Delhi, India.

Fact Check
To verify the claim, we conducted a reverse image search of the individual seen in the video. The search led us to an Instagram post uploaded by a Pakistani account identifying the individual as Kashif Zameer.

Further keyword searches helped us locate his Instagram profile, where the same video had been uploaded on February 17, 2026. The post included hashtags such as “Dubai,” indicating the actual location of the incident. The profile also lists Lahore, Pakistan, as the user’s location and describes him as a businessman and social media personality.

To confirm the location shown in the video, we conducted additional searches using keywords such as “Dubai” and “humanoid robot.” The research revealed that the robot featured in the clip is “Ameca,” located at the Museum of the Future in Dubai.

Conclusion
The viral claim is false. The video is not related to any AI summit held in New Delhi. The incident occurred in Dubai, and the person seen in the video is not an Indian citizen.

Introduction
In September 2024, the Australian government announced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (CLA Bill 2024 hereon), to provide new powers to the Australian Communications and Media Authority (ACMA), the statutory regulatory body for Australia's communications and media infrastructure, to combat online misinformation and disinformation. It proposed allowing the ACMA to hold digital platforms accountable for the “seriously harmful mis- and disinformation” being spread on their platforms and their response to it, while also balancing freedom of expression. However, the Bill was subsequently withdrawn, primarily over concerns regarding the possibility of censorship by the government. This development is reflective of the global contention on the balance between misinformation regulation and freedom of speech.
Background and Key Features of the Bill
According to the BBC’s Global Minds Survey of 2023, nearly 73% of Australians struggled to identify fake news and AI-generated misinformation. There has been a substantial rise in misinformation on platforms like Facebook, Twitter, and TikTok since the COVID-19 pandemic, especially during major events like the bushfires of 2020 and the 2022 federal elections. The government’s campaign against misinformation was launched against this background, with the launch of The Australian Code of Practice on Disinformation and Misinformation in 2021. The main provisions of the CLA Bill, 2024 were:
- Core Transparency Obligations of Digital Media Platforms: Publishing current media literacy plans, risk assessment reports, and policies or information on their approach to addressing mis- and disinformation. The ACMA would also be allowed to make additional rules regarding complaints and dispute-handling processes.
- Information Gathering and Record-Keeping Powers: The ACMA would form rules allowing it to gather consistent information across platforms and publish it. However, it would not have been empowered to gather and publish user information except in limited circumstances.
- Approving Codes and Making Standards: The ACMA would have powers to approve codes developed by the industry and make standards regarding reporting tools, links to authoritative information, support for fact-checking, and demonetisation of disinformation. This would make compliance mandatory for relevant sections of the industry.
- Parliamentary Oversight: The transparency obligations, codes approved and standards set by ACMA under the Bill would be subject to parliamentary scrutiny and disallowance. ACMA would be required to report to the Parliament annually.
- Freedom of Speech Protections: End-users would not be required to produce information for ACMA unless they are a person providing services to the platform, such as its employees or fact-checkers. Further, it would not be allowed to call for removing content from platforms unless it involved inauthentic behavior such as bots.
- Penalties for Non-Compliance: ACMA would be required to employ a “graduated, proportionate and risk-based approach” to non-compliance and enforcement in the form of formal warnings, remedial directions, injunctions, or significant civil penalties as decided by the courts, subject to review by the Administrative Review Tribunal (ART). No criminal penalties would be imposed.
Key Concerns
- Inadequacy of Freedom of Speech Protections: The biggest contention on this Bill has been regarding the issue of possible censorship, particularly of alternative opinions that are crucial to the health of a democratic system. To protect the freedom of speech, the Bill defined mis- and disinformation, what constitutes “serious harm” (election interference, harming public health, etc.), and what would be excluded from its scope. However, reservations among the Opposition persisted due to the lack of a clear mechanism to protect divergent opinions from the purview of this Bill.
- Efficacy of Regulatory Measures: Many argue that by allowing the digital platform industry to write its own codes, this law effectively lets it self-police. Big Tech companies have little incentive to curb misinformation effectively, since their business models allow them to profit from its rampant spread. Unless there are financial disincentives, Big Tech is unlikely to address the situation on a war footing, and the law would risk being toothless. Secondly, the Bill did not require platforms to report on the “prevalence of” false content, which, along with other metrics, is crucial for researchers and legislators to track the efficacy of the misinformation-curbing practices platforms currently employ.
- Threat of Government Overreach: The Bill sought to expand the ACMA’s compliance and enforcement powers concerning misinformation and disinformation on online communication platforms by giving it powers to form rules on information gathering, code registration, standard-making powers, and core transparency obligations. However, even though the ACMA as a regulatory authority is answerable to the Parliament, the Bill was unclear in defining limits to these powers. This raised concerns from civil society about potential government overreach in a domain filled with contextual ambiguities regarding information.
Conclusion
While the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill sought to equip the ACMA with tools to hold digital platforms accountable and mitigate the harm caused by false information, its critique highlights the complexities of regulating such content without infringing on freedom of speech. Globally, legislation and proposals addressing this issue face similar challenges, emphasizing the need for continuous discourse at the intersection of platform accountability, regulatory restraint, and the protection of diverse viewpoints.
To regulate Big Tech effectively, governments can benefit from adopting a consultative, incremental, and cooperative approach, as exemplified by the European Union’s Digital Services Act. Such a framework provides for a balanced response, fostering accountability while safeguarding democratic freedoms.
Resources
- https://www.infrastructure.gov.au/sites/default/files/documents/factsheet-misinformation-disinformation-bill.pdf
- https://www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation
- https://www.mi-3.com.au/07-02-2024/over-80-australians-feel-they-may-have-fallen-fake-news-says-bbc
- https://www.hrlc.org.au/news/misinformation-inquiry
- https://humanrights.gov.au/our-work/legal/submission/combatting-misinformation-and-disinformation-bill-2024
- https://www.sbs.com.au/news/article/what-is-the-misinformation-bill-and-why-has-it-triggered-worries-about-freedom-of-speech/4n3ijebde
- https://www.hrw.org/report/2023/06/14/no-internet-means-no-work-no-pay-no-food/internet-shutdowns-deny-access-basic
- https://www.hrlc.org.au/submissions/2024/11/8/submission-combatting-misinformation

Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate. However, the CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that splice two very different videos together, distorting the original footage to support a false claim. The original footage has no connection to any attack on Mr. Netanyahu, and the claim is therefore false and misleading.

Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search on keyframes of the video. The search led us to several legitimate sources showing an attack on an ethnic Turkish leader of Bulgaria; none of them featured any attack on Prime Minister Benjamin Netanyahu.

We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis indicated, with 68.0% confidence, that the video had been edited. The tools identified "substantial evidence of manipulation," particularly in the change in graphics quality and the break in the footage's flow where the overall background environment shifts.
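The kind of "break in flow" such tools flag can be illustrated with a simple frame-differencing check: an abrupt jump in pixel values between consecutive frames often marks a splice point. The sketch below is a hypothetical illustration, not TrueMedia.org's method; frames are represented as flat lists of grayscale pixel values rather than decoded video.

```python
# Sketch (hypothetical): flag abrupt scene cuts by the mean absolute
# difference between consecutive frames. A real pipeline would decode
# actual video frames; here each "frame" is a flat list of 0-255 values.

def mean_abs_diff(a, b):
    """Average per-pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_cuts(frames, threshold=40.0):
    """Return indices where frame i differs sharply from frame i-1."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Two "scenes" of near-identical frames with a hard splice between them:
scene_a = [[10] * 64, [12] * 64, [11] * 64]
scene_b = [[200] * 64, [198] * 64]
print(find_cuts(scene_a + scene_b))  # → [3], the splice point
```

A spliced video merging two different recordings tends to show exactly one such outlier jump at the join, alongside the quality change noted above.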



Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident. No credible reports were found linking the Israeli PM to such an attack, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is an old video that has been edited. Research using various AI detection tools confirms that the video was manipulated by splicing edited footage, and no official source mentions any such incident. Thus, the CyberPeace Research Team confirms that the video was manipulated using video editing technology, making the claim false and misleading.
- Claim: An attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading

Introduction
Google Play has announced a new policy intended to build trust and transparency on the platform through a new framework for developer verification and app details. Under the policy, new developer accounts on Google Play must provide a D-U-N-S number, a nine-digit unique identifier, to verify their business. So when an organisation creates a new Play Console developer account, it will need to supply its D-U-N-S number. The policy aims to enhance user trust: developers will provide detailed information on their app’s listing page, so users know who is behind the app they are installing.
Verifying Developer Identity with D-U-N-S Numbers
To boost security, the new Google Play policy requires developers to provide a D-U-N-S number when creating a new Play Console developer account. The D-U-N-S number, assigned by Dun & Bradstreet, will be used to verify the business. Once a developer creates a new Play Console developer account with a D-U-N-S number, Google Play will verify the developer’s details, after which the developer can start publishing apps. Through this step, Google Play aims to validate business information in a more authentic way.
If your organisation does not have a D-U-N-S number, you may check for or request one for free on this website (https://www.dnb.com/duns-number/lookup.html). The request process for a D-U-N-S number can take up to 30 days. Developers are also required to keep this information up to date.
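Before submitting, a developer can at least sanity-check the identifier's format: a D-U-N-S number is nine digits, often written with separators (e.g. 12-345-6789). The sketch below checks only the format; actual verification is performed by Dun & Bradstreet and Google Play, and the sample number is hypothetical.

```python
# Sketch: format-only check for a D-U-N-S number (nine digits).
# This does NOT verify the number with Dun & Bradstreet; it only
# catches obvious typos before submission.
import re

def is_valid_duns_format(value: str) -> bool:
    # Strip common separators (spaces, hyphens), then require 9 digits.
    digits = re.sub(r"[\s-]", "", value)
    return bool(re.fullmatch(r"\d{9}", digits))

print(is_valid_duns_format("12-345-6789"))  # True  (hypothetical number)
print(is_valid_duns_format("12345"))        # False (too short)
```

A check like this is useful because a mistyped number would otherwise surface only after the verification round-trip, which can take days.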
Building User Trust with Enhanced App Details
In addition to verifying developer identities more rigorously, Google Play also requires developers to provide sufficient app details to users. An “App Support” section on the app’s store listing page will display the app’s support email address and can also include a website and phone number for support.
A new “About the developer” section will also be introduced to provide users with verified identity information, including the developer’s name, address, and contact details, making users better informed about who develops the apps they use.
Key highlights of the Google Play Policy
- Google Play introduced the policy to keep the platform safe: verifying developers’ identities helps reduce the spread of malware apps and helps users make confident, informed decisions about the apps they download. The policy expands Google Play’s developer verification requirements to strengthen the platform and build user trust. When you create a new Play Console developer account and choose “organisation” as your account type, you will now need to provide a D-U-N-S number.
- Users will get detailed information about the developers’ identities and contact information, building more transparency and encouraging responsible app development practices.
- This policy will enable the users to make informed choices about the apps they download.
- The new “App Support” section will improve communication between users and developers by displaying support email addresses, websites, and support phone numbers, streamlining the support process and improving user satisfaction.
Timeline and Implementation
The new D-U-N-S number requirement will start rolling out on 31 August 2023 for all new Play Console developer accounts. The “About the developer” section will be visible to users as soon as a new app is published. From October 2023, existing developers will also be required to update and verify their accounts to comply with the new verification policy.
Conclusion
Google Play’s new policy aims to create a more transparent app ecosystem by giving users more information about developers. Google Play seeks to establish a platform where users can confidently discover and download apps, enhancing the user experience on Google Play as a reliable and trustworthy service.