#FactCheck - Debunking the AI-Generated Image of an Alleged Israeli Army Dog Attack
Executive Summary:
A photo circulating on social media allegedly shows an Israeli Army dog attacking an elderly Palestinian woman. However, the image is misleading: it was created using Artificial Intelligence (AI), as indicated by its graphical anomalies and a watermark ("IN.VISUALART"). Although several news channels have reported on a real incident, the viral image was not taken during the actual event. This underscores the need to carefully verify photos and information shared on social media.

Claims:
A photo circulating in the media depicts an Israeli Army dog attacking an elderly Palestinian woman.



Fact Check:
Upon receiving the posts, we closely analysed the image and found discrepancies commonly seen in AI-generated images: the watermark "IN.VISUALART" is clearly visible, and the elderly woman's hand appears anatomically distorted.

We then ran the image through two AI-image detection tools, True Media and the contentatscale AI detector. Both flagged potential AI manipulation in the image.



We then ran a keyword search for news related to the viral photo. Although we found reports about the incident, none of them traced the image to a credible source.

The photograph shared across the internet has no credible source and shows hallmarks of AI generation. Hence, the viral image is AI-generated and fake.
Conclusion:
The circulating photo of an Israeli Army dog attacking an elderly Palestinian woman is misleading. According to several news channels, the incident itself did occur, but the photo depicting it is AI-generated and not real.
- Claim: A photo being shared online shows an elderly Palestinian woman being attacked by an Israeli Army dog.
- Claimed on: X, Facebook, LinkedIn
- Fact Check: Fake & Misleading

Introduction
In January 2026, the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness came into effect in South Korea, one of the first national AI laws in the world. The bill, passed by the National Assembly of Korea in December 2024 and in force from January 22, 2026, aims to balance rapid technological advancement with clear safeguards against risk, alongside transparency, accountability, and responsible AI use. It places South Korea alongside the European Union at the forefront of building legal systems for artificial intelligence, and signals Seoul's long-term ambition to become a global AI power.
What the AI Basic Act Covers
The AI Basic Act consolidates 19 separate AI bills into a single piece of legislation covering the full AI lifecycle: research and development, deployment, and utilisation. Its coverage is broad: it applies to any AI system that affects the Korean market or users inside the country, regardless of where the system was developed. The law does not apply to national defence and security applications.
The law defines key concepts like artificial intelligence, generative AI, and high-impact AI and establishes the principles of ethical AI, safety, user rights, industry support, and national policy coordination. It also offers a legal foundation for the activities of the government to promote AI innovation without jeopardising the common good.
Fundamentally, the AI Basic Act is designed to build trust between businesses, the government, and citizens. It does not prohibit AI technologies or excessively limit innovation; instead, it creates a framework for responsible development and economic growth.
Guardrails for Safety and Accountability
One of the defining features of the AI Basic Act is its risk-based approach. Rather than treating all AI systems alike, it distinguishes between ordinary and high-impact AI systems: those applied in sectors where a wrong or unsafe decision can seriously affect people's safety, rights, or critical infrastructure. Examples include healthcare, transportation, financial services, education, and public services.
Operators of high-impact AI must put in place risk management plans, human controls, and monitoring systems. In critical decision-making situations, human oversight must remain available at all times: machines can assist, but cannot override human judgment where safety or other human rights are at stake.
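The ordinary versus high-impact distinction can be illustrated with a minimal classification sketch. The sector names come from the article; the function name, tier labels, and logic are invented for illustration and are not drawn from the Act itself.

```python
# Illustrative sketch of a risk-based tiering check, loosely modelled on
# the Act's ordinary vs high-impact distinction. Sector names mirror the
# examples above; everything else is hypothetical.

HIGH_IMPACT_SECTORS = {
    "healthcare", "transportation", "financial services",
    "education", "public services",
}

def oversight_tier(sector: str) -> str:
    """Return a hypothetical oversight tier for a deployment sector."""
    if sector.lower() in HIGH_IMPACT_SECTORS:
        # High-impact systems require risk management plans,
        # human controls, and monitoring systems.
        return "high-impact"
    return "ordinary"

print(oversight_tier("Healthcare"))  # high-impact
print(oversight_tier("gaming"))      # ordinary
```

A real compliance assessment would of course weigh far more than the deployment sector, but the sketch captures the law's basic two-tier structure.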
The law enables regulators to perform on-site checks, demand documentation, and conduct compliance investigations. Fines for breaches may reach 30 million Korean won (approximately 21,000 US dollars). A one-year transition period, based on guidance rather than enforcement, gives companies time to implement compliance measures before fines are imposed.
These requirements strengthen accountability by defining who is responsible for safety outcomes, placing obligations across the ecosystem rather than relying on industry self-governance alone.
Transparency and Labelling Requirements
Transparency is a cornerstone of the AI Basic Act. The legislation requires that users be notified when an AI system is operating, particularly where AI outputs could be confused with human-created material. For example, AI-generated text, images, video, or audio that may be difficult to distinguish from real content must carry clear labels or watermarks so that users understand its origin.
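The labelling obligation can be sketched in code. This is a conceptual illustration only: the disclosure string, function name, and de-duplication rule are invented, and the Act prescribes no particular label format.

```python
# Minimal sketch of a content-disclosure label for AI-generated text,
# in the spirit of the Act's labelling requirement. The label wording
# below is hypothetical, not taken from the law.

AI_LABEL = "[AI-generated content]"

def label_output(text: str) -> str:
    """Prefix generated text with a visible disclosure label."""
    if text.startswith(AI_LABEL):
        # Already labelled; avoid stacking duplicate disclosures.
        return text
    return f"{AI_LABEL} {text}"

print(label_output("Summary of today's market movements."))
```

For images, video, or audio the same principle applies through visible watermarks or embedded provenance metadata rather than a text prefix.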
The labelling requirement is meant to combat misinformation, deceptive practices, and unintended influence on public perception. It reflects international concern about AI-generated content, such as deepfakes, manipulated media, and misleading online advertisements, which South Korea has already addressed separately in policy alongside discussions of data governance.
Transparency also extends to how AI systems make decisions. Developers and operators should be able to explain how high-impact systems reach their conclusions, so that people affected by automated decisions can seek meaningful explanations. Although specific explainability criteria are still being developed, the law establishes the principle that AI cannot operate behind the scenes where crucial decisions are being made.
Data Privacy and User Protection
South Korea's AI governance complements its existing data protection law, the Personal Information Protection Act (PIPA), which is widely regarded as comparable to major international data protection regimes such as the GDPR. The AI Basic Act clarifies how data can be gathered, processed, and used within AI systems while respecting privacy rights, particularly in high-impact areas.
The law does not supersede personal data protection rules, but it sets conditions on how AI developers must handle data used for training, testing, and operating AI systems. Operators will be required to document their data workflows and demonstrate how they protect users' privacy, including through transparency and consent mechanisms where necessary. This helps ensure that data used in AI is governed by clear norms and makes it harder to sidestep privacy requirements in the name of innovation.
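What documenting a data workflow might look like in practice can be sketched with a minimal provenance record. The field names and structure below are hypothetical, invented for illustration; neither the Act nor PIPA prescribes a specific schema.

```python
# Hypothetical sketch of a data-provenance record an AI operator might
# keep to document training-data workflows. All field names are invented
# for illustration and not prescribed by the AI Basic Act or PIPA.
from dataclasses import dataclass, asdict

@dataclass
class DataRecord:
    source: str                  # where the data came from
    purpose: str                 # e.g. "training", "testing", "operation"
    consent_obtained: bool       # whether consent covers this use
    contains_personal_data: bool # flags records needing PIPA-style care

record = DataRecord(
    source="licensed news archive",
    purpose="training",
    consent_obtained=True,
    contains_personal_data=False,
)
print(asdict(record))
```

Keeping such records per dataset would let an operator answer a regulator's documentation request without reconstructing workflows after the fact.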
Accountability and Governance Infrastructure
The AI Basic Act establishes a national policy framework for AI governance. At the top sits the National Artificial Intelligence Strategy Committee, chaired by the President, which proposes overall AI policy and aligns it with national objectives. It is supported by specialised bodies dealing with safety, risk assessment, and research, and by a policy centre that analyses AI's effects on society and assists industry adoption.
This institutional structure provides strategic guidance as well as operational control. By embedding AI governance in public administration rather than leaving it to market forces, South Korea aims to make ethical and societal concerns part of how sectors and agencies operate.
Promoting Innovation and Industrial Support
Although the AI Basic Act regulates, it is not a law of restrictions alone. It also provides a legal basis for research and development, human capital, and the growth of the AI industry, with special consideration for startups and small and medium-sized businesses. The legislation promotes AI clusters, long-term funding programmes, and policies to attract foreign talent to the Korean AI ecosystem.
This two-pronged approach of compliance and support reflects Korea's broader ambition to become one of the world's leading AI powers, alongside the US and China. The government has stressed that clear and predictable rules will build trust, attract investment, and sustain innovation rather than stifle it.
What This Means Globally
South Korea's AI Basic Act is notable not only for its contents but also for its timing. It is among the first comprehensive AI laws to come into force anywhere in the world, ahead of the phased regulatory rollouts elsewhere, such as in the European Union. Its system combines a principle-based framework, transparency requirements, accountability rules, and industrial support, offering a contrast to both purely prescriptive risk regulation and lax self-regulation models.
Critics, including industry groups and civil society organisations, have suggested that some protections could be more explicit, particularly for those harmed by AI systems, or that the high-impact categories could be more clearly defined. Nonetheless, the framework sets a benchmark that many nations will watch closely as they establish their own AI regimes.
Conclusion
The AI Basic Act puts South Korea at the forefront of national AI regulation, with well-developed guardrails enforcing transparency, ethical control, accountability, and data protection while also fostering innovation. It recognises that AI can bring economic and social benefits but also real risks, particularly when systems are opaque, autonomous, or widely deployed. By writing human oversight, labelling requirements, risk management planning, and governance infrastructure into law, South Korea has taken a holistic approach to responsible AI governance that other countries may emulate in the years to come.
Sources
- https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws
- https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/artificial-intelligence-and-the-labour-market-in-korea_af668423/68ab1a5a-en.pdf
- https://asianintelligence.ai/south-korea
- https://aibasicact.kr/
- https://aibusinessweekly.net/p/south-korea-ai-basic-act-takes-effect-jan22-2026
- https://asiadaily.org/news/12112/

Executive Summary
A video circulating on social media claims that a Pakistani man misbehaved with TV anchor Rubika Liyaquat during a live television debate. Users sharing the clip alleged that the Pakistani participant silenced the anchor on live TV.
However, research by CyberPeace found the viral claim to be false: the video being shared on social media has been edited. In the original video, published on YouTube on November 26, 2025, the alleged Pakistani man was not present in the TV debate.
Claim
On February 13, 2026, a user shared the viral clip on X (formerly Twitter), claiming that the anchor was insulted during the debate and was left speechless. Another user on February 11, 2026, asked News18 India to verify the video and questioned who allowed such behaviour towards the journalist on air.

Fact Check:
To verify the claim, we extracted key frames from the viral video and conducted a reverse image search using Google Lens. During the research, we found the full version of the debate uploaded on the official YouTube channel of News18 India on November 26, 2025. The nearly 40-minute original broadcast featured anchor Rubika Liyaquat along with panelists Zafar Islam, Varun Purohit, Prateek Kumar, Arvind Kumar Vajpayee, Tausif Ahmed Khan, and Aziz Khan. However, the person seen misbehaving with the anchor in the viral clip was not present in the original video.
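The frame-matching step above relies on the idea behind reverse image search: perceptually similar frames produce nearby fingerprints. A minimal pure-Python "average hash" sketch illustrates the principle; real pipelines use dedicated tools (Google Lens here, or libraries such as OpenCV and imagehash), and the tiny 2x2 "frames" below are invented toy data.

```python
# Sketch of the perceptual average-hash idea behind reverse image search:
# a key frame from a viral clip can be matched to the original broadcast
# because similar frames hash to nearly identical bit strings.
# Images are modelled as 2D lists of grayscale values (toy data).

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same scene."""
    return sum(a != b for a, b in zip(h1, h2))

frame_viral    = [[10, 200], [220, 30]]   # frame from the shared clip
frame_original = [[12, 198], [215, 35]]   # matching frame, original video
frame_other    = [[200, 10], [30, 220]]   # unrelated frame

print(hamming(average_hash(frame_viral), average_hash(frame_original)))  # 0
print(hamming(average_hash(frame_viral), average_hash(frame_other)))     # 4
```

A near-zero distance ties the viral frame back to the source footage, which is how an edited clip can be checked against the full original broadcast.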

Upon carefully reviewing the footage, we located the actual segment around the 25-minute 40-second mark. In this portion, the anchor can be heard asking panelist Tausif Ahmed Khan to leave the show, using the same words heard in the viral clip. However, the original broadcast does not feature any Pakistani participant or any individual named “Nadeem Shahzad.”

Conclusion
Our research found that the viral claim is false. The circulating video has been edited, and the alleged Pakistani participant does not appear in the original debate uploaded on November 26, 2025.

Introduction:
The G7 Summit is an international forum comprising France, the United States, the United Kingdom, Germany, Japan, Italy, Canada, and the European Union (EU). The annual G7 meeting was hosted by Japan in May 2023 in Hiroshima. Artificial Intelligence (AI) was a major theme of the summit. Key takeaways highlight that leaders jointly focused on scaling the adoption of AI for beneficial use cases across the economy and government, and on improving governance structures to mitigate AI's potential risks.
Need for fair and responsible use of AI:
The G7 recognises the need to work together to ensure the responsible and fair use of AI and to help establish technical standards for it. Members agreed to adopt an open and enabling environment for the development of AI technologies and emphasised that AI regulation should be grounded in democratic values. Ministers discussed the risks posed by AI programs such as ChatGPT and drew up an action plan for promoting the responsible use of AI, with human beings leading the efforts.
Further, ministers from the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK, the US, and the EU) met virtually on 7 September 2023 and committed to creating 'international guiding principles applicable for all AI actors' and a code of conduct for organisations developing 'advanced' AI systems.
What is the Hiroshima AI Process (HAP)?
The Hiroshima AI Process (HAP) is the G7's effort to chart a way forward on regulating AI and to establish trustworthy AI technical standards at the international level. The G7 agreed to create a ministerial forum to promote the fair use of AI, and the HAP provides the venue for international discussions on inclusive AI governance and interoperability, working towards a common vision and goal of trustworthy AI at the global level.
The HAP will be operating in close connection with organisations including the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
Initiated at the annual G7 Summit in Hiroshima, Japan, the HAP is a significant step towards regulating AI and is expected to conclude by December 2023.
G7 leaders emphasized fostering an environment where trustworthy AI systems are designed, developed and deployed for the common good worldwide. They advocated for international standards and interoperable tools for trustworthy AI that enable Innovation by creating a comprehensive policy framework, including overall guiding principles for all AI actors in the AI ecosystem.
Stressing upon fair use of advanced technologies:
The impact and misuse of generative AI were also discussed by G7 leaders. Members stressed the risks of misinformation and disinformation from generative AI models, which are capable of creating synthetic content such as deepfakes. In particular, they noted that the next generation of interactive generative media will leverage targeted influence content that is highly personalised, localised, and conversational.
In the digital landscape, technologies such as generative Artificial Intelligence (AI), deepfakes, and machine learning are advancing rapidly. They offer users convenience in performing many tasks and can assist individuals and businesses alike. But because these technologies are easily accessible, cyber-criminals also leverage AI tools for malicious activities; regulatory mechanisms at the global level can therefore ensure and advocate for their ethical, reasonable, and fair use.
Conclusion:
The G7 summit held in May 2023 advanced international discussions on inclusive AI governance and interoperability towards a common vision of trustworthy AI, in line with shared democratic values. AI governance has become a global issue, and countries around the world are advocating for the responsible and fair use of AI while seeking influence over global AI governance and standards. It is important to establish a regulatory framework that defines AI capabilities, identifies areas prone to misuse, and sets reasonable technical standards while fostering innovation, prioritising data privacy, integrity, and security as these technologies evolve.
References:
- https://www.politico.eu/wp-content/uploads/2023/09/07/3e39b82d-464d-403a-b6cb-dc0e1bdec642-230906_Ministerial-clean-Draft-Hiroshima-Ministers-Statement68.pdf
- https://www.g7hiroshima.go.jp/en/summit/about/
- https://www.drishtiias.com/daily-updates/daily-news-analysis/the-hiroshima-ai-process-for-global-ai-governance
- https://www.businesstoday.in/technology/news/story/hiroshima-ai-process-g7-calls-for-adoption-of-international-technical-standards-for-ai-382121-2023-05-20