#FactCheck - Edited Video Falsely Claims an Attack on PM Netanyahu in the Israeli Senate
Executive Summary:
A viral online video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate. However, the CyberPeace Research Team has confirmed that the video is fake: it was created with video editing tools that splice two unrelated clips into one, distorting the original footage to support a false narrative. Notably, Israel has no Senate; its parliament is the unicameral Knesset. The original footage has no connection to any attack on Mr. Netanyahu, and the claim is therefore false and misleading.

Claims:
A viral video claims to show an attack on Prime Minister Benjamin Netanyahu in the Israeli Senate.


Fact Check:
Upon receiving the viral posts, we conducted a reverse image search on keyframes of the video. The search led us to several legitimate sources featuring an attack on an ethnic Turkish party leader in Bulgaria; none of them involved Prime Minister Benjamin Netanyahu.

We used AI detection tools, such as TrueMedia.org, to analyze the video. The analysis concluded with 68.0% confidence that the video had been edited. The tools identified "substantial evidence of manipulation," particularly the change in graphics quality partway through the footage and the abrupt break in visual flow where the overall background environment changes.
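The kind of discontinuity flagged here, an abrupt jump in image quality and background where two clips are spliced together, can be surfaced even with a simple frame-difference heuristic. The sketch below is a minimal illustration of that idea, not a reconstruction of TrueMedia.org's actual method; the frames are synthetic and the threshold is an assumed value chosen for the example.

```python
import numpy as np

def histogram(frame, bins=32):
    """Normalised grayscale intensity histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def find_cuts(frames, threshold=0.5):
    """Flag frame indices where consecutive frames differ sharply.

    Uses L1 distance between intensity histograms; a spliced-in clip
    with a different background typically produces a large jump.
    """
    cuts = []
    hists = [histogram(f) for f in frames]
    for i in range(1, len(hists)):
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)
    return cuts

# Synthetic example: 10 dark frames followed by 10 bright frames,
# standing in for two spliced clips with different backgrounds.
rng = np.random.default_rng(0)
clip_a = rng.integers(0, 60, size=(10, 48, 64)).astype(np.uint8)
clip_b = rng.integers(180, 255, size=(10, 48, 64)).astype(np.uint8)
video = list(clip_a) + list(clip_b)

print(find_cuts(video))  # → [10], the splice point
```

Real detection tools combine many such signals (compression artefacts, lighting, audio continuity) with learned models, but the principle is the same: genuine continuous footage changes gradually, while a splice leaves a measurable discontinuity.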



Additionally, an extensive review of official statements from the Knesset revealed no mention of any such incident. No credible reports were found linking the Israeli Prime Minister to any such attack, further confirming the video’s inauthenticity.
Conclusion:
The viral video claiming to show an attack on Prime Minister Netanyahu is an old, unrelated video that has been edited. Analysis with AI detection tools confirms that the video was manipulated by splicing edited footage, and no official source corroborates the claim. The CyberPeace Research Team therefore confirms that the video was fabricated using video editing technology, making the claim false and misleading.
- Claim: An attack on Prime Minister Netanyahu in the Israeli Senate
- Claimed on: Facebook, Instagram and X (formerly Twitter)
- Fact Check: False & Misleading
Related Blogs

In the intricate mazes of the digital world, where the line between reality and illusion blurs, the quest for truth becomes a Sisyphean task. The recent firestorm of rumours surrounding global pop icon Dua Lipa's visit to Rajasthan, India, is a poignant example of this modern dilemma. A single image, plucked from the continuum of time and stripped of context, became the fulcrum upon which a narrative of sexual harassment was precariously balanced. This incident, a mere droplet in the ocean of digital discourse, encapsulates the broader phenomenon of misinformation—a spectre that haunts the virtual halls of our interconnected existence.
Misinformation Incident
Amidst the ceaseless hum of social media, a claim surfaced with the tenacity of a weed in fertile soil: Dua Lipa, the three-time Grammy Award winner, had allegedly been subjected to sexual harassment during her sojourn in the historic city of Jodhpur. The evidence? A viral picture, its origins murky, accompanied by a caption that seemed to confirm the worst fears of her ardent followers. The digital populace quickly reacted, with many sharing the image, asserting the claim's veracity without pause for verification.
Unraveling the Fabric of Fake News: Fact-Checking Dua Lipa's India Experience
The narrative gained momentum through platforms of dubious credibility, including a Twitter handle which, upon closer scrutiny by the Digital Forensics Research and Analytics Center, was revealed to be a purveyor of fake news. The very fabric of the claim began to unravel as the original photo was traced back to the official Facebook page of RVCJ Media, untainted by the allegations that had been so hastily ascribed to it. Moreover, the silence of Dua Lipa on the matter, rather than serving as a testament to the truth, inadvertently fueled the fires of speculation—a stark reminder of the paradox where the absence of denial is often misconstrued as an affirmation.
The pop star's words, shared on her Instagram account, painted a starkly different picture of her experience in India. She spoke not of fear and harassment, but of gratitude and joy, describing her trip as 'deeply meaningful' and expressing her luck to be 'within the magic' with her family. The juxtaposition of her heartfelt account with the sinister narrative constructed around her serves as a cautionary tale of the power of misinformation to distort and defile.
A Political Microcosm: Bye Elections of Telangana
Another incident involved electoral misinformation. The political landscape of Telangana, India, bristled with anticipation as the Election Commission announced bye-elections for two Member of Legislative Council (MLC) seats. Here, too, the machinery of misinformation whirred into action, with political narratives being shaped and reshaped through the lens of partisan prisms. The electoral process, transparent in its intent, became susceptible to selective amplification, with certain facets magnified or distorted to fit entrenched political narratives. The bye-elections thus became a battleground not just for political supremacy but also for the integrity of information.
The Far-Reaching Claws of Misinformation: Fact Check
The misinformation regarding Dua Lipa's experience during her visit to India and the political microcosm of misinformation in Telangana are manifestations of a global challenge. Misinformation adapts to the different contours of its environment, whether it be the gritty arena of politics or the glitzy realm of stardom. Its tentacles reach far and wide, with geopolitical implications that can destabilise regions, sow discord, and undermine the very pillars of democracy. The erosion of trust that misinformation engenders is perhaps its most insidious effect, as it chips away at the bedrock of societal cohesion and collective well-being.
Paradox of Technology
The same technological developments that have allowed the spread of misinformation also hold the keys to its containment. Artificial intelligence-powered fact-checking tools, blockchain-enabled transparency counter-measures, and comprehensive digital literacy campaigns stand as bulwarks against falsehoods. These tools, however, are not panaceas; they require the active engagement and critical thinking skills of each digital citizen to be truly effective.
Conclusion
As we stand at the cusp of the digital age, the way forward demands vigilance, collaboration, and innovation. Cultivating digitally literate citizens, capable of discerning the nuances of digital content, is paramount. Governments, the tech industry, media companies, and civil society must join forces in a common front, leveraging their collective expertise in the battle against misinformation. Promoting algorithmic accountability and fostering diverse information ecosystems will also be crucial in mitigating the inadvertent amplification of falsehoods.
In the end, discerning truth in the digital age is a delicate process. It requires us to be attuned to the rhythm of reality, and wary of the seductive allure of unverified claims. As we navigate this digital realm, remember that the truth is not just a destination but a journey that demands our unwavering commitment to the pursuit of what is real and what is right.
References
- https://telanganatoday.com/eci-releases-schedule-for-bye-elections-to-two-mlc-seats-in-telangana
- https://www.oneindia.com/fact-check/was-pop-singer-dua-lipa-sexually-harassed-in-rajasthan-during-her-india-trip-heres-the-truth-3718833.html?story=3
- https://www.thequint.com/news/webqoof/edited-graphic-of-dua-lipa-being-sexually-harassed-in-jodhpur-falsely-shared-fact-check
Introduction
The Indian Cabinet has approved a comprehensive national-level IndiaAI Mission with a budget outlay of Rs. 10,371.92 crore. The mission aims to strengthen the Indian AI innovation ecosystem by democratizing computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially-impactful AI projects, and bolstering ethical AI. The mission will be implemented by the 'IndiaAI' Independent Business Division (IBD) under the Digital India Corporation (DIC) and consists of several components such as IndiaAI Compute Capacity, IndiaAI Innovation Centre (IAIC), IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI Future Skills, IndiaAI Startup Financing, and Safe & Trusted AI over the next 5 years.
This financial outlay is intended to be fulfilled through a public-private partnership model, to ensure a structured implementation of the IndiaAI Mission. The main objective is to create and nurture an ecosystem for India’s AI innovation. This mission is intended to act as a catalyst for shaping the future of AI for India and the world. AI has the potential to become an active enabler of the digital economy, and the Indian government aims to harness its full potential to benefit its citizens and drive the growth of its economy.
Key Objectives of India's AI Mission
● With the advancements in data collection, processing and computational power, intelligent systems can be deployed in varied tasks and decision-making to enable better connectivity and enhance productivity.
● India’s AI Mission will concentrate on benefiting India and addressing societal needs in primary areas of healthcare, education, agriculture, smart cities and infrastructure, including smart mobility and transportation.
● This mission will work with extensive academia-industry interactions to ensure the development of core research capability at the national level. This initiative will involve international collaborations and efforts to advance technological frontiers by generating new knowledge and developing and implementing innovative applications.
The strategies developed for implementing the IndiaAI Mission are via public-private partnerships, skilling initiatives, and AI policy and regulation. An example of the work towards the public-private partnership is the pre-bid meeting that the IT Ministry hosted on 29th August 2024, which saw industrial participation from Nvidia, Intel, AMD, Qualcomm, Microsoft Azure, AWS, Google Cloud and Palo Alto Networks.
Components of IndiaAI Mission
The IndiaAI Compute Capacity: The IndiaAI Compute pillar will build a high-end scalable AI computing ecosystem to cater to India's rapidly expanding AI start-ups and research ecosystem. The ecosystem will comprise AI compute infrastructure of 10,000 or more GPUs, built through public-private partnerships. An AI marketplace will offer AI as a service and pre-trained models to AI innovators.
The IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors. The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI innovation.
The IndiaAI Future Skills pillar will mitigate barriers to entry into AI programs and increase AI courses in undergraduate, master-level, and Ph.D. programs. Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.
The IndiaAI Startup Financing pillar will support and accelerate deep-tech AI startups, providing streamlined access to funding for futuristic AI projects.
The Safe & Trusted AI pillar will enable the implementation of responsible AI projects and the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks, recognising the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI.
CyberPeace Considerations for the IndiaAI Mission
● Data privacy and security are paramount as emerging privacy instruments aim to ensure ethical AI use. Addressing bias and fairness in AI remains a significant challenge, especially with poor-quality or tampered datasets that can lead to flawed decision-making, posing risks to fairness, privacy, and security.
● Geopolitical tensions and export control regulations restrict access to cutting-edge AI technologies and critical hardware, delaying progress and impacting data security. In India, where multilingualism and regional diversity are key characteristics, the unavailability of large, clean, and labeled datasets in Indic languages hampers the development of fair and robust AI models suited to the local context.
● Infrastructure and accessibility pose additional hurdles in India’s AI development. The country faces challenges in building computing capacity, with delays in procuring essential hardware, such as GPUs like Nvidia’s A100 chip, hindering businesses, particularly smaller firms. AI development relies heavily on robust cloud computing infrastructure, which remains in its infancy in India. While initiatives like AIRAWAT signal progress, significant gaps persist in scaling AI infrastructure. Furthermore, the scarcity of skilled AI professionals is a pressing concern, alongside the high costs of implementing AI in industries like manufacturing. Finally, the growing computational demands of AI lead to increased energy consumption and environmental impact, raising concerns about balancing AI growth with sustainable practices.
Conclusion
We advocate for ethical and responsible AI development and adoption to ensure ethical usage, safeguard privacy, and promote transparency. By setting clear guidelines and standards, the nation would be able to harness AI's potential while mitigating risks and fostering trust. The IndiaAI Mission will propel innovation, build domestic capacities, create highly-skilled employment opportunities, and demonstrate how transformative technology can be used for social good and enhance global competitiveness.
References
● https://pib.gov.in/PressReleasePage.aspx?PRID=2012375

Introduction
With the rapid development of technology, voice cloning scams are one issue that has recently come to light. Scammers are keeping pace with AI, and their methods and plans for deceiving people have altered accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to conduct fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself from them.
What is Deepfake?
“Deepfake” refers to artificial intelligence (AI) techniques that can produce fake or altered audio, video, and film that pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something that never actually happened; widely circulated voice impersonations of the American President illustrate how convincing the technique can be. Deep voice impersonation can be used maliciously, such as in voice fraud or disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by utilising a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
To mislead users into providing private information, money, or system access, cybercriminals use AI voice deepfake technology to pass themselves off as trusted people or entities. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. In phishing attacks, fraudsters craft audio recordings that impersonate genuine messages from organisations or people the victim trusts; these recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations, which is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. AI voice deepfake technology thus gives con artists a potent tool for tricking and controlling victims. Every organisation and the general public must be informed of this risk and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a brand-new scam, the “deepfake voice scam”, has surfaced. The con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. How can one protect oneself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
- Steer clear of telemarketing calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: Pay special attention to the voice of anyone who phones you claiming to be someone you know. Are there any peculiar pauses or inflexions in their speech? If something doesn’t seem right, it could be a deepfake voice fraud.
- Verify the caller’s identity: To avoid falling for a deepfake voice scam, it is crucial to verify the caller’s identity. When in doubt, ask for their name, job title, and employer, and then do some research to be sure they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scammer.
- Report any suspicious activity: If you think you have fallen victim to a deepfake voice fraud, inform the appropriate authorities. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from falling victim.
Conclusion
In conclusion, AI voice deepfake technology is expanding fast and has huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more natural, it may also be used for harm, such as deepfake voice frauds and impersonation to fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research and the development of efficient techniques to identify and control the risks related to this technology are also necessary. We must deploy AI responsibly and ethically to ensure that voice deepfake technology benefits society rather than harming or deceiving it.