#FactCheck - Viral Video of US President Biden Dozing Off during Television Interview is Digitally Manipulated and Inauthentic
Executive Summary:
The claim that a video shows US President Joe Biden dozing off during a television interview is false; the video is digitally manipulated. The original footage is from a 2011 incident in which actor and singer Harry Belafonte appeared to fall asleep during a live satellite interview with KBAK - KBFX Eyewitness News. A thorough analysis of keyframes from the viral video reveals that President Biden’s likeness was superimposed onto Harry Belafonte’s video. This confirms that the viral video is manipulated and does not show an actual event involving President Biden.
Claims:
A video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
Fact Check:
Upon receiving the posts, we watched the video, divided it into keyframes using the InVID tool, and reverse-searched one of the frames.
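For readers who want to replicate the keyframe step, a minimal sketch in Python using OpenCV is shown below. InVID itself is a browser-based toolkit; this snippet only illustrates the underlying idea of sampling frames for reverse image search, and the file name is a placeholder.

```python
import cv2  # pip install opencv-python

# Sample roughly one frame per second so the frames can be
# reverse-image-searched. "viral_clip.mp4" is a placeholder path.
video = cv2.VideoCapture("viral_clip.mp4")
fps = int(video.get(cv2.CAP_PROP_FPS)) or 30  # fall back if FPS is unknown

frame_index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:  # end of video
        break
    if frame_index % fps == 0:
        cv2.imwrite(f"keyframe_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} keyframes for reverse image search")
```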
We found another video uploaded on October 18, 2011, by the official channel of KBAK - KBFX Eyewitness News. The title of the video reads, “Official Station Video: Is Harry Belafonte asleep during live TV interview?”
The video closely matches the recent viral one, and the TV anchor can be heard saying the same things as in the viral clip. Taking a cue from this, we also ran keyword searches for credible sources and found a Yahoo Entertainment news article featuring the same video uploaded by KBAK - KBFX Eyewitness News.
The reverse image search and keyword search together reveal that the viral video of US President Joe Biden dozing off during a TV interview has been digitally altered to misrepresent its context. The original video dates back to 2011, and the person in the TV interview is the American singer and actor Harry Belafonte, not US President Joe Biden.
Hence, the claim made in the viral video is false and misleading.
Conclusion:
In conclusion, the viral video claiming to show US President Joe Biden dozing off during a television interview is digitally manipulated and inauthentic. The video originates from a 2011 incident involving American singer and actor Harry Belafonte and has been altered to falsely show US President Joe Biden. This case is a reminder to verify the authenticity of online content before accepting or sharing it as truth.
- Claim: A viral video shows US President Joe Biden dozing off during a television interview while the anchor tries to wake him up.
- Claimed on: X (Formerly known as Twitter)
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Generative AI, particularly deepfake technology, poses significant risks to security in the financial sector. Deepfake technology can convincingly mimic voices, create lip-sync videos, execute face swaps, and carry out other types of impersonation through tools like DALL-E, Midjourney, Respeecher, and Murf, which are now widely accessible and have been misused for fraud. For example, in 2024, cybercriminals in Hong Kong used deepfake technology to impersonate the Chief Financial Officer of a company, defrauding it of $25 million. Surveys, including Regula’s Deepfake Trends 2024 and Sumsub reports, highlight financial services as the most targeted sector for deepfake-induced fraud.
Deepfake Technology and Its Risks to Financial Systems
India’s financial ecosystem, including banks, NBFCs, and fintech companies, is leveraging technology to enhance access to credit for households and MSMEs. The country is a leader in global real-time payments, and its digital economy comprises 10% of its GDP. However, it faces unique cybersecurity challenges. According to the RBI’s 2023-24 Currency and Finance report, banks cite cybersecurity threats, legacy systems, and low customer digital literacy as major hurdles in digital adoption. Deepfake technology intensifies risks like:
- Social Engineering Attacks: Phishing, vishing, and similar information security attacks become far more convincing when bolstered by deepfake imagery and audio.
- Bypassing Authentication Protocols: Deepfake audio or images may circumvent voice- and image-based authentication systems, exposing sensitive data (a toy illustration of such a check follows this list).
- Market Manipulation: Misleading deepfake content making false claims and endorsements can harm investor trust and damage stock market performance.
- Business Email Compromise Scams: Deepfake audio can mimic the voice of a real person with authority in the organization to falsely authorize payments.
- Evolving Deception Techniques: AI allows cybercriminals to deploy malware that adapts in real time, carrying out phishing attacks and inundating targets with increased speed and variation. Legacy security frameworks are not suited to countering automated attacks at such a scale.
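To make the authentication risk concrete, here is a toy sketch of the kind of naive voiceprint matching that cloned audio can defeat. The embeddings below are random placeholders rather than outputs of a real speaker-encoder model; the point is only that a static similarity threshold is a weak defence on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "voiceprint" embeddings; a real system would derive these
# from a speaker-encoder model applied to enrollment and call audio.
enrolled_voiceprint = rng.normal(size=256)
incoming_embedding = enrolled_voiceprint + rng.normal(scale=0.1, size=256)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A cloned voice that reproduces the target's vocal characteristics can
# push this score past the threshold, which is why voiceprints need to
# be paired with liveness checks or a second factor.
if cosine_similarity(enrolled_voiceprint, incoming_embedding) > 0.8:
    print("Voice match accepted")
```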
Existing Frameworks and Gaps
In 2016, the RBI introduced cybersecurity guidelines for banks, neo-banking, lending, and non-banking financial institutions, focusing on resilience measures like Board-level policies, baseline security standards, data leak prevention, running penetration tests, and mandating Cybersecurity Operations Centres (C-SOCs). It also mandated incident reporting to the RBI for cyber events. Similarly, SEBI’s Cybersecurity and Cyber Resilience Framework (CSCRF) applies to regulated entities (REs) like stock brokers, mutual funds, KYC agencies, etc., requiring policies, risk management frameworks, and third-party assessments of cyber resilience measures. While both frameworks are comprehensive, they require updates to address emerging threats from generative AI-driven cyber fraud.
Cyberpeace Recommendations
- AI Cybersecurity to Counter AI Cybercrime: AI-generated attacks are designed to overwhelm defences with their speed and scale. Cybercriminals increasingly exploit platforms like LinkedIn, Microsoft Teams, and Messenger to target people. Organizations of all sizes will have to adopt AI-based cybersecurity for detection and response, as generative AI becomes increasingly essential in combating hackers and breaches.
- Enhancing Multi-factor Authentication (MFA): With improving image- and voice-manipulation technologies, enhanced authentication measures such as token-based authentication or other hardware-based measures, abnormal-behaviour detection, multi-device push notifications, and geolocation verification can improve prevention strategies (a token-based example is sketched after this list). New targeted technological solutions for content-driven authentication can also be implemented.
- Addressing Third-Party Vulnerabilities: Financial institutions often outsource operations to vendors that may not follow the same cybersecurity protocols, which can introduce vulnerabilities. Ensuring all parties follow standardized protocols can address these gaps.
- Protecting Senior Professionals: Senior-level and high-profile individuals at organizations are at a greater risk of being imitated or impersonated since they hold higher authority over decision-making and have greater access to sensitive information. Protecting their identity metrics through technological interventions is of utmost importance.
- Advanced Employee Training: To build organizational resilience, employees must be trained to understand how generative and emerging technologies work. A well-trained workforce can significantly lower the likelihood of successful human-focused cyberattacks like phishing and impersonation.
- Financial Support to Smaller Institutions: Smaller institutions may not have the resources to invest in robust long-term cybersecurity solutions and upgrades. They require financial and technological support from the government to meet requisite standards.
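As one example of the token-based hardening mentioned above, here is a minimal sketch of time-based one-time password (TOTP) verification using the pyotp library. The account name and issuer are placeholders, and in production the submitted code would come from the user's authenticator app rather than being generated in the same script.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret, store it server-side, and
# provision it into the user's authenticator app (usually via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank")
print("Provisioning URI:", uri)

# Login: verify the 6-digit code the user submits alongside their password.
submitted_code = totp.now()  # placeholder; normally typed in by the user
if totp.verify(submitted_code, valid_window=1):  # allow one 30s step of drift
    print("Second factor accepted")
```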
Conclusion
According to The India Cyber Threat Report 2025 by the Data Security Council of India (DSCI) and Seqrite, deepfake-enabled cyberattacks, especially in the finance and healthcare sectors, are set to increase in 2025. This has the potential to disrupt services, steal sensitive data, and exploit geopolitical tensions, presenting a significant risk to the critical infrastructure of India.
As the threat landscape changes, institutions will have to continue to embrace AI and Machine Learning (ML) for threat detection and response. The financial sector must prioritize robust cybersecurity strategies, participate in regulation-framing procedures, adopt AI-based solutions, and enhance workforce training to safeguard against AI-enabled fraud. Collaborative efforts among policymakers, financial institutions, and technology providers will be essential to strengthen defenses.
Sources
- https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
- https://www.globenewswire.com/news-release/2024/10/31/2972565/0/en/Deepfake-Fraud-Costs-the-Financial-Sector-an-Average-of-600-000-for-Each-Company-Regula-s-Survey-Shows.html
- https://www.sipa.columbia.edu/sites/default/files/2023-05/For%20Publication_BOfA_PollardCartier.pdf
- https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
- https://www.rbi.org.in/Commonman/English/scripts/Notification.aspx?Id=1721
- https://elplaw.in/leadership/cybersecurity-and-cyber-resilience-framework-for-sebi-regulated-entities/
- https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-driven-deepfake-enabled-cyberattacks-to-rise-in-2025-healthcarefinance-sectors-at-risk-report/articleshow/115976846.cms?from=mdr
Introduction
In the digital era, where technology is growing rapidly, Artificial Intelligence (AI) is making its way into every corner of the world. Technology and innovation move in tandem, and innovation is once again in the limelight with a groundbreaking initiative known as “Project GR00T”, announced by the AI chip leader Nvidia. At the core of this project is the fusion of AI and robotics: humanoid robots capable of understanding natural language and learning from the physical environment by observing human actions and skills. Project GR00T aims to assist humans in diverse sectors such as healthcare.
The humanoid robots are powered by NVIDIA’s Thor system-on-chip (SoC), which drives the robots’ intelligence and is designed to handle complex tasks while ensuring safe, natural interaction between humans and robots. However, big questions arise around the ethical considerations of privacy, autonomy, and the possible displacement of human workers.
Brief Analysis
Nvidia has announced Project GR00T, or Generalist Robot 00 Technology, which aims to create AI-powered humanoid robots with human-like understanding and movement. The project is part of Nvidia's efforts to drive breakthroughs in robotics and embodied AI, which can interact with and learn from a physical environment. The robots built on this platform are designed to understand natural language and emulate movements by observing human actions, such as coordination, dexterity, and other skills.
The model has been trained in NVIDIA GPU-accelerated simulation, enabling robots to learn from human demonstrations through imitation learning and, via the NVIDIA Isaac Lab robotics platform, through reinforcement learning. This multimodal AI system acts as the mind for humanoid robots, allowing them to learn new skills and interact with the real world. Leading names in robotics, such as Figure, Boston Dynamics, Apptronik, Agility Robotics, Sanctuary AI, and Unitree, are reported to have collaborated with Nvidia to leverage GR00T.
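To give a flavour of what imitation learning involves, here is a toy behaviour-cloning sketch in PyTorch: a policy network learns to map observed states to the actions a human demonstrator took. The dimensions and random data are illustrative placeholders and do not reflect GR00T's actual architecture or NVIDIA's APIs.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 8  # placeholder sizes

# A small policy network: observed state in, predicted action out.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a dataset of (state, action) pairs recorded from
# human demonstrations.
states = torch.randn(1024, STATE_DIM)
actions = torch.randn(1024, ACTION_DIM)

for epoch in range(10):
    predicted = policy(states)
    loss = loss_fn(predicted, actions)  # imitate the demonstrator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("Final imitation loss:", float(loss))
```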
Nvidia has also updated Isaac with Isaac Manipulator and Isaac Perceptor, which add multi-camera 3D vision. The company also unveiled a new computer, Jetson Thor, built on NVIDIA’s Thor SoC and designed to handle complex tasks and ensure safe, natural interaction between humans and robots.
Despite concerns about job losses, many argue that humanoid robots, by taking over hazardous and repetitive tasks, can aid humans and make their lives more comfortable rather than replace them.
Policy Recommendations
The Nvidia project highlights a significant development in AI robotics, presenting both immense potential and ethical challenges critical to the overall development and smooth assimilation of AI-driven tech in society. To ensure this, a comprehensive policy framework must be put in place. This includes:
- Human First Policy - Emphasis should be on augmentation rather than replacement. The authorities must focus on research and development (R&D) of applications that augment human capabilities, enhance working conditions, and play a role in societal growth.
- Proper Ethical Guidelines - Guidelines stressing human safety, autonomy, and privacy should be established. These norms must include consent for data collection, fair use of AI in decision-making, and proper protocols for data security.
- Deployment of Inclusive Technology - Access to AI-driven robotics technology should be made available to diverse sectors of society. It is imperative to address potential algorithmic bias and design flaws to avoid discrimination and promote inclusivity.
- Proper Regulatory Frameworks - It is crucial to establish regulatory frameworks to govern the smooth deployment and operation of AI-driven tech. The framework must include certification for safety and standards, frequent audits, and liability protocols to address accidents.
- Training Initiatives - Educational programs should be introduced to train the workforce to integrate and properly handle AI-driven robotics. Upskilling the workforce should be a top priority of corporations to ensure effective integration of AI robotics.
- Collaborative Research Initiatives - AI and emerging technologies have a profound impact on the trajectory of human development. It is imperative to foster collaboration among governments, industry, and academia to drive innovation in AI robotics responsibly and to undertake collaborative initiatives that mitigate and address the technical, societal, legal, and ethical issues posed by AI robots.
Conclusion
On the whole, Project GR00T is a significant leap in the advancement of robotic technology and paves the way for a future where robots can integrate seamlessly into various aspects of human lives.
References
- https://indianexpress.com/article/explained/explained-sci-tech/what-is-nvidias-project-gr00t-impact-robotics-9225089/
- https://medium.com/paper-explanation/understanding-nvidias-project-groot-762d4246b76d
- https://www.techradar.com/pro/nvidias-project-groot-brings-the-human-robot-future-a-significant-step-closer
- https://www.barrons.com/livecoverage/nvidia-gtc-ai-conference/card/nvidia-announces-ai-model-for-humanoid-robot-development-BwT9fewMyD6XbuBrEDSp
Introduction
As technology develops rapidly, voice cloning scams are one issue that has recently come to light. Scammers are embracing AI, and their methods and plans for deceiving and scamming people have evolved accordingly. Deepfake technology creates realistic imitations of a person’s voice that can be used to conduct fraud, dupe a person into giving up crucial information, or even impersonate a person for illegal purposes. This post looks at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is fake or altered audio, video, or film produced by artificial intelligence (AI) that passes for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates content with a realistic appearance or sound by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing something they never did in audio or visual form; deepfake voice impersonations of the US President are a well-known example. Deep voice impersonation technology can be used maliciously, such as in voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice frauds use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate measures.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities and mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals: transferring money, disclosing login credentials, or revealing sensitive information. Deepfake voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust; such recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced with the technology to support false claims or accusations. This is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. In short, AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims. Every organisation and the general public must be informed of this technology’s risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deepfake technology has made it simpler for con artists to edit audio recordings and create phoney voices that closely mimic real people. As a result, a new scam, the “deepfake voice scam”, has surfaced: the con artist assumes another person’s identity and uses a fake voice to trick the victim into handing over money or private information. How can one protect oneself? Here are some guidelines to help you spot these scams and keep away from them (a brief automated-screening sketch follows the list):
- Steer clear of telemarketing calls: One of the most common tactics used by deepfake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice: If anyone phones you claiming to be someone you know, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? If something doesn’t seem right, it may be a deepfake voice fraud.
- Verify the caller’s identity: Verifying the caller’s identity is crucial to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, then do some research to be sure they are who they say they are.
- Never divulge confidential information: No matter who calls, never give out personal information like your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies and organisations will never request personal or financial information over the phone; if a caller does, it is a warning sign of a scam.
- Report any suspicious activities: Inform the appropriate authorities if you think you have fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
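Alongside these human-level checks, suspect audio can also be screened programmatically. Below is a minimal feature-extraction sketch using the librosa library; the file path is a placeholder, and real deepfake detection requires a trained classifier, so this only computes crude acoustic features that such a classifier might consume.

```python
import librosa  # pip install librosa
import numpy as np

# Load the suspect recording. "suspect_call.wav" is a placeholder path.
y, sr = librosa.load("suspect_call.wav", sr=16000)

# MFCCs summarise the spectral envelope of the voice; synthetic speech
# sometimes shows unusually low variance across frames.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Spectral flatness is higher for noise-like signals and can differ
# between natural speech and vocoder-generated speech.
flatness = librosa.feature.spectral_flatness(y=y)

print("MFCC per-coefficient variance:", np.var(mfcc, axis=1).round(2))
print("Mean spectral flatness:", float(np.mean(flatness)))
# These numbers are inputs for a trained classifier, not a verdict on
# their own.
```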
Conclusion
In conclusion, AI voice deepfake technology is a fast-expanding field with huge potential for both beneficial and detrimental effects. While deepfake voice technology can be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it can also be used for harm, such as deepfake voice frauds and impersonations that fabricate stories. As the technology develops and becomes harder to detect, users must be aware of the hazards and take the necessary precautions to protect themselves. Ongoing research is also needed to develop efficient techniques for identifying and controlling the risks this technology poses. We must deploy AI responsibly and ethically to ensure that AI voice deepfake technology benefits society rather than harming or deceiving it.