#FactCheck - AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating image said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image falsely claims to show President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has identified that the photo was produced by generative AI and is not real; multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image claiming to show US President Joe Biden in a military outfit during a meeting with military officials has been created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening to authorize the use of the US military in the Middle East.

Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest this is an AI-generated image.

Firstly, US President Joe Biden's eyes appear fully black; secondly, the military official's face is blended; and thirdly, the phone stands upright without any support.
We then ran the image through an AI image detection tool.

The tool predicted 4% human and 96% AI, indicating that the image is likely AI-generated (deepfake) content.
We then checked the image with another tool, Hive Detector.

Hive Detector returned a 100% AI-generated score, again pointing to deepfake content.
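For readers who want to automate this kind of check, the sketch below shows how an AI-image detector might be queried programmatically. It is a minimal illustration only: the endpoint URL, API key, and response fields (`ai`, `human`) are hypothetical placeholders, not the documented API of the tools used above, which require their own accounts and request formats.

```python
import requests

# Hypothetical endpoint and response schema; real detection services
# expose their own authenticated APIs with different field names.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"  # placeholder

def classify_image(path: str) -> dict:
    """Send an image to the (hypothetical) detector and return its scores."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"ai": 0.96, "human": 0.04}

if __name__ == "__main__":
    scores = classify_image("viral_image.jpg")
    verdict = "likely AI-generated" if scores["ai"] > 0.5 else "likely authentic"
    print(f"AI: {scores['ai']:.0%}, human: {scores['human']:.0%} -> {verdict}")
```

Whatever service is used, such scores are probabilistic signals rather than proof, and should be combined with the visual checks described earlier.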
Conclusion:
The growth of AI-generated content makes it increasingly difficult to separate fact from fiction, particularly on social media. The fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. With technology constantly evolving, it is important that people stay watchful and rely on verified sources to counter the spread of disinformation. Furthermore, initiatives to raise awareness of the existence and impact of AI-generated content should be undertaken to promote a more aware and digitally literate society.
- Claim: A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake
Related Blogs

Introduction
In 2019, India got its bill on data protection in the form of the Personal Data Protection Bill, 2019, which focused on digital rights and duties pertaining to data privacy. However, the bill was scrapped by the Government in mid-2022, and a successor, the Digital Personal Data Protection Bill, 2022, was introduced on 18th November 2022. The draft was opened for public comments and consultations, and the bill is now expected to be tabled in Parliament in the Monsoon session.
What is the DPDP Bill, 2022?
The Digital Personal Data Protection Bill is the latest draft regulation for data privacy in India. The bill is essentially focused on data protection obligations for companies, and the key aspect of the Puttaswamy judgment, recognising data privacy as a fundamental right, has been upheld within its scope. The bill comes after nearly 150 recommendations made by the parliamentary committee when the PDP Bill, 2019 was scrapped.
The bill highlights the following key aspects-
- Data Fiduciary- The entity (an individual, company, firm, state, etc.) which decides the purpose and means of processing an individual’s personal data.
- Data Principal- The individual to whom the personal data relates.
- Processing- The entire cycle of operations that can be carried out concerning personal data.
- Gender Neutrality- For the first time in India’s legislative history, “her” and “she” have been used to refer to individuals irrespective of gender.
- Right to Erase Data- Data principals will have the right to demand the erasure and correction of data collected by the data fiduciary.
- Cross-border data transfer- The bill allows cross-border data transfer after an assessment of relevant factors by the Central Government.
- Children’s Rights- The bill guarantees the right to digital privacy under the protection of parents/guardians.
- Heavy Penalties- The bill enforces heavy penalties for non-compliance with the provisions, not exceeding Rs 500 crore.
Data Protection Board
The bill lays down provisions for setting up a Data Protection Board. The board will be an independent body acting solely on matters of data privacy, protecting data principals and maintaining compliance by data fiduciaries. It will be headed by a chairperson with the relevant qualifications, assisted by members and various other officials. The board will provide grievance redressal to data principals and can conduct investigations, inquiries and proceedings and pass orders with powers equivalent to those of a civil court. Proceedings will be conducted on the principles of natural justice, and an aggrieved party can appeal to the High Court of appropriate jurisdiction.
Global Comparison
Many countries have data protection laws that regulate the processing of personal data. Some of the notable examples include:
- European Union: The EU’s General Data Protection Regulation (GDPR) is one of the world’s most comprehensive data protection laws. It regulates public and private entities’ processing of personal data and gives individuals a wide range of rights over their personal data.
- United States: The US has several data protection laws that apply to specific sectors or types of data, such as health data (HIPAA) or financial data (Gramm-Leach-Bliley Act). However, there is no comprehensive federal data protection law in the US.
- Japan: Japan’s Act on the Protection of Personal Information (APPI) regulates the handling of personal data by private entities and gives individuals certain rights over their personal data.
- Australia: Australia’s Privacy Act 1988 regulates the handling of personal data by public and private entities and gives individuals certain rights over their personal data.
- Brazil: Brazil’s General Data Protection Law (LGPD) regulates the processing of personal data by public and private entities and gives individuals certain rights over their personal data. It also imposes heavy fines and penalties on entities that violate the provisions of the law.
Overall, while there are some similarities in data protection laws across countries, there are also significant differences in scope, applicability, and enforcement. It is important for organisations to understand the data protection laws that apply to their operations and take appropriate steps to comply with these laws.
Parliamentary Assent
The case concerning WhatsApp’s violation of its privacy policy before the Hon’ble Supreme Court resulted in significant advocacy for data privacy as a fundamental right: contrary to what its privacy policy suggested, WhatsApp was found to be sharing its users’ data with Meta. This massive breach of trust could have led to data mismanagement affecting thousands of Indian users. The Hon’ble Supreme Court has taken due consideration of data privacy and its challenges in India and asked the Government to table the bill in Parliament, where it will be discussed in the Monsoon session. The Supreme Court has also set up a constitution bench to examine the bill’s scope, extent and application and provide judicial oversight. The constitution bench of Justices KM Joseph, Ajay Rastogi, Aniruddha Bose, Hrishikesh Roy and CT Ravikumar has fixed the matter for hearing in August, in order to address potential changes and amendments to the Act after the parliamentary discussion.
Conclusion
India is the world’s largest democracy, and the crucial steps of passing laws and amendments have always been followed by the government and kept in check by the judiciary. Discussion of bills is a crucial part of the democratic process, and bills as important as the Digital Personal Data Protection Bill need to be discussed and analysed thoroughly in both houses of Parliament to ensure the government passes a sustainable and efficient law.

Introduction
The advent of AI-driven deepfake technology has facilitated the creation of explicit counterfeit videos for sextortion purposes, and there has been an alarming increase in the use of artificial intelligence to create fake explicit images and videos for extortion.
What is AI Sextortion and Deepfake Technology
AI sextortion refers to the use of artificial intelligence (AI) technology, particularly deepfake algorithms, to create counterfeit explicit videos or images for the purpose of harassing, extorting, or blackmailing individuals. Deepfake technology utilises AI algorithms to manipulate or replace faces and bodies in videos, making them appear realistic and often indistinguishable from genuine footage. This enables malicious actors to create explicit content that falsely portrays individuals engaging in sexual activities, even if they never participated in such actions.
Background on the Alarming Increase in AI Sextortion Cases
Recently, there has been a significant increase in AI sextortion cases. Advancements in AI and deepfake technology have made it easier for perpetrators to create highly convincing fake explicit videos or images. The algorithms behind these technologies have become more sophisticated, allowing for more seamless and realistic manipulations. The accessibility of AI tools and resources has also increased, with open-source software and cloud-based services readily available to anyone. This accessibility has lowered the barrier to entry, enabling individuals with malicious intent to exploit these technologies for sextortion.

The proliferation of content sharing on social media
The proliferation of social media platforms and the widespread sharing of personal content online have provided perpetrators with a vast pool of potential victims’ images and videos. By utilising these readily available resources, perpetrators can create deepfake explicit content that closely resembles the victims, increasing the likelihood of success in their extortion schemes.
Furthermore, the anonymity and wide reach of the internet and social media platforms allow perpetrators to distribute manipulated content quickly and easily. They can target individuals specifically or upload the content to public forums and pornographic websites, amplifying the impact and humiliation experienced by victims.
What are law enforcement agencies doing?
The alarming increase in AI sextortion cases has prompted concern among law enforcement agencies, advocacy groups, and technology companies. It is high time for strong efforts to raise awareness about the risks of AI sextortion, develop detection and prevention tools, and strengthen legal frameworks to address these emerging threats to individuals’ privacy, safety, and well-being.
There is a need for technological solutions that develop and deploy advanced AI-based detection tools to identify and flag AI-generated deepfake content on platforms and services, together with collaboration with technology companies to integrate such solutions; a minimal sketch of such a detector follows.
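To make the idea concrete, here is a minimal sketch of how a platform might screen uploads with a binary real-versus-AI image classifier. It assumes a ResNet-18 that has already been fine-tuned on labelled real and deepfake images and saved to a weights file; the architecture, label names, file path, and 0.8 flagging threshold are all illustrative assumptions, not a reference to any deployed system.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative assumptions: a ResNet-18 fine-tuned on labelled
# real/AI-generated images, saved to "deepfake_detector.pt".
LABELS = ["real", "ai_generated"]

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str) -> torch.nn.Module:
    """Build the backbone, attach a 2-class head, and load fine-tuned weights."""
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

@torch.no_grad()
def flag_upload(model: torch.nn.Module, image_path: str,
                threshold: float = 0.8) -> bool:
    """Return True if the image should be flagged for human review."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    probs = torch.softmax(model(batch), dim=1)[0]   # class probabilities
    return probs[LABELS.index("ai_generated")].item() >= threshold
```

In practice such a classifier would only triage content for human review, since detectors produce false positives and generation techniques evolve to evade them.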
Collaboration with social media platforms is also needed. Social media platforms and technology companies can reframe and enforce community guidelines and policies against disseminating AI-generated explicit content, and can foster cooperation in developing robust content-moderation systems and reporting mechanisms.
There is a need to strengthen legal frameworks to address AI sextortion, including laws that specifically criminalise the creation, distribution, and possession of AI-generated explicit content, with adequate penalties for offenders and provisions for cross-border cooperation.
Proactive measures to combat AI-driven sextortion
Prevention and Awareness: Proactive measures raise awareness about AI sextortion, helping individuals recognise risks and take precautions.
Early Detection and Reporting: Proactive measures employ advanced detection tools to identify AI-generated deepfake content early, enabling prompt intervention and support for victims.
Legal Frameworks and Regulations: Proactive measures strengthen legal frameworks to criminalise AI sextortion, facilitate cross-border cooperation, and impose offender penalties.
Technological Solutions: Proactive measures focus on developing tools and algorithms to detect and remove AI-generated explicit content, making it harder for perpetrators to carry out their schemes.
International Cooperation: Proactive measures foster collaboration among law enforcement agencies, governments, and technology companies to combat AI sextortion globally.
Support for Victims: Proactive measures provide comprehensive support services, including counselling and legal assistance, to help victims recover from emotional and psychological trauma.
Implementing these proactive measures will help create a safer digital environment for all.

Misuse of Technology
Misusing technology, particularly AI-driven deepfake technology, in the context of sextortion raises serious concerns.
Exploitation of Personal Data: Perpetrators exploit personal data and images available online, such as social media posts or captured video chats, to create AI-generated explicit content. This manipulation violates privacy rights and exploits the vulnerability of individuals who trust that their personal information will be used responsibly.
Facilitation of Extortion: AI sextortion often involves perpetrators demanding monetary payments, sexually themed images or videos, or other favours under the threat of releasing manipulated content to the public or to the victims’ friends and family. The realistic nature of deepfake technology increases the effectiveness of these extortion attempts, placing victims under significant emotional and financial pressure.
Amplification of Harm: Perpetrators use deepfake technology to create explicit videos or images that appear realistic, thereby increasing the potential for humiliation, harassment, and psychological trauma suffered by victims. The wide distribution of such content on social media platforms and pornographic websites can perpetuate victimisation and cause lasting damage to their reputation and well-being.
Targeting Teenagers: Targeting teenagers with extortion demands is a particularly alarming aspect of AI sextortion. Teenagers are especially vulnerable because of their heavy use of social media platforms for sharing personal information and images, which perpetrators exploit to manipulate and coerce them.
Erosion of Trust: Misusing AI-driven deepfake technology erodes trust in digital media and online interactions. As deepfake content becomes more convincing, it becomes increasingly challenging to distinguish between real and manipulated videos or images.
Proliferation of Pornographic Content: The misuse of AI technology in sextortion contributes to the proliferation of non-consensual pornography (also known as “revenge porn”) and the availability of explicit content featuring unsuspecting individuals. This perpetuates a culture of objectification, exploitation, and non-consensual sharing of intimate material.
Conclusion
Addressing the concern of AI sextortion requires a multi-faceted approach, including technological advancements in detection and prevention, legal frameworks to hold offenders accountable, awareness about the risks, and collaboration between technology companies, law enforcement agencies, and advocacy groups to combat this emerging threat and protect the well-being of individuals online.

Introduction
In an era where digitalisation is transforming every facet of life, protecting personal data becomes crucial. The enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act) is a significant step taken by the Indian Parliament, setting forth a comprehensive framework for digital personal data. The Draft Digital Personal Data Protection Rules, 2025 have recently been released for public consultation to supplement the Act and ensure its smooth implementation once finalised. While the draft rules have certain positive aspects, there is still room to address several gaps that require attention. The DPDP Act, 2023 recognises the individual’s right to protect their personal data, providing control over the processing of personal data for lawful purposes. The Act applies to data available in digital form as well as data that is not digital but is subsequently digitised. While the Act is intended to offer individuals (Data Principals) wide control over their personal information, its impact on vulnerable groups such as persons with disabilities requires closer scrutiny.
Persons with Disabilities as Data Principals
The term ‘data principal’ is defined under Section 2(j) of the DPDP Act as the person to whom the personal data relates, and it includes a person with a disability. A lawful guardian acting on behalf of such a person with a disability is also included within this definition. As a result, a lawful guardian acting on behalf of a person with a disability has the same rights and responsibilities as a data principal under the Act.
- Section 9 of the DPDP Act, 2023 states that before processing the personal data of a person with a disability who has a lawful guardian, the data fiduciary must obtain verifiable consent from that guardian, ensuring proper protection of the person with disability's data privacy.
- The data principal has the right to access information about personal data under Section 11 which is being processed by the data fiduciary.
- Section 12 provides the right to correction and erasure of personal data by making a request in a manner prescribed by the data fiduciary.
- A right to grievance redressal must be provided to the data principal in respect of any act or omission in the performance of obligations by the data fiduciary or the consent manager.
- Under Section 14, the data principal has the right to nominate any other person to exercise the rights provided under the Act in case of death or incapacity.
Provision of consent and its implication
The three key components of Consent that can be identified under the DPDP Act, are:
- Explicit and Informed Consent: Consent given for the processing of data by the data principal or a lawful guardian in case of persons with disabilities must be clear, free and informed as per section 6 of the Act. The data fiduciary must specify the itemised description of the personal data required along with the specified purpose and description of the goods or services that would be provided by such processing of data. (Rule 3 under Draft Digital Personal Data Protection Rules)
- Verifiable Consent: Section 9 of the DPDP Act provides that the data fiduciary must obtain the verifiable consent of the lawful guardian before processing any personal data of such a person with a disability. Rule 10 of the Draft Rules obligates the data fiduciary to adopt measures to ensure that the consent given by the lawful guardian is verifiable before the personal data is processed (a minimal sketch of such a consent record appears after this list).
- Withdrawal of Consent: The data principal or the lawful guardian may withdraw consent for the processing of data at any point by making a request to the data fiduciary.
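Neither the Act nor the Draft Rules prescribes a technical format for recording consent. The sketch below illustrates, under that assumption, how a data fiduciary might log guardian consent, its Rule 10 verification, and withdrawal; every field and method name here is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Minimal sketch of a consent record a data fiduciary might keep.
# The DPDP Act and Draft Rules do not prescribe any such schema;
# all fields and methods are illustrative assumptions.

@dataclass
class ConsentRecord:
    data_principal_id: str
    purpose: str                       # specified purpose, per Section 6
    data_items: list[str]              # itemised description of personal data
    guardian_id: Optional[str] = None  # set when a lawful guardian consents
    guardian_verified: bool = False    # outcome of the Rule 10 check
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        """Consent is usable only if not withdrawn and, where a guardian
        acted, only after the guardian's identity was verified."""
        if self.withdrawn_at is not None:
            return False
        if self.guardian_id is not None and not self.guardian_verified:
            return False
        return True

    def withdraw(self) -> None:
        """Record withdrawal; further processing must stop."""
        self.withdrawn_at = datetime.now(timezone.utc)

if __name__ == "__main__":
    record = ConsentRecord(
        data_principal_id="DP-001",
        purpose="delivery of assistive-technology services",
        data_items=["name", "contact number"],
        guardian_id="G-042",
        guardian_verified=True,
    )
    assert record.is_valid()
    record.withdraw()
    assert not record.is_valid()
```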
Although the Act includes certain provisions focused on the inclusivity of persons with disabilities, the interpretation of these provisions raises concerns, as discussed below.
Concerns related to provisions for Persons with Disabilities under the DPDP Act:
- Lack of definition of ‘persons with disabilities’: The DPDP Act and the Draft Rules do not define the term ‘persons with disabilities’. This creates confusion as to which categories and degrees of disability are included. The Rights of Persons with Disabilities Act, 2016 clearly defines ‘person with benchmark disability’, ‘person with disability’ and ‘person with disability having high support needs’. This categorisation is essential for determining the extent to which a person with a disability needs a lawful guardian, and it is missing from the DPDP Act.
- Lack of autonomy: Though the definition of data principal includes persons with disabilities, decision-making authority has been given to their lawful guardians. This creates ambiguity for people with a lower degree of disability who are capable of making their own decisions: because of the lack of clarity in the definition of ‘persons with disabilities’, they may be left with no autonomy over decisions related to the processing of their personal data.
- Safeguards against abuse of power by the lawful guardian: Once verified by the data fiduciary, the lawful guardian can make decisions for the person with a disability. This raises concerns regarding the potential abuse of power by lawful guardians in handling personal data, and the DPDP Act does not provide any specific protection against such abuse.
- Difficulty in verifying consent: The consent obtained by the data fiduciary must be verified, but the verification process is left to the discretion of the data fiduciary under Rule 10 of the Draft Rules. The authenticity of consent is difficult to determine in the absence of a standard format, and with ongoing technological advancements it will be challenging to establish whether the information provided to verify consent is actually true.
CyberPeace Recommendations
The DPDP Act, 2023 is a major step towards making the data protection framework more comprehensive, however, the provisions related to persons with disabilities and powers given to lawful guardians acting on their behalf still need certain clarity and refinement within the DPDP Act framework.
- Consonance of DPDP with Rights of Persons with Disabilities (RPWD) Act, 2016: The RPWD and DPDP Act should supplement each other and can be used to clear the existing ambiguities. Such as the definition of ‘persons with disabilities’ under the RPWD Act can be used in the context of the DPDP Act, 2023.
- Also, there must be mechanisms and safeguards within the Act to prevent abuse of power by the lawful guardian. In cases of suspected abuse, the affected individual should have the option to file a complaint with the Data Protection Board, which can then take the necessary action to determine whether an abuse of power has occurred.
- Regulatory oversight and additional safeguards are required to ensure that consent is obtained in a manner that respects the rights of all individuals, including those with disabilities.
References:
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.meity.gov.in/writereaddata/files/259889.pdf
- https://www.indiacode.nic.in/bitstream/123456789/15939/1/the_rights_of_persons_with_disabilities_act%2C_2016.pdf
- https://www.deccanherald.com/opinion/consent-disability-rights-and-data-protection-3143441
- https://www.pacta.in/digital-data-protection-consent-protocols-for-disability.pdf
- https://www.snrlaw.in/indias-new-data-protection-regime-tracking-updates-and-preparing-for-compliance/