#FactCheck - AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image falsely claims to show President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has identified that the photo was produced by generative AI and is not real. Multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image claiming to show US President Joe Biden wearing a military outfit during a meeting with military officials has been created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.

Fact Check:
The CyberPeace Research Team discovered that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly suggest that this is an AI-generated image.

First, President Biden's eyes are completely black; second, the military official's face is blended; and third, the phone is standing upright without any support.
We then ran the image through an AI image detection tool.

The tool predicted the image to be 4% human and 96% AI, indicating that it is deepfake content.
We then checked the image with another tool, Hive Detector.

Hive Detector classified the image as 100% AI-generated, again indicating that it is likely deepfake content.
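For readers who want to script this kind of check, below is a minimal, hypothetical sketch of submitting an image to a generic AI-image-detection service and reading back its human/AI scores. The endpoint URL, request fields, and response format are illustrative placeholders, not the actual API of the tools used above.

```python
# Illustrative sketch: send an image to a generic AI-image-detection service
# and interpret the returned human/AI scores. The endpoint, field names, and
# response format below are hypothetical placeholders, not a vendor's real API.
import requests

DETECTOR_URL = "https://example-detector.invalid/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def classify_image(path: str) -> dict:
    """Upload an image file and return the detector's score dictionary."""
    with open(path, "rb") as f:
        resp = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()  # assumed shape in this sketch: {"human": 0.04, "ai": 0.96}


scores = classify_image("viral_image.jpg")
if scores.get("ai", 0) > 0.9:
    print("Likely AI-generated / deepfake content")
else:
    print("No strong AI-generation signal from this detector")
```

In practice, a fact-checker would run the image through more than one detector (as done above) and combine the scores with manual inspection of visual artefacts before reaching a conclusion.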
Conclusion:
Thus, the growth of AI-generated content makes it increasingly difficult to separate fact from fiction, particularly on social media. The case of the fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. With technology constantly evolving, it is of great importance that people remain watchful and rely on verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-generated content should be undertaken in order to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake

Introduction
We consume news from various sources such as news channels, social media platforms, and the wider Internet. In the age of the Internet and social media, misinformation has become a common concern, with fake news spreading widely across these platforms.
Misinformation on social media platforms
The wide availability of user-generated content on social media platforms facilitates the spread of misinformation. With the vast number of users on these platforms, such content quickly goes viral and spreads across the Internet. This has become a serious concern, as misinformation, including rumours, morphed images, unverified information, fake news, and planted stories, spreads easily and can lead to severe consequences such as public riots, lynching, communal tensions, misconceptions about facts, and defamation.
Platform-centric measures to mitigate the spread of misinformation
- Google introduced the ‘About this result’ feature, which helps users better understand search results and websites at a glance.
- During the Covid-19 pandemic, misinformation was shared on a huge scale. In April 2020, Google invested $6.5 million in funding for fact-checkers and non-profits fighting misinformation around the world, including checks on information related to coronavirus and its treatment, prevention, and transmission.
- YouTube also has a Medical Misinformation Policy, which prevents the spread of content that contradicts guidance from the World Health Organization (WHO) or local health authorities.
- During the Covid-19 pandemic, major social media platforms such as Facebook and Instagram started showing awareness pop-ups that connected people directly to information from the WHO and regional health authorities.
- WhatsApp limits the number of times a message can be forwarded to prevent the spread of fake news and labels messages that have been forwarded many times. WhatsApp has also partnered with fact-checking organisations to ensure users have access to accurate information.
- On Instagram, when content is rated as false or partly false, the platform either removes it or reduces its distribution by limiting its visibility in Feeds.
Fight Against Misinformation
Misinformation is rampant across the world and needs to be addressed at the earliest. Multiple developed nations have partnered with technology companies to tackle this issue, and with the increasing penetration of social media and the Internet, it remains a global challenge. Big tech companies such as Meta and Google have undertaken various initiatives globally to address it. Google has taken up the initiative in India and, in collaboration with civil society organisations, has piloted multiple avenues for mass-scale awareness and upskilling campaigns to make an impact on the ground.
Conclusion
Misinformative content is widespread in the digital media space. Platforms like Google and other social media companies have taken proactive steps to prevent its spread. Users should also act responsibly while sharing information, thereby helping create a safe digital environment for everyone.

Introduction
Twitter is a popular social media platform with millions of users around the world. Twitter’s blue tick system, which verifies the identity of high-profile accounts, has come under intense scrutiny in recent years. The platform has faced backlash from users and brands who have accused it of bias, inaccuracy, and inconsistency in its verification process. This blog post explores the questions raised about the verification process and its impact on users and big brands.
What is Twitter’s Blue Tick System?
The blue tick system was introduced in 2009 to help users identify the authenticity of accounts belonging to well-known public figures, politicians, celebrities, sportspeople, and big brands. The system verifies the identity of high-profile accounts and displays a blue badge next to the username.
According to a survey, there are roughly 294,000 verified Twitter accounts, meaning they carry a blue tick badge and have paid for the subscription service, which costs nearly $7.99 a month. Subscribers who have paid that amount and have still lost their blue badge may understandably feel cheated.
The Controversy
Despite its initial aim, the blue tick system has received much criticism from consumers and brands. Twitter’s irregular and non-transparent verification procedure has sparked accusations of prejudice and inaccuracy. Many Twitter users have complained that the network’s verification process is arbitrary and favours accounts with huge followings or celebrity status. In contrast, others have criticised the platform for certifying accounts that promote harmful or controversial content.
Furthermore, the verification mechanism has generated confusion among users, as many do not understand the significance of the blue tick badge. Some users have concluded that the blue tick symbol represents a Twitter endorsement or that the account is trustworthy. This confusion has led users to follow and engage with verified accounts that promote misleading or inaccurate information, undermining the platform’s credibility.
How did the Blue Tick Row start in India?
The row started on 21 May 2021, when the government asked Twitter to remove the blue badge from the profiles of several high-profile Indian politicians, including the Indian National Congress Party Vice-President, Mr Rahul Gandhi.
The blue badge gives the users an authenticated identity. Many celebrities, including Amitabh Bachchan, popularly known as Big B, Vir Das, Prakash Raj, Virat Kohli, and Rohit Sharma, have lost their blue tick despite being verified handles.
What is the Twitter policy on blue tick?
According to Twitter’s policy, blue verification badges may be removed from accounts if the account holder violates the company’s verification policy or terms of service. In such circumstances, Twitter typically notifies the account holder of the removal of the verification badge and the reason for the removal. In the instance of the “Twitter blue badge row” in India, however, it appears that Twitter did not notify the affected politicians or their representatives before revoking their verification badges. Twitter’s lack of communication has exacerbated the controversy around the episode, with some critics accusing the company of acting arbitrarily and not following due process.
Is there a solution?
The “Twitter blue badge row” has no simple answer since it involves a complex convergence of concerns about free expression, social media policies, and government laws. However, here are some alternatives:
- Establish clear guidelines: Twitter should develop and consistently implement clear guidelines and policies for the verification process. All users, including politicians and government officials, would benefit from greater transparency and clarity.
- Increase transparency: Twitter’s decision-making process for deleting or restoring verification badges should be more open. This could include providing explicit reasons for badge removal, notifying impacted users promptly, and offering an appeals mechanism for those who believe their credentials were removed unfairly.
- Engage in constructive dialogue: Twitter should engage in constructive dialogue with government authorities and other stakeholders to address concerns about the platform’s content moderation procedures. This could contribute to a more collaborative approach to managing online content, leading to more effective and accepted policies.
- Follow local rules and regulations: Twitter should collaborate with the Indian government to ensure it conforms to local laws and regulations while maintaining freedom of expression. This could involve adopting more precise standards for handling requests for material removal or other actions from governments and other organisations.
Conclusion
To sum up, the “Twitter blue tick row” in India has highlighted the complex challenges that social media platforms face daily in balancing the conflicting interests of free expression, government rules, and their own content moderation procedures. While Twitter’s decision to withdraw the blue verification badges of several prominent Indian politicians drew anger from the government and some members of the public, it also raised questions about the transparency and uniformity of Twitter’s verification procedure. To deal with this issue, Twitter must establish clear verification procedures and norms, promote transparency in its decision-making process, participate in constructive communication with stakeholders, and adhere to local laws and regulations. Furthermore, the Indian government should collaborate with social media platforms to create more effective and acceptable rules that balance the necessity of free expression with the protection of citizens’ rights. The “Twitter blue tick row” is just one example of the complex challenges that social media platforms face in managing online content, and it emphasises the need for greater collaboration among platforms, governments, and civil society organisations to develop effective solutions that protect both free expression and citizens’ rights.

AI and other technologies are advancing rapidly. This has enabled the rapid spread of information, and even misinformation. LLMs have their advantages, but they also come with drawbacks, such as confident but inaccurate responses caused by limitations in their training data. Evidence-driven retrieval systems aim to address this issue by incorporating factual information during response generation to prevent hallucination and produce accurate responses.
What is Retrieval-Augmented Response Generation?
Evidence-driven Retrieval-Augmented Generation (RAG) is an AI framework that improves the accuracy and reliability of large language models (LLMs) by grounding them in external knowledge bases. RAG systems combine the generative power of LLMs with a dynamic information retrieval mechanism. Standard AI models rely solely on pre-trained knowledge and pattern recognition to generate text; RAG instead pulls in credible, up-to-date information from various sources during the response generation process. By integrating real-time evidence retrieval with AI-based responses, it combines large-scale data with reliable sources to combat misinformation. It follows this pattern (sketched in code after the list below):
- Query Identification: When misinformation is detected or a query is raised.
- Evidence Retrieval: The AI searches databases for relevant, credible evidence to support or refute the claim.
- Response Generation: Using the evidence, the system generates a fact-based response that addresses the claim.
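The following is a minimal, illustrative sketch of this three-step flow in Python. The `search_evidence` and `generate_text` functions are hypothetical stand-ins for a real retrieval backend (such as a search API or vector store) and an LLM call; they are stubbed here so the example stays self-contained.

```python
# Minimal sketch of an evidence-driven RAG flow (illustrative only).
from dataclasses import dataclass


@dataclass
class Evidence:
    source_url: str
    snippet: str


def search_evidence(claim: str) -> list[Evidence]:
    """Evidence Retrieval: query a trusted knowledge base for passages
    relevant to the claim. Stubbed with a static example here."""
    return [Evidence("https://example.org/fact-check",
                     "Official sources confirm the viral photo is AI-generated.")]


def generate_text(prompt: str) -> str:
    """Response Generation: call an LLM with the evidence-grounded prompt.
    Stubbed here; in practice this would be an API or local model call."""
    return "Based on the cited evidence, the claim is false."


def answer_claim(claim: str) -> str:
    # 1. Query Identification: the flagged claim itself becomes the query.
    # 2. Evidence Retrieval: fetch passages that support or refute it.
    evidence = search_evidence(claim)
    context = "\n".join(f"- {e.snippet} (source: {e.source_url})" for e in evidence)
    # 3. Response Generation: ground the LLM output in the retrieved evidence.
    prompt = (f"Claim: {claim}\nEvidence:\n{context}\n"
              "Write a fact-based response that cites the sources above.")
    return generate_text(prompt)


if __name__ == "__main__":
    print(answer_claim("A viral photo shows President Biden in military uniform."))
```

In a production system, the stubs would be replaced with a vetted document index and a hosted or local LLM, and the retrieved sources would be surfaced alongside the generated answer so users can verify the claims themselves.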
How is Evidence-Driven RAG the key to Fighting Misinformation?
- RAG systems can integrate the latest data, providing information on recent scientific discoveries.
- The retrieval mechanism allows RAG systems to pull specific, relevant information for each query, tailoring the response to a particular user’s needs.
- RAG systems can provide sources for their information, enhancing accountability and allowing users to verify claims.
- For queries requiring specific or specialised knowledge, RAG systems can excel where traditional models might struggle.
- By accessing a diverse range of up-to-date sources, RAG systems may offer more balanced viewpoints, unlike traditional LLMs.
Policy Implications and the Role of Regulation
With its potential to enhance content accuracy, RAG also intersects with important regulatory considerations. India has one of the largest internet user bases globally, and the challenges of managing misinformation are particularly pronounced.
- Indian regulators, such as MeitY, play a key role in guiding technology regulation. Similar to the EU's Digital Services Act, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandate platforms to publish compliance reports detailing actions against misinformation. Integrating RAG systems can help ensure accurate, legally accountable content moderation.
- Collaboration among companies, policymakers, and academia is crucial for RAG adaptation, addressing local languages and cultural nuances while safeguarding free expression.
- Ethical considerations are vital to prevent social unrest, requiring transparency in RAG operations, including evidence retrieval and content classification. This balance can create a safer online environment while curbing misinformation.
Challenges and Limitations of RAG
While RAG holds significant promise, it has its challenges and limitations.
- Ensuring that RAG systems retrieve evidence only from trusted and credible sources is a key challenge.
- For RAG to be effective, users must trust the system. Sceptics of content moderation may show resistance to accepting the system’s responses.
- Generating a response too quickly may compromise the quality of the evidence, while taking too long can allow misinformation to spread unchecked.
Conclusion
Evidence-driven retrieval systems, such as Retrieval-Augmented Generation, represent a pivotal advancement in the ongoing battle against misinformation. By integrating real-time data and credible sources into AI-generated responses, RAG enhances the reliability and transparency of online content moderation. It addresses the limitations of traditional AI models and aligns with regulatory frameworks aimed at maintaining digital accountability, as seen in India and globally. However, the successful deployment of RAG requires overcoming challenges related to source credibility, user trust, and response efficiency. Collaboration between technology providers, policymakers, and academic experts can help navigate these challenges and create a safer and more accurate online environment. As digital landscapes evolve, RAG systems offer a promising path forward, ensuring that technological progress is matched by a commitment to truth and informed discourse.
References
- https://experts.illinois.edu/en/publications/evidence-driven-retrieval-augmented-response-generation-for-onlin
- https://research.ibm.com/blog/retrieval-augmented-generation-RAG
- https://medium.com/@mpuig/rag-systems-vs-traditional-language-models-a-new-era-of-ai-powered-information-retrieval-887ec31c15a0
- https://www.researchgate.net/publication/383701402_Web_Retrieval_Agents_for_Evidence-Based_Misinformation_Detection