#FactCheck: AI-Generated Viral Image of US President Joe Biden Wearing a Military Uniform
Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image is being shared with the false claim that it shows President Biden authorizing US military action in the Middle East. The CyberPeace Research Team has determined that the photo was created with generative AI and is not real; multiple visual discrepancies in the picture mark it as a product of AI.
Claims:
A viral image claiming to show US President Joe Biden in a military outfit during a meeting with military officials was created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening a meeting to authorize the use of the US military in the Middle East.

Similar Post:

Fact Check:
The CyberPeace Research Team found that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made using generative AI and is not authentic. Several obvious visual discrepancies plainly mark it as an AI-generated image.

Firstly, President Biden’s eyes appear fully black; secondly, the military officials’ faces are blended; and thirdly, the phone stands upright without any support.
We then ran the image through an AI image detection tool.

The tool predicted 4% human and 96% AI, indicating that the image is most likely AI-generated.
We then ran the same check with another tool, Hive Detector.

Hive Detector classified the image as 100% AI-generated, further indicating that it is likely deepfake content.
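For readers who want to automate this kind of screening, the sketch below shows how an image could be submitted to an AI-image-detection service. It is a minimal illustration only: the endpoint URL, header, and response fields are hypothetical placeholders, not the actual API of Hive or any other vendor, and must be replaced with values from the chosen service's documentation.

```python
# Minimal sketch: submit an image to a (hypothetical) AI-image-detection API
# and print the returned AI-vs-human probability. Endpoint, header and
# response fields are placeholders -- consult the vendor's documentation.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                 # placeholder credential

def check_image(path: str) -> None:
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()
    # Assumed response shape: {"ai_probability": 0.96, "human_probability": 0.04}
    print(f"AI-generated: {result['ai_probability']:.0%}, "
          f"Human: {result['human_probability']:.0%}")

if __name__ == "__main__":
    check_image("viral_image.jpg")
```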
Conclusion:
Thus, the growth of AI-produced content is a challenge in determining fact from fiction, particularly in the sphere of social media. In the case of the fake photo supposedly showing President Joe Biden, the need for critical thinking and verification of information online is emphasized. With technology constantly evolving, it is of great importance that people be watchful and use verified sources to fight the spread of disinformation. Furthermore, initiatives to make people aware of the existence and impact of AI-produced content should be undertaken in order to promote a more aware and digitally literate society.
- Claim: A circulating picture shows United States President Joe Biden wearing a military uniform during a meeting with military officials, convened to authorize US military action in the Middle East
- Claimed on: X
- Fact Check: Fake
Related Blogs

The global race for Artificial Intelligence is heating up, and India has become one of its most important battlegrounds. Over the past few months, tech giants like OpenAI (ChatGPT), Google (Gemini), X (Grok), Meta (Llama), and Perplexity AI have stepped up their presence in the country, not by selling their AI tools, but by offering them free or at deep discounts.
At first glance, it looks like a huge win for India’s digital generation. Students, professionals, and entrepreneurs can now tap into some of the world’s most powerful AI tools without paying a rupee, a digital revolution unfolding in real time. Yet beneath this generosity lies a more complicated truth. Experts caution that this wave of “free” AI access comes with strings attached: it affects how India handles data privacy, the fairness of competition, and the pace of the homegrown AI innovation the country is trying to build.
The Market Strategy: Free Now, Pay Later
The choice of global AI companies to offer free access in India is a calculated business strategy. With one of the world’s largest and fastest-growing digital populations, India is a market no tech giant wants to miss. By giving away their AI tools for free, these firms are playing a long game:
- Securing market share early: Flooding the market with free access helps them quickly attract millions of users before Indian startups have a chance to catch up. Recent examples are Perplexity, ChatGPT Go and Gemini AI which are offering free subscriptions to Indian users.
- Gathering local data: Every interaction, every prompt, question, or language pattern helps these models learn from larger datasets and improve their product offerings in India and the rest of the world. As the popular saying goes, “if something is free, you are the product.” The same holds for these AI platforms: they monetise user data by analysing chats and user behaviour to refine their models and build paid products. This creates a privacy risk, as India currently lacks specific rules governing how such data is stored, processed or used for AI training.
- Creating user dependency: Once users grow accustomed to the quality and convenience of these global models, shifting to Indian alternatives will be difficult, even after the free tiers turn into paid plans. This approach mirrors the “freemium” model used in other tech sectors, where users are first attracted through free access and later monetised through subscriptions or premium features, raising ethical concerns.
Impact on Indian Users
For most Indians, the short-term impact of free AI access feels overwhelmingly positive. Tools like ChatGPT and Gemini are breaking down barriers by democratising knowledge and making advanced technology available to everyone, from students and professionals to small businesses. It is changing how people learn, think and work, all without spending a single rupee. But the long-term picture isn’t quite as simple. Beneath the convenience lies a set of growing concerns:
- Data privacy risks: Many users don’t realise that their chats, prompts, or queries might be stored and used to train global AI models. Without strong data protection rules in force, sensitive Indian data could easily find its way into foreign systems.
- Overdependence on foreign technology: Once these AI tools become part of people’s daily lives, moving away from them gets harder, especially if free access later turns into paid plans or comes with restrictive conditions.
- Language and cultural bias: Most large AI models are still built mainly around English and Western data. Without enough Indian language content and cultural representation, the technology risks overlooking the very diversity that defines India.
Impact on India’s AI Ecosystem
India’s Generative AI market, valued at USD 1.30 billion in 2024, is projected to reach USD 5.40 billion by 2033. Yet, this growth story may become uneven if global players dominate early.
Domestic AI startups face multiple hurdles — limited funding, high compute costs, and difficulty in accessing large, diverse datasets. The arrival of free, GPT-4-level models sharpens these challenges by raising user expectations and increasing customer acquisition costs.
As AI analyst Kashyap Kompella notes, “If users can access GPT-4-level quality at zero cost, their incentive to try local models that still need refinement will be low.” This could stifle innovation at home, resulting in a shallow domestic AI ecosystem where India consumes global technology but contributes little to its creation.
CCI’s Intervention: Guarding Fair Competition
The Competition Commission of India (CCI) has started taking note of how global AI companies are shaping India’s digital market. In a recent report, it cautioned that AI-driven pricing strategies such as offering free or heavily subsidised access could distort healthy competition and create an uneven playing field for smaller Indian developers.
The CCI’s decision to step in is both timely and necessary. Without proper oversight, such tactics could gradually push homegrown AI startups to the sidelines and allow a few foreign tech giants to gain disproportionate influence over India’s emerging AI economy.
What the Indian Government Should Do
To ensure India’s AI landscape remains competitive, inclusive, and innovation-driven, the government must adopt a balanced strategy that safeguards users while empowering local developers.
1. Promote Fair Competition
The government should mandate transparency in free access offers, including their duration, renewal terms, and data-use policies. Exclusivity deals between foreign AI firms and telecom or device companies must be closely monitored to prevent monopolistic practices.
2. Strengthen Data Protection
Under the Digital Personal Data Protection (DPDP) Act, companies should be required to obtain explicit consent from users before using their data for model training. The government should also encourage data localisation, ensuring that sensitive Indian data remains stored within India’s borders.
3. Support Domestic AI Innovation
Accelerate the implementation of the IndiaAI Mission to provide public compute infrastructure, open datasets, and research funding to local AI developers such as Sarvam AI, an Indian company chosen by the government to build the country's first homegrown large language model (LLM) under the IndiaAI Mission.
4. Create an Open AI Ecosystem
India should develop national AI benchmarks to evaluate all models, foreign or domestic, on performance, fairness, and linguistic diversity. At the same time, it should build its own national data centres to train indigenous AI models.
5. Encourage Responsible Global Collaboration
Speaking at the AI Action Summit 2025, the Prime Minister highlighted that governance should go beyond managing risks and should also promote innovation for the global good. Building on this idea, India should encourage global AI companies to invest meaningfully in the country’s ecosystem through research labs, data centres, and AI education programmes. Such collaborations will ensure that these partnerships not only expand markets but also create value, jobs and knowledge within India.
Conclusion
The surge of free AI access across India represents a defining moment in the nation’s digital journey. On one hand, it’s empowering millions of people and accelerating AI awareness like never before. On the other hand, it poses serious challenges from over-reliance on foreign platforms to potential risks around data privacy and the slow growth of local innovation. India’s real test will be finding the right balance between access and autonomy, allowing global AI leaders to innovate and operate here, but within a framework that protects the interests of Indian users, startups, and data ecosystems. With strong and timely action under the Digital Personal Data Protection (DPDP) Act, the IndiaAI Mission, and the Competition Commission of India’s (CCI) active oversight, India can make sure this AI revolution isn’t just something that happens to the country, but for it.
References
- https://www.moneycontrol.com/artificial-intelligence/cci-study-flags-steep-barriers-for-indian-ai-startups-calls-for-open-data-and-compute-access-to-level-playing-field-article-13600606.html#
- https://www.imarcgroup.com/india-generative-ai-market
- https://www.mea.gov.in/Speeches-Statements.htm?dtl/39020/Opening_Address_by_Prime_Minister_Shri_Narendra_Modi_at_the_AI_Action_Summit_Paris_February_11_2025
- https://m.economictimes.com/tech/artificial-intelligence/nasscom-planning-local-benchmarks-for-indic-ai-models/articleshow/124218208.cms
- https://indianexpress.com/article/business/centre-selects-start-up-sarvam-to-build-country-first-homegrown-ai-model-9967243/#

Executive Summary:
The cybersecurity firm Volexity has discovered new Linux malware, referred to as DISGOMOJI. A Pakistan-based threat actor, tracked as ‘UTA0137’, has been identified as using it for espionage, with a primary focus on Indian government entities. Like other common backdoors and botnets used in cyberattacks, DISGOMOJI allows the attacker to issue commands to capture screenshots, search for files to steal, deploy additional payloads, and transfer files. DISGOMOJI uses Discord (a messaging service) for Command and Control (C2) and uses emojis for C2 communication. The malware targets Linux operating systems.
The DISGOMOJI Malware:
- The DISGOMOJI malware creates a dedicated channel in an attacker-controlled Discord server for each infection, so every new channel corresponds to a new victim. This lets the attacker communicate with each victim individually.
- The malware communicates with the attacker-controlled Discord server using an emoji-based relay protocol: the attacker sends specific emojis as instructions, and the malware replies with emojis to report the status of each command. The full mapping is listed below, followed by a short detection sketch.
- For instance, the ‘camera with flash’ emoji captures a screenshot of the victim’s device, the ‘fox’ emoji archives all Firefox profiles, and the ‘skull’ emoji terminates the malware process.
- Because this C2 communication consists of ordinary emoji messages, it is difficult for Discord to shut the malware down: the operators can simply switch to new Discord accounts and credentials once a malicious server is blocked.
- Beyond the emoji-based C2, the malware also has capabilities such as network probing, tunneling, and data theft, which help the UTA0137 threat actor achieve its espionage goals.
Specific emojis used for different commands by UTA0137:
- Camera with Flash (📸): Captures a screenshot of the victim’s device screen.
- Backhand Index Pointing Down (👇): Extracts files from the targeted device and sends them to the command channel in the form of attachments.
- Backhand Index Pointing Right (👉): Sends a file from the victim’s device to a remote file-hosting service known as Oshi (oshi[.]at).
- Backhand Index Pointing Left (👈): Sends a file from the victim’s device to transfer[.]sh, an online file-sharing service.
- Fire (🔥): Finds and transmits all files with certain extensions on the victim’s device, such as *.txt, *.doc, *.xls, *.pdf, *.ppt, *.rtf, *.log, *.cfg, *.dat, *.db, *.mdb, *.odb, *.sql, *.json, *.xml, *.php, *.asp, *.pl, *.sh, *.py, *.ino, *.cpp, and *.java.
- Fox (🦊): Compresses all Firefox-related profiles on the affected device into an archive.
- Skull (💀): Terminates the malware process using os.Exit().
- Man Running (🏃♂️): Executes a command on the victim’s device; the command to run is supplied as an argument alongside the emoji.
- Index Pointing Up (👆): Uploads a file to the victim’s device; the file to upload is attached along with the emoji.
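For defenders reviewing Discord message logs or exports, the emoji mapping above can double as a simple detection aid. The sketch below is an illustrative Python check, not Volexity's tooling: the message dictionary format is an assumed example, and in practice a single emoji in a message is a weak signal that needs corroboration.

```python
# Minimal sketch: flag messages that contain emojis documented as DISGOMOJI
# C2 commands. The message structure ({"channel", "author", "content"}) is an
# assumed example format, not an actual Discord export schema.
DISGOMOJI_COMMAND_EMOJIS = {
    "\U0001F4F8": "camera with flash (screenshot)",
    "\U0001F447": "backhand index pointing down (exfiltrate file)",
    "\U0001F449": "backhand index pointing right (upload to oshi[.]at)",
    "\U0001F448": "backhand index pointing left (upload to transfer[.]sh)",
    "\U0001F525": "fire (collect documents by extension)",
    "\U0001F98A": "fox (archive Firefox profiles)",
    "\U0001F480": "skull (terminate malware)",
    "\U0001F3C3": "man running (execute command)",
    "\U0001F446": "index pointing up (drop file on victim)",
}

def flag_suspicious_messages(messages):
    """Return (message, matched command descriptions) pairs for suspect messages."""
    hits = []
    for msg in messages:
        matches = [desc for emoji, desc in DISGOMOJI_COMMAND_EMOJIS.items()
                   if emoji in msg.get("content", "")]
        if matches:
            hits.append((msg, matches))
    return hits

if __name__ == "__main__":
    sample = [{"channel": "victim-01", "author": "ops", "content": "\U0001F4F8"}]
    for msg, matches in flag_suspicious_messages(sample):
        print(f"Suspicious message in {msg['channel']}: {', '.join(matches)}")
```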
Analysis:
The analysis was carried out on one of the indicators of compromise (IoC), a file with the SHA-256 hash C981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002.
Most vendors on VirusTotal have flagged the file as a trojan, and the relationship graph shows the malicious nature of the contacted domains and IP addresses.
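A comparable check can be scripted against VirusTotal's public API. The sketch below is a minimal example using the v3 file-lookup endpoint with a personal API key; the exact response fields should be confirmed against VirusTotal's current documentation.

```python
# Minimal sketch: look up a file hash on VirusTotal (API v3) and print how
# many engines flag it as malicious. Requires a VirusTotal API key.
import requests

VT_API_KEY = "YOUR_VT_API_KEY"  # placeholder credential
FILE_HASH = "C981aa1f05adf030bacffc0e279cf9dc93cef877f7bce33ee27e9296363cf002"

def lookup_hash(file_hash: str) -> None:
    url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
    resp = requests.get(url, headers={"x-apikey": VT_API_KEY}, timeout=30)
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"malicious: {stats.get('malicious', 0)}, "
          f"suspicious: {stats.get('suspicious', 0)}, "
          f"undetected: {stats.get('undetected', 0)}")

if __name__ == "__main__":
    lookup_hash(FILE_HASH)
```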


Discord & C2 Communication for UTA0137:
- Stealthiness: Discord is a well-known messaging platform used for many legitimate purposes, so sending messages or files over it is unlikely to attract suspicion. This stealthiness allows UTA0137 to remain undetected for long periods before acting.
- Customization: UTA0137 creates dedicated channels for individual victims on the Discord server. This structure allows the attackers to communicate with each victim separately, making operations more precise and efficient.
- Emoji-based protocol: Using emojis for C2 communication complicates any attempt by Discord to interfere with the malware’s operations. Even if a malicious server is banned, the operation can be recovered by supplying the malware with new Discord credentials.
- Persistence: As noted above, the malware maintains persistence on the compromised system and survives reboots, allowing it to keep operating without being noticed by the system’s owner.
- Advanced capabilities: DISGOMOJI’s other features include network mapping with the Nmap scanner, network tunneling through Chisel and Ligolo, and data exfiltration via file-sharing services. These capabilities further UTA0137’s espionage goals.
- Social engineering: The malware can display pop-up windows and prompts, such as a fake Firefox update, to trick the user into entering their password.
- Dynamic credential fetching: The malware does not hardcode the credentials used to connect to the Discord server; it fetches them dynamically, which makes it harder for analysts to locate the C2 server.
- Bogus informational and error messages: The malware shows fake information and error messages rather than real ones, so its malicious behaviour cannot be easily deciphered.
Recommendations to mitigate the risk of UTA0137:
- Regularly Update Software and Firmware: Regularly update all application software and device firmware, particularly on routers, so that attackers cannot exploit known, disclosed flaws such as CVE-2024-3080 and CVE-2024-3912 on ASUS routers.
- Implement Multi-Factor Authentication: Given how frequently user accounts are attacked, incorporate multi-factor authentication to add an extra layer of protection.
- Deploy Advanced Malware Protection: Deploy robust endpoint protection capable of recognising and blocking the execution of DISGOMOJI and similar threats.
- Enhance Network Segmentation: Use strict network isolation to compartmentalise key systems and data from the rest of the network and minimise attack exposure.
- Monitor Network Activity: Continuously monitor network activity to identify and handle security breaches, watching in particular for the use of tools such as Nmap, Chisel and Ligolo.
- Utilize Threat Intelligence: Leverage threat intelligence to stay informed about known threats and vulnerabilities and take informed defensive action.
- Secure Communication Channels: Protect credentials and monitor how services such as Discord are used within the organisation to prevent them from being abused as an attack vector.
- Enforce Access Control: Regularly review and update authentication processes and adopt stricter access controls so that only authorised personnel can access sensitive systems and information.
- Conduct Regular Security Audits: Carry out periodic security audits to identify weaknesses in networks and systems.
- Implement an Incident Response Plan: Conduct a risk assessment and, based on it, design and establish an incident response plan that supports early identification, isolation, and management of security breaches.
- Educate Users: Train users on cybersecurity hygiene and conduct regular refresher training on threats such as phishing and social engineering.
Conclusion:
Volexity has uncovered a Pakistan-based threat actor, UTA0137, using the DISGOMOJI malware to target Indian government institutions, issuing commands through emojis on the Discord app. The malware is capable of exfiltration and aims to steal data from government entities, and UTA0137 has continuously improved it over time to maintain persistent communication with victims. The case underlines the need for strong protection against malware and intrusion, including strong passwords and multi-factor authentication, timely software updates, and advanced anti-malware tools. Organisations can mitigate advanced threats like DISGOMOJI and protect sensitive data through better network segmentation, continuous monitoring, and user awareness.
References:
- https://otx.alienvault.com/pulse/66712446e23b1d14e4f293eb
- https://thehackernews.com/2024/06/pakistani-hackers-use-disgomoji-malware.html?m=1
- https://cybernews.com/news/hackers-using-emojis-to-command-malware/
- https://www.volexity.com/blog/2024/06/13/disgomoji-malware-used-to-target-indian-government/

On 22nd October 2024, Jyotiraditya Scindia, Union Minister for Communications, launched the Department of Telecommunications' (DoT) International Incoming Spoofed Calls Prevention System, introduced as part of efforts to prevent international fraudulent calls that enable cyber crimes. A recent PIB release reports that the system has been effective, contributing to a 90% reduction in spoofed international calls, from 1.35 crore to 6 lakh within two months of its launch.
International spoofed calls are calls that appear on the target's mobile screen as numbers originating from within the country. This is achieved by manipulating the calling line identity (CLI), commonly known as the phone number. Reported cases show that such spoofed calls have been used to conduct financial scams, impersonate government officials to carry out 'digital arrests', and induce panic. Threats of number disconnection by supposed TRAI officials, or claims by fake narcotics officials of drugs or contraband found in couriers, are also rampant.
International Incoming Spoofed Calls Prevention System
As addressed in the 2024 Budget, the system was previously called the Centralised International Out Roamer (CIOR), for which the DoT was allocated Rs. 38.76 crore. The Digital Intelligence Unit (DIU) under the DoT is another project that investigates and researches the fraudulent use of telecom resources, including messages, scams, and spam; its budget has been increased from Rs. 50 crore to Rs. 85 crore.
The International Incoming Spoofed Calls Prevention System was implemented in two phases. The first phase operated at the level of the telephone companies (telcos): each Indian Telecom Service Provider (TSP) can verify its own subscribers and Indian SIMs on its international long-distance (ILD) network. When a user with an Indian number travels abroad, roaming is activated and all calls pass through the TSP's ILD network, allowing the TSP to verify whether an incoming call displaying a +91 number is genuinely being made from abroad or is spoofed from within India. However, a TSP can only verify numbers issued on its own network, not those of other TSPs. The second phase addressed this gap: the DIU of the DoT and the TSPs built an integrated system so that a centralised database can be used to check for genuine subscribers. A simplified sketch of this verification logic follows.
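The core check can be illustrated with a highly simplified sketch. The data structures and function below are hypothetical illustrations only; the real DoT and TSP systems operate at carrier-signalling scale and are far more complex.

```python
# Minimal sketch: decide whether an incoming international call that displays
# an Indian (+91) CLI should be allowed. A +91 CLI arriving on an international
# gateway is only plausible if that subscriber is currently roaming abroad.
# The roaming registry below is a hypothetical stand-in for the centralised
# subscriber database described above.

ROAMING_ABROAD = {"+919876543210"}  # hypothetical set of subscribers roaming abroad

def should_block_incoming_international_call(cli: str) -> bool:
    """Return True if a call arriving on an international trunk should be blocked."""
    if not cli.startswith("+91"):
        # A foreign CLI on an international trunk is expected; no Indian number is spoofed.
        return False
    # An Indian CLI on an international trunk is legitimate only if the
    # subscriber is actually roaming abroad; otherwise the CLI is spoofed.
    return cli not in ROAMING_ABROAD

if __name__ == "__main__":
    print(should_block_incoming_international_call("+919876543210"))  # False: genuine roamer
    print(should_block_incoming_international_call("+911123456789"))  # True: spoofed Indian CLI
```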
CyberPeace Outlook
A press release on 23rd December 2024 encouraged TSPs to label incoming international calls as 'International call' on the receiver's mobile screen. Some have already started adding such labels and are sending awareness messages with tips on staying safe from scams. There are also applications available online that help identify callers and their locations; however, these depend on the user's own effort and offer only moderate reliability. At the public level, blocking unknown international numbers, not calling them back, and awareness of country codes are encouraged. Coordinated and updated efforts by the Government and the TSPs are much needed today, as scammers continue to find new ways to commit cyber crimes using telecommunication resources.
References
- https://www.hindustantimes.com/india-news/jyotiraditya-scindia-launches-dot-system-to-block-spam-international-calls-101729615441509.html
- https://www.business-standard.com/india-news/centre-launches-system-to-block-international-spoofed-calls-curb-fraud-124102300449_1.html
- https://www.opindia.com/2024/12/number-of-spoofed-international-calls-used-in-cyber-crimes-goes-down-by-90-in-2-months/
- https://www.cnbctv18.com/technology/telecom/telecom-department-anti-spoofed-international-calls-19529459.htm
- https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2067113
- https://pib.gov.in/PressReleasePage.aspx?PRID=2087644
- https://www.hindustantimes.com/india-news/display-international-call-for-calls-from-abroad-to-curb-scams-dot-to-telecos-101735050551449.html