#FactCheck - Fake Image Claiming Patanjali Is Selling a Beef Biryani Recipe Mix Is Misleading
Executive Summary:
A photo that has gone viral on social media alleges that Patanjali, the Indian company founded by Yoga Guru Baba Ramdev, is selling a product called “Recipe Mix for Beef Biryani”, with Ramdev’s name incorporated into the promotional packaging. Upon looking into the matter, the CyberPeace Research Team found that the viral image is not genuine: the original image was digitally altered, and the product it claims to show does not exist. Patanjali is an Indian brand built around vegetarian and Ayurvedic products. The image in question is therefore fake and misleading.
Claims:
An image circulating on social media shows Patanjali selling "Recipe Mix for Beef Biryani”.
Fact Check:
Upon receiving the viral image, the CyberPeace Research Team immediately conducted an in-depth investigation. A reverse image search revealed that the viral image was taken from an unrelated context and digitally altered to be associated with the fabricated packaging of "National Recipe Mix for Biryani".
The analysis of the image confirmed signs of manipulation. Patanjali, a well-established Indian brand known for its vegetarian products, has no record of producing or promoting a product called “Recipe mix for Beef Biryani”. We also found a similar image with the product specified as “National Biryani” in another online store.
A comparison of the two photos revealed several differences, confirming that the viral packaging had been digitally edited.
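For readers curious how such image comparisons can be done programmatically, the minimal Python sketch below uses the open-source Pillow and ImageHash libraries to compute perceptual hashes of two images and measure how far apart they are; a small Hamming distance suggests one image is a lightly edited copy of the other. This is an illustrative technique, not the method used by the research team, and the file names are hypothetical placeholders.

```python
# A minimal sketch of comparing two images with perceptual hashing.
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the perceptual hashes of two images.
    Identical images score 0; lightly edited copies usually score low."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # ImageHash overloads '-' to return the bit difference

if __name__ == "__main__":
    # Hypothetical file names for the viral packaging and the original product photo.
    distance = hash_distance("viral_packaging.jpg", "original_product.jpg")
    print(f"Perceptual hash distance: {distance}")
    if distance < 10:
        print("Images are near-duplicates; the viral photo is likely an edited copy.")
```

A low distance alone does not prove manipulation, but it flags candidate pairs for the kind of manual, side-by-side inspection described above.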
Further examination of Patanjali's product catalog and public information verified that this viral image is part of a deliberate attempt to spread misinformation, likely to damage the reputation of the brand and its founder. The entire claim is based on a falsified image aimed at provoking controversy, and therefore, is categorically false.
Conclusions:
The viral image associating Patanjali and Baba Ramdev with "Recipe mix for Beef Biryani" is entirely fake. This image was deliberately manipulated to spread false information and damage the brand’s reputation. Social media users are encouraged to fact-check before sharing any such claims, as the spread of misinformation can have significant consequences. The CyberPeace Research Team emphasizes the importance of verifying information before circulating it to avoid spreading false narratives.
- Claim: Patanjali and Baba Ramdev endorse "Recipe mix for Beef Biryani"
- Claimed on: X
- Fact Check: Fake & Misleading
Introduction
Pagers were commonly used through the late 1990s and early 2000s, especially in fields that needed fast, reliable alerts and information sharing. Pagers typically offer a broader coverage range, particularly in remote areas with limited cellular signal, which enhances their dependability. They are simple electronic devices with minimal features, making them easy to use and less prone to technical issues. Their decline was driven by the rise of mobile phones, which offer far more advanced communication options such as voice calls, text messages, and internet access. Despite this, pagers are still used in some specific industries.
A shocking incident occurred on 17th September 2024, when thousands of pager devices exploded within seconds across Lebanon in a synchronized attack targeting the US-designated terror group Hezbollah. The explosions killed at least 9 people and injured over 2,800 in a country already caught up in the Israel-Palestine tensions in its backyard.
The Pager Bombs Incident
On Tuesday, 17th September 2024, hundreds of pagers carried by Hezbollah members in Lebanon exploded in an unprecedented attack, surpassing the series of covert assassinations and cyber-attacks seen in the region over recent years. The Iran-backed militant group said the wireless devices began to explode around 3:30 p.m. local time in a targeted attack on Hezbollah operatives. The pagers that exploded were new and had been purchased by Hezbollah in recent months. Experts say the explosions underscore Hezbollah's vulnerability, as its communication network was compromised to deadly effect. Several areas of the country were affected, particularly Beirut's southern suburbs, a populous area that is a known Hezbollah stronghold. At least 9 people were killed, including a child, and about 2,800 were wounded, overwhelming Lebanese hospitals.
Second Wave of Attack
As per the most recent reports, the next day, following the pager bombing incident, a second wave of blasts hit Beirut and multiple parts of Lebanon. Certain wireless devices such as walkie-talkies, solar equipment, and car batteries exploded, resulting in at least 9 people killed and 300 injured, according to the Lebanese Health Ministry. The attack is said to have embarrassed Hezbollah, incapacitated many of its members, and raised fears about a greater escalation of hostilities between the Iran-backed Lebanese armed group and Israel.
A New Kind of Threat - ‘Cyber-Physical’ Attacks
The incident raises serious concerns about physical tampering with daily-use electronic devices and the possibility that it heralds a new age of warfare. Even devices such as smartwatches, earbuds, and pacemakers could be vulnerable if an attacker gains physical access to them. We are potentially looking at a new age of ‘cyber-physical’ threats, where the boundary between the digital and the physical is blurring rapidly. The attack raises questions about unauthorised access to, and manipulation of, the physical security of such electronic devices, and it is cause for concern for global supply chains across sectors if even seemingly innocuous devices can be weaponised to such devastating effect. Attacks of this kind can cause significant disruption and casualties, as the pager bombings in Lebanon demonstrated. The incident also raises questions about the regulatory mechanisms and oversight checks at every stage of the electronic device lifecycle, from component manufacturing to final assembly and shipment, because adversaries who embed explosives or make other malicious modifications can turn such devices into weapons.
CyberPeace Outlook
The pager bombing attack demonstrates a new era of threats and warfare tactics, revealing the advanced coordination and technical capabilities of adversaries who have weaponised daily-use electronic devices. By targeting the hardware of these devices, they present a serious new threat to hardware security. The threat is grave and has understandably raised widespread apprehension globally. Such gross weaponisation of daily-use devices, especially in a conflict context, also triggers concerns about the violation of International Humanitarian Law principles. It likewise raises serious questions about the liability of the companies, suppliers, and manufacturers of such devices, who are subject to regulatory checks and responsible for ensuring the authenticity of their products.
The incident highlights the need for a more robust regulatory landscape, with stricter supply chain regulations, as we adjust to the realities of a possible new era of weaponisation and conflict. CyberPeace recommends incorporating stringent tracking and vetting processes into product supply chains, along with strengthening international cooperation mechanisms to ensure compliance with protocols on the responsible use of technology. These steps would go a long way towards establishing peace in global cyberspace and restoring trust and safety in everyday technologies.
References:
- https://indianexpress.com/article/what-is/what-is-a-pager-9573113/
- https://www.theguardian.com/world/2024/sep/18/hezbollah-pager-explosion-lebanon-israel-gold-apollo
Introduction
According to a new McAfee survey, 88% of American consumers believe that cybercriminals will use artificial intelligence to "create compelling online scams" over the festive period. Meanwhile, 57% believe phishing emails and texts will become more credible, and 31% believe it will be harder to determine whether messages from merchants or delivery services are genuine. The study, conducted in September 2023 in the United States, Australia, India, the United Kingdom, France, Germany, and Japan, collected 7,100 responses. Some people may cut back on their online shopping as a result of these worries about AI; 19% of those surveyed said they would do so this year.
In 2024, McAfee predicts a rise in AI-driven scams on social media, with cybercriminals using advanced tools to create convincing fake content, exploiting celebrity and influencer identities. Deepfake technology may worsen cyberbullying, enabling the creation of realistic fake content. Charity fraud is expected to rise, leveraging AI to set up fake charity sites. AI's use by cybercriminals will accelerate the development of advanced malware, phishing, and voice/visual cloning scams targeting mobile devices. The 2024 Olympic Games are seen as a breeding ground for scams, with cybercriminals targeting fans for tickets, travel, and exclusive content.
AI Scams' Increase on Social Media
Cybercriminals plan to exploit powerful artificial intelligence capabilities to manipulate social media in 2024. These platforms become goldmines for scammers because AI tools make it possible to create realistic images, videos, and audio. Anticipate the exploitation of influencers' and celebrities' identities by cybercriminals.
AI-powered Deepfakes and the Rise in Cyberbullying
One worrying trend is the negative turn that cyberbullying might take in 2024 with the use of deepfake technology. This cutting-edge technique is freely accessible to young people, who can use it to produce eerily convincing synthetic content that compromises victims' privacy, identity, and well-being.
Beyond sharing false information, cyberbullies can alter public photographs and re-share edited versions, which exacerbates the harm done to children and their families. The study warns that deepfake technology will probably push online harassment in this darker direction: with such a sophisticated tool, young adults can now generate frighteningly accurate synthetic content rather than merely using it for fun, and the increasing severity of these deceptive images and messages can cause serious, long-lasting harm to children and their families, impairing their identity, privacy, and overall happiness.
Evolution of GenAI Fraud in 2023
These persistent frauds and fake emails show no sign of going away. People in general have become rather adept at recognising the ones that are used extensively. But if scams become more precise, for instance by using AI-generated audio to mimic a loved one's distress call or by incorporating information that is highly personal to the target, users will need to be much more cautious. The rise in popularity of generative AI adds a new wrinkle, as hackers can use these systems to refine their attacks:
- Write communications more skilfully to deceive consumers into sending sensitive information, clicking on a link, or uploading a file.
- Recreate emails and business websites as realistically as possible so that targets' suspicions are never aroused.
- Clone people's faces and voices, creating deepfake audio or images that are undetectable to the target audience, a problem that could greatly amplify schemes such as CEO fraud.
- Hold conversations and respond to victims efficiently, now that generative AI can sustain interactive exchanges.
- Conduct psychological manipulation campaigns more quickly, at lower cost, and with greater sophistication, making them harder to detect. Generative AI tools already on the market can write text, clone voices, generate images, and build websites.
AI Hastens the Development of Malware and Scams
While artificial intelligence (AI) has many legitimate uses, it is also making cybercriminals more dangerous. AI facilitates the rapid creation of sophisticated malware, illicit web pages, and plausible phishing and smishing messages. As these capabilities become more accessible, mobile devices will be attacked more frequently, with a particular emphasis on audio and visual impersonation schemes.
Olympic Games: A Haven for Scammers
Cybercriminals are skilled at profiting from big occasions, and the worldwide buzz surrounding the 2024 Olympic Games will make it an ideal time for scams. Con artists will take advantage of fans' excitement, targeting those eager to purchase tickets, arrange travel, obtain exclusive content, and take part in giveaways. During this prominent event, vigilance is essential to protect one's personal records and financial data.
Development of McAfee’s Own Bot to Help Users Screen Messages for Potential Scams
McAfee is developing precisely this kind of technology. It is critical to emphasise that solving the problem is a continuous process. Bad actors manipulate AI too: one trick scammers can pull off is to use the ways consumers fall for various ruses as parameters to train more advanced algorithms. The con artists can thus deploy these tools, test them on large user bases, and improve them over time.
Conclusion
According to the McAfee report, 88% of American consumers are concerned about AI-driven internet fraud targeting them around the holidays. Social networking poses a growing threat to users' privacy, and in 2024 hackers hope to take advantage of AI capabilities and use deepfake technology to exacerbate harassment. By mimicking voices and faces for intricate schemes, generative AI enables more complex fraud. The surge in charity fraud has both social and financial consequences, and the 2024 Olympic Games could serve as a haven for scammers. The creation of McAfee's screening bot highlights the ongoing struggle against evolving AI threats and the need for continuous adaptation and better user awareness to combat increasingly sophisticated cyber deception.
References
- https://www.fonearena.com/blog/412579/deepfake-surge-ai-scams-2024.html
- https://cxotoday.com/press-release/mcafee-reveals-2024-cybersecurity-predictions-advancement-of-ai-shapes-the-future-of-online-scams/#:~:text=McAfee%20Corp.%2C%20a%20global%20leader,and%20increasingly%20sophisticated%20cyber%20scams.
- https://timesofindia.indiatimes.com/gadgets-news/deep-fakes-ai-scams-and-other-tools-cybercriminals-could-use-to-steal-your-money-and-personal-details-in-2024/articleshow/106126288.cms
- https://digiday.com/media-buying/mcafees-cto-on-ai-and-the-cat-and-mouse-game-with-holiday-scams/
Introduction
A photo circulating on social media depicting modified tractors is being misrepresented as part of the 'Delhi Chalo' farmers' protest narrative. In the recent swirl of misinformation surrounding the protest, the photo, ostensibly showing a phalanx of modified tractors, has been making the rounds on social media platforms, falsely tethered to the ongoing demonstrations. The image, accompanied by a headline suggesting a mechanical metamorphosis to resist police barricades, was allegedly published by a news agency. However, beneath the surface of this viral phenomenon lies a more complex and fabricated reality.
The Movement
The 'Delhi Chalo' movement, a clarion call that resonated with thousands of farmers from the fertile plains of Punjab, the verdant fields of Haryana, and the sprawling expanses of Uttar Pradesh, has been a testament to the agrarian community's demand for assured crop prices and legal guarantees for the Minimum Support Price (MSP). The protest, which has seen the fortification of borders and the chaos at the Punjab-Haryana border on February 13, 2024, has become a crucible for the farmers' unyielding spirit.
Yet, amidst this backdrop of civil demonstration and discourse, a nefarious narrative of misinformation has taken root. The viral image, which has been shared with the fervour of wildfire, was accompanied by a screenshot of an article allegedly published by the news agency. This article, dated February 11, 2024, quoted an anonymous official who claimed that intelligence agencies had alerted the police to the protesters' plans to outfit tractors with hydraulic tools. The implication was clear: these machines had been transformed into battering rams against the bulwark of law enforcement.
The Pursuit of Truth
However, the India TV Fact Check team, in their relentless pursuit of truth, unearthed that the viral photo of these so-called modified tractors is nothing but a chimerical creation, a figment of artificial intelligence. Visual discrepancies betrayed its AI-generated nature.
This is not the first time that misinformation has loomed over the farmers' protest. Previous instances, including a viral video of a modified tractor, have been debunked by the same fact-checking team. These efforts are a bulwark against the tide of false narratives that seek to muddy the waters of public understanding.
The claim that the photo depicted modified tractors intended for use in the ‘Delhi Chalo’ farmers' protest rally in Delhi on February 13, 2024, was a mirage.
The Fact Check
OpIndia, in their article, clarified that the photo used was a representative image created by AI and not a real photograph. To further scrutinize this viral photo, the HIVE AI detector tool was employed, indicating a 99.4% likelihood of the image being AI-generated. Thus, the claim made in the post was misleading.
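For readers curious how an automated AI-content check like this can be scripted, the short Python sketch below shows the general pattern of submitting an image to a detection service and reading back a confidence score. It is a hedged illustration only: the endpoint URL, authentication header, response structure, API key, and file name are assumptions for demonstration and may differ from Hive's actual documented API.

```python
# Illustrative sketch only: the endpoint, parameters, and response fields below
# are assumptions for demonstration and may differ from the provider's real API.
import requests

API_KEY = "YOUR_API_KEY"                                # hypothetical placeholder
ENDPOINT = "https://api.thehive.ai/api/v2/task/sync"    # assumed sync-classification endpoint

def ai_generated_score(image_path: str) -> float:
    """Submit an image to an AI-content classifier and return the
    'ai_generated' confidence score (0.0 to 1.0), if present."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"media": f},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Assumed response layout: walk the returned classes and pick out the
    # score labelled as AI-generated content.
    for output in result.get("status", [{}])[0].get("response", {}).get("output", []):
        for cls in output.get("classes", []):
            if cls.get("class") == "ai_generated":
                return float(cls.get("score", 0.0))
    return 0.0

if __name__ == "__main__":
    score = ai_generated_score("viral_tractor_photo.jpg")  # hypothetical file name
    print(f"Probability the image is AI-generated: {score:.1%}")
```

A high score from such a detector is a strong signal but not conclusive proof; fact-checkers typically combine it with manual inspection of visual inconsistencies, as was done here.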
The viral photo claiming that farmers had modified their tractors to avoid tear gas shells and remove barricades put up by the police during the rally was a digital illusion. The internet has become a fertile ground for the rapid spread of misinformation, reaching millions in an instant. Social media, with its complex algorithms, amplifies this spread, as any interaction, even those intended to debunk false information, inadvertently increases its reach. This phenomenon is exacerbated by 'echo chambers,' where users are exposed to a homogenous stream of content that reinforces their pre-existing beliefs, making it difficult to encounter and consider alternative perspectives.
Conclusion
The viral image depicting modified tractors for the ‘Delhi Chalo’ farmers' protest rally was a digital fabrication, a testament to the power of AI in creating convincing yet false narratives. As we navigate the labyrinth of information in the digital era, it is imperative to remain vigilant, to question the veracity of what we see and hear, and to rely on the diligent work of fact-checkers in discerning the truth. The mirage of modified machines serves as a stark reminder of the potency of misinformation and the importance of critical thinking in the age of artificial intelligence.
References
- https://www.indiatvnews.com/fact-check/fact-check-ai-generated-tractor-photo-misrepresented-delhi-chalo-farmers-protest-narrative-msp-police-barricades-punjab-haryana-uttar-pradesh-2024-02-15-917010
- https://factly.in/this-viral-image-depicting-modified-tractors-for-the-delhi-chalo-farmers-protest-rally-is-created-using-ai/