#FactCheck - AI-manipulated image showing Anant Ambani and Radhika Merchant dressed in golden outfits.
Executive Summary:
A viral claim circulated on social media that Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe. Thorough analysis revealed abnormalities in image quality, particularly between the face, neck, and hands compared with the claimed gold clothing, pointing to possible AI manipulation. A keyword search found no credible news reports or authentic images supporting the claim. Further analysis using the AI detection tools TrueMedia and Hive Moderation confirmed substantial evidence of AI fabrication, with a high probability of the image being AI-generated or a deepfake. Additionally, a photo from a previous event at Jio World Plaza matched the pose in the manipulated image, further disproving the claim and indicating that the image of Anant Ambani and Radhika Merchant wearing golden outfits during their pre-wedding cruise was digitally altered.
Claims:
Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
Fact Check:
When we received the posts, we found anomalies that are typically seen in edited or AI-manipulated images, particularly around the face, neck, and hands.
Such inconsistencies are very unusual in genuine photographs. We therefore checked the image with the Hive Moderation AI detection tool, which found it to be 95.9% likely AI-manipulated.
We also checked the image with another widely used AI detection tool, TrueMedia, which found it 100% likely to have been made using AI.
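For readers who want to run this kind of check programmatically rather than through a tool's web interface, here is a minimal sketch of submitting an image to an AI-detection service over HTTP. The endpoint URL, authentication header and response field names are placeholders for whatever the chosen service documents; this is not the actual Hive Moderation or TrueMedia API.

```python
import requests

# Hypothetical detection endpoint and API key -- substitute the values
# documented by whichever AI-detection service you use.
DETECTION_URL = "https://api.example-detector.com/v1/detect"
API_KEY = "YOUR_API_KEY"

def check_image(path: str) -> float:
    """Upload an image and return the reported probability that it is AI-generated."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # The field name is an assumption; adapt it to the service's response schema.
    return result.get("ai_probability", 0.0)

if __name__ == "__main__":
    score = check_image("viral_image.jpg")
    print(f"Reported probability of AI manipulation: {score:.1%}")
```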
These results imply that the image is AI-generated. To find the original image that was edited, we performed a keyword search and found a photo with the same pose as the manipulated image, titled "Radhika Merchant, Anant Ambani pose with Mukesh Ambani at Jio World Plaza opening". Comparing the two pictures confirms that the viral image is a digitally altered version of this original.
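A side-by-side comparison like this can also be backed up with perceptual hashing, which yields similar fingerprints for visually similar images even after edits. The sketch below uses the open-source Pillow and imagehash libraries; the file names are placeholders, and the distance threshold is an illustrative assumption.

```python
from PIL import Image
import imagehash  # pip install imagehash

# File names are placeholders for the viral image and the original event photo.
viral = imagehash.phash(Image.open("viral_golden_outfit.jpg"))
original = imagehash.phash(Image.open("jio_world_plaza_original.jpg"))

# Hamming distance between the two 64-bit perceptual hashes:
# small values indicate the images share the same underlying composition.
distance = viral - original
print(f"Perceptual hash distance: {distance}")
if distance <= 10:
    print("The images are very likely derived from the same photograph.")
else:
    print("No strong perceptual match between the two images.")
```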
Hence, it is confirmed that the viral image is digitally altered and has no connection with the second pre-wedding cruise party in Europe. The viral image is therefore fake and misleading.
Conclusion:
The claim that Anant Ambani and Radhika Merchant wore clothes made of pure gold at their pre-wedding cruise party in Europe is false. Analysis of the image showed clear signs of manipulation, and the absence of credible news reports or authentic photos further supports that it was digitally altered. AI detection tools confirmed a high probability that the image was fabricated, and a comparison with a genuine photo from another event revealed that the image had been edited. Therefore, the claim is false and misleading.
- Claim: Anant Ambani and Radhika Merchant wore clothes made of pure gold during their pre-wedding cruise party in Europe.
- Claimed on: YouTube, LinkedIn, Instagram
- Fact Check: Fake & Misleading
Related Blogs
Introduction
Bumble’s launch of its ‘Opening Move’ feature has sparked a new conversation around safety and privacy in the digital dating sphere and has garnered mixed reactions from users. It was launched against the backdrop of women stating that Bumble’s ‘message first’ policy was proving tedious. Addressing this large-scale feedback, Bumble introduced the ‘Opening Move’ feature, whereby users can either craft their own questions or select from pre-set ones, which potential matches may choose to answer to start the conversation at first glance. These questions are intended as a segue into meaningful and insightful conversation from the get-go, sidestepping the traditional effort of starting engaging chats between matched users. The feature is optional, so enabling it does not prevent a user from exercising the autonomy previously in place.
Innovative Approach to Conversation Starters
Many users consider this feature innovative; it not only acts as a catalyst for fluid conversation but also cultivates insightful dialogue, fostering meaningful interactions free of the constraints of superficial small talk. The ‘Opening Move’ feature also aligns with scientific research indicating that individuals form their initial attractions within the first three seconds of an interaction, making it a useful aid to an individual’s decision-making within that narrow attraction window.
Organizational Benefits and Data Insights
From an organisational standpoint, the feature is a unique solution to the localisation challenges faced by apps; the option of writing a personalised ‘Opening Move’ means prompts can be made culturally relevant and appropriate to a specific area. Moreover, it is anticipated that Bumble may enhance the user experience on the platform through data analysis. Data from responses to an ‘Opening Move’ may provide valuable insights into user preferences and patterns, for example by analysing which pre-set prompts garner more responses than others and how often a user-written ‘Opening Move’ succeeds in obtaining a response compared with Bumble’s pre-set prompts. A quick glance at Bumble’s privacy policy[1] shows that chats between users are stored and transferred but not shared with third parties, further safeguarding personal privacy. However, Bumble does use chat data for its own internal purposes after removing personally identifiable information. The manner of such review and removal has not been specified, which may raise challenges depending on whether the reviewer is a human or an algorithm.
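As a rough illustration of the kind of aggregate analysis described above, the sketch below computes response rates per prompt from hypothetical, anonymised interaction records. The field names and sample data are invented for illustration and do not reflect Bumble’s actual schema or figures.

```python
from collections import defaultdict

# Hypothetical anonymised records: (prompt_text, was_answered)
interactions = [
    ("What's your go-to karaoke song?", True),
    ("What's your go-to karaoke song?", False),
    ("Two truths and a lie -- go!", True),
    ("Two truths and a lie -- go!", True),
    ("Custom prompt written by a user", False),
]

totals = defaultdict(lambda: [0, 0])  # prompt -> [answered, shown]
for prompt, answered in interactions:
    totals[prompt][1] += 1
    if answered:
        totals[prompt][0] += 1

# Rank prompts by response rate, highest first.
for prompt, (answered, shown) in sorted(
    totals.items(), key=lambda kv: -(kv[1][0] / kv[1][1])
):
    print(f"{prompt!r}: {answered}/{shown} responses ({answered / shown:.0%})")
```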
However, some users perceive the feature as counterproductive to the company’s founding principle that ‘women make the first move’. While Bumble aims to market the feature as a neutral ground for matched users based on the exercise of choice, users see it as a step back into the heteronormative gender expectations that most dating apps conform to, putting the onus of the ‘first move’ on men. Many male users have complained that the feature may push men to opt out of the dating app, and that they would most likely refrain from interacting with profiles that have enabled ‘Opening Move’, since the pressure to answer creatively is disproportionate to the likelihood of their response actually being entertained.[2] Coupled with female users terming the original protocol ‘too much effort’, the pre-set questions of the ‘Opening Move’ feature may actively invite users to categorise potential matches according to arbitrary questions that undermine the real-life experiences, perspectives and backgrounds of each individual.[3]
Additionally, complications are likely to arise when a malicious user sets a question that indirectly elicits personal, sensitive or identifiable information. The individual responding may be bullied or subjected to hateful slurs after answering such carefully crafted conversation prompts.
Safety and Privacy Concerns
As a corollary, the appearance of choice may translate into more challenges for women on the platform. The feature may spark an increase in the number of unsolicited, undesirable messages and images from a potential match. The most vulnerable groups at present remain individuals who identify as female and other sexual minorities.[4] There appears to be no mechanism in place to proactively monitor the content of responses; the platform relies instead on user reporting. This approach may prove impractical given the potential volume of objectionable messages, necessitating a more efficient solution. It should be noted that even when a user does report, the current redressal systems of online platforms remain lax, largely inadequate and ineffective in addressing user concerns or grievances. This lack of proactiveness is violative of the right to grievance redressal provided under the Digital Personal Data Protection Act, 2023. The feature may in fact take away the user autonomy that Bumble originally aimed to grant, since individuals who identify as introverted, shy, soft-spoken or non-assertive may refrain from reporting harassing messages altogether, potentially out of discomfort or reluctance to engage in confrontation. As a result, a sharp uptick is anticipated in cases of cyberbullying, harassment and hate speech (especially vulgar communications) directed at both the user and the potential match.
From an Indian legal perspective, dating apps have to adhere to the Information Technology Act, 2000 [5], the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [6] and the Digital Personal Data Protection Act, 2023, which regulate a person’s digital privacy and set standards for the kind of content an intermediary may host. An obligation is cast upon an intermediary to apprise its users of what content is not allowed on its platform, in addition to mandating intimation of the user’s digital rights. The lack of automated checks, as mentioned above, is likely to make Bumble non-compliant with these guidelines.
The optional nature of ‘Opening Move’ grants users some autonomy. However, some technical updates could enhance the user experience of this feature. Technologies like AI are an effective aid in behavioural and predictive analysis. An upgraded matching algorithm could analyse the number of un-matches a profile receives, thereby identifying and flagging profiles with multiple lapsed matches. Additionally, a filter option in the application’s interface to hide flagged profiles would let users navigate their matches more cautiously. Another possible method of weeding out problematic profiles is a peer-review system whereby a user has a single check-box to flag a profile as most or least likely to bully or harass. Such a check-box would ideally carry no option for writing personal comments, ensuring that a binary, precise response is recorded and coloured remarks are avoided.[7]
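A minimal sketch of the flagging logic proposed above, assuming the platform keeps per-profile counts of lapsed matches and of binary peer-review flags; the thresholds, field names and sample data are illustrative assumptions, not Bumble’s implementation.

```python
from dataclasses import dataclass

# Illustrative thresholds -- a real system would tune these empirically.
LAPSED_MATCH_THRESHOLD = 15
PEER_FLAG_THRESHOLD = 5

@dataclass
class Profile:
    user_id: str
    lapsed_matches: int  # matches that the other party later un-matched
    peer_flags: int      # binary "likely to bully/harass" check-box reports

def should_flag(profile: Profile) -> bool:
    """Flag a profile for review (and optional filtering) when either signal is high."""
    return (
        profile.lapsed_matches >= LAPSED_MATCH_THRESHOLD
        or profile.peer_flags >= PEER_FLAG_THRESHOLD
    )

profiles = [
    Profile("u1", lapsed_matches=3, peer_flags=0),
    Profile("u2", lapsed_matches=22, peer_flags=1),
    Profile("u3", lapsed_matches=4, peer_flags=7),
]
flagged = [p.user_id for p in profiles if should_flag(p)]
print("Profiles hidden behind the 'filter flagged' option:", flagged)
```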
Governance and Monitoring Mechanisms
From a governance point of view, a monitoring mechanism for the manner in which questions are crafted is critical. Systems should be designed to detect particular words, sentences and framings so that questions contrary to the national legal framework are disallowed. An on-screen notification with instructions on generally acceptable ways of conversing, serving as a reminder to users to maintain cyber hygiene while chatting, should also be a mandated requirement for platforms. The notification or notice may also include guidelines on what information is safe to share in order to safeguard user privacy. Lastly, a revised privacy policy should establish the legal basis for processing responses to ‘Opening Moves’, thereby bringing the feature into compliance with national legislation such as the Digital Personal Data Protection Act, 2023.
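A minimal sketch of the word- and pattern-based screening described above, assuming a simple denylist; a production system would pair this with richer classifiers, human review and a legally vetted term list, and the patterns here are purely illustrative.

```python
import re

# Illustrative denylist -- a production system would maintain a far richer,
# regularly reviewed set of terms and patterns aligned with applicable law.
DENYLIST_PATTERNS = [
    r"\bhome address\b",
    r"\bphone number\b",
    r"\b(bank|card) number\b",
    r"\bsend (me )?(a )?(nude|explicit)\b",
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the proposed 'Opening Move' question matches a denylisted pattern."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in DENYLIST_PATTERNS)

print(is_prompt_allowed("What's the best concert you've ever been to?"))  # True
print(is_prompt_allowed("What's your phone number and home address?"))    # False
```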
Conclusion
Bumble's 'Opening Move' feature marks the company’s ‘statement’ step to address user concerns regarding initiating conversations on the platform. While it has been praised for fostering more meaningful interactions, it also raises not only ethical concerns but also concerns over user safety. While the 'Opening Move' feature can potentially enhance user experience, its success is largely dependent on Bumble's ability to effectively navigate the complex issues associated with this feature. A more robust monitoring mechanism that utilises newer technology is critical to address user concerns and to ensure compliance with national laws on data privacy.
Endnotes:
- [1] Bumble’s privacy policy https://bumble.com/en-us/privacy
- [2] Discussion thread, r/bumble, Reddit https://www.reddit.com/r/Bumble/comments/1cgrs0d/women_on_bumble_no_longer_have_to_make_the_first/?share_id=idm6DK7e0lgkD7ZQ2TiTq&utm_content=2&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=65068
- [3] Mcrea-Hedley, Olivia, “Love on the Apps: When did Dating Become so Political?”, 8 February 2024 https://www.service95.com/the-politics-of-dating-apps/
- [4] Gewirtz-Meydan, A., Volman-Pampanel, D., Opuda, E., & Tarshish, N. (2024). ‘Dating Apps: A New Emerging Platform for Sexual Harassment? A Scoping Review. Trauma, Violence, & Abuse, 25(1), 752-763. https://doi.org/10.1177/15248380231162969
- [5] Information Technology Act, 2000 https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
- [6] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
- [7] Date Confidently: Engaging Features in a Dating App (Use Cases), Consaguous, 10 July 2023 https://www.consagous.co/blog/date-confidently-engaging-features-in-a-dating-app-use-cases
Introduction
In the era of the internet, where everything is accessible at your fingertips, a disturbing trend is on the rise: over 90% of websites containing child abuse material now include "self-generated" images, obtained from victims as young as three years old. This shocking revelation comes from the Internet Watch Foundation (IWF), whose findings have raised concern about the increasing exploitation of children under the age of 10, who are coerced, blackmailed, tricked, or groomed into participating in explicit acts online. The IWF's data for 2023 reveals a record-breaking 275,655 websites hosting child sexual abuse material, with 92% of them containing such "self-generated" content.
A Disturbing Shift in Tactics
The numbers highlight a distressing truth. In 2023, 275,655 websites were found to host child sexual abuse content, a new record and an alarming 8% increase over the previous year. More concerning still, 92% of these websites contained "self-generated" photos or videos. Of these, 107,615 websites had content involving children under the age of ten, with 2,500 explicitly featuring children aged three to six.
Profound worries
There is deep concern about the rising incidence of images obtained through extortion or coercion from elementary-school-aged children. This footage is being distributed on highly graphic, specialised websites devoted to child sexual abuse. The process often begins in a child's bedroom with a camera and involves the exchange, dissemination and collection of explicit content by determined offenders engaged in sexual exploitation. These criminals are ruthless. The material is circulated via email, instant messaging, chat rooms and social media platforms (WhatsApp, Telegram, Skype, etc.).
Live streaming of such material, which involves real-time broadcast, is another major concern. Because the internet is borderless, access to such material is international, national and regional, which makes it even more difficult to trace predators and convict them. With this growth, it has become easier for predators to obtain "self-generated" images or videos.
Financial Exploitation in the Shadows: The Alarming Rise of Sextortion
Looking at global statistics, studies show an extremely troubling pattern known as "sextortion", in which adolescents are targeted and coerced under the threat of having their images exposed to their families, relatives and friends or on social media. In its classic form, the offender's goal is to obtain sexual gratification.
The financial variation of sextortion takes a darker turn, with criminals luring children into making sexual content and then extorting them for money. They threaten to reveal the incriminating content unless their cash demands, frequently made in the form of gift cards, mobile payment services, wire transfers or cryptocurrencies, are satisfied. Here the predators are driven primarily by monetary gain, but the psychological impact on their victims is just as devastating. One shocking case saw an 18-year-old jailed for blackmailing a young girl, sending indecent images and videos to threaten her via Snapchat; the offender pleaded guilty.
The Question of Security
The introduction of end-to-end encryption in platforms like Facebook Messenger has triggered concerns within law enforcement agencies. While it enhances user privacy, critics argue that it may inadvertently facilitate criminal activities, particularly the exploitation of vulnerable individuals. Its alignment with other encrypted services is seen as a potential challenge, making it harder to detect and investigate crimes and raising questions about how to balance privacy with public safety.
Platforms, for their part, defend the implementation of encryption by asserting that it enhances the security of individuals, particularly children, by safeguarding them from hackers, scammers and criminals. They underscore their commitment to safety protocols, such as prohibiting adults from messaging teenagers who do not follow them and employing technology to detect and counteract bad conduct.
These distressing revelations highlight the urgent need for comprehensive action to protect our society's most vulnerable members, i.e., children, youngsters and adolescents, throughout the era of digital progress. As experts and politicians grapple with these troubling trends, the need for action to safeguard kids online becomes increasingly urgent.
Role of Technology in Combating Online Exploitation
While the rise of technology has been accompanied by a rise in online child abuse, technology also serves as a powerful tool to combat it. Advanced algorithms and Artificial Intelligence tools can be used to detect "self-generated" images and prevent their dissemination. Additionally, tech companies can collaborate to develop effective solutions to safeguard every child and individual.
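One widely used technique of this kind is hash-matching, where platforms compare fingerprints of uploaded images against databases of hashes of known abuse material maintained by trusted bodies such as the IWF. The sketch below only illustrates the general idea using an open-source perceptual hash; real deployments rely on purpose-built, access-controlled systems, and the file name and hash value here are placeholders.

```python
from PIL import Image
import imagehash  # pip install imagehash

# Placeholder for a database of hashes of known harmful images,
# which in practice is maintained and distributed by trusted bodies.
KNOWN_HASHES = {
    imagehash.hex_to_hash("d1c2b3a4e5f60718"),
}

def matches_known_material(path: str, max_distance: int = 6) -> bool:
    """Return True if the image's perceptual hash is close to any known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

if matches_known_material("uploaded_image.jpg"):
    print("Upload blocked and escalated for human review.")
else:
    print("No match against the known-hash list.")
```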
Role of Law Enforcement Agencies
Child abuse knows no borders, and addressing it requires legal intervention at every level. National, regional and international law enforcement agencies investigate online child sexual exploitation and abuse and cooperate in the investigation of these cybercrimes. Investigating agencies need mutual legal assistance and extradition arrangements, as well as bilateral and multilateral conventions, to identify, investigate and prosecute the perpetrators of online child sexual exploitation and abuse. Cooperation between private and government agencies is equally important; sharing databases of perpetrators can help agencies catch them.
How do you safeguard your children?
In the present scenario, protecting and safeguarding children against online child abuse has become crucial. Here are some practical steps that can help safeguard your loved ones.
- Open communication: Establish open communication with your children and make them feel comfortable sharing their experiences with you. Help them understand what healthy internet use looks like and educate them about the possible risks without generating fear.
- Teach Online Safety: Educate your children about the importance of privacy and the risks of giving it away. Teach them strong privacy habits, such as not sharing personal information with strangers on any social media platform, creating unique passwords, and not clicking on suspicious links or downloading files from unknown sources.
- Set boundaries: As a parent, set rules and guidelines for internet usage, set time limits, and monitor your children's online activities without infringing on their privacy. Keep an eye on their social media platforms and discuss inappropriate behaviour or online harassment with them. Take an interest in the websites and apps they use, and teach them online safety measures.
Conclusion
The predominance of 'self-generated' photos in online child abuse content demands immediate attention and coordinated action from governments, technology corporations and society as a whole. As we navigate the complicated environment of the digital age, we must remain watchful, adapt our techniques and collaborate to defend the innocence of the most vulnerable among us. To combat online child exploitation, we must all work together to build a safer, more secure online environment for children around the world.
References
- https://www.the420.in/over-90-of-websites-containing-child-abuse-feature-self-generated-images-warns-iwf/
- https://news.sky.com/story/self-generated-images-found-on-92-of-websites-containing-child-sexual-abuse-with-victims-as-young-as-three-13049628
- https://www.news4hackers.com/iwf-warns-that-more-than-90-of-websites-contain-self-generated-child-abuse-images/
Executive Summary:
Viral social media posts circulating several photos of Indian Army soldiers eating their lunch in extremely hot weather near the border area in Barmer/Jaisalmer, Rajasthan, have been found to be AI-generated and therefore false. The images contain various faults such as missing shadows, distorted hand positioning, a misrepresented Indian flag and inaccurate body features of the soldiers. AI detection tools were also used to validate this. Before sharing any picture on social media, it is necessary to check its originality to avoid spreading misinformation.
Claims:
Photographs of Indian Army soldiers having their lunch in extremely high temperatures at the border area near the districts of Barmer/Jaisalmer, Rajasthan, have been circulated on social media.
Fact Check:
On studying the given images, it can be observed that they share several anomalies that are usually found in AI-generated images. These abnormalities include inaccurate body features of the soldiers, a national flag with the wrong combination of colours, an unusually sized spoon, and the absence of the soldiers' shadows.
Additionally, the flag on the soldiers' shoulders appears wrong: it does not follow the traditional tricolour pattern. Another anomaly, a soldier with three arms, strengthens the conclusion that the image is AI-generated.
Furthermore, we ran the photos through the Hive AI image detection tool, which found each of them to have been generated using an artificial intelligence algorithm. We also checked them with another AI image detection tool, Isitai, which likewise assessed them as AI-generated.
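Alongside dedicated detectors, a quick supplementary check anyone can run is to inspect an image's metadata: genuine photographs usually carry camera EXIF data (make, model, capture time), which AI-generated files typically lack, although its absence alone proves nothing. A minimal sketch using the Pillow library is below; the file name is a placeholder.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    """Print whatever EXIF metadata the image carries, if any."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found -- consistent with (but not proof of) a generated image.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

print_exif("viral_soldiers_photo.jpg")
```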
After thorough analysis, the claim made in the viral posts is found to be misleading and fake: the recent viral images of Indian Army soldiers eating food at the border in the extremely hot afternoon of Barmer were generated using an AI image creation tool.
Conclusion:
In conclusion, analysis of the viral photographs claiming to show Indian Army soldiers having their lunch in scorching heat in Barmer, Rajasthan reveals many anomalies consistent with AI-generated images. The absence of shadows, distorted hand placement, incorrect depiction of the Indian flag, and the presence of an extra arm on a soldier all point to the images being artificially created. Therefore, the claim that these images capture real-life events is debunked, underscoring the importance of analysis and fact-checking before sharing content in an era of widespread digital misinformation.
- Claim: The photo shows Indian army soldiers having their lunch in extreme heat near the border area in Barmer/Jaisalmer, Rajasthan.
- Claimed on: X (formerly known as Twitter), Instagram, Facebook
- Fact Check: Fake & Misleading