#FactCheck: A viral claim suggests that turning on Advanced Chat Privacy stops Meta AI from reading WhatsApp chats.
Executive Summary:
A viral social media video falsely claims that Meta AI reads all WhatsApp group and individual chats by default, and that enabling “Advanced Chat Privacy” can stop this. A reverse image search led us to a WhatsApp blog post from April 2025, which states that all personal and group chats remain protected with end-to-end (E2E) encryption, accessible only to the sender and recipient. Meta AI can interact only with messages explicitly sent to it or tagged with @Meta AI. The “Advanced Chat Privacy” feature is designed to prevent external sharing of chats, not to restrict Meta AI’s access. The viral claim is therefore misleading and factually incorrect, aimed at creating unnecessary fear among users.
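As a conceptual aside for readers unfamiliar with the term, the minimal sketch below (Python with the PyNaCl library, `pip install pynacl`) illustrates the idea behind end-to-end encryption: a message sealed for one recipient can be opened only with that recipient’s private key. This is an illustration of the concept only, not WhatsApp’s actual Signal-protocol implementation.

```python
# Conceptual E2E illustration with PyNaCl -- NOT WhatsApp's real protocol.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # sender's key pair
bob_key = PrivateKey.generate()     # recipient's key pair

# Alice encrypts using her private key and Bob's public key.
outgoing = Box(alice_key, bob_key.public_key)
ciphertext = outgoing.encrypt(b"this message is for Bob only")

# Anyone relaying the message (e.g. a server) sees only ciphertext.
# Only Bob, holding his private key, can decrypt it.
incoming = Box(bob_key, alice_key.public_key)
print(incoming.decrypt(ciphertext))  # b'this message is for Bob only'
```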
Claim:
A viral social media video [archived link] alleges that Meta AI is actively accessing private conversations on WhatsApp, including both group and individual chats, due to the current default settings. The video further claims that users can safeguard their privacy by enabling the “Advanced Chat Privacy” feature, which purportedly prevents such access.

Fact Check:
A reverse image search on a keyframe from the viral video led us to a WhatsApp blog post from April 2025 that explains new privacy features designed to help users control their chats and data. It states that Meta AI can only see messages directly sent to it or tagged with @Meta AI. All personal and group chats are secured with end-to-end encryption, so only the sender and receiver can read them. The “Advanced Chat Privacy” setting helps stop chats from being shared outside WhatsApp, for example by blocking exports and auto-downloads, but it has no bearing on Meta AI, which cannot read chats in the first place. This shows the viral claim is false and meant to confuse people.
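For readers who want to reproduce the first step of this check, the sketch below shows one way to extract frames from a clip with OpenCV (`pip install opencv-python`) so they can be uploaded to a reverse image search engine. The filename `viral_video.mp4` and the two-second sampling interval are illustrative assumptions, not part of any official workflow.

```python
# Sketch: sample frames from a video for reverse image searching.
# Assumes OpenCV; "viral_video.mp4" is a hypothetical local file name.
import cv2

def extract_keyframes(path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save one JPEG every `every_n_seconds` and return the file names."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video (or unreadable file)
            break
        if index % step == 0:
            name = f"keyframe_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    capture.release()
    return saved

print(extract_keyframes("viral_video.mp4"))
```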


Conclusion:
The claim that Meta AI is reading WhatsApp group chats, and that enabling the “Advanced Chat Privacy” setting can prevent this, is false and misleading. WhatsApp has officially confirmed that Meta AI only accesses messages explicitly shared with it, and all chats remain protected by end-to-end encryption, ensuring privacy. The “Advanced Chat Privacy” setting does not relate to Meta AI access, which is already restricted by default.
- Claim: A viral social media video claims that WhatsApp group chats are being read by Meta AI due to current settings, and that enabling the “Advanced Chat Privacy” setting can prevent this.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

In a recent ruling, a U.S. federal judge sided with Meta in a copyright lawsuit brought by a group of prominent authors who alleged that their works were illegally used to train Meta’s LLaMA language model. While this looks like a significant legal victory for the tech giant, it is a narrower win than it first appears. Rather, it is a useful case study for creators in the U.S. to refine their legal strategies, and for policymakers worldwide to act quickly to shape the rules of engagement between AI and intellectual property.
The Case: Meta vs. Authors
In Kadrey v. Meta, the plaintiffs alleged that Meta trained its LLaMA models on pirated copies of their books, violating copyright law. However, U.S. District Judge Vince Chhabria ruled that the authors failed to prove two critical things: that their copyrighted works had been used in a way that harmed their market and that such use was not “transformative.” In fact, the judge ruled that converting text into numerical representations to train an AI was sufficiently transformative under the U.S. fair use doctrine. He also noted that the authors’ failure to demonstrate economic harm undermined their claims. Importantly, he clarified that this ruling does not mean that all AI training data usage is lawful, only that the plaintiffs didn’t make a strong enough case.
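To make the phrase “converting text into numerical representations” concrete, here is a deliberately tiny tokenisation sketch in Python. Real LLM pipelines use far more sophisticated schemes (such as byte-pair encoding) plus learned embeddings; nothing below reflects Meta’s actual code.

```python
# Toy illustration: text -> integer token ids, the first step of turning
# prose into the numbers a language model is trained on.

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign each unique lowercase word an integer id."""
    vocab: dict[str, int] = {}
    for text in corpus:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map a sentence to the token ids a model would consume."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

corpus = ["the author wrote the book", "the model reads token ids"]
vocab = build_vocab(corpus)
print(encode("the book the model", vocab))  # [0, 3, 0, 4]
```

The model consumes and statistically learns from streams of numbers like these rather than readable text, which is the property the ruling characterised as transformative.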
Meta even admitted that some data was sourced from pirate sites such as LibGen, but the judge still found that fair use could apply because the use was transformative and non-exploitative.
A Tenuous Win
Chhabria’s decision emphasised that this is not a blanket endorsement of using copyrighted content in AI training. The judgment leaned heavily on the procedural weakness of the case and not necessarily on the inherent legality of Meta’s practices.
Policy experts warn that U.S. courts are currently interpreting AI training as fair use only in narrow cases, and the rulings may not set the strongest judicial precedent: the application of the law could change with clearer evidence of commercial harm or a more direct use of content.
Moreover, the ruling does not address whether authors or publishers should have the right to opt out of AI model training, a concern that is gaining momentum globally.
Implications for India
The case highlights a glaring gap in India’s copyright regime: it is outdated. Because most AI companies are based in the U.S., American courts have already had the opportunity to examine copyright in the context of AI; Indian courts have yet to start. Recently, the news agency ANI filed a copyright infringement case against OpenAI for training on its copyrighted material, but the case is only at an interim stage. Its final outcome will have a significant impact on whether these language models may lawfully use copyrighted material for training.
Considering that India aims to develop “state-of-the-art foundational AI models trained on Indian datasets” under the IndiaAI Mission, the lack of clear legal guidance on what constitutes fair dealing when using copyrighted material for AI training is a significant gap.
Thus, key points of consideration for policymakers include:
- Need for Fair Dealing Clarity: India’s fair-dealing provisions under the Copyright Act, 1957, are narrower than U.S. fair use. The doctrine may have to be reviewed to strike a balance between creators’ rights under this law and the need for diverse datasets to develop foundational models rooted in Indian contexts. A parallel concern regarding data privacy also arises.
- Push for Opt-Out or Licensing Mechanisms: India should consider whether to introduce a framework that requires companies to license training data or provide an opt-out system for creators, especially given the volume of Indian content being scraped by global AI systems.
- Digital Public Infrastructure for AI: India’s policymakers could take this opportunity to invest in public datasets, especially in regional languages, that are both high quality and legally safe for AI training.
- Protecting Local Creators: India needs to ensure that its authors, filmmakers, educators and journalists are protected from having their work repurposed without compensation, since power asymmetries between Big Tech and local creators can lead to exploitation of the latter.
Conclusion
The ruling in Meta’s favour is just one win for the developer. The real questions about consent, compensation and creative control remain unanswered. Meanwhile, the lesson for India is urgent: it needs AI policies that balance innovation with creator rights and provide legal certainty and ethical safeguards as it accelerates its AI ecosystem. Further, as global tech firms race ahead, India must not remain a passive data source; it must set the terms of its digital future. This will help the country move a step closer to achieving its goal of building sovereign AI capacity and becoming a hub for digital innovation.
References
- https://www.theguardian.com/technology/2025/jun/26/meta-wins-ai-copyright-lawsuit-as-us-judge-rules-against-authors
- https://www.wired.com/story/meta-scores-victory-ai-copyright-case/
- https://www.cnbc.com/2025/06/25/meta-llama-ai-copyright-ruling.html
- https://www.mondaq.com/india/copyright/1348352/what-is-fair-use-of-copyright-doctrine
- https://www.pib.gov.in/PressReleasePage.aspx?PRID=2113095#:~:text=One%20of%20the%20key%20pillars,models%20trained%20on%20Indian%20datasets.
- https://www.ndtvprofit.com/law-and-policy/ani-vs-openai-delhi-high-court-seeks-responses-on-copyright-infringement-charges-against-chatgpt

What is Juice Jacking?
We all use different devices during the day, but they converge on a common point when the battery runs out: the cables and adaptors we use to charge them are daily necessities for everyone. These cables and adaptors have access to the only port on most phones, which is why they are used in juice-jacking attacks. Juice jacking is an attack in which someone installs malware or spyware on your device through a compromised charging port or cable.
How does juice jacking work?
We all use phones and gadgets, such as iPhones and other smartphones, Android devices and smartwatches, to simplify our lives. One thing they have in common is the charging cable or USB port, since data and power pass through the same port and cable.
This is potentially a problem with devastating consequences. When your phone connects to another device, it pairs with it and establishes a trusted relationship, which means the two devices can exchange data. During the charging process, the USB cord opens a path into your device that a cybercriminal can exploit.
Phones have a default setting that disables data transfer, so a connection that only supplies power stays power-only. For example, on recent models, when you plug your device into a new port or computer, a prompt pops up asking whether the other device should be trusted. In a juice-jacking attack, however, the device owner cannot see what the USB port is connected to. So if you plug in your phone and someone is monitoring the other end, they may be able to transfer data between your device and theirs, leading to a data breach.
A leading airline was recently hacked, causing flight delays across the country. The investigation found that malware had been planted in its systems through a USB port, giving the hackers access to critical data and allowing them to launch their attack.
FBI’s Advisory
The Federal Bureau of Investigation (FBI) and other international agencies have been very critical of cybercriminals, and inter-agency cooperation has improved the pace of investigations and the chances of apprehending them. In a tweet, the FBI addressed the issue of juice jacking and pinpointed public places such as airports, railway stations and shopping malls as locations where such attacks have been seen and reported. These places offer easy access to charging points for various devices, making them prime targets for bad actors. The FBI advises people not to use the charging points and cables at airports, railway stations and hotels, and emphasises the importance of carrying your own cable and charger.
Tips to protect yourself from juice jacking
There are a few simple and effective tips to keep your smart devices safe, such as:
- Avoid using public charging stations: The best way to protect yourself and your devices is to avoid public charging stations altogether. It is a good habit to charge your phone in your car, at home and at the office when it is not in use.
- Use a wall outlet instead: If charging is too urgent to avoid a public station, use a wall outlet rather than a charging pole, since a standard power outlet carries electricity only, making data transfer much harder.
- Use other modes of charging: If you are travelling, carrying a power bank is always a safe option, and it is easy to carry.
- Software security: Update your phone’s software regularly, and lock your device once it is connected to a charging station. This will prevent it from syncing or transferring data.
- Enable Airplane mode while charging: If you need to charge your phone from an unknown source in a public area, it is advisable to put the phone on airplane mode or switch it off to prevent anyone from gaining access to your device through any open network.
However, this safeguard is effective only if your phone does not turn on automatically when connected to power; many mobile phones (including iPhones) do, so your mileage may vary.
Conclusion
Juice-jacking attacks are still relatively infrequent, but the number of occurrences is expected to rise as smartphone usage and penetration grow across the globe. Our cyber safety and security are in our own hands, and protecting them is our paramount digital duty. We may see no harm in a charging port, but that does not mean the possibility of a threat can be ruled out completely. With the increased use of ports for charging, earphones and data transfer, such crimes will continue and evolve with time. It is therefore essential to counter these attacks by sharing knowledge and awareness of such crimes and by reporting them to the competent authorities, so as to root out cybercriminals from our digital ecosystem.
Introduction
AI-generated fake videos are proliferating on the Internet and becoming more common by the day. Sophisticated AI algorithms are used to manipulate or generate multimedia content such as videos, audio and images. As a result, it has become increasingly difficult to differentiate between genuine, altered and fake content, because AI-manipulated videos look realistic. A recent study has shown that 98% of deepfake videos contain adult content featuring young girls, women and children, with India ranking 6th among the nations suffering from misuse of deepfake technology. This practice has dangerous consequences: it can harm an individual’s reputation, and criminals could use the technology to create a false narrative about a candidate or a political party during elections.
The working of deepfake videos is based on algorithms that progressively refine the fake content: the generator is built and trained to produce the desired output, and the process is repeated many times, allowing the generator to improve the content until it appears realistic (a minimal sketch of this adversarial training loop follows the list below). Deepfake videos are created using several specific approaches, among them:
- Lip syncing: This is the most common technique used in deepfakes. Voice recordings are mapped onto the video, making it appear that the person in the video said something they never actually said.
- Audio deepfakes: For audio deepfakes, a generative adversarial network (GAN) is used to clone a person’s voice from their vocal patterns, refining the output until the desired result is generated.
- Geopolitical misuse: Deepfakes have become so serious a problem that the technology can be used by bad actors or cyber-terrorist groups to further their geopolitical agendas. In the past few years the number of cases has roughly doubled, targeting children, women and well-known faces.
- Greater risk: Cases of deepfakes have risen in the last few years; according to one survey, by the end of 2022, 96% of cases were directed against women and children.
- Speed and cost: Every 60 seconds a deepfake pornographic video is created. Producing one is now quicker and more affordable than ever, taking less than 25 minutes and requiring just one clean image of a face.
- Revenge porn: People can become targets of “revenge porn” even when the publisher holds no sexually explicit photographs or films of the victim; the same result can be achieved with any number of ordinary pictures collected from the Internet. This means that almost anyone who has taken a selfie or shared a photograph of themselves online faces the possibility of a deepfake being constructed in their image.
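As referenced above, the following is a minimal, illustrative PyTorch sketch of the generator-versus-discriminator loop behind GAN-based deepfakes, using toy 1-D vectors in place of audio or video. Real deepfake models differ enormously in scale and architecture; this only demonstrates the “refine until it looks real” feedback cycle.

```python
# Toy GAN loop: generator G learns to produce samples that discriminator D
# cannot distinguish from "real" data. Illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(batch, data_dim) + 2.0   # stand-in "real" samples
    fake = G(torch.randn(batch, latent_dim))    # generator's forgeries

    # 1) Train D to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(batch, 1))
              + loss_fn(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train G to make D score its fakes as real -- each pass of this
    #    loop is one round of the "refinement" described above.
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```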
Deepfake-related security concerns
As deepfakes proliferate, more people are realising that they can be used not only to create non-consensual porn but also as part of disinformation and fake news campaigns with the potential to sway elections and rekindle frozen or low-intensity conflicts.
Deepfakes have security implications at three levels: at the international level, strategic deepfakes have the potential to destroy a precarious peace; at the national level, deepfakes may be used to unduly influence elections and the political process or to discredit the opposition, which is a national security concern; and at the personal level, deepfakes can be used to harass individuals. Women suffer disproportionately from exposure to sexually explicit deepfake content compared to men, and they are more frequently threatened with it.
Policy Considerations
With cases of deepfake abuse against women and children on the rise, policymakers need to be aware that deepfakes are also used for a variety of legitimate purposes, including artistic and satirical works. Simply banning deepfakes is therefore not consistent with fundamental liberties. One conceivable legislative option is to require a content warning or disclaimer on synthetic media, while treating the misuse of this advanced technology as a crime.
What are the existing rules to combat deepfakes?
It is worth noting that both the IT Act, 2000 and the IT Rules, 2021 require social media intermediaries to remove deepfake videos or images as soon as feasible; failure to follow these guidelines can result in up to three years in jail and a fine of Rs 1 lakh. Rule 3(1)(b)(vii) requires social media intermediaries to ensure that their users do not host content that impersonates another person, and Rule 3(2)(b) requires such content to be removed within 24 hours of receiving a complaint. Furthermore, the government has stipulated that any such post must be taken down within 36 hours of being published online. The government has also recently issued an advisory to social media intermediaries on identifying misinformation and deepfakes.
Conclusion
It is important to foster the ethical and responsible use of technology. This can only be achieved by creating standards for both creators and users, educating individuals about content limits, and providing accurate information. Internet platforms should also devise techniques to deter the uploading of inappropriate content. By collaborating and ensuring that the technology is used responsibly, we can reduce the negative and misleading impacts of deepfakes.
References
- https://timesofindia.indiatimes.com/life-style/parenting/moments/how-social-media-scandals-like-deepfake-impact-minors-and-students-mental-health/articleshow/105168380.cms?from=mdr
- https://www.aa.com.tr/en/science-technology/deepfake-technology-putting-children-at-risk-say-experts/2980880
- https://wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/