#FactCheck: A digitally altered video of actor Sebastian Stan shows him changing a ‘Tell Modi’ poster to one that reads ‘I Told Modi’ on a display panel.
Executive Summary:
A widely circulated video claiming to show a poster with the words "I Told Modi" has gone viral, falsely connecting it to the April 2025 Pahalgam attack, in which terrorists killed 26 civilians. The altered Marvel Studios clip is being shared as a mockery of Operation Sindoor, the counterterrorism operation India launched in response to the attack. By spreading misleading propaganda and drawing attention away from real events, this misinformation underscores how crucial it is to verify information before sharing it online.
Claim:
A man can be seen changing a poster that says "Tell Modi" to one that says "I Told Modi" in a widely shared viral video. The video is claimed to reference Operation Sindoor, the operation India launched in response to the Pahalgam terrorist attack of April 22, 2025, in which militants linked to The Resistance Front (TRF) killed 26 civilians.


Fact Check:
On further research, we found the original post on Marvel Studios' official X handle, confirming that the circulating video has been altered using AI and does not reflect the authentic content.

We ran the video through Hive Moderation's AI-detection tool, which determined that it had been modified with AI-generated content and presents false or misleading information that does not reflect real events.

Furthermore, we found a Hindustan Times article discussing the mysterious reveal involving Hollywood actor Sebastian Stan.

Conclusion:
The claim that the "I Told Modi" poster was part of a real public display is false. The video is manipulated footage from a Marvel film, and its text has been digitally altered to deceive viewers. The content has been identified as false information and should be disregarded.
- Claim: A viral video shows a man changing a "Tell Modi" poster to one reading "I Told Modi", mocking Operation Sindoor.
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction
Recently, in April 2025, security researchers at Oligo Security exposed a substantial and wide-ranging threat affecting Apple's AirPlay protocol and its use via the third-party Software Development Kit (SDK). According to the research, the newly discovered set of vulnerabilities, titled "AirBorne", could enable remote code execution, access-control bypass, and leakage of private data across a wide range of Apple and third-party AirPlay-compatible devices. With well over 2.35 billion active Apple devices globally and tens of millions of third-party products that incorporate the AirPlay SDK, the scope of the problem is enormous. These wireless vulnerabilities pose not only a technical threat but, increasingly, an enterprise- and consumer-level security concern.
Understanding AirBorne: What’s at Stake?
AirBorne is the name given to a set of 23 vulnerabilities identified in the AirPlay communication protocol and the related SDK used by third-party vendors; seventeen have received official CVE designations. The most severe among them permit Remote Code Execution (RCE) with zero or limited user interaction, giving attackers the ability to penetrate home networks, business environments, and even cars equipped with CarPlay.
Types of Vulnerabilities Identified
AirBorne vulnerabilities support a range of attack types, including:
- Zero-Click and One-Click RCE
- Access Control List (ACL) bypass
- User interaction bypass
- Local arbitrary file read
- Sensitive data disclosure
- Man-in-the-middle (MITM) attacks
- Denial of Service (DoS)
Each vulnerability can be used individually or chained together to escalate access and broaden the attack surface.
Remote Code Execution (RCE): Key Attack Scenarios
- macOS – Zero-Click RCE (CVE-2025-24252 & CVE-2025-24206): These weaknesses let attackers run code on a macOS system without any user action, provided the AirPlay receiver is enabled and configured to accept connections from anyone on the same network. The threat of wormable malware propagating via corporate or public Wi-Fi networks is especially concerning.
- macOS – One-Click RCE (CVE-2025-24271 & CVE-2025-24137): If AirPlay is set to "Current User", attackers can exploit these CVEs to deploy malicious code with a single click by the user, raising the level of threat on shared office or home networks.
- AirPlay SDK Devices – Zero-Click RCE (CVE-2025-24132): Third-party speakers and receivers built on the AirPlay SDK are particularly susceptible, as exploitation requires no user interaction. Once a device is compromised, attackers can play unauthorised media, switch on microphones, or monitor intimate spaces.
- CarPlay Devices – RCE Over Wi-Fi, Bluetooth, or USB: CVE-2025-24132 also affects CarPlay-enabled systems. Under certain circumstances, nearby attackers can take advantage of predictable Wi-Fi credentials, intercept Bluetooth PINs, or use USB connections to take over dashboard features, which may distract drivers or allow eavesdropping on in-car conversations.
Other Exploits Beyond RCE
AirBorne also opens the door for:
- Sensitive Information Disclosure: Exposing private logs or user metadata over local networks (CVE-2025-24270).
- Local Arbitrary File Access: Letting attackers read restricted files on a device (CVE-2025-24270 group).
- DoS Attacks: Exploiting NULL pointer dereferences or malformed data to crash processes such as the AirPlay receiver or WindowServer, forcing user logouts or causing system instability (CVE-2025-24129, CVE-2025-24177, etc.).
How the Attack Works: A Technical Breakdown
AirPlay communicates on port 7000 over HTTP and RTSP, with payloads typically encoded in Apple's property list (plist) format. The exploits stem from improper handling of these plists, particularly where type checks are skipped or untrusted data is assumed to be well-formed. For instance, CVE-2025-24129 illustrates how a malformed plist can trigger type confusion, leading to a crash or to code execution depending on the receiver's configuration.
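To make the plist-handling issue concrete, here is a minimal, illustrative sketch in Python. It does not reproduce Apple's AirPlay code; the "streams" key and the rejection logic are assumptions chosen purely to show how a field of an unexpected type in an attacker-supplied plist can cause type-confusion-style failures when no type check is performed, and how a defensive check avoids them.

```python
# plist_type_check.py: illustrative sketch only, not Apple's AirPlay implementation.
# Assumption: the receiver expects a dictionary under a "streams" key; a crafted
# request supplies a string instead, the kind of mismatch behind type-confusion bugs.
import plistlib

UNTRUSTED_REQUEST = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>streams</key>
    <string>not-a-dict</string>
</dict>
</plist>"""

def handle_request(raw: bytes) -> None:
    payload = plistlib.loads(raw)      # parse the attacker-controlled plist
    streams = payload.get("streams")
    # Defensive type check: code that assumes a dict and immediately indexes into
    # it (e.g. streams["type"]) would fail in unpredictable ways on crafted input.
    if not isinstance(streams, dict):
        raise ValueError(f"unexpected type for 'streams': {type(streams).__name__}")
    # ... continue processing a well-formed request here ...

if __name__ == "__main__":
    try:
        handle_request(UNTRUSTED_REQUEST)
    except ValueError as err:
        print("rejected malformed request:", err)
```

The point of the sketch is the isinstance check: validating the type of every field parsed from untrusted input is the kind of safeguard whose absence the AirBorne research highlights.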
An attacker must be on the same Wi-Fi network as the targeted device, whether through a compromised laptop, shared public wireless, or a poorly secured corporate network. Once in place, the attacker can use the AirBorne bugs to hijack AirPlay-enabled devices and then deploy malicious code to spy on users, maintain long-term access to the network, or spread control to other devices, potentially building a botnet or stealing critical data.
The Espionage Angle
Most third-party AirPlay-compatible devices, including smart speakers, contain built-in microphones. In theory, that leaves the door open for such devices to be turned into eavesdropping tools. While Oligo did not demonstrate a working espionage exploit, the possibility underscores the gravity of the situation.
The CarPlay Risk Factor
Beyond smart home appliances, Oligo also identified AirBorne vulnerabilities affecting Apple CarPlay. When exploited, these flaws may let attackers take over a car's entertainment system. Fortunately, such attacks require direct pairing over USB or Bluetooth and are therefore far less practical. Even so, they illustrate how connected components remain at risk in settings ranging from homes to vehicles.
How to Protect Yourself and Your Organisation
- Immediate Actions:
- Update Devices: Ensure all Apple devices and third-party gadgets are upgraded to the latest software version.
- Disable AirPlay Receiver: If AirPlay is not in use, disable it in system settings.
- Restrict AirPlay Access: Use firewalls to block port 7000 from untrusted IPs.
- Set AirPlay to “Current User” to limit network-based attacks.
- Organisational Recommendations:
- Communicate the patch urgency to employees and stakeholders.
- Inventory all AirPlay-enabled hardware, including in meeting rooms and vehicles (see the network-scan sketch after this list).
- Isolate vulnerable devices on segmented networks until updated.
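As a starting point for that inventory, the sketch below checks which hosts on a local subnet accept connections on TCP port 7000, the port the AirPlay receiver service typically uses. The subnet, the timeout, and the assumption that every AirPlay receiver answers on port 7000 are simplifications for illustration; adapt them to your own environment and only scan networks you administer.

```python
# airplay_inventory.py: minimal sketch for finding hosts that expose TCP port 7000
# (commonly the AirPlay receiver service) on a network you administer.
import socket
from ipaddress import ip_network

SUBNET = "192.168.1.0/24"   # assumed local subnet; adjust to match your network
AIRPLAY_PORT = 7000          # port typically used by the AirPlay receiver
TIMEOUT_SECONDS = 0.5        # per-host connection timeout

def airplay_port_open(host: str) -> bool:
    """Return True if the host accepts a TCP connection on the AirPlay port."""
    try:
        with socket.create_connection((host, AIRPLAY_PORT), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for address in ip_network(SUBNET).hosts():
        if airplay_port_open(str(address)):
            print(f"{address}: port {AIRPLAY_PORT} open, possible AirPlay receiver")
```

Devices surfaced this way can then be checked against the patch list, moved to a segmented network, or have AirPlay disabled as described above.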
Conclusion
The AirBorne vulnerabilities illustrate that even mature ecosystems such as Apple's are not immune to foundational security weaknesses. The extensive deployment of AirPlay across devices, industries, and ecosystems makes these vulnerabilities a systemic threat. Oligo's discovery has catalysed an immediate response from Apple, but since third-party devices remain vulnerable, responsibility falls to users and organisations to install patches, implement robust configurations, and compartmentalise possible attack surfaces. Proactive cybersecurity hygiene, network segmentation, and timely patching are the strongest defences to prevent these kinds of wormable, scalable attacks from becoming large-scale breaches.
References
- https://www.oligo.security/blog/airborne
- https://www.wired.com/story/airborne-airplay-flaws/
- https://thehackernews.com/2025/05/wormable-airplay-flaws-enable-zero.html
- https://www.securityweek.com/airplay-vulnerabilities-expose-apple-devices-to-zero-click-takeover/
- https://www.pcmag.com/news/airborne-flaw-exposes-airplay-devices-to-hacking-how-to-protect-yourself
- https://cyberguy.com/security/hackers-breaking-into-apple-devices-through-airplay/

Introduction
Assisted Reproductive Technology (“ART”) refers to a diverse set of medical procedures designed to aid individuals or couples in achieving pregnancy when conventional methods are unsuccessful. This umbrella term encompasses various fertility treatments, including in vitro fertilization (IVF), intrauterine insemination (IUI), and gamete and embryo manipulation. ART procedures involve the manipulation of both male and female reproductive components to facilitate conception.
The dynamic landscape of data flows within the healthcare sector, notably in the realm of ART, demands a nuanced understanding of the complex interplay between privacy regulations and medical practices. In this context, the Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011, play a pivotal role, designating health information as "sensitive personal data or information" and underscoring the importance of safeguarding individuals' privacy. This sensitivity is particularly pronounced in the ART sector, where an array of personal data, ranging from medical records to genetic information, is collected and processed. The recent Assisted Reproductive Technology (Regulation) Act, 2021, in conjunction with the Digital Personal Data Protection Act, 2023, establishes a framework for the regulation of ART clinics and banks, presenting a layered approach to data protection.
A note on data generated by ART
Data flows in any sector are scarcely uniform and often resist neat classification, so mapping and identifying data and its types becomes pivotal. Most data flows in the healthcare sector are believed to be highly sensitive and personal in nature and may severely compromise an individual's privacy and safety if breached. The Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011 (“SPDI Rules”) categorise any information pertaining to physical, physiological or mental conditions, or to medical records and history, as “sensitive personal data or information”; this definition is broad enough to encompass any data collected by an ART facility or its equipment. This includes information collected during the screening of patients, pertaining to ovulation and menstrual cycles, follicle and sperm counts, ultrasound results, blood work, etc., as well as pre-implantation genetic testing on embryos to detect genetic abnormalities.
But data flows extend beyond mere medical procedures and technology. Health data also covers the medical procedures undertaken, the amounts of medicines and drugs administered during any procedure, their resultant side effects, recovery, and so on. Processing of such information may, in turn, generate further personal data points relating to an individual’s political affiliations, race, ethnicity, or genetic data such as biometrics and DNA, since different ethnicities and races are seen to react differently to the same or similar medication and to have different propensities to genetic diseases. Further, data is collected not only by professionals but also by intelligent equipment, such as AI systems, that a facility may employ to render its services. Additionally, dissemination of information under exceptional circumstances (e.g. a medical emergency) also affects how data may be classified. Considerations are further nuanced when the fundamental right to identity of a child conceived and born via ART may conflict with a donor’s fundamental right to privacy and to remain anonymous.
Intersection of Privacy laws and ART laws:
In India, ART is regulated by the Assisted Reproductive Technology (Regulation) Act, 2021 (“ART Act”). Through it, the Union aims to regulate and supervise assisted reproductive technology clinics and ART banks, prevent misuse, and ensure the safe and ethical practice of assisted reproductive technology services. Read with the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and other ancillary guidelines, the two legislations provide a framework for regulating the digital privacy of health-based applications.
The ART Act establishes a National Assisted Reproductive Technology and Surrogacy Registry (“National Registry”), which acts as a central database for all clinics and banks and the nature of their services. The Act also establishes a National Assisted Reproductive Technology and Surrogacy Board (“National Board”) under the Surrogacy Act to monitor the implementation of the Act and advise the central government on policy matters. The National Board also supervises the functioning of the National Registry, liaises with State Boards, and curates a code of conduct for professionals working in ART clinics and banks. Under the DPDP Act, these bodies (i.e. the National Board, State Boards, and ART clinics and banks) are most likely classified as data fiduciaries (primarily clinics and banks), data processors (which may include the National Board and State Boards), or an amalgamation of both (such as any appropriate authority established under the ART Act to investigate complaints or suspend or cancel the registration of clinics), depending on the nature of work undertaken by them. If so classified, the duties and liabilities of data fiduciaries and processors would necessarily apply to these bodies. As a result, all of them would have to adopt Privacy Enhancing Technologies (PETs) and other organisational measures to ensure compliance with the privacy laws in place. This is one of the most critical considerations for any ART facility, since any data it collects would be sensitive personal data pertaining to health, regulated by the Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011 (“SPDI Rules 2011”), which provide for how sensitive personal data or information is to be collected, handled and processed.
The ART Act also independently provides for the duties of ART clinics and banks in the country. ART clinics and banks are required to inform the commissioning couple or woman of all procedures undertaken and of all costs, risks, advantages, and side effects of the selected procedure. The Act mandates that information collected by such clinics and banks not be disclosed to anyone except to the database maintained by the National Registry, in cases of medical emergency, or on the order of a court. Data collected by clinics and banks (including details of donor oocytes, sperm, or embryos used or unused) must be detailed and submitted to the National Registry online. ART banks are also required to collect donors' personal information, including name, Aadhaar number, address, and other details. By mandating online submission, the ART Act is harmonised with the DPDP Act, which regulates all digital personal data and emphasises free, informed consent.
Conclusion
With the increase in active opt-ins for ART, data privacy becomes a vital consideration for all healthcare facilities and professionals. Safeguards are required not only at the corporate level but also at the governmental level. Notably, in the 262nd Session of the Rajya Sabha, the Ministry of Electronics and Information Technology reported 165 data breach incidents involving citizen data from the Central Identities Data Repository between January 2018 and October 2023, despite having publicly denied any breach. This puts into question the safety and integrity of data that may be submitted to the National Registry database, especially given the type of data (both personal and sensitive information) it aims to collate. At present, the ART Act is well supported by the DPDP Act; however, further judicial and legislative deliberation is required to effectively regulate and balance the interests of all stakeholders.
References
- The Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011
- Caring for Intimate Data in Fertility Technologies https://dl.acm.org/doi/pdf/10.1145/3411764.3445132
- Digital Personal Data Protection Act, 2023
- https://www.wolterskluwer.com/en/expert-insights/pharmacogenomics-and-race-can-heritage-affect-drug-disposition

A photo featuring Bollywood actor Abhishek Bachchan and actress Aishwarya Rai is being widely shared on social media. In the image, the Kedarnath Temple is clearly visible in the background. Users are claiming that the couple recently visited the Kedarnath shrine for darshan.
CyberPeace Foundation's research found the viral claim to be false: the image of Abhishek Bachchan and Aishwarya Rai is not real but AI-generated, and it is being misleadingly shared as a genuine photograph.
Claim
On January 14, 2026, a user on X (formerly Twitter) shared the viral image with a caption suggesting that all rumours had ended and that the couple had restarted their life together. The post further claimed that both actors were seen smiling after a long time, implying that the image was taken during their visit to Kedarnath Temple.
The post has since been widely circulated across social media platforms.

Fact Check:
To verify the claim, we first conducted a keyword search on Google related to Abhishek Bachchan, Aishwarya Rai, and a Kedarnath visit. However, we did not find any credible media reports confirming such a visit.
On closely examining the viral image, we noticed several visual inconsistencies that raised suspicion that it had been artificially generated. To confirm this, we scanned the image with the AI detection tool Sightengine, whose analysis found the image to be 84 percent likely to be AI-generated.

Additionally, we scanned the same image with another AI detection tool, Hive Moderation. The results showed an even stronger indication, rating the image as 99 percent likely to be AI-generated.

Conclusion
Our research confirms that the viral image showing Abhishek Bachchan and Aishwarya Rai at Kedarnath Temple is not authentic. The picture is AI-generated and is being falsely shared on social media to mislead users.