#FactCheck: Viral video claims to show Ahmedabad plane crash but is actually a Hollywood movie clip
Executive Summary:
A viral video claiming to show the crash site of Air India Flight AI-171 in Ahmedabad has misled many people online. The video has been confirmed not to be from India or a recent crash; it was filmed at Universal Studios Hollywood, on a permanent movie set built to depict a plane crash.

Claim:
A video purportedly showing the wreckage of Air India Flight AI-171 after it crashed in Ahmedabad on June 12, 2025, has been circulating on social media. The video shows extensive aircraft wreckage, destroyed homes, and an emergency-like scene, making it appear genuine.

Fact check:
In our research, we took screenshots from the viral video and ran them through a reverse image search, which matched visuals from Universal Studios Hollywood. The video is actually from the well-known “War of the Worlds" set at Universal Studios Hollywood, which features a Boeing 747 crash scene built as a permanent set piece for Steven Spielberg's 2005 film. The set is dressed with fake smoke, scattered debris, and faceless facade structures built to suggest a larger disaster. Multiple older YouTube videos (here, here, and here) show the 747 crash site on the Universal Studios Hollywood tour.
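For readers who want to reproduce this kind of check, the sketch below shows one way the frame-grabbing step can be automated before the frames are submitted to a reverse image search engine. It is a minimal sketch assuming the OpenCV library; the file name viral_clip.mp4 is a placeholder, and the reverse image search itself is still performed manually with the saved frames.

```python
# Minimal sketch of the frame-extraction step that precedes a reverse
# image search. Assumes OpenCV (pip install opencv-python); the input
# file name "viral_clip.mp4" is a hypothetical placeholder.
import cv2

def extract_frames(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Save one frame every few seconds for manual reverse image search."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video or unreadable file
            break
        if index % step == 0:
            path = f"frame_{index:05d}.png"
            cv2.imwrite(path, frame)  # saved frames are uploaded by hand
            saved.append(path)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    print(extract_frames("viral_clip.mp4"))
```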


The Universal Studios Hollywood tour includes a visit to a staged crash site featuring a Boeing 747; footage of this set has unfortunately been misused in viral posts to spread false information.

During our research, we found that the viral video and the Universal Studios tour footage match exactly, which confirms that the video has no connection to the Ahmedabad incident. A side-by-side comparison makes the truth plain.


Conclusion:
The viral video claiming to show the aftermath of the Air India crash in Ahmedabad is entirely false and misleading. It shows a fictional movie set at Universal Studios Hollywood, not a real disaster scene in India. Spreading misinformation like this can create unnecessary panic and confusion in sensitive situations. We urge viewers to trust only verified news sources and to double-check claims before sharing any content online.
- Claim: Massive explosion and debris shown in viral video after Air India crash.
- Claimed On: Social Media
- Fact Check: False and Misleading
Related Blogs

Introduction
In 2019, India got its bill on data protection in the form of the Personal Data Protection Bill, 2019, which focused on digital rights and duties pertaining to data privacy. However, the bill was withdrawn by the Government in mid-2022, and a successor, the Digital Personal Data Protection Bill, 2022, was introduced on 18 November 2022. It was opened for public comments and consultations, and the bill is now expected to be tabled in Parliament in the Monsoon session.
What is DPDP, 2022?
The Digital Personal Data Protection Bill is the latest draft data privacy regulation in India. The bill is essentially focused on data protection obligations for companies, and the key aspect of the Puttaswamy judgement, data privacy as a fundamental right, is upheld within its scope. The bill comes after the nearly 150 recommendations made by the parliamentary committee before the PDP Bill, 2019 was scrapped.
The bill highlights the following key aspects-
- Data Fiduciary- The entity (an individual, company, firm, state, etc.) which decides the purpose and means of processing an individual’s personal data.
- Data Principal- The individual to whom the personal data relates.
- Processing- The entire cycle of operations that can be carried out concerning personal data.
- Gender Neutrality- For the first time in India’s legislative history, “her” and “she” have been used to refer to individuals irrespective of gender.
- Right to Erase Data- Data principals will have the right to demand the erasure and correction of data collected by the data fiduciary.
- Cross-border data transfer- The bill allows cross-border data after an assessment of relevant factors by the Central Government.
- Children’s Rights- The bill guarantees children the right to digital privacy, exercised under the protection of parents/guardians.
- Heavy Penalties- The bill enforces heavy penalties for non-compliance with the provisions, not exceeding Rs 500 crore.
Data Protection Board
The bill lays down provisions for setting up a Data Protection Board. The board will be an independent body acting solely on matters of data privacy, protecting data principals and maintaining compliance by data fiduciaries. It will be headed by a chairperson with the relevant qualifications, assisted by members and various other officials. The board will provide grievance redressal to data principals and can conduct investigations, inquiries, and proceedings, and pass orders, with powers equivalent to those of a civil court. Proceedings will be conducted on the principles of natural justice, and an aggrieved party can appeal to the High Court of appropriate jurisdiction.
Global Comparison
Many countries have data protection laws that regulate the processing of personal data. Some of the notable examples include:
- European Union: The EU’s General Data Protection Regulation (GDPR) is one of the world’s most comprehensive data protection laws. It regulates public and private entities’ processing of personal data and gives individuals a wide range of rights over their personal data.
- United States: The US has several data protection laws that apply to specific sectors or types of data, such as health data (HIPAA) or financial data (Gramm-Leach-Bliley Act). However, there is no comprehensive federal data protection law in the US.
- Japan: Japan’s Act on the Protection of Personal Information (APPI) regulates the handling of personal data by private entities and gives individuals certain rights over their personal data.
- Australia: Australia’s Privacy Act 1988 regulates the handling of personal data by public and private entities and gives individuals certain rights over their personal data.
- Brazil: Brazil’s General Data Protection Law (LGPD) regulates the processing of personal data by public and private entities and gives individuals certain rights over their personal data. It also imposes heavy fines and penalties on entities that violate the provisions of the law.
Overall, while there are some similarities in data protection laws across countries, there are also significant differences in scope, applicability, and enforcement. It is important for organisations to understand the data protection laws that apply to their operations and take appropriate steps to comply with these laws.
Parliamentary Assent
The case concerning WhatsApp's violation of its privacy policy before the Hon’ble Supreme Court gave significant impetus to the advocacy of data privacy as a fundamental right; it was contended that, contrary to what its privacy policy suggested, WhatsApp was sharing its users’ data with Meta. This massive breach of trust could have led to data mismanagement affecting thousands of Indian users. The Hon’ble Supreme Court has taken due note of data privacy and its challenges in India and asked the Government to table the bill in Parliament; it will be tabled for discussion in the Monsoon session. The Supreme Court has also set up a constitution bench to examine the bill’s scope, extent, and application, and to provide judicial oversight. The constitution bench of Justices KM Joseph, Ajay Rastogi, Aniruddha Bose, Hrishikesh Roy and CT Ravikumar has fixed the matter for hearing in August, to take stock of potential changes and amendments to the act after the parliamentary discussion.
Conclusion
India is the world’s largest democracy, and the crucial processes for passing laws and amendments have always been followed by the government and kept in check by the judiciary. Discussion of bills is a vital part of the democratic process, and a bill as important as the Digital Personal Data Protection Bill needs to be discussed and analysed thoroughly in both houses of Parliament to ensure the government passes a sustainable and efficient law.

Introduction
In 2025, the internet is entering a new paradigm, and it is hard not to notice. Thanks to rapid advances in artificial intelligence, the internet as we know it is turning into a treasure trove of hyper-optimised material over which vast bot armies battle. All of that advancement, however, has a price, primarily in human lives. Releasing highly personalised chatbots on a population already struggling with economic stagnation, loneliness, and the ongoing degradation of our planet is not exactly a formula for improved mental health. This is the reality for the roughly 75% of children and teens who have chatted with fictitious characters generated by chatbots. AI chatbots are becoming ever more integrated into our daily lives, assisting us with customer service, entertainment, healthcare, and education. But as the influence of these tools grows, accountability and ethical behaviour become more important. An investigation last year into the internal policies of a major international tech firm exposed alarming gaps: AI chatbots were permitted to create content involving romantic roleplay with children, racially discriminatory reasoning, and spurious medical claims. Although the firm has since amended aspects of these rules, the exposé underscores an underlying global dilemma: how can we regulate AI to maintain child safety, guard against misinformation, and adhere to ethical considerations without suppressing innovation?
The Guidelines and Their Gaps
Tech giants like Meta and Google are often reprimanded for overlooking child safety and the overall rise in mental health issues among children and adolescents. According to reports, Google introduced Gemini AI Kids, a kid-friendly version of its Gemini AI chatbot, which represents a major step in the incorporation of generative artificial intelligence (Gen-AI) into early schooling. Users under the age of thirteen can access this version through supervised accounts on the Family Link app.
AI operates on the premise of data collection and analysis. To safeguard children’s personal information in the digital world, the Digital Personal Data Protection Act, 2023 (DPDP Act) introduces specific safeguards. Under Section 9, before processing the data of children, defined as people under the age of 18, Data Fiduciaries, the entities that decide the purposes and means of processing personal data, must obtain verifiable consent from a parent or legal guardian. Furthermore, the Act expressly forbids processing activities that could endanger a child’s welfare, such as behavioural monitoring and advertising targeted at children. According to judicial interpretations, a child's well-being includes not just physical health but also moral, ethical, and emotional growth.
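As an illustration of how such a gate might look in practice, the sketch below encodes the Section 9 logic described above as a simple pre-processing check. It is a hypothetical sketch, not a compliance implementation: the names (User, GuardianConsentStore, may_process) are invented for this example, and real verifiable-consent mechanisms are considerably more involved.

```python
# Illustrative Section 9-style gate: block processing of a child's data
# unless verifiable guardian consent is on record, and refuse purposes
# the Act forbids outright (tracking, targeted ads). All names here are
# hypothetical, invented for this sketch.
from dataclasses import dataclass

FORBIDDEN_FOR_CHILDREN = {"behavioural_tracking", "targeted_advertising"}

@dataclass
class User:
    user_id: str
    age: int

class GuardianConsentStore:
    """Stub: in practice this would be a verified parental-consent record."""
    def __init__(self):
        self._consents = set()

    def record(self, user_id: str):
        self._consents.add(user_id)

    def has_verified_consent(self, user_id: str) -> bool:
        return user_id in self._consents

def may_process(user: User, purpose: str, consents: GuardianConsentStore) -> bool:
    if user.age >= 18:                     # adults: ordinary consent rules apply
        return True
    if purpose in FORBIDDEN_FOR_CHILDREN:  # barred for children regardless of consent
        return False
    return consents.has_verified_consent(user.user_id)

consents = GuardianConsentStore()
child = User("u123", 14)
print(may_process(child, "targeted_advertising", consents))   # False: forbidden purpose
print(may_process(child, "content_recommendation", consents)) # False: no consent yet
consents.record("u123")
print(may_process(child, "content_recommendation", consents)) # True
```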
While the DPDP Act is a big step in the right direction, there are still important lacunae in how it addresses AI and child safety. Age-gating systems, thorough risk assessment, and limitations specific to AI-driven platforms are absent from the Act, which concentrates largely on consent and harm prevention in data protection. Furthermore, it ignores threats to children’s emotional safety and the long-term psychological effects of interacting with generative AI models. Current safeguards are self-regulatory in nature and dispersed across several laws, such as the Bharatiya Nyaya Sanhita, 2023. These include platform disclaimers, technology-based detection of child sexual abuse material, and measures under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Child Safety and AI
- The Risks of Romantic Roleplay - Enabling chatbots to engage in romantic roleplay with youngsters is among the most concerning discoveries. Even if not explicitly sexual, these interactions can result in grooming, psychological harm, and desensitisation to inappropriate behaviour. Child protection experts hold that illicit or sexual conversations with children in cyberspace are unacceptable, and that permitting even "flirtatious" conversation could normalise unsafe boundaries.
- International Standards and Best Practices - The concept of "safety by design" is central to child online safety guidelines around the world, including UNICEF's Child Online Protection Guidelines and the UK's Online Safety Act. It requires platforms and developers to proactively design out risks rather than react to harms after the fact; any AI guidelines that leave loopholes for child-directed roleplay fall short of this bare-minimum standard.
Misinformation and Racism in AI Outputs
- The Disinformation Dilemma - The regulations also allowed AI to create false narratives as long as they carried disclaimers. For example, chatbots could write articles promoting false health claims or smears against public officials, provided they were labelled as "untrue." While disclaimers might give thin legal cover, they do little to stop the proliferation of misleading information; indeed, misinformation tends to spread widely precisely because users disregard caveat labels in favour of provocative assertions.
- Ethical Lines and Discriminatory Content - It is ethically questionable to allow AI systems to generate racist arguments, even on request. Though scholarly research into prejudice and bias may require such examples, unregulated generation risks normalising damaging stereotypes. Researchers warn that the practice shifts platforms from passive hosts of offensive speech to active generators of discriminatory content, a distinction that matters because it places responsibility squarely on developers and corporations.
The Broader Governance Challenge
- Corporate Responsibility and AI - Material generated by AI is not equivalent to user speech; it is a direct reflection of corporate training data, policy decisions, and system engineering. This demands a higher level of accountability. Although companies can update guidelines after public criticism, the fact that such allowances existed in the first place indicates a lack of strong ethical oversight.
- Regulatory Gaps - Regulatory regimes for AI are currently in disarray. The EU AI Act, the OECD AI Principles, and national policies all emphasise human rights, transparency, and accountability, but few specify clear rules for content risks such as child roleplay or hateful narratives. This absence of harmonised international rules leaves companies operating in the shadows, setting their own limits until they are challenged.
An active way forward would include:
- Express Child Protection Requirements: AI systems must categorically prohibit interactions with children involving flirting or romance.
- Misinformation Protections: Generative AI must not be allowed to produce knowingly false material, regardless of disclaimers.
- Bias Reduction: Developers need to proactively train systems against generating discriminatory narratives, not merely flag them as optional outputs.
- Independent Regulation: External audit and ethics review boards can supply transparency and accountability independent of internal company regulations.
Conclusion
These contentious guidelines are more than the internal folly of a single firm; they point to a deeper systemic issue in AI governance. The stakes rise as generative AI becomes ever more integrated into politics, healthcare, education, and social interaction. Racism, false information, and inadequate child safety measures are severe problems that require swift resolution. Corporate self-regulation is only one part of the path forward; others include multi-stakeholder participation, stronger global frameworks, and ethical standards. In the end, trust in artificial intelligence will rest on its ability to preserve the truth, protect the vulnerable, and reflect universal human values, rather than just corporate interests.
References
- https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
- https://www.lakshmisri.com/insights/articles/ai-for-children/#
- https://the420.in/meta-ai-chatbot-guidelines-child-safety-racism-misinformation/
- https://www.unicef.org/documents/guidelines-industry-online-child-protection
- https://www.oecd.org/en/topics/sub-issues/ai-principles.html
- https://artificialintelligenceact.eu/

Introduction
AI has penetrated most industries, and telecom is no exception. According to a survey by Nvidia, enhancing customer experiences is the biggest AI opportunity for the telecom industry, with 35% of respondents identifying customer experience as their key AI success story. The study also found that nearly 90% of telecom companies use AI, with 48% in the piloting phase and 41% actively deploying it. Most telecom service providers (53%) agree or strongly agree that adopting AI would provide a competitive advantage. AI in telecom is primed to be the next big thing, and Google has not ignored the opportunity: it is reported that Google will soon add “AI Replies” to the Phone app’s call screening feature.
How Does The ‘AI Call Screener’ Work?
Google has created a helpful tool to address the challenge of responding to calls amid the busy lives people lead nowadays. Google Pixel smartphones are now fitted with AI-powered calling tools that can help with call screening, note-taking during important calls, filtering and declining spam, and, most importantly, ending the frustration of being on hold.
In the official Google Phone app, users can respond to a caller through “new AI-powered smart replies”. While “contextual call screen replies” are already part of the app, the new feature means users do not have to pick up the call themselves.
- With this new feature, Google Assistant will be able to respond to the call with a customised audio response.
- The Google Assistant, responding to the call, will ask the caller’s name and the purpose of the call. If they are calling about an appointment, for instance, Google will show the user suggested responses specific to that call, such as ‘Confirm’ or ‘Cancel appointment’.
Google will build on the call screening feature by using a “multi-step, multi-turn conversational AI” to suggest replies better suited to the nature of the call. Google’s Gemini Nano model is set to power the feature, enabling it to handle phone calls and messages even when the phone is locked, and to respond even when the caller is silent.
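To make the multi-turn flow concrete, the sketch below mimics a screening loop that classifies each caller turn and surfaces contextual reply chips like “Confirm” or “Cancel appointment”, as described above. It is purely illustrative: a keyword matcher stands in for the on-device Gemini Nano model, and none of the names reflect Google’s actual API.

```python
# Toy sketch of a multi-turn call-screening loop. A keyword matcher
# stands in for the on-device model; the suggested replies mimic the
# "Confirm" / "Cancel appointment" chips described in the article.
# All names are illustrative, not Google's real API.
SUGGESTIONS = {
    "appointment": ["Confirm", "Cancel appointment", "Ask to reschedule"],
    "delivery": ["Leave at the door", "Call back later"],
    "unknown": ["Ask for more details", "Decline politely"],
}

def classify_purpose(transcript: str) -> str:
    """Crude stand-in for the model's intent classification."""
    text = transcript.lower()
    if "appointment" in text:
        return "appointment"
    if "delivery" in text or "package" in text:
        return "delivery"
    return "unknown"

def screen_call(turns: list) -> list:
    """Return updated reply suggestions after each caller turn."""
    suggestions = SUGGESTIONS["unknown"]
    for turn in turns:
        purpose = classify_purpose(turn)     # re-classify on every turn
        suggestions = SUGGESTIONS[purpose]   # refresh the suggested chips
        print(f"Caller: {turn!r} -> suggest {suggestions}")
    return suggestions

screen_call([
    "Hi, this is the clinic calling",
    "It's about your appointment tomorrow at 10",
])
```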
Benefits of AI-Powered Call Screening
This AI-powered call screening feature offers multiple benefits:
- The AI feature will enhance user convenience by reducing the disruptions caused by spam calls. This will, in turn, increase productivity.
- It will increase call privacy and security by filtering high-risk calls, thereby protecting users from attempts of fraud and cyber crimes such as phishing.
- The new feature can potentially increase efficiency in business communications by screening for important calls, delegating routine inquiries and optimising customer service.
Key Policy Considerations
Adhering to transparent, ethical, and inclusive policies while anticipating regulatory changes can establish Google as a responsible innovator in AI call management. Some key considerations for AI Call Screener from a policy perspective are:
- The AI call screener will process and transcribe sensitive voice data; the data handling policies around it therefore need to be transparent to reassure users of compliance with the various applicable laws.
- AI remains at a crossroads over ethical use and bias mitigation. The underlying algorithms will need to be designed to avoid bias and to reflect inclusivity in their understanding of language.
- The data the screener uses is further subject to global and regional data privacy regulations such as the GDPR, the DPDP Act, and the CCPA, which require consent to record or transcribe calls and centre user rights.
Conclusion: A Balanced Approach to AI in Telecommunications
Google’s AI Call Screener offers a glimpse into the future of automated call management, reshaping customer service and telemarketing by streamlining interactions and reducing spam. As the technology evolves, businesses may adopt similar tools, balancing customer engagement with fewer unwanted calls. AI-driven screening will also affect call centres, shifting roles toward complex, human-centred interactions while automation handles routine calls, with potential knock-on effects on support and managerial roles. Ultimately, as AI call management grows, responsible design and transparency will be needed to ensure a seamless, beneficial experience for all users.
References
- https://resources.nvidia.com/en-us-ai-in-telco/state-of-ai-in-telco-2024-report
- https://store.google.com/intl/en/ideas/articles/pixel-call-assist-phone-screen/
- https://www.thehindu.com/sci-tech/technology/google-working-on-ai-replies-for-call-screening-feature/article68844973.ece
- https://indianexpress.com/article/technology/artificial-intelligence/google-ai-replies-call-screening-9659612/