#FactCheck: Viral AI-generated image passed off as Air India Flight AI-171 in flames after the crash
Executive Summary:
A dramatic image circulating online, showing a Boeing 787 of Air India engulfed in flames after crashing into a building in Ahmedabad, is not a genuine photograph from the incident. Our research has confirmed it was created using artificial intelligence.

Claim:
Social media posts and forwarded messages allege that the image shows the actual crash of Air India Flight AI‑171 near Ahmedabad airport on June 12, 2025.

Fact Check:
In our research to validate the authenticity of the viral image, we conducted a reverse image search and analyzed it using AI-detection tools like Hive Moderation. The image showed clear signs of manipulation, distorted details, and inconsistent lighting. Hive Moderation flagged it as “Likely AI-generated”, confirming it was synthetically created and not a real photograph.

In contrast, verified visuals and information about the Air India Flight AI-171 crash have been published by credible news agencies such as The Indian Express and Hindustan Times and confirmed by aviation authorities. Authentic reports include on-ground video footage and official statements, none of which feature the viral image. This confirms that the circulating photo is unrelated to the actual incident.

Conclusion:
The viral photograph is a fabrication, created by AI, not a real depiction of the Ahmedabad crash. It does not represent factual visuals from the tragedy. It’s essential to rely on verified images from credible news agencies and official investigation reports when discussing such sensitive events.
- Claim: An Air India Boeing aircraft crashed into a building near Ahmedabad airport
- Claimed On: Social Media
- Fact Check: False and Misleading
Introduction
The Senate bill introduced on 19 March 2024 in the United States would require online platforms to obtain consumer consent before using their data for Artificial Intelligence (AI) model training. If a company fails to obtain this consent, it would be considered a deceptive or unfair practice, resulting in enforcement action from the Federal Trade Commission (FTC) under the AI Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) bill. The legislation aims to strengthen consumer protection and give Americans the power to determine how their data is used by online platforms.
The proposed bill also seeks to create standards for disclosures, including requiring platforms to provide instructions to consumers on how they can affirm or rescind their consent. The option to grant or revoke consent should be made available at any time through an accessible and easily navigable mechanism, and the selection to withhold or reverse consent must be at least as prominent as the option to accept while taking the same number of steps or fewer as the option to accept.
The AI Consent bill directs the FTC to implement regulations to improve transparency by requiring companies to disclose when the data of individuals will be used to train AI and receive consumer opt-in to this use. The bill also commissions an FTC report on the technical feasibility of de-identifying data, given the rapid advancements in AI technologies, evaluating potential measures companies could take to effectively de-identify user data.
The definition of ‘Artificial Intelligence System’ under the proposed bill
ARTIFICIAL INTELLIGENCE SYSTEM.—The term “artificial intelligence system” means a machine-based system that—
- 1. Is capable of influencing the environment by producing an output, including predictions, recommendations, or decisions, for a given set of objectives; and
- 2. Uses machine- or human-based data and inputs to—
(i) Perceive real or virtual environments;
(ii) Abstract these perceptions into models through analysis in an automated manner (such as by using machine learning) or manually; and
(iii) Use model inference to formulate options for outcomes.
Importance of the proposed AI Consent Bill USA
1. Consumer Data Protection: The AI Consent bill primarily upholds the privacy rights of the individual. By requiring consumer consent before data is used for AI training, the bill aims to empower individuals with genuine autonomy over the use of their personal information. The scope of the bill aligns with the greater objective of data protection laws globally, stressing the criticality of privacy rights and autonomy.
2. Prohibition Measures: The proposed bill intends to prohibit covered entities from exploiting consumer data for training purposes without consent. This prohibition extends to the sale of data, its transfer to third parties, and its usage. Such measures aim to prevent data misuse and the exploitation of personal information, ensuring that companies cannot leverage consumer information for AI development without a transparent consent process.
3. Transparent Consent Procedures: The bill calls for clear and conspicuous disclosures to be provided by the companies for the intended use of consumer data for AI training. The entities must provide a comprehensive explanation of data processing and its implications for consumers. The transparency fostered by the proposed bill allows consumers to make sound decisions about their data and its management, hence nurturing a sense of accountability and trust in data-driven practices.
4. Regulatory Compliance: The bill's guidelines impose strict requirements for procuring an individual's consent. Entities must follow a prescribed mechanism for consent solicitation, making the process streamlined and accessible for consumers. Moreover, the acquisition of consent must be independent, i.e. not bundled with terms of service or other contractual obligations. These provisions underscore the importance of active and informed consent in data processing activities, reinforcing the principles of data protection and privacy.
5. Enforcement and Oversight: To enforce compliance, the bill establishes robust mechanisms for oversight and enforcement. Violations of the prescribed regulations are treated as unfair or deceptive acts under its provisions, empowering regulatory bodies like the FTC to ensure adherence to data privacy standards. By holding covered entities accountable for compliance, the bill fosters a culture of accountability and responsibility in data handling practices, thereby enhancing consumer trust and confidence in the digital ecosystem.
Importance of Data Anonymisation
Data anonymisation is the process of concealing or removing personal or private information from a dataset to safeguard the privacy of the individuals associated with it. It is a form of information sanitisation in which anonymisation techniques encrypt or delete personally identifying information to protect the privacy of the data subject. Anonymisation reduces the danger of unintentional exposure when information is transferred across borders and allows for easier assessment and analytics afterwards. When personal information is compromised, the organisation suffers not just a security breach but also a breach of trust with its clients or consumers. Such breaches can result in a wide range of privacy violations, including breach of contract, discrimination, and identity theft.
The AI consent bill asks the FTC to study data de-identification methods. Data anonymisation is critical to improving privacy protection since it reduces the danger of re-identification and unauthorised access to personal information. Regulatory bodies can increase privacy safeguards and reduce privacy risks connected with data processing operations by investigating and perhaps implementing anonymisation procedures.
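To make the de-identification idea concrete, here is a minimal, hypothetical Python sketch of two common techniques a feasibility study like the one the bill commissions might evaluate: pseudonymisation (replacing an identifier with a salted one-way hash) and generalisation (coarsening a quasi-identifier such as age into a band). The field names, salt handling, and record layout are illustrative assumptions, not anything prescribed by the bill.

```python
import hashlib

# Illustrative only: "user_id" and "email" stand in for direct identifiers,
# "city" and "age" for quasi-identifiers. Real schemas will differ.
SALT = b"rotate-me-per-dataset"  # secret salt; rotating it limits linkability


def pseudonymise(value: str) -> str:
    """Replace an identifier with a truncated, salted one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]


def anonymise_record(record: dict) -> dict:
    """Drop direct identifiers, tokenise the key, generalise the age."""
    return {
        "user_ref": pseudonymise(record["user_id"]),   # one-way token
        # direct identifiers such as "email" are dropped entirely
        "city": record["city"],
        "age_band": f"{(record['age'] // 10) * 10}s",  # e.g. 34 -> "30s"
    }


raw = {"user_id": "u-1001", "email": "a@example.com", "city": "Pune", "age": 34}
print(anonymise_record(raw))
```

Note that salted hashing is strictly pseudonymisation rather than full anonymisation: anyone holding the salt can still link records, which is precisely the re-identification risk the bill asks the FTC to study.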
The AI Consent bill emphasises de-identification methods. Similarly, India's DPDP Act 2023, while not specifically addressing data de-identification, emphasises the principle of data minimisation, which points to a potential future focus on data anonymisation processes and techniques in India.
Conclusion
The proposed AI Consent bill in the US represents a significant step towards enhancing consumer privacy rights and data protection in the context of AI development. Through its stringent prohibitions, transparent consent procedures, regulatory compliance measures, and robust enforcement mechanisms, the bill strives to strike a balance between fostering innovation in AI technologies and safeguarding the privacy and autonomy of individuals.
References:
- https://fedscoop.com/consumer-data-consent-training-ai-models-senate-bill/#:~:text=%E2%80%9CThe%20AI%20CONSENT%20Act%20gives,Welch%20said%20in%20a%20statement
- https://www.dataguidance.com/news/usa-bill-ai-consent-act-introduced-house#:~:text=USA%3A%20Bill%20for%20the%20AI%20Consent%20Act%20introduced%20to%20House%20of%20Representatives,-ConsentPrivacy%20Law&text=On%20March%2019%2C%202024%2C%20US,the%20U.S.%20House%20of%20Representatives
- https://datenrecht.ch/en/usa-ai-consent-act-vorgeschlagen/
- https://www.lujan.senate.gov/newsroom/press-releases/lujan-welch-introduce-billto-require-online-platforms-receive-consumers-consent-before-using-their-personal-data-to-train-ai-models/

Introduction
On March 12, 2024, the Ministry of Corporate Affairs (MCA) proposed the Bill to curb anti-competitive practices of tech giants through ex-ante regulation. The Draft Digital Competition Bill is to apply to ‘Core Digital Services,’ with the Central Government having the authority to update the list periodically. The proposed list in the Bill encompasses online search engines, online social networking services, video-sharing platforms, interpersonal communications services, operating systems, web browsers, cloud services, advertising services, and online intermediation services.
The primary highlight of the Digital Competition Law Report, prepared by the Committee on Digital Competition Law and presented to Parliament in the second week of March 2024, is its recommendation to introduce new legislation called the ‘Digital Competition Act,’ intended to strike a balance between certainty and flexibility. The report identified ten anti-competitive practices relevant to digital enterprises in India, including anti-steering, platform neutrality/self-preferencing, bundling and tying, data usage (use of non-public data), pricing/deep discounting, exclusive tie-ups, search and ranking preferencing, restricting third-party applications, and advertising policies.
Key Take-Aways: Digital Competition Bill, 2024
- The Bill lays down qualitative and quantitative criteria for designating an enterprise as a Systemically Significant Digital Enterprise (SSDE) if it meets any of the specified thresholds.
- Financial thresholds apply in each of the immediately preceding three financial years, such as turnover in India, global turnover, gross merchandise value in India, or global market capitalisation.
- User thresholds apply in each of the immediately preceding three financial years in India, such as the core digital service provided by the enterprise having at least 1 crore end users or at least 10,000 business users.
- The Commission may make the designation based on other factors such as the size and resources of an enterprise, number of business or end users, market structure and size, scale and scope of activities of an enterprise and any other relevant factor.
- An enterprise has 90 days to notify the CCI of its qualification as an SSDE. It must also notify the Commission of other enterprises within its group that are directly or indirectly involved in providing the Core Digital Services, as Associate Digital Enterprises (ADEs); the designation lasts for three years.
- It prescribes obligations for SSDEs and their ADEs upon designation. The enterprise must comply with certain obligations regarding Core Digital Services, and non-compliance with the same shall result in penalties. Enterprises must not directly or indirectly prevent or restrict business users or end users from raising any issue of non-compliance with the enterprise’s obligations under the Act.
- Avoidance of favouritism in product offerings by SSDE, its related parties, or third parties for the manufacture and sale of products or provision of services over those offered by third-party business users on the Core Digital Service in any manner.
- When trying a suit, the Commission will have the same powers as are vested in a civil court under the Code of Civil Procedure, 1908.
- Penalty for non-compliance without reasonable cause may extend to Rs 1 lakh for each day the non-compliance continues (up to a maximum of Rs 10 crore). Continued non-compliance may attract imprisonment for a term that may extend to three years, a fine that may extend to Rs 25 crore, or both. The Commission may also impose a penalty on an enterprise (not exceeding 1% of its global turnover) if it provides incorrect, incomplete, or misleading information, or fails to provide information.
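The designation criteria summarised above can be sketched as a simple quantitative check. This is a hypothetical illustration: the user thresholds (1 crore end users, 10,000 business users) come from the summary above, while the financial figures and the exact AND/OR combination of criteria are placeholder assumptions, not the Bill's text.

```python
from dataclasses import dataclass

CRORE = 10_000_000


@dataclass
class YearMetrics:
    """Metrics for one financial year (field names are illustrative)."""
    turnover_india_cr: float       # turnover in India, in Rs crore
    global_turnover_usd_bn: float  # global turnover, in USD billion
    end_users_india: int
    business_users_india: int


# Placeholder financial thresholds -- assumptions for illustration only.
TURNOVER_INDIA_CR = 4_000
GLOBAL_TURNOVER_USD_BN = 30


def is_ssde(last_three_years: list) -> bool:
    """Sketch of the SSDE test: one financial threshold and one user
    threshold met in EACH of the preceding three financial years."""
    if len(last_three_years) != 3:
        raise ValueError("need exactly the three preceding financial years")

    def financial_ok(y: YearMetrics) -> bool:
        return (y.turnover_india_cr >= TURNOVER_INDIA_CR
                or y.global_turnover_usd_bn >= GLOBAL_TURNOVER_USD_BN)

    def users_ok(y: YearMetrics) -> bool:
        return (y.end_users_india >= 1 * CRORE
                or y.business_users_india >= 10_000)

    return all(financial_ok(y) and users_ok(y) for y in last_three_years)
```

Note that even under the Bill, a quantitative check like this would only be one limb of the test: the Commission may also designate an enterprise on qualitative factors such as its size, resources, and market structure.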
Suggestions and Recommendations
- The ex-ante model of regulation needs to be examined for the Indian scenario, and studies need to be conducted on how it has worked previously in other jurisdictions such as the EU.
- The Bill should prioritise fostering fair competition by preventing monopolistic practices in digital markets. A clear distinction from the existing Competition Act, 2002 needs to be drawn so that the regulations do not overlap and enterprises do not face double jeopardy.
- Restrictions on tying and bundling and data usage have been shown to negatively impact MSMEs that rely significantly on big tech to reduce operational costs and enhance customer outreach.
- Clear definitions of "dominant position" and "anti-competitive behaviour" in the digital context are essential for effective enforcement.
- Encouraging innovation while safeguarding consumer data privacy in consonance with the DPDP Act should be the aim. Promoting interoperability and transparency in algorithms can prevent discriminatory practices.
- Regular reviews and stakeholder consultations will ensure the law adapts to rapidly evolving technologies.
- Collaboration with global antitrust bodies would enhance cross-border regulatory coherence and effectiveness.
Conclusion
A competition law focused exclusively on digital enterprises is the need of the hour, and hence the Committee recommended enacting the Digital Competition Act to enable the CCI to selectively regulate large digital enterprises. The proposed legislation should be restricted to regulating only those enterprises that have a significant presence in, and the ability to influence, the Indian digital market; it should not encroach upon matters outside the digital arena. India's proposed Digital Competition Bill aims to promote competition and fairness in the digital market by addressing anti-competitive practices and abuses of dominant position prevalent in the digital business space. The Ministry of Corporate Affairs has received public feedback on the 41-page draft, which is expected to be tabled before Parliament next year.
References
- https://www.medianama.com/wp-content/uploads/2024/03/DRAFT-DIGITAL-COMPETITION-BILL-2024.pdf
- https://prsindia.org/files/policy/policy_committee_reports/Report_Summary-Digital_Competition_Law.pdf
- https://economictimes.indiatimes.com/tech/startups/meity-meets-india-inc-to-hear-out-digital-competition-law-concerns/articleshow/111091837.cms?from=mdr
- https://www.mca.gov.in/bin/dms/getdocument?mds=gzGtvSkE3zIVhAuBe2pbow%253D%253D&type=open
- https://www.barandbench.com/law-firms/view-point/digital-competition-laws-beginning-of-a-new-era
- https://www.linkedin.com/pulse/policy-explainer-digital-competition-bill-nimisha-srivastava-lhltc/
- https://www.lexology.com/library/detail.aspx?g=5722a078-1839-4ece-aec9-49336ff53b6c
Introduction
Big Tech has been pushing back against regulatory measures, particularly regarding data handling practices, and X Corp (formerly Twitter) has taken a prominent stance in India. The platform has filed a petition against the Central and State governments, challenging content-blocking orders and opposing the Centre’s newly launched Sahyog portal. X Corp has further labelled the Sahyog portal a 'censorship portal' that enables government agencies to issue blocking orders using a standardised template.
The key regulations governing the tech space in India include the IT Act of 2000, IT Rules 2021 and 2023 (which stress platform accountability and content moderation), and the DPDP Act 2023, which intersects with personal data governance. This petition by the X Corp raises concerns for digital freedom, platform accountability, and the evolving regulatory frameworks in India.
Elon Musk vs Indian Government: Key Issues at Stake
The 2021 IT Rules, particularly Rule 3(1)(d) of Part II, outline intermediaries' obligations regarding ‘Content Takedowns’. Intermediaries must remove or disable access to unlawful content within 36 hours of receiving a court order or government notification. Notably, the rules do not require government takedown requests to be explicitly in writing, raising concerns about potential misuse.
X’s petition also focuses on the Sahyog Portal, a government-run platform that allows various agencies and state police to request content removal directly. X contends that failure to comply with such orders can expose intermediaries' officers to prosecution. This has sparked controversy, with platforms like Elon Musk’s X arguing that such provisions grant the government excessive control, potentially undermining free speech and fostering undue censorship.
The broader implications include geopolitical tensions, potential business risks for big tech companies, and significant effects on India's digital economy, user engagement, and platform governance. Balancing regulatory compliance with digital rights remains a crucial challenge in this evolving landscape.
The Global Context: Lessons from Other Jurisdictions
The ‘EU's Digital Services Act’ establishes a baseline 'notice and takedown' system. According to the Act, hosting providers, including online platforms, must enable third parties to notify them of illegal content, which they must promptly remove to retain their hosting liability exemption. The DSA also mandates expedited removal processes for notifications from trusted flaggers, user suspension for those with frequent violations, and enhanced protections for minors. Additionally, hosting providers must adhere to specific content removal obligations, including the elimination of terrorist content within one hour and the deployment of technology to detect and remove known or new CSAM.
In contrast to the EU, the US First Amendment protects speech from state interference but does not extend to private entities. Dominant digital platforms, however, significantly influence discourse by moderating content, shaping narratives, and controlling advertising markets. This dual role creates tension as these platforms balance free speech, platform safety, and profitability.
India has adopted a model closer to the EU's approach, emphasizing content moderation to curb misinformation, false narratives, and harmful content. Drawing from the EU's framework, India could establish third-party notification mechanisms, enforce clear content takedown guidelines, and implement detection measures for harmful content like terrorist material and CSAM within defined timelines. This would balance content regulation with platform accountability while aligning with global best practices.
Key Concerns and Policy Debates
As the issue stands, the main concerns that arise are:
- The need for transparency in government takedown orders: the reasons behind them, a clear framework for when they are warranted, and guidelines for issuing them.
- The need to balance digital freedom with national security, and the concerns this creates for tech companies; essentially, the role platforms play in safeguarding the democratic values enshrined in the Constitution of India.
- The Karnataka High Court's ruling in this case has the potential to redefine the principles upon which the intermediary guidelines function under Indian law.
Potential Outcomes and the Way Forward
While we await the Hon’ble Court’s directives and orders in response to the filed suit, and while the court's decision could favour either side or lead to a negotiated resolution, the broader takeaway is the necessity of collaborative policymaking that balances governmental oversight with platform accountability. This debate underscores the pressing need for a structured and transparent regulatory framework for content moderation. The case also highlights the importance of due process in content regulation and the need for legal clarity for tech companies operating in India. Ultimately, a consultative and principles-based approach will be key to ensuring a fair and open digital ecosystem.
References
- https://www.thehindu.com/sci-tech/technology/elon-musks-x-sues-union-government-over-alleged-censorship-and-it-act-violations/article69352961.ece
- https://www.hindustantimes.com/india-news/elon-musk-s-x-sues-union-government-over-alleged-censorship-and-it-act-violations-101742463516588.html
- https://www.financialexpress.com/life/technology-explainer-why-has-x-accused-govt-of-censorship-3788648/
- https://thelawreporters.com/elon-musk-s-x-sues-indian-government-over-alleged-censorship-and-it-act-violations
- https://www.linklaters.com/en/insights/blogs/digilinks/2023/february/the-eu-digital-services-act---a-new-era-for-online-harms-and-intermediary-liability